Nov 4 20:04:36.037381 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 4 18:14:37 -00 2025
Nov 4 20:04:36.037411 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=61016dca95ee6fa66f021abf1ceeafeee7bf9965566c18dbe885e6a979e0df6f
Nov 4 20:04:36.037426 kernel: BIOS-provided physical RAM map:
Nov 4 20:04:36.037435 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 4 20:04:36.037444 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 4 20:04:36.037453 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 4 20:04:36.037464 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 4 20:04:36.037472 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 4 20:04:36.037479 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 4 20:04:36.037485 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 4 20:04:36.037494 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 4 20:04:36.037501 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 4 20:04:36.037508 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 4 20:04:36.037515 kernel: NX (Execute Disable) protection: active
Nov 4 20:04:36.037523 kernel: APIC: Static calls initialized
Nov 4 20:04:36.037532 kernel: SMBIOS 2.8 present.
Nov 4 20:04:36.037540 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 4 20:04:36.037547 kernel: DMI: Memory slots populated: 1/1
Nov 4 20:04:36.037554 kernel: Hypervisor detected: KVM
Nov 4 20:04:36.037561 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 4 20:04:36.037568 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 4 20:04:36.037576 kernel: kvm-clock: using sched offset of 3489745799 cycles
Nov 4 20:04:36.037583 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 4 20:04:36.037591 kernel: tsc: Detected 2794.748 MHz processor
Nov 4 20:04:36.037601 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 4 20:04:36.037609 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 4 20:04:36.037617 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 4 20:04:36.037624 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 4 20:04:36.037632 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 4 20:04:36.037640 kernel: Using GB pages for direct mapping
Nov 4 20:04:36.037648 kernel: ACPI: Early table checksum verification disabled
Nov 4 20:04:36.037655 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 4 20:04:36.037665 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 20:04:36.037673 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 20:04:36.037681 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 20:04:36.037688 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 4 20:04:36.037696 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 20:04:36.037703 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 20:04:36.037711 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 20:04:36.037721 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 20:04:36.037731 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Nov 4 20:04:36.037739 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Nov 4 20:04:36.037747 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 4 20:04:36.037757 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Nov 4 20:04:36.037765 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Nov 4 20:04:36.037773 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Nov 4 20:04:36.037781 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Nov 4 20:04:36.037788 kernel: No NUMA configuration found
Nov 4 20:04:36.037796 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 4 20:04:36.037804 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Nov 4 20:04:36.037814 kernel: Zone ranges:
Nov 4 20:04:36.037821 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 4 20:04:36.037829 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 4 20:04:36.037837 kernel: Normal empty
Nov 4 20:04:36.037845 kernel: Device empty
Nov 4 20:04:36.037853 kernel: Movable zone start for each node
Nov 4 20:04:36.037861 kernel: Early memory node ranges
Nov 4 20:04:36.037869 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 4 20:04:36.037878 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 4 20:04:36.037886 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 4 20:04:36.037894 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 4 20:04:36.037902 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 4 20:04:36.037910 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 4 20:04:36.037918 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 4 20:04:36.037926 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 4 20:04:36.037936 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 4 20:04:36.037944 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 4 20:04:36.037951 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 4 20:04:36.037959 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 4 20:04:36.037967 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 4 20:04:36.037975 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 4 20:04:36.037983 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 4 20:04:36.037991 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 4 20:04:36.038000 kernel: TSC deadline timer available
Nov 4 20:04:36.038008 kernel: CPU topo: Max. logical packages: 1
Nov 4 20:04:36.038016 kernel: CPU topo: Max. logical dies: 1
Nov 4 20:04:36.038024 kernel: CPU topo: Max. dies per package: 1
Nov 4 20:04:36.038032 kernel: CPU topo: Max. threads per core: 1
Nov 4 20:04:36.038039 kernel: CPU topo: Num. cores per package: 4
Nov 4 20:04:36.038047 kernel: CPU topo: Num. threads per package: 4
Nov 4 20:04:36.038055 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 4 20:04:36.038065 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 4 20:04:36.038073 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 4 20:04:36.038081 kernel: kvm-guest: setup PV sched yield
Nov 4 20:04:36.038088 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 4 20:04:36.038096 kernel: Booting paravirtualized kernel on KVM
Nov 4 20:04:36.038104 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 4 20:04:36.038113 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 4 20:04:36.038123 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 4 20:04:36.038131 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 4 20:04:36.038138 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 4 20:04:36.038146 kernel: kvm-guest: PV spinlocks enabled
Nov 4 20:04:36.038154 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 4 20:04:36.038163 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=61016dca95ee6fa66f021abf1ceeafeee7bf9965566c18dbe885e6a979e0df6f
Nov 4 20:04:36.038171 kernel: random: crng init done
Nov 4 20:04:36.038181 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 4 20:04:36.038189 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 4 20:04:36.038204 kernel: Fallback order for Node 0: 0
Nov 4 20:04:36.038212 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Nov 4 20:04:36.038220 kernel: Policy zone: DMA32
Nov 4 20:04:36.038228 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 4 20:04:36.038237 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 4 20:04:36.038247 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 4 20:04:36.038255 kernel: ftrace: allocated 157 pages with 5 groups
Nov 4 20:04:36.038263 kernel: Dynamic Preempt: voluntary
Nov 4 20:04:36.038271 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 4 20:04:36.038280 kernel: rcu: RCU event tracing is enabled.
Nov 4 20:04:36.038288 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 4 20:04:36.038296 kernel: Trampoline variant of Tasks RCU enabled.
Nov 4 20:04:36.038306 kernel: Rude variant of Tasks RCU enabled.
Nov 4 20:04:36.038313 kernel: Tracing variant of Tasks RCU enabled.
Nov 4 20:04:36.038322 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 4 20:04:36.038329 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 4 20:04:36.038337 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 20:04:36.038379 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 20:04:36.038391 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 20:04:36.038402 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 4 20:04:36.038417 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 4 20:04:36.038436 kernel: Console: colour VGA+ 80x25
Nov 4 20:04:36.038450 kernel: printk: legacy console [ttyS0] enabled
Nov 4 20:04:36.038461 kernel: ACPI: Core revision 20240827
Nov 4 20:04:36.038492 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 4 20:04:36.038546 kernel: APIC: Switch to symmetric I/O mode setup
Nov 4 20:04:36.038559 kernel: x2apic enabled
Nov 4 20:04:36.038570 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 4 20:04:36.038610 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 4 20:04:36.038654 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 4 20:04:36.038666 kernel: kvm-guest: setup PV IPIs
Nov 4 20:04:36.038677 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 4 20:04:36.038689 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 4 20:04:36.038703 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 4 20:04:36.038715 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 4 20:04:36.038726 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 4 20:04:36.038737 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 4 20:04:36.038748 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 4 20:04:36.038760 kernel: Spectre V2 : Mitigation: Retpolines
Nov 4 20:04:36.038771 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 4 20:04:36.038785 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 4 20:04:36.038796 kernel: active return thunk: retbleed_return_thunk
Nov 4 20:04:36.038807 kernel: RETBleed: Mitigation: untrained return thunk
Nov 4 20:04:36.038839 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 4 20:04:36.038866 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 4 20:04:36.038878 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 4 20:04:36.038891 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 4 20:04:36.038906 kernel: active return thunk: srso_return_thunk
Nov 4 20:04:36.038917 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 4 20:04:36.038928 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 4 20:04:36.038939 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 4 20:04:36.038951 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 4 20:04:36.038962 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 4 20:04:36.038976 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 4 20:04:36.038987 kernel: Freeing SMP alternatives memory: 32K
Nov 4 20:04:36.038998 kernel: pid_max: default: 32768 minimum: 301
Nov 4 20:04:36.039009 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 4 20:04:36.039020 kernel: landlock: Up and running.
Nov 4 20:04:36.039031 kernel: SELinux: Initializing.
Nov 4 20:04:36.039043 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 4 20:04:36.039054 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 4 20:04:36.039069 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 4 20:04:36.039080 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 4 20:04:36.039091 kernel: ... version: 0
Nov 4 20:04:36.039102 kernel: ... bit width: 48
Nov 4 20:04:36.039113 kernel: ... generic registers: 6
Nov 4 20:04:36.039125 kernel: ... value mask: 0000ffffffffffff
Nov 4 20:04:36.039136 kernel: ... max period: 00007fffffffffff
Nov 4 20:04:36.039150 kernel: ... fixed-purpose events: 0
Nov 4 20:04:36.039161 kernel: ... event mask: 000000000000003f
Nov 4 20:04:36.039172 kernel: signal: max sigframe size: 1776
Nov 4 20:04:36.039182 kernel: rcu: Hierarchical SRCU implementation.
Nov 4 20:04:36.039194 kernel: rcu: Max phase no-delay instances is 400.
Nov 4 20:04:36.039216 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 4 20:04:36.039228 kernel: smp: Bringing up secondary CPUs ...
Nov 4 20:04:36.039242 kernel: smpboot: x86: Booting SMP configuration:
Nov 4 20:04:36.039253 kernel: .... node #0, CPUs: #1 #2 #3
Nov 4 20:04:36.039264 kernel: smp: Brought up 1 node, 4 CPUs
Nov 4 20:04:36.039276 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 4 20:04:36.039287 kernel: Memory: 2447340K/2571752K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15360K init, 2684K bss, 118476K reserved, 0K cma-reserved)
Nov 4 20:04:36.039298 kernel: devtmpfs: initialized
Nov 4 20:04:36.039309 kernel: x86/mm: Memory block size: 128MB
Nov 4 20:04:36.039323 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 4 20:04:36.039334 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 4 20:04:36.039364 kernel: pinctrl core: initialized pinctrl subsystem
Nov 4 20:04:36.039376 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 4 20:04:36.039387 kernel: audit: initializing netlink subsys (disabled)
Nov 4 20:04:36.039398 kernel: audit: type=2000 audit(1762286673.532:1): state=initialized audit_enabled=0 res=1
Nov 4 20:04:36.039409 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 4 20:04:36.039424 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 4 20:04:36.039434 kernel: cpuidle: using governor menu
Nov 4 20:04:36.039445 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 4 20:04:36.039456 kernel: dca service started, version 1.12.1
Nov 4 20:04:36.039467 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 4 20:04:36.039479 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 4 20:04:36.039490 kernel: PCI: Using configuration type 1 for base access
Nov 4 20:04:36.039504 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 4 20:04:36.039515 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 4 20:04:36.039526 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 4 20:04:36.039537 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 4 20:04:36.039549 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 4 20:04:36.039560 kernel: ACPI: Added _OSI(Module Device)
Nov 4 20:04:36.039571 kernel: ACPI: Added _OSI(Processor Device)
Nov 4 20:04:36.039585 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 4 20:04:36.039596 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 4 20:04:36.039606 kernel: ACPI: Interpreter enabled
Nov 4 20:04:36.039617 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 4 20:04:36.039628 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 4 20:04:36.039640 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 4 20:04:36.039651 kernel: PCI: Using E820 reservations for host bridge windows
Nov 4 20:04:36.039662 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 4 20:04:36.039675 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 4 20:04:36.039947 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 4 20:04:36.040173 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 4 20:04:36.040421 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 4 20:04:36.040438 kernel: PCI host bridge to bus 0000:00
Nov 4 20:04:36.040646 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 4 20:04:36.040837 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 4 20:04:36.041045 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 4 20:04:36.041243 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 4 20:04:36.041450 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 4 20:04:36.041636 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 4 20:04:36.041865 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 4 20:04:36.042087 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 4 20:04:36.042309 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 4 20:04:36.042534 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Nov 4 20:04:36.042744 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Nov 4 20:04:36.042914 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Nov 4 20:04:36.043077 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 4 20:04:36.043267 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 4 20:04:36.043492 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Nov 4 20:04:36.043706 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Nov 4 20:04:36.043909 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 4 20:04:36.044122 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 4 20:04:36.044336 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Nov 4 20:04:36.044560 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Nov 4 20:04:36.044757 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 4 20:04:36.044962 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 4 20:04:36.045163 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Nov 4 20:04:36.045386 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Nov 4 20:04:36.045586 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 4 20:04:36.045783 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Nov 4 20:04:36.046007 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 4 20:04:36.046235 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 4 20:04:36.046501 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 4 20:04:36.046712 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Nov 4 20:04:36.046920 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Nov 4 20:04:36.047155 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 4 20:04:36.047417 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Nov 4 20:04:36.047436 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 4 20:04:36.047452 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 4 20:04:36.047464 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 4 20:04:36.047476 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 4 20:04:36.047487 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 4 20:04:36.047499 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 4 20:04:36.047510 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 4 20:04:36.047521 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 4 20:04:36.047536 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 4 20:04:36.047558 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 4 20:04:36.047570 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 4 20:04:36.047581 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 4 20:04:36.047590 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 4 20:04:36.047611 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 4 20:04:36.047623 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 4 20:04:36.047638 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 4 20:04:36.047657 kernel: iommu: Default domain type: Translated
Nov 4 20:04:36.047683 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 4 20:04:36.047695 kernel: PCI: Using ACPI for IRQ routing
Nov 4 20:04:36.047707 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 4 20:04:36.047718 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 4 20:04:36.047729 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 4 20:04:36.047911 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 4 20:04:36.048076 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 4 20:04:36.048248 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 4 20:04:36.048260 kernel: vgaarb: loaded
Nov 4 20:04:36.048268 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 4 20:04:36.048277 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 4 20:04:36.048286 kernel: clocksource: Switched to clocksource kvm-clock
Nov 4 20:04:36.048297 kernel: VFS: Disk quotas dquot_6.6.0
Nov 4 20:04:36.048305 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 4 20:04:36.048314 kernel: pnp: PnP ACPI init
Nov 4 20:04:36.048515 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 4 20:04:36.048530 kernel: pnp: PnP ACPI: found 6 devices
Nov 4 20:04:36.048539 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 4 20:04:36.048575 kernel: NET: Registered PF_INET protocol family
Nov 4 20:04:36.048587 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 4 20:04:36.048600 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 4 20:04:36.048611 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 4 20:04:36.048623 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 4 20:04:36.048635 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 4 20:04:36.048647 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 4 20:04:36.048662 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 4 20:04:36.048674 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 4 20:04:36.048685 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 4 20:04:36.048697 kernel: NET: Registered PF_XDP protocol family
Nov 4 20:04:36.048889 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 4 20:04:36.049077 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 4 20:04:36.049275 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 4 20:04:36.049485 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 4 20:04:36.049674 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 4 20:04:36.049864 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 4 20:04:36.049881 kernel: PCI: CLS 0 bytes, default 64
Nov 4 20:04:36.049893 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 4 20:04:36.049905 kernel: Initialise system trusted keyrings
Nov 4 20:04:36.049916 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 4 20:04:36.049932 kernel: Key type asymmetric registered
Nov 4 20:04:36.049943 kernel: Asymmetric key parser 'x509' registered
Nov 4 20:04:36.049955 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 4 20:04:36.049967 kernel: io scheduler mq-deadline registered
Nov 4 20:04:36.049978 kernel: io scheduler kyber registered
Nov 4 20:04:36.049990 kernel: io scheduler bfq registered
Nov 4 20:04:36.050002 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 4 20:04:36.050017 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 4 20:04:36.050029 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 4 20:04:36.050040 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 4 20:04:36.050052 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 4 20:04:36.050064 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 4 20:04:36.050076 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 4 20:04:36.050088 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 4 20:04:36.050102 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 4 20:04:36.050329 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 4 20:04:36.050373 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 4 20:04:36.050568 kernel: rtc_cmos 00:04: registered as rtc0
Nov 4 20:04:36.050761 kernel: rtc_cmos 00:04: setting system clock to 2025-11-04T20:04:34 UTC (1762286674)
Nov 4 20:04:36.050951 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 4 20:04:36.050971 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 4 20:04:36.050983 kernel: NET: Registered PF_INET6 protocol family
Nov 4 20:04:36.050996 kernel: Segment Routing with IPv6
Nov 4 20:04:36.051007 kernel: In-situ OAM (IOAM) with IPv6
Nov 4 20:04:36.051020 kernel: NET: Registered PF_PACKET protocol family
Nov 4 20:04:36.051032 kernel: Key type dns_resolver registered
Nov 4 20:04:36.051043 kernel: IPI shorthand broadcast: enabled
Nov 4 20:04:36.051056 kernel: sched_clock: Marking stable (1820002180, 208112317)->(2078528442, -50413945)
Nov 4 20:04:36.051070 kernel: registered taskstats version 1
Nov 4 20:04:36.051082 kernel: Loading compiled-in X.509 certificates
Nov 4 20:04:36.051094 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: dc4ec9301c8f929f70e9cd51cfe5f36448d892c9'
Nov 4 20:04:36.051106 kernel: Demotion targets for Node 0: null
Nov 4 20:04:36.051118 kernel: Key type .fscrypt registered
Nov 4 20:04:36.051130 kernel: Key type fscrypt-provisioning registered
Nov 4 20:04:36.051142 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 4 20:04:36.051156 kernel: ima: Allocated hash algorithm: sha1
Nov 4 20:04:36.051168 kernel: ima: No architecture policies found
Nov 4 20:04:36.051180 kernel: clk: Disabling unused clocks
Nov 4 20:04:36.051192 kernel: Freeing unused kernel image (initmem) memory: 15360K
Nov 4 20:04:36.051213 kernel: Write protecting the kernel read-only data: 45056k
Nov 4 20:04:36.051225 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K
Nov 4 20:04:36.051237 kernel: Run /init as init process
Nov 4 20:04:36.051251 kernel: with arguments:
Nov 4 20:04:36.051263 kernel: /init
Nov 4 20:04:36.051275 kernel: with environment:
Nov 4 20:04:36.051287 kernel: HOME=/
Nov 4 20:04:36.051298 kernel: TERM=linux
Nov 4 20:04:36.051352 kernel: SCSI subsystem initialized
Nov 4 20:04:36.051366 kernel: libata version 3.00 loaded.
Nov 4 20:04:36.051617 kernel: ahci 0000:00:1f.2: version 3.0
Nov 4 20:04:36.051706 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 4 20:04:36.051924 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 4 20:04:36.052138 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 4 20:04:36.052434 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 4 20:04:36.052707 kernel: scsi host0: ahci
Nov 4 20:04:36.052932 kernel: scsi host1: ahci
Nov 4 20:04:36.053145 kernel: scsi host2: ahci
Nov 4 20:04:36.053388 kernel: scsi host3: ahci
Nov 4 20:04:36.053598 kernel: scsi host4: ahci
Nov 4 20:04:36.053832 kernel: scsi host5: ahci
Nov 4 20:04:36.053850 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Nov 4 20:04:36.053859 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Nov 4 20:04:36.053870 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Nov 4 20:04:36.053879 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Nov 4 20:04:36.053889 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Nov 4 20:04:36.053898 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Nov 4 20:04:36.053907 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 4 20:04:36.053917 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 4 20:04:36.053926 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 4 20:04:36.053934 kernel: ata3.00: LPM support broken, forcing max_power
Nov 4 20:04:36.053943 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 4 20:04:36.053951 kernel: ata3.00: applying bridge limits
Nov 4 20:04:36.053960 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 4 20:04:36.053969 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 4 20:04:36.053980 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 4 20:04:36.053989 kernel: ata3.00: LPM support broken, forcing max_power
Nov 4 20:04:36.053997 kernel: ata3.00: configured for UDMA/100
Nov 4 20:04:36.054192 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 4 20:04:36.054482 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 4 20:04:36.055281 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Nov 4 20:04:36.055304 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 4 20:04:36.055317 kernel: GPT:16515071 != 27000831
Nov 4 20:04:36.055329 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 4 20:04:36.055355 kernel: GPT:16515071 != 27000831
Nov 4 20:04:36.055368 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 4 20:04:36.055381 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 4 20:04:36.055615 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 4 20:04:36.055636 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 4 20:04:36.055997 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 4 20:04:36.056042 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 4 20:04:36.056076 kernel: device-mapper: uevent: version 1.0.3 Nov 4 20:04:36.056110 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 4 20:04:36.056141 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 4 20:04:36.056180 kernel: raid6: avx2x4 gen() 30467 MB/s Nov 4 20:04:36.056218 kernel: raid6: avx2x2 gen() 29629 MB/s Nov 4 20:04:36.056246 kernel: raid6: avx2x1 gen() 25911 MB/s Nov 4 20:04:36.056273 kernel: raid6: using algorithm avx2x4 gen() 30467 MB/s Nov 4 20:04:36.056303 kernel: raid6: .... 
xor() 7252 MB/s, rmw enabled Nov 4 20:04:36.056337 kernel: raid6: using avx2x2 recovery algorithm Nov 4 20:04:36.056376 kernel: xor: automatically using best checksumming function avx Nov 4 20:04:36.056389 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 4 20:04:36.056401 kernel: BTRFS: device fsid ae1eed3c-41f4-4e54-ad3c-8e1b98e43141 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (182) Nov 4 20:04:36.056414 kernel: BTRFS info (device dm-0): first mount of filesystem ae1eed3c-41f4-4e54-ad3c-8e1b98e43141 Nov 4 20:04:36.056426 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 4 20:04:36.056438 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 4 20:04:36.056454 kernel: BTRFS info (device dm-0): enabling free space tree Nov 4 20:04:36.056466 kernel: loop: module loaded Nov 4 20:04:36.056478 kernel: loop0: detected capacity change from 0 to 100136 Nov 4 20:04:36.056489 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 4 20:04:36.056503 systemd[1]: Successfully made /usr/ read-only. Nov 4 20:04:36.056519 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 4 20:04:36.056535 systemd[1]: Detected virtualization kvm. Nov 4 20:04:36.056548 systemd[1]: Detected architecture x86-64. Nov 4 20:04:36.056560 systemd[1]: Running in initrd. Nov 4 20:04:36.056573 systemd[1]: No hostname configured, using default hostname. Nov 4 20:04:36.056586 systemd[1]: Hostname set to . Nov 4 20:04:36.056599 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 4 20:04:36.056614 systemd[1]: Queued start job for default target initrd.target. 
Nov 4 20:04:36.056626 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 4 20:04:36.056638 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 20:04:36.056651 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 20:04:36.056667 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 4 20:04:36.056680 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 4 20:04:36.056697 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 4 20:04:36.056711 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 4 20:04:36.056723 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 20:04:36.056736 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 4 20:04:36.056749 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 4 20:04:36.056761 systemd[1]: Reached target paths.target - Path Units. Nov 4 20:04:36.056774 systemd[1]: Reached target slices.target - Slice Units. Nov 4 20:04:36.056788 systemd[1]: Reached target swap.target - Swaps. Nov 4 20:04:36.056800 systemd[1]: Reached target timers.target - Timer Units. Nov 4 20:04:36.056814 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 4 20:04:36.056826 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 4 20:04:36.056839 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 4 20:04:36.056852 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 4 20:04:36.056865 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Nov 4 20:04:36.056881 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 4 20:04:36.056893 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 20:04:36.056907 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 20:04:36.056920 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 4 20:04:36.056933 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 4 20:04:36.056945 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 4 20:04:36.056961 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 4 20:04:36.056975 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 4 20:04:36.056988 systemd[1]: Starting systemd-fsck-usr.service... Nov 4 20:04:36.057000 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 4 20:04:36.057013 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 4 20:04:36.057026 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 20:04:36.057042 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 4 20:04:36.057056 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 20:04:36.057069 systemd[1]: Finished systemd-fsck-usr.service. Nov 4 20:04:36.057081 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 4 20:04:36.057124 systemd-journald[319]: Collecting audit messages is disabled. Nov 4 20:04:36.057158 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Nov 4 20:04:36.057171 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 4 20:04:36.057185 systemd-journald[319]: Journal started Nov 4 20:04:36.057222 systemd-journald[319]: Runtime Journal (/run/log/journal/08d79f9752dc4e7ba69d8086070a47b1) is 6M, max 48.2M, 42.2M free. Nov 4 20:04:36.059380 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 4 20:04:36.065400 systemd[1]: Started systemd-journald.service - Journal Service. Nov 4 20:04:36.065434 kernel: Bridge firewalling registered Nov 4 20:04:36.066163 systemd-modules-load[320]: Inserted module 'br_netfilter' Nov 4 20:04:36.072503 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 4 20:04:36.139879 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 20:04:36.143670 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 20:04:36.150563 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 4 20:04:36.154678 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 20:04:36.164896 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 4 20:04:36.174816 systemd-tmpfiles[343]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 4 20:04:36.179777 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 20:04:36.181912 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 4 20:04:36.186108 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 20:04:36.198546 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 4 20:04:36.203912 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Nov 4 20:04:36.232293 dracut-cmdline[361]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=61016dca95ee6fa66f021abf1ceeafeee7bf9965566c18dbe885e6a979e0df6f Nov 4 20:04:36.250309 systemd-resolved[353]: Positive Trust Anchors: Nov 4 20:04:36.250319 systemd-resolved[353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 20:04:36.250323 systemd-resolved[353]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 20:04:36.250367 systemd-resolved[353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 20:04:36.282674 systemd-resolved[353]: Defaulting to hostname 'linux'. Nov 4 20:04:36.283921 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 20:04:36.286460 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 4 20:04:36.342389 kernel: Loading iSCSI transport class v2.0-870. 
Nov 4 20:04:36.355374 kernel: iscsi: registered transport (tcp) Nov 4 20:04:36.377864 kernel: iscsi: registered transport (qla4xxx) Nov 4 20:04:36.377908 kernel: QLogic iSCSI HBA Driver Nov 4 20:04:36.402237 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 4 20:04:36.427448 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 20:04:36.428096 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 4 20:04:36.484109 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 4 20:04:36.487210 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 4 20:04:36.489709 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 4 20:04:36.527467 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 4 20:04:36.532628 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 20:04:36.566519 systemd-udevd[596]: Using default interface naming scheme 'v257'. Nov 4 20:04:36.582816 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 20:04:36.584448 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 4 20:04:36.610030 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 4 20:04:36.611920 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 20:04:36.622450 dracut-pre-trigger[673]: rd.md=0: removing MD RAID activation Nov 4 20:04:36.649248 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 4 20:04:36.654266 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 4 20:04:36.668257 systemd-networkd[702]: lo: Link UP Nov 4 20:04:36.668263 systemd-networkd[702]: lo: Gained carrier Nov 4 20:04:36.668772 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 20:04:36.671288 systemd[1]: Reached target network.target - Network. Nov 4 20:04:36.747886 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 20:04:36.749120 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 4 20:04:36.808611 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 4 20:04:36.827330 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 4 20:04:36.840950 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 4 20:04:36.849360 kernel: cryptd: max_cpu_qlen set to 1000 Nov 4 20:04:36.855122 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 4 20:04:36.862474 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 4 20:04:36.870388 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 4 20:04:36.874309 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 20:04:36.882167 kernel: AES CTR mode by8 optimization enabled Nov 4 20:04:36.874442 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 20:04:36.876490 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 20:04:36.880391 systemd-networkd[702]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 20:04:36.880396 systemd-networkd[702]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 4 20:04:36.881098 systemd-networkd[702]: eth0: Link UP Nov 4 20:04:36.883501 systemd-networkd[702]: eth0: Gained carrier Nov 4 20:04:36.908292 disk-uuid[832]: Primary Header is updated. Nov 4 20:04:36.908292 disk-uuid[832]: Secondary Entries is updated. Nov 4 20:04:36.908292 disk-uuid[832]: Secondary Header is updated. Nov 4 20:04:36.883512 systemd-networkd[702]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 20:04:36.889713 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 20:04:36.905747 systemd-networkd[702]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 4 20:04:36.958591 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 4 20:04:37.009468 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 20:04:37.023918 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 4 20:04:37.025948 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 20:04:37.029650 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 4 20:04:37.034010 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 4 20:04:37.059861 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 4 20:04:37.959723 disk-uuid[837]: Warning: The kernel is still using the old partition table. Nov 4 20:04:37.959723 disk-uuid[837]: The new table will be used at the next reboot or after you Nov 4 20:04:37.959723 disk-uuid[837]: run partprobe(8) or kpartx(8) Nov 4 20:04:37.959723 disk-uuid[837]: The operation has completed successfully. Nov 4 20:04:37.968730 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 4 20:04:37.968886 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Nov 4 20:04:37.973609 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 4 20:04:38.016818 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (865) Nov 4 20:04:38.016869 kernel: BTRFS info (device vda6): first mount of filesystem b54e8166-e4e1-4a26-aac3-bcd5f2d1d50c Nov 4 20:04:38.016896 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 4 20:04:38.021975 kernel: BTRFS info (device vda6): turning on async discard Nov 4 20:04:38.022039 kernel: BTRFS info (device vda6): enabling free space tree Nov 4 20:04:38.029370 kernel: BTRFS info (device vda6): last unmount of filesystem b54e8166-e4e1-4a26-aac3-bcd5f2d1d50c Nov 4 20:04:38.030545 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 4 20:04:38.031953 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 4 20:04:38.140539 ignition[884]: Ignition 2.22.0 Nov 4 20:04:38.140552 ignition[884]: Stage: fetch-offline Nov 4 20:04:38.140591 ignition[884]: no configs at "/usr/lib/ignition/base.d" Nov 4 20:04:38.140603 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 20:04:38.140690 ignition[884]: parsed url from cmdline: "" Nov 4 20:04:38.140693 ignition[884]: no config URL provided Nov 4 20:04:38.140698 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" Nov 4 20:04:38.140710 ignition[884]: no config at "/usr/lib/ignition/user.ign" Nov 4 20:04:38.140755 ignition[884]: op(1): [started] loading QEMU firmware config module Nov 4 20:04:38.140759 ignition[884]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 4 20:04:38.149685 ignition[884]: op(1): [finished] loading QEMU firmware config module Nov 4 20:04:38.241846 ignition[884]: parsing config with SHA512: d207ed8f70be354d74bd4b4a19cca2d1bf39b3e7ba7e9de388d35efd033d44fbb09464d564905d59c9d2fa0c0f45bf48a8a8aace62d673a46b526ab6b94a99b6 Nov 4 20:04:38.247027 unknown[884]: fetched base config from 
"system" Nov 4 20:04:38.247040 unknown[884]: fetched user config from "qemu" Nov 4 20:04:38.247401 ignition[884]: fetch-offline: fetch-offline passed Nov 4 20:04:38.250330 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 4 20:04:38.247454 ignition[884]: Ignition finished successfully Nov 4 20:04:38.251256 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 4 20:04:38.252172 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 4 20:04:38.297585 ignition[894]: Ignition 2.22.0 Nov 4 20:04:38.297598 ignition[894]: Stage: kargs Nov 4 20:04:38.297736 ignition[894]: no configs at "/usr/lib/ignition/base.d" Nov 4 20:04:38.297746 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 20:04:38.298444 ignition[894]: kargs: kargs passed Nov 4 20:04:38.298486 ignition[894]: Ignition finished successfully Nov 4 20:04:38.304965 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 4 20:04:38.306644 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 4 20:04:38.338849 ignition[902]: Ignition 2.22.0 Nov 4 20:04:38.338862 ignition[902]: Stage: disks Nov 4 20:04:38.338992 ignition[902]: no configs at "/usr/lib/ignition/base.d" Nov 4 20:04:38.339002 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 20:04:38.339728 ignition[902]: disks: disks passed Nov 4 20:04:38.343426 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 4 20:04:38.339773 ignition[902]: Ignition finished successfully Nov 4 20:04:38.345686 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 4 20:04:38.350378 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 4 20:04:38.353508 systemd[1]: Reached target local-fs.target - Local File Systems. 
Nov 4 20:04:38.353865 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 20:04:38.361897 systemd[1]: Reached target basic.target - Basic System. Nov 4 20:04:38.367606 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 4 20:04:38.404363 systemd-fsck[912]: ROOT: clean, 15/456736 files, 38230/456704 blocks Nov 4 20:04:38.412305 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 4 20:04:38.413547 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 4 20:04:38.567370 kernel: EXT4-fs (vda9): mounted filesystem 0ed89e66-8209-49a1-9262-79487ccff3ea r/w with ordered data mode. Quota mode: none. Nov 4 20:04:38.567969 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 4 20:04:38.571164 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 4 20:04:38.572728 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 4 20:04:38.576044 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 4 20:04:38.580746 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 4 20:04:38.580808 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 4 20:04:38.580845 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 4 20:04:38.608583 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Nov 4 20:04:38.619796 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (920) Nov 4 20:04:38.619825 kernel: BTRFS info (device vda6): first mount of filesystem b54e8166-e4e1-4a26-aac3-bcd5f2d1d50c Nov 4 20:04:38.619837 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 4 20:04:38.619848 kernel: BTRFS info (device vda6): turning on async discard Nov 4 20:04:38.619860 kernel: BTRFS info (device vda6): enabling free space tree Nov 4 20:04:38.610191 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 4 20:04:38.625170 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 4 20:04:38.673067 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory Nov 4 20:04:38.677894 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory Nov 4 20:04:38.682336 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory Nov 4 20:04:38.687692 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory Nov 4 20:04:38.784994 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 4 20:04:38.788326 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 4 20:04:38.790854 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 4 20:04:38.811497 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 4 20:04:38.814044 kernel: BTRFS info (device vda6): last unmount of filesystem b54e8166-e4e1-4a26-aac3-bcd5f2d1d50c Nov 4 20:04:38.833565 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Nov 4 20:04:38.855338 ignition[1034]: INFO : Ignition 2.22.0 Nov 4 20:04:38.855338 ignition[1034]: INFO : Stage: mount Nov 4 20:04:38.857958 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 20:04:38.857958 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 20:04:38.861804 ignition[1034]: INFO : mount: mount passed Nov 4 20:04:38.863049 ignition[1034]: INFO : Ignition finished successfully Nov 4 20:04:38.867134 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 4 20:04:38.871520 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 4 20:04:38.901586 systemd-networkd[702]: eth0: Gained IPv6LL Nov 4 20:04:39.569903 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 4 20:04:39.608374 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1046) Nov 4 20:04:39.611886 kernel: BTRFS info (device vda6): first mount of filesystem b54e8166-e4e1-4a26-aac3-bcd5f2d1d50c Nov 4 20:04:39.611929 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 4 20:04:39.615935 kernel: BTRFS info (device vda6): turning on async discard Nov 4 20:04:39.616027 kernel: BTRFS info (device vda6): enabling free space tree Nov 4 20:04:39.618300 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 4 20:04:39.723576 ignition[1063]: INFO : Ignition 2.22.0 Nov 4 20:04:39.723576 ignition[1063]: INFO : Stage: files Nov 4 20:04:39.726495 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 20:04:39.726495 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 20:04:39.726495 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping Nov 4 20:04:39.726495 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 4 20:04:39.726495 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 4 20:04:39.736902 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 4 20:04:39.736902 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 4 20:04:39.736902 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 4 20:04:39.736902 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 4 20:04:39.736902 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 4 20:04:39.729768 unknown[1063]: wrote ssh authorized keys file for user: core Nov 4 20:04:39.922042 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 4 20:04:40.007506 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 4 20:04:40.011432 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 4 20:04:40.011432 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 4 
20:04:40.011432 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 4 20:04:40.011432 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 4 20:04:40.011432 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 4 20:04:40.011432 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 4 20:04:40.011432 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 4 20:04:40.011432 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 4 20:04:40.037561 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 4 20:04:40.037561 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 4 20:04:40.037561 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 4 20:04:40.037561 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 4 20:04:40.037561 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 4 20:04:40.037561 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 4 20:04:40.507644 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 4 20:04:41.226600 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 4 20:04:41.226600 ignition[1063]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 4 20:04:41.232728 ignition[1063]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 4 20:04:41.236136 ignition[1063]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 4 20:04:41.236136 ignition[1063]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 4 20:04:41.236136 ignition[1063]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 4 20:04:41.236136 ignition[1063]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 4 20:04:41.236136 ignition[1063]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 4 20:04:41.236136 ignition[1063]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 4 20:04:41.236136 ignition[1063]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Nov 4 20:04:41.258159 ignition[1063]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 4 20:04:41.264788 ignition[1063]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 4 20:04:41.267424 ignition[1063]: INFO : files: op(f): [finished] setting preset to disabled 
for "coreos-metadata.service" Nov 4 20:04:41.267424 ignition[1063]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 4 20:04:41.267424 ignition[1063]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 4 20:04:41.267424 ignition[1063]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 4 20:04:41.267424 ignition[1063]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 4 20:04:41.267424 ignition[1063]: INFO : files: files passed Nov 4 20:04:41.267424 ignition[1063]: INFO : Ignition finished successfully Nov 4 20:04:41.279256 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 4 20:04:41.281672 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 4 20:04:41.288010 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 4 20:04:41.301754 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 4 20:04:41.301891 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 4 20:04:41.308909 initrd-setup-root-after-ignition[1095]: grep: /sysroot/oem/oem-release: No such file or directory Nov 4 20:04:41.311760 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 4 20:04:41.311760 initrd-setup-root-after-ignition[1097]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 4 20:04:41.317026 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 4 20:04:41.318170 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 4 20:04:41.322497 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Nov 4 20:04:41.326655 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 4 20:04:41.396030 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 4 20:04:41.396166 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 4 20:04:41.397906 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 4 20:04:41.398274 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 4 20:04:41.404948 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 4 20:04:41.409954 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 4 20:04:41.436142 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 20:04:41.440109 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 4 20:04:41.464826 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 4 20:04:41.464955 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 4 20:04:41.466943 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 20:04:41.470508 systemd[1]: Stopped target timers.target - Timer Units.
Nov 4 20:04:41.474054 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 4 20:04:41.474164 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 20:04:41.480078 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 4 20:04:41.481944 systemd[1]: Stopped target basic.target - Basic System.
Nov 4 20:04:41.482757 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 4 20:04:41.487697 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 4 20:04:41.491024 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 4 20:04:41.491906 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 4 20:04:41.497891 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 4 20:04:41.501281 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 4 20:04:41.504420 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 4 20:04:41.508172 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 4 20:04:41.511288 systemd[1]: Stopped target swap.target - Swaps.
Nov 4 20:04:41.515492 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 4 20:04:41.515607 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 4 20:04:41.520015 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 4 20:04:41.521674 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 20:04:41.522166 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 4 20:04:41.528370 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 20:04:41.528726 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 4 20:04:41.528833 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 4 20:04:41.536765 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 4 20:04:41.536879 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 4 20:04:41.538794 systemd[1]: Stopped target paths.target - Path Units.
Nov 4 20:04:41.539240 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 4 20:04:41.548424 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 20:04:41.548589 systemd[1]: Stopped target slices.target - Slice Units.
Nov 4 20:04:41.552765 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 4 20:04:41.553304 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 4 20:04:41.553412 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 4 20:04:41.558202 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 4 20:04:41.558295 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 4 20:04:41.562522 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 4 20:04:41.562633 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 20:04:41.564025 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 4 20:04:41.564131 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 4 20:04:41.576894 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 4 20:04:41.578438 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 4 20:04:41.578587 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 20:04:41.595132 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 4 20:04:41.596578 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 4 20:04:41.596724 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 20:04:41.603664 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 4 20:04:41.603831 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 20:04:41.607439 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 4 20:04:41.607591 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 4 20:04:41.614487 ignition[1121]: INFO : Ignition 2.22.0
Nov 4 20:04:41.614487 ignition[1121]: INFO : Stage: umount
Nov 4 20:04:41.614487 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 20:04:41.614487 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 20:04:41.614487 ignition[1121]: INFO : umount: umount passed
Nov 4 20:04:41.614487 ignition[1121]: INFO : Ignition finished successfully
Nov 4 20:04:41.616793 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 4 20:04:41.616911 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 4 20:04:41.619443 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 4 20:04:41.619550 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 4 20:04:41.625263 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 4 20:04:41.625749 systemd[1]: Stopped target network.target - Network.
Nov 4 20:04:41.627341 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 4 20:04:41.627411 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 4 20:04:41.630526 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 4 20:04:41.630593 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 4 20:04:41.633630 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 4 20:04:41.633698 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 4 20:04:41.635626 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 4 20:04:41.635694 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 4 20:04:41.641987 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 4 20:04:41.643322 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 4 20:04:41.655897 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 4 20:04:41.656024 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 4 20:04:41.662407 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 4 20:04:41.665764 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 4 20:04:41.665824 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 20:04:41.672846 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 4 20:04:41.675896 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 4 20:04:41.675959 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 4 20:04:41.678028 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 20:04:41.682608 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 4 20:04:41.682725 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 4 20:04:41.692423 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 4 20:04:41.692507 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 4 20:04:41.696562 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 4 20:04:41.696613 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 4 20:04:41.700963 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 4 20:04:41.701115 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 4 20:04:41.703237 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 4 20:04:41.703291 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 4 20:04:41.712327 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 4 20:04:41.712531 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 20:04:41.720018 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 4 20:04:41.720097 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 4 20:04:41.725459 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 4 20:04:41.725504 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 20:04:41.730969 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 4 20:04:41.731037 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 4 20:04:41.737414 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 4 20:04:41.737468 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 4 20:04:41.742037 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 4 20:04:41.742093 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 4 20:04:41.749425 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 4 20:04:41.751835 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 4 20:04:41.751900 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 20:04:41.753995 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 4 20:04:41.754048 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 20:04:41.754816 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 20:04:41.754861 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 20:04:41.772745 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 4 20:04:41.772865 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 4 20:04:41.774633 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 4 20:04:41.774734 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 4 20:04:41.781283 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 4 20:04:41.783703 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 4 20:04:41.812672 systemd[1]: Switching root.
Nov 4 20:04:41.856474 systemd-journald[319]: Journal stopped
Nov 4 20:04:43.065923 systemd-journald[319]: Received SIGTERM from PID 1 (systemd).
Nov 4 20:04:43.066000 kernel: SELinux: policy capability network_peer_controls=1
Nov 4 20:04:43.066019 kernel: SELinux: policy capability open_perms=1
Nov 4 20:04:43.066031 kernel: SELinux: policy capability extended_socket_class=1
Nov 4 20:04:43.066044 kernel: SELinux: policy capability always_check_network=0
Nov 4 20:04:43.066059 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 4 20:04:43.066072 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 4 20:04:43.066084 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 4 20:04:43.066095 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 4 20:04:43.066107 kernel: SELinux: policy capability userspace_initial_context=0
Nov 4 20:04:43.066124 kernel: audit: type=1403 audit(1762286682.203:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 4 20:04:43.066137 systemd[1]: Successfully loaded SELinux policy in 69.653ms.
Nov 4 20:04:43.066158 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.352ms.
Nov 4 20:04:43.066172 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 4 20:04:43.066185 systemd[1]: Detected virtualization kvm.
Nov 4 20:04:43.066198 systemd[1]: Detected architecture x86-64.
Nov 4 20:04:43.066211 systemd[1]: Detected first boot.
Nov 4 20:04:43.066223 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 4 20:04:43.066247 zram_generator::config[1165]: No configuration found.
Nov 4 20:04:43.066263 kernel: Guest personality initialized and is inactive
Nov 4 20:04:43.066275 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 4 20:04:43.066286 kernel: Initialized host personality
Nov 4 20:04:43.066298 kernel: NET: Registered PF_VSOCK protocol family
Nov 4 20:04:43.066310 systemd[1]: Populated /etc with preset unit settings.
Nov 4 20:04:43.066323 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 4 20:04:43.066353 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 4 20:04:43.066370 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 4 20:04:43.066383 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 4 20:04:43.066397 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 4 20:04:43.066409 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 4 20:04:43.066422 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 4 20:04:43.066435 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 4 20:04:43.066448 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 4 20:04:43.066463 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 4 20:04:43.066476 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 4 20:04:43.066489 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 20:04:43.066502 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 20:04:43.066515 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 4 20:04:43.066529 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 4 20:04:43.066542 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 4 20:04:43.066558 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 4 20:04:43.066570 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 4 20:04:43.066583 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 20:04:43.066596 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 4 20:04:43.066608 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 4 20:04:43.066621 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 4 20:04:43.066636 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 4 20:04:43.066649 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 4 20:04:43.066661 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 20:04:43.066674 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 4 20:04:43.066687 systemd[1]: Reached target slices.target - Slice Units.
Nov 4 20:04:43.066700 systemd[1]: Reached target swap.target - Swaps.
Nov 4 20:04:43.066713 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 4 20:04:43.066728 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 4 20:04:43.066741 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 4 20:04:43.066754 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 20:04:43.066766 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 4 20:04:43.066779 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 20:04:43.066793 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 4 20:04:43.066806 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 4 20:04:43.066821 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 4 20:04:43.066833 systemd[1]: Mounting media.mount - External Media Directory...
Nov 4 20:04:43.066846 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 20:04:43.066859 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 4 20:04:43.066872 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 4 20:04:43.066885 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 4 20:04:43.066898 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 4 20:04:43.066913 systemd[1]: Reached target machines.target - Containers.
Nov 4 20:04:43.066925 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 4 20:04:43.066938 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 20:04:43.066951 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 4 20:04:43.066964 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 4 20:04:43.066977 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 20:04:43.066989 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 4 20:04:43.067004 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 20:04:43.067017 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 4 20:04:43.067029 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 20:04:43.067042 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 4 20:04:43.067055 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 4 20:04:43.067067 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 4 20:04:43.067083 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 4 20:04:43.067096 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 4 20:04:43.067109 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 20:04:43.067122 kernel: fuse: init (API version 7.41)
Nov 4 20:04:43.067134 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 4 20:04:43.067148 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 4 20:04:43.067161 kernel: ACPI: bus type drm_connector registered
Nov 4 20:04:43.067175 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 4 20:04:43.067188 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 4 20:04:43.067201 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 4 20:04:43.067214 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 4 20:04:43.067237 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 20:04:43.067250 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 4 20:04:43.067263 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 4 20:04:43.067276 systemd[1]: Mounted media.mount - External Media Directory.
Nov 4 20:04:43.067304 systemd-journald[1250]: Collecting audit messages is disabled.
Nov 4 20:04:43.067328 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 4 20:04:43.067423 systemd-journald[1250]: Journal started
Nov 4 20:04:43.067446 systemd-journald[1250]: Runtime Journal (/run/log/journal/08d79f9752dc4e7ba69d8086070a47b1) is 6M, max 48.2M, 42.2M free.
Nov 4 20:04:42.747573 systemd[1]: Queued start job for default target multi-user.target.
Nov 4 20:04:42.769144 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 4 20:04:42.769677 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 4 20:04:43.069393 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 4 20:04:43.072108 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 4 20:04:43.074013 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 4 20:04:43.075921 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 4 20:04:43.078260 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 20:04:43.080581 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 4 20:04:43.080796 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 4 20:04:43.083015 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 20:04:43.083242 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 20:04:43.085416 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 4 20:04:43.085623 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 4 20:04:43.087785 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 20:04:43.087994 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 20:04:43.090278 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 4 20:04:43.090509 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 4 20:04:43.092704 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 20:04:43.092938 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 20:04:43.095117 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 4 20:04:43.097419 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 20:04:43.100804 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 4 20:04:43.103289 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 4 20:04:43.118805 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 4 20:04:43.121239 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 4 20:04:43.124557 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 4 20:04:43.127384 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 4 20:04:43.129176 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 4 20:04:43.129205 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 4 20:04:43.131797 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 4 20:04:43.133969 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 20:04:43.140313 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 4 20:04:43.143541 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 4 20:04:43.145478 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 4 20:04:43.148244 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 4 20:04:43.150438 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 4 20:04:43.151636 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 4 20:04:43.157475 systemd-journald[1250]: Time spent on flushing to /var/log/journal/08d79f9752dc4e7ba69d8086070a47b1 is 26.424ms for 962 entries.
Nov 4 20:04:43.157475 systemd-journald[1250]: System Journal (/var/log/journal/08d79f9752dc4e7ba69d8086070a47b1) is 8M, max 163.5M, 155.5M free.
Nov 4 20:04:43.198871 systemd-journald[1250]: Received client request to flush runtime journal.
Nov 4 20:04:43.198964 kernel: loop1: detected capacity change from 0 to 229808
Nov 4 20:04:43.156819 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 4 20:04:43.162017 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 4 20:04:43.165029 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 20:04:43.168155 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 4 20:04:43.170358 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 4 20:04:43.179097 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 4 20:04:43.183802 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 4 20:04:43.186482 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 4 20:04:43.190017 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 4 20:04:43.207584 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 4 20:04:43.210787 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 4 20:04:43.215511 kernel: loop2: detected capacity change from 0 to 119080
Nov 4 20:04:43.216088 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 4 20:04:43.219521 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 4 20:04:43.235479 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 4 20:04:43.237813 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 4 20:04:43.251676 systemd-tmpfiles[1300]: ACLs are not supported, ignoring.
Nov 4 20:04:43.251694 systemd-tmpfiles[1300]: ACLs are not supported, ignoring.
Nov 4 20:04:43.256278 kernel: loop3: detected capacity change from 0 to 111544
Nov 4 20:04:43.256388 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 20:04:43.286373 kernel: loop4: detected capacity change from 0 to 229808
Nov 4 20:04:43.288165 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 4 20:04:43.297374 kernel: loop5: detected capacity change from 0 to 119080
Nov 4 20:04:43.305371 kernel: loop6: detected capacity change from 0 to 111544
Nov 4 20:04:43.314595 (sd-merge)[1308]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Nov 4 20:04:43.318536 (sd-merge)[1308]: Merged extensions into '/usr'.
Nov 4 20:04:43.324590 systemd[1]: Reload requested from client PID 1284 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 4 20:04:43.324611 systemd[1]: Reloading...
Nov 4 20:04:43.356403 systemd-resolved[1299]: Positive Trust Anchors:
Nov 4 20:04:43.356757 systemd-resolved[1299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 4 20:04:43.356815 systemd-resolved[1299]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 4 20:04:43.356883 systemd-resolved[1299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 4 20:04:43.362779 systemd-resolved[1299]: Defaulting to hostname 'linux'.
Nov 4 20:04:43.382372 zram_generator::config[1342]: No configuration found.
Nov 4 20:04:43.569484 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 4 20:04:43.569968 systemd[1]: Reloading finished in 244 ms.
Nov 4 20:04:43.604819 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 4 20:04:43.607104 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 4 20:04:43.611815 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 4 20:04:43.637731 systemd[1]: Starting ensure-sysext.service...
Nov 4 20:04:43.640045 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 4 20:04:43.656612 systemd[1]: Reload requested from client PID 1378 ('systemctl') (unit ensure-sysext.service)...
Nov 4 20:04:43.656715 systemd[1]: Reloading...
Nov 4 20:04:43.657583 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 4 20:04:43.657620 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 4 20:04:43.657921 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 4 20:04:43.658168 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 4 20:04:43.659104 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 4 20:04:43.659394 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
Nov 4 20:04:43.659466 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
Nov 4 20:04:43.665195 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot.
Nov 4 20:04:43.665205 systemd-tmpfiles[1379]: Skipping /boot
Nov 4 20:04:43.675413 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot.
Nov 4 20:04:43.675426 systemd-tmpfiles[1379]: Skipping /boot
Nov 4 20:04:43.704389 zram_generator::config[1409]: No configuration found.
Nov 4 20:04:43.884884 systemd[1]: Reloading finished in 227 ms.
Nov 4 20:04:43.904891 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 4 20:04:43.924336 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 20:04:43.934928 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 4 20:04:43.937530 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 4 20:04:43.940523 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 4 20:04:43.952814 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 4 20:04:43.957183 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 20:04:43.961040 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 4 20:04:43.965584 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 20:04:43.965744 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 20:04:43.972423 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 20:04:43.978241 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 20:04:43.981646 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 20:04:43.983621 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 20:04:43.983721 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 20:04:43.983810 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 20:04:43.987436 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 20:04:43.987595 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 20:04:43.987743 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 20:04:43.987821 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 20:04:43.987901 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 20:04:43.994470 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 20:04:43.994726 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 20:04:43.997244 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 4 20:04:43.999177 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 20:04:43.999452 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 20:04:43.999697 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 20:04:44.004060 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 20:04:44.005737 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 20:04:44.008200 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 20:04:44.008619 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 20:04:44.013777 systemd-udevd[1452]: Using default interface naming scheme 'v257'.
Nov 4 20:04:44.018179 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 4 20:04:44.021409 systemd[1]: Finished ensure-sysext.service.
Nov 4 20:04:44.026160 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 4 20:04:44.026402 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 4 20:04:44.028923 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 20:04:44.029147 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 20:04:44.034871 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 4 20:04:44.045735 augenrules[1482]: No rules
Nov 4 20:04:44.049623 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 4 20:04:44.050025 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 4 20:04:44.052717 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 4 20:04:44.052819 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 4 20:04:44.054954 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 4 20:04:44.063616 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 20:04:44.068097 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 4 20:04:44.078501 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 4 20:04:44.082281 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 4 20:04:44.159542 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 4 20:04:44.178532 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 4 20:04:44.186541 systemd[1]: Reached target time-set.target - System Time Set.
Nov 4 20:04:44.195505 systemd-networkd[1495]: lo: Link UP
Nov 4 20:04:44.195755 systemd-networkd[1495]: lo: Gained carrier
Nov 4 20:04:44.196910 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 4 20:04:44.201690 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 4 20:04:44.203872 systemd[1]: Reached target network.target - Network.
Nov 4 20:04:44.208921 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 4 20:04:44.217468 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 4 20:04:44.223840 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 4 20:04:44.236615 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 4 20:04:44.242824 systemd-networkd[1495]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 20:04:44.242838 systemd-networkd[1495]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 4 20:04:44.246380 kernel: ACPI: button: Power Button [PWRF]
Nov 4 20:04:44.249690 kernel: mousedev: PS/2 mouse device common for all mice
Nov 4 20:04:44.251100 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 4 20:04:44.253601 systemd-networkd[1495]: eth0: Link UP
Nov 4 20:04:44.254613 systemd-networkd[1495]: eth0: Gained carrier
Nov 4 20:04:44.254767 systemd-networkd[1495]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 20:04:44.268671 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 4 20:04:44.269273 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 4 20:04:44.273515 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 4 20:04:44.277056 systemd-networkd[1495]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 4 20:04:44.281782 systemd-timesyncd[1490]: Network configuration changed, trying to establish connection.
Nov 4 20:04:45.256701 systemd-timesyncd[1490]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 4 20:04:45.256849 systemd-timesyncd[1490]: Initial clock synchronization to Tue 2025-11-04 20:04:45.256517 UTC.
Nov 4 20:04:45.257719 systemd-resolved[1299]: Clock change detected. Flushing caches.
Nov 4 20:04:45.412342 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 20:04:45.453855 kernel: kvm_amd: TSC scaling supported
Nov 4 20:04:45.455596 kernel: kvm_amd: Nested Virtualization enabled
Nov 4 20:04:45.455636 kernel: kvm_amd: Nested Paging enabled
Nov 4 20:04:45.455652 kernel: kvm_amd: LBR virtualization supported
Nov 4 20:04:45.455668 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 4 20:04:45.455912 kernel: kvm_amd: Virtual GIF supported
Nov 4 20:04:45.492041 kernel: EDAC MC: Ver: 3.0.0
Nov 4 20:04:45.627818 ldconfig[1450]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 4 20:04:45.635289 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 4 20:04:45.667820 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 20:04:45.673532 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 4 20:04:45.702793 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 4 20:04:45.704982 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 4 20:04:45.707214 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 4 20:04:45.709402 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 4 20:04:45.711522 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 4 20:04:45.713793 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 4 20:04:45.715753 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 4 20:04:45.717872 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 4 20:04:45.720062 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 4 20:04:45.720094 systemd[1]: Reached target paths.target - Path Units.
Nov 4 20:04:45.721665 systemd[1]: Reached target timers.target - Timer Units.
Nov 4 20:04:45.724212 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 4 20:04:45.727797 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 4 20:04:45.731394 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 4 20:04:45.733605 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 4 20:04:45.735651 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 4 20:04:45.742662 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 4 20:04:45.744983 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 4 20:04:45.748074 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 4 20:04:45.751097 systemd[1]: Reached target sockets.target - Socket Units.
Nov 4 20:04:45.752836 systemd[1]: Reached target basic.target - Basic System.
Nov 4 20:04:45.754555 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 4 20:04:45.754594 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 4 20:04:45.756233 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 4 20:04:45.759456 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 4 20:04:45.762454 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 4 20:04:45.767332 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 4 20:04:45.775993 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 4 20:04:45.777805 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 4 20:04:45.779334 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 4 20:04:45.781816 jq[1563]: false
Nov 4 20:04:45.782331 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 4 20:04:45.786670 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 4 20:04:45.790109 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 4 20:04:45.791170 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Refreshing passwd entry cache
Nov 4 20:04:45.791182 oslogin_cache_refresh[1565]: Refreshing passwd entry cache
Nov 4 20:04:45.794242 extend-filesystems[1564]: Found /dev/vda6
Nov 4 20:04:45.796424 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 4 20:04:45.798855 extend-filesystems[1564]: Found /dev/vda9
Nov 4 20:04:45.802076 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 4 20:04:45.802914 extend-filesystems[1564]: Checking size of /dev/vda9
Nov 4 20:04:45.804814 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 4 20:04:45.803941 oslogin_cache_refresh[1565]: Failure getting users, quitting
Nov 4 20:04:45.806579 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Failure getting users, quitting
Nov 4 20:04:45.806579 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 4 20:04:45.806579 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Refreshing group entry cache
Nov 4 20:04:45.805585 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 4 20:04:45.803963 oslogin_cache_refresh[1565]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 4 20:04:45.806545 systemd[1]: Starting update-engine.service - Update Engine...
Nov 4 20:04:45.804030 oslogin_cache_refresh[1565]: Refreshing group entry cache
Nov 4 20:04:45.811749 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Failure getting groups, quitting
Nov 4 20:04:45.811749 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 4 20:04:45.810640 oslogin_cache_refresh[1565]: Failure getting groups, quitting
Nov 4 20:04:45.810650 oslogin_cache_refresh[1565]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 4 20:04:45.815743 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 4 20:04:45.821915 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 4 20:04:45.825119 extend-filesystems[1564]: Resized partition /dev/vda9
Nov 4 20:04:45.824563 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 4 20:04:45.825179 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 4 20:04:45.825511 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 4 20:04:45.825746 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 4 20:04:45.829910 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 4 20:04:45.831173 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 4 20:04:45.837263 extend-filesystems[1588]: resize2fs 1.47.3 (8-Jul-2025)
Nov 4 20:04:45.847385 systemd[1]: motdgen.service: Deactivated successfully.
Nov 4 20:04:45.849483 update_engine[1579]: I20251104 20:04:45.849402 1579 main.cc:92] Flatcar Update Engine starting
Nov 4 20:04:45.851039 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Nov 4 20:04:45.851503 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 4 20:04:45.855096 jq[1580]: true
Nov 4 20:04:45.880229 tar[1591]: linux-amd64/LICENSE
Nov 4 20:04:45.881132 tar[1591]: linux-amd64/helm
Nov 4 20:04:45.889359 dbus-daemon[1561]: [system] SELinux support is enabled
Nov 4 20:04:45.889614 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 4 20:04:45.893041 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Nov 4 20:04:45.921267 update_engine[1579]: I20251104 20:04:45.898641 1579 update_check_scheduler.cc:74] Next update check in 8m47s
Nov 4 20:04:45.896776 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 4 20:04:45.921395 jq[1606]: true
Nov 4 20:04:45.896808 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 4 20:04:45.899125 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 4 20:04:45.899143 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 4 20:04:45.902701 systemd[1]: Started update-engine.service - Update Engine.
Nov 4 20:04:45.914431 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 4 20:04:45.924557 extend-filesystems[1588]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 4 20:04:45.924557 extend-filesystems[1588]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 4 20:04:45.924557 extend-filesystems[1588]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Nov 4 20:04:45.932830 extend-filesystems[1564]: Resized filesystem in /dev/vda9
Nov 4 20:04:45.928619 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 4 20:04:45.929046 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 4 20:04:45.945729 systemd-logind[1575]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 4 20:04:45.945760 systemd-logind[1575]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 4 20:04:45.946404 systemd-logind[1575]: New seat seat0.
Nov 4 20:04:45.952358 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 4 20:04:45.997867 locksmithd[1614]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 4 20:04:46.002840 bash[1631]: Updated "/home/core/.ssh/authorized_keys"
Nov 4 20:04:46.006060 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 4 20:04:46.013191 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 4 20:04:46.290049 sshd_keygen[1610]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 4 20:04:46.334800 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 4 20:04:46.338573 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 4 20:04:46.357837 systemd[1]: issuegen.service: Deactivated successfully.
Nov 4 20:04:46.358136 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 4 20:04:46.361488 containerd[1607]: time="2025-11-04T20:04:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 4 20:04:46.362202 containerd[1607]: time="2025-11-04T20:04:46.362160920Z" level=info msg="starting containerd" revision=75cb2b7193e4e490e9fbdc236c0e811ccaba3376 version=v2.1.4
Nov 4 20:04:46.362252 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 4 20:04:46.380030 containerd[1607]: time="2025-11-04T20:04:46.379389260Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.841µs"
Nov 4 20:04:46.380030 containerd[1607]: time="2025-11-04T20:04:46.379422943Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 4 20:04:46.380030 containerd[1607]: time="2025-11-04T20:04:46.379464551Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 4 20:04:46.380030 containerd[1607]: time="2025-11-04T20:04:46.379477545Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 4 20:04:46.380030 containerd[1607]: time="2025-11-04T20:04:46.379693861Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 4 20:04:46.380030 containerd[1607]: time="2025-11-04T20:04:46.379709661Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 4 20:04:46.380030 containerd[1607]: time="2025-11-04T20:04:46.379775905Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 4 20:04:46.380030 containerd[1607]: time="2025-11-04T20:04:46.379786735Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 4 20:04:46.380254 containerd[1607]: time="2025-11-04T20:04:46.380234765Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 4 20:04:46.380310 containerd[1607]: time="2025-11-04T20:04:46.380296541Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 4 20:04:46.380357 containerd[1607]: time="2025-11-04T20:04:46.380345082Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 4 20:04:46.380398 containerd[1607]: time="2025-11-04T20:04:46.380387432Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Nov 4 20:04:46.380651 containerd[1607]: time="2025-11-04T20:04:46.380632311Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Nov 4 20:04:46.380706 containerd[1607]: time="2025-11-04T20:04:46.380694297Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 4 20:04:46.380854 containerd[1607]: time="2025-11-04T20:04:46.380838808Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 4 20:04:46.381225 containerd[1607]: time="2025-11-04T20:04:46.381205466Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 4 20:04:46.381307 containerd[1607]: time="2025-11-04T20:04:46.381293100Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 4 20:04:46.381353 containerd[1607]: time="2025-11-04T20:04:46.381341761Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 4 20:04:46.381428 containerd[1607]: time="2025-11-04T20:04:46.381415379Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 4 20:04:46.382182 containerd[1607]: time="2025-11-04T20:04:46.382132314Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 4 20:04:46.382328 containerd[1607]: time="2025-11-04T20:04:46.382308655Z" level=info msg="metadata content store policy set" policy=shared
Nov 4 20:04:46.391081 containerd[1607]: time="2025-11-04T20:04:46.391046481Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 4 20:04:46.391132 containerd[1607]: time="2025-11-04T20:04:46.391105822Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Nov 4 20:04:46.391247 containerd[1607]: time="2025-11-04T20:04:46.391223403Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Nov 4 20:04:46.391247 containerd[1607]: time="2025-11-04T20:04:46.391239623Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 4 20:04:46.391294 containerd[1607]: time="2025-11-04T20:04:46.391254040Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 4 20:04:46.391294 containerd[1607]: time="2025-11-04T20:04:46.391268037Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 4 20:04:46.391294 containerd[1607]: time="2025-11-04T20:04:46.391280209Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 4 20:04:46.391294 containerd[1607]: time="2025-11-04T20:04:46.391290699Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 4 20:04:46.391368 containerd[1607]: time="2025-11-04T20:04:46.391301950Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 4 20:04:46.391368 containerd[1607]: time="2025-11-04T20:04:46.391314093Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 4 20:04:46.391368 containerd[1607]: time="2025-11-04T20:04:46.391325324Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 4 20:04:46.391368 containerd[1607]: time="2025-11-04T20:04:46.391335904Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 4 20:04:46.391368 containerd[1607]: time="2025-11-04T20:04:46.391345041Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 4 20:04:46.391368 containerd[1607]: time="2025-11-04T20:04:46.391356863Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 4 20:04:46.391603 containerd[1607]: time="2025-11-04T20:04:46.391573239Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 4 20:04:46.391603 containerd[1607]: time="2025-11-04T20:04:46.391598256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 4 20:04:46.391643 containerd[1607]: time="2025-11-04T20:04:46.391615929Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 4 20:04:46.391643 containerd[1607]: time="2025-11-04T20:04:46.391633171Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 4 20:04:46.391689 containerd[1607]: time="2025-11-04T20:04:46.391644613Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 4 20:04:46.391689 containerd[1607]: time="2025-11-04T20:04:46.391664560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 4 20:04:46.391689 containerd[1607]: time="2025-11-04T20:04:46.391676292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 4 20:04:46.391689 containerd[1607]: time="2025-11-04T20:04:46.391688896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 4 20:04:46.391758 containerd[1607]: time="2025-11-04T20:04:46.391701890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 4 20:04:46.391758 containerd[1607]: time="2025-11-04T20:04:46.391711909Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 4 20:04:46.391758 containerd[1607]: time="2025-11-04T20:04:46.391720996Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 4 20:04:46.391758 containerd[1607]: time="2025-11-04T20:04:46.391744220Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 4 20:04:46.391829 containerd[1607]: time="2025-11-04T20:04:46.391793542Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 4 20:04:46.391829 containerd[1607]: time="2025-11-04T20:04:46.391805605Z" level=info msg="Start snapshots syncer"
Nov 4 20:04:46.391866 containerd[1607]: time="2025-11-04T20:04:46.391836182Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 4 20:04:46.392297 containerd[1607]: time="2025-11-04T20:04:46.392188573Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 4 20:04:46.392297 containerd[1607]: time="2025-11-04T20:04:46.392264335Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 4 20:04:46.392466 containerd[1607]: time="2025-11-04T20:04:46.392379130Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 4 20:04:46.392488 containerd[1607]: time="2025-11-04T20:04:46.392478016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 4 20:04:46.392636 containerd[1607]: time="2025-11-04T20:04:46.392507411Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 4 20:04:46.392636 containerd[1607]: time="2025-11-04T20:04:46.392524814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 4 20:04:46.392636 containerd[1607]: time="2025-11-04T20:04:46.392536205Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 4 20:04:46.392636 containerd[1607]: time="2025-11-04T20:04:46.392553808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 4 20:04:46.392636 containerd[1607]: time="2025-11-04T20:04:46.392564548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 4 20:04:46.392636 containerd[1607]: time="2025-11-04T20:04:46.392574837Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 4 20:04:46.392636 containerd[1607]: time="2025-11-04T20:04:46.392585087Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 4 20:04:46.392636 containerd[1607]: time="2025-11-04T20:04:46.392596047Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 4 20:04:46.392636 containerd[1607]: time="2025-11-04T20:04:46.392632155Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 4 20:04:46.392943 containerd[1607]: time="2025-11-04T20:04:46.392643737Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 4 20:04:46.392943 containerd[1607]: time="2025-11-04T20:04:46.392652263Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 4 20:04:46.392943 containerd[1607]: time="2025-11-04T20:04:46.392661209Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 4 20:04:46.392943 containerd[1607]: time="2025-11-04T20:04:46.392669395Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Nov 4 20:04:46.392943 containerd[1607]: time="2025-11-04T20:04:46.392679433Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Nov 4 20:04:46.392943 containerd[1607]: time="2025-11-04T20:04:46.392688430Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Nov 4 20:04:46.392943 containerd[1607]: time="2025-11-04T20:04:46.392719068Z" level=info msg="runtime interface created"
Nov 4 20:04:46.392943 containerd[1607]: time="2025-11-04T20:04:46.392725179Z" level=info msg="created NRI interface"
Nov 4 20:04:46.392943 containerd[1607]: time="2025-11-04T20:04:46.392733365Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Nov 4 20:04:46.392943 containerd[1607]: time="2025-11-04T20:04:46.392753933Z" level=info msg="Connect containerd service"
Nov 4 20:04:46.392943 containerd[1607]: time="2025-11-04T20:04:46.392774121Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 4 20:04:46.394617 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 4 20:04:46.395962 containerd[1607]: time="2025-11-04T20:04:46.395928266Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 20:04:46.398882 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 4 20:04:46.401626 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 4 20:04:46.403677 systemd[1]: Reached target getty.target - Login Prompts. Nov 4 20:04:46.577243 containerd[1607]: time="2025-11-04T20:04:46.577008757Z" level=info msg="Start subscribing containerd event" Nov 4 20:04:46.577243 containerd[1607]: time="2025-11-04T20:04:46.577097413Z" level=info msg="Start recovering state" Nov 4 20:04:46.577566 containerd[1607]: time="2025-11-04T20:04:46.577364684Z" level=info msg="Start event monitor" Nov 4 20:04:46.577566 containerd[1607]: time="2025-11-04T20:04:46.577395612Z" level=info msg="Start cni network conf syncer for default" Nov 4 20:04:46.577566 containerd[1607]: time="2025-11-04T20:04:46.577418275Z" level=info msg="Start streaming server" Nov 4 20:04:46.577566 containerd[1607]: time="2025-11-04T20:04:46.577430748Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 4 20:04:46.577566 containerd[1607]: time="2025-11-04T20:04:46.577439464Z" level=info msg="runtime interface starting up..." Nov 4 20:04:46.577566 containerd[1607]: time="2025-11-04T20:04:46.577448562Z" level=info msg="starting plugins..." Nov 4 20:04:46.577566 containerd[1607]: time="2025-11-04T20:04:46.577478027Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 4 20:04:46.577896 containerd[1607]: time="2025-11-04T20:04:46.577751981Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 4 20:04:46.577896 containerd[1607]: time="2025-11-04T20:04:46.577892304Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Nov 4 20:04:46.578299 systemd[1]: Started containerd.service - containerd container runtime. Nov 4 20:04:46.580430 containerd[1607]: time="2025-11-04T20:04:46.579159861Z" level=info msg="containerd successfully booted in 0.218577s" Nov 4 20:04:46.696672 tar[1591]: linux-amd64/README.md Nov 4 20:04:46.725169 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 4 20:04:46.915212 systemd-networkd[1495]: eth0: Gained IPv6LL Nov 4 20:04:46.918225 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 4 20:04:46.920749 systemd[1]: Reached target network-online.target - Network is Online. Nov 4 20:04:46.924031 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 4 20:04:46.927204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 20:04:46.942474 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 4 20:04:46.966510 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 4 20:04:46.968984 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 4 20:04:46.969284 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 4 20:04:46.972421 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 4 20:04:48.067498 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 4 20:04:48.070757 systemd[1]: Started sshd@0-10.0.0.80:22-10.0.0.1:56276.service - OpenSSH per-connection server daemon (10.0.0.1:56276). Nov 4 20:04:48.163041 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 56276 ssh2: RSA SHA256:FD/6wCOEAK2oumu7YKYZjG9k48hMKxx8xD/1LBz1+Eg Nov 4 20:04:48.164990 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 20:04:48.171279 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Nov 4 20:04:48.174073 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 4 20:04:48.182451 systemd-logind[1575]: New session 1 of user core. Nov 4 20:04:48.198551 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 4 20:04:48.217475 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 4 20:04:48.236330 (systemd)[1704]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Nov 4 20:04:48.238514 systemd-logind[1575]: New session 2 of user core. Nov 4 20:04:48.255422 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 20:04:48.257685 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 4 20:04:48.272325 (kubelet)[1714]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 20:04:48.366122 systemd[1704]: Queued start job for default target default.target. Nov 4 20:04:48.376385 systemd[1704]: Created slice app.slice - User Application Slice. Nov 4 20:04:48.376412 systemd[1704]: Reached target paths.target - Paths. Nov 4 20:04:48.376453 systemd[1704]: Reached target timers.target - Timers. Nov 4 20:04:48.377929 systemd[1704]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 4 20:04:48.389534 systemd[1704]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 4 20:04:48.389668 systemd[1704]: Reached target sockets.target - Sockets. Nov 4 20:04:48.389991 systemd[1704]: Reached target basic.target - Basic System. Nov 4 20:04:48.390064 systemd[1704]: Reached target default.target - Main User Target. Nov 4 20:04:48.390098 systemd[1704]: Startup finished in 143ms. Nov 4 20:04:48.390208 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 4 20:04:48.401177 systemd[1]: Started session-1.scope - Session 1 of User core. 
Nov 4 20:04:48.422585 systemd[1]: Startup finished in 2.905s (kernel) + 6.534s (initrd) + 5.314s (userspace) = 14.754s. Nov 4 20:04:48.442263 systemd[1]: Started sshd@1-10.0.0.80:22-10.0.0.1:56292.service - OpenSSH per-connection server daemon (10.0.0.1:56292). Nov 4 20:04:48.520619 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 56292 ssh2: RSA SHA256:FD/6wCOEAK2oumu7YKYZjG9k48hMKxx8xD/1LBz1+Eg Nov 4 20:04:48.522191 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 20:04:48.526266 systemd-logind[1575]: New session 3 of user core. Nov 4 20:04:48.542159 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 4 20:04:48.555185 sshd[1731]: Connection closed by 10.0.0.1 port 56292 Nov 4 20:04:48.556375 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Nov 4 20:04:48.564586 systemd[1]: sshd@1-10.0.0.80:22-10.0.0.1:56292.service: Deactivated successfully. Nov 4 20:04:48.566371 systemd[1]: session-3.scope: Deactivated successfully. Nov 4 20:04:48.567077 systemd-logind[1575]: Session 3 logged out. Waiting for processes to exit. Nov 4 20:04:48.569509 systemd[1]: Started sshd@2-10.0.0.80:22-10.0.0.1:56298.service - OpenSSH per-connection server daemon (10.0.0.1:56298). Nov 4 20:04:48.570283 systemd-logind[1575]: Removed session 3. Nov 4 20:04:48.621355 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 56298 ssh2: RSA SHA256:FD/6wCOEAK2oumu7YKYZjG9k48hMKxx8xD/1LBz1+Eg Nov 4 20:04:48.622847 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 20:04:48.626950 systemd-logind[1575]: New session 4 of user core. Nov 4 20:04:48.641142 systemd[1]: Started session-4.scope - Session 4 of User core. 
Nov 4 20:04:48.663039 sshd[1742]: Connection closed by 10.0.0.1 port 56298 Nov 4 20:04:48.663556 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Nov 4 20:04:48.676594 systemd[1]: sshd@2-10.0.0.80:22-10.0.0.1:56298.service: Deactivated successfully. Nov 4 20:04:48.678369 systemd[1]: session-4.scope: Deactivated successfully. Nov 4 20:04:48.679061 systemd-logind[1575]: Session 4 logged out. Waiting for processes to exit. Nov 4 20:04:48.681507 systemd[1]: Started sshd@3-10.0.0.80:22-10.0.0.1:56308.service - OpenSSH per-connection server daemon (10.0.0.1:56308). Nov 4 20:04:48.682178 systemd-logind[1575]: Removed session 4. Nov 4 20:04:48.735176 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 56308 ssh2: RSA SHA256:FD/6wCOEAK2oumu7YKYZjG9k48hMKxx8xD/1LBz1+Eg Nov 4 20:04:48.736791 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 20:04:48.742087 systemd-logind[1575]: New session 5 of user core. Nov 4 20:04:48.744994 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 4 20:04:48.764979 sshd[1753]: Connection closed by 10.0.0.1 port 56308 Nov 4 20:04:48.766682 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Nov 4 20:04:48.784069 systemd[1]: sshd@3-10.0.0.80:22-10.0.0.1:56308.service: Deactivated successfully. Nov 4 20:04:48.787227 systemd[1]: session-5.scope: Deactivated successfully. Nov 4 20:04:48.788064 systemd-logind[1575]: Session 5 logged out. Waiting for processes to exit. Nov 4 20:04:48.790087 kubelet[1714]: E1104 20:04:48.789988 1714 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 20:04:48.790538 systemd-logind[1575]: Removed session 5. 
Nov 4 20:04:48.791953 systemd[1]: Started sshd@4-10.0.0.80:22-10.0.0.1:56320.service - OpenSSH per-connection server daemon (10.0.0.1:56320). Nov 4 20:04:48.793684 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 20:04:48.793871 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 20:04:48.794391 systemd[1]: kubelet.service: Consumed 1.664s CPU time, 265.6M memory peak. Nov 4 20:04:48.853342 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 56320 ssh2: RSA SHA256:FD/6wCOEAK2oumu7YKYZjG9k48hMKxx8xD/1LBz1+Eg Nov 4 20:04:48.854713 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 20:04:48.858515 systemd-logind[1575]: New session 6 of user core. Nov 4 20:04:48.874127 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 4 20:04:48.899034 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 4 20:04:48.899333 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 20:04:48.914451 sudo[1771]: pam_unix(sudo:session): session closed for user root Nov 4 20:04:48.916067 sshd[1770]: Connection closed by 10.0.0.1 port 56320 Nov 4 20:04:48.916500 sshd-session[1763]: pam_unix(sshd:session): session closed for user core Nov 4 20:04:48.925710 systemd[1]: sshd@4-10.0.0.80:22-10.0.0.1:56320.service: Deactivated successfully. Nov 4 20:04:48.927438 systemd[1]: session-6.scope: Deactivated successfully. Nov 4 20:04:48.928296 systemd-logind[1575]: Session 6 logged out. Waiting for processes to exit. Nov 4 20:04:48.930953 systemd[1]: Started sshd@5-10.0.0.80:22-10.0.0.1:56328.service - OpenSSH per-connection server daemon (10.0.0.1:56328). Nov 4 20:04:48.931589 systemd-logind[1575]: Removed session 6. 
Nov 4 20:04:49.034458 sshd[1778]: Accepted publickey for core from 10.0.0.1 port 56328 ssh2: RSA SHA256:FD/6wCOEAK2oumu7YKYZjG9k48hMKxx8xD/1LBz1+Eg Nov 4 20:04:49.036414 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 20:04:49.041621 systemd-logind[1575]: New session 7 of user core. Nov 4 20:04:49.052196 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 4 20:04:49.070066 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 4 20:04:49.070441 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 20:04:49.075806 sudo[1784]: pam_unix(sudo:session): session closed for user root Nov 4 20:04:49.085008 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 4 20:04:49.085459 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 20:04:49.094879 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 20:04:49.156246 augenrules[1808]: No rules Nov 4 20:04:49.158182 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 20:04:49.158452 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 20:04:49.160172 sudo[1783]: pam_unix(sudo:session): session closed for user root Nov 4 20:04:49.161934 sshd[1782]: Connection closed by 10.0.0.1 port 56328 Nov 4 20:04:49.162427 sshd-session[1778]: pam_unix(sshd:session): session closed for user core Nov 4 20:04:49.174349 systemd[1]: sshd@5-10.0.0.80:22-10.0.0.1:56328.service: Deactivated successfully. Nov 4 20:04:49.176745 systemd[1]: session-7.scope: Deactivated successfully. Nov 4 20:04:49.178208 systemd-logind[1575]: Session 7 logged out. Waiting for processes to exit. Nov 4 20:04:49.181366 systemd[1]: Started sshd@6-10.0.0.80:22-10.0.0.1:56330.service - OpenSSH per-connection server daemon (10.0.0.1:56330). 
Nov 4 20:04:49.182132 systemd-logind[1575]: Removed session 7. Nov 4 20:04:49.235527 sshd[1817]: Accepted publickey for core from 10.0.0.1 port 56330 ssh2: RSA SHA256:FD/6wCOEAK2oumu7YKYZjG9k48hMKxx8xD/1LBz1+Eg Nov 4 20:04:49.236987 sshd-session[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 20:04:49.241446 systemd-logind[1575]: New session 8 of user core. Nov 4 20:04:49.253130 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 4 20:04:49.266622 sudo[1822]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 4 20:04:49.266994 sudo[1822]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 20:04:50.085365 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 4 20:04:50.113302 (dockerd)[1843]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 4 20:04:50.608105 dockerd[1843]: time="2025-11-04T20:04:50.607970549Z" level=info msg="Starting up" Nov 4 20:04:50.609325 dockerd[1843]: time="2025-11-04T20:04:50.609264626Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 4 20:04:50.635425 dockerd[1843]: time="2025-11-04T20:04:50.635375723Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 4 20:04:51.031218 dockerd[1843]: time="2025-11-04T20:04:51.031062427Z" level=info msg="Loading containers: start." Nov 4 20:04:51.044061 kernel: Initializing XFRM netlink socket Nov 4 20:04:51.335947 systemd-networkd[1495]: docker0: Link UP Nov 4 20:04:51.342394 dockerd[1843]: time="2025-11-04T20:04:51.342346586Z" level=info msg="Loading containers: done." Nov 4 20:04:51.361114 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3045818700-merged.mount: Deactivated successfully. 
Nov 4 20:04:51.363446 dockerd[1843]: time="2025-11-04T20:04:51.363400740Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 4 20:04:51.363532 dockerd[1843]: time="2025-11-04T20:04:51.363510797Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 4 20:04:51.363633 dockerd[1843]: time="2025-11-04T20:04:51.363614701Z" level=info msg="Initializing buildkit" Nov 4 20:04:51.405358 dockerd[1843]: time="2025-11-04T20:04:51.405312109Z" level=info msg="Completed buildkit initialization" Nov 4 20:04:51.410897 dockerd[1843]: time="2025-11-04T20:04:51.410863079Z" level=info msg="Daemon has completed initialization" Nov 4 20:04:51.411010 dockerd[1843]: time="2025-11-04T20:04:51.410947818Z" level=info msg="API listen on /run/docker.sock" Nov 4 20:04:51.411177 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 4 20:04:52.617716 containerd[1607]: time="2025-11-04T20:04:52.617632623Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 4 20:04:53.267398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount348776519.mount: Deactivated successfully. 
Nov 4 20:04:54.375962 containerd[1607]: time="2025-11-04T20:04:54.375868553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 20:04:54.376684 containerd[1607]: time="2025-11-04T20:04:54.376645259Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=28442726" Nov 4 20:04:54.377804 containerd[1607]: time="2025-11-04T20:04:54.377775499Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 20:04:54.380550 containerd[1607]: time="2025-11-04T20:04:54.380495851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 20:04:54.381353 containerd[1607]: time="2025-11-04T20:04:54.381322180Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.763598847s" Nov 4 20:04:54.381392 containerd[1607]: time="2025-11-04T20:04:54.381360522Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 4 20:04:54.382116 containerd[1607]: time="2025-11-04T20:04:54.382089118Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 4 20:04:55.748185 containerd[1607]: time="2025-11-04T20:04:55.748077354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 20:04:55.749280 containerd[1607]: time="2025-11-04T20:04:55.749211080Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26012689" Nov 4 20:04:55.750812 containerd[1607]: time="2025-11-04T20:04:55.750742312Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 20:04:55.754173 containerd[1607]: time="2025-11-04T20:04:55.754114175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 20:04:55.755125 containerd[1607]: time="2025-11-04T20:04:55.755081620Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.372960671s" Nov 4 20:04:55.755125 containerd[1607]: time="2025-11-04T20:04:55.755118419Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 4 20:04:55.755769 containerd[1607]: time="2025-11-04T20:04:55.755746126Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 4 20:04:57.552092 containerd[1607]: time="2025-11-04T20:04:57.552031221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 20:04:57.553142 containerd[1607]: time="2025-11-04T20:04:57.553121767Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20147431" Nov 4 20:04:57.554475 containerd[1607]: time="2025-11-04T20:04:57.554433286Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 20:04:57.556789 containerd[1607]: time="2025-11-04T20:04:57.556749931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 20:04:57.557694 containerd[1607]: time="2025-11-04T20:04:57.557634329Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.80185945s" Nov 4 20:04:57.557694 containerd[1607]: time="2025-11-04T20:04:57.557690184Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 4 20:04:57.558357 containerd[1607]: time="2025-11-04T20:04:57.558332499Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 4 20:04:58.762145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2019834501.mount: Deactivated successfully. Nov 4 20:04:59.044466 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 4 20:04:59.047155 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 20:04:59.327896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 4 20:04:59.342294 (kubelet)[2145]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 20:04:59.461213 containerd[1607]: time="2025-11-04T20:04:59.461167133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 20:04:59.462172 containerd[1607]: time="2025-11-04T20:04:59.462118747Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31925747" Nov 4 20:04:59.463483 containerd[1607]: time="2025-11-04T20:04:59.463452839Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 20:04:59.466152 containerd[1607]: time="2025-11-04T20:04:59.466109161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 20:04:59.466891 containerd[1607]: time="2025-11-04T20:04:59.466834281Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.908467668s" Nov 4 20:04:59.466891 containerd[1607]: time="2025-11-04T20:04:59.466881509Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 4 20:04:59.467640 containerd[1607]: time="2025-11-04T20:04:59.467608923Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 4 20:04:59.633035 kubelet[2145]: E1104 20:04:59.631505 
2145 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 20:04:59.638921 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 20:04:59.639135 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 20:04:59.639607 systemd[1]: kubelet.service: Consumed 450ms CPU time, 111.4M memory peak. Nov 4 20:05:00.359979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3528305886.mount: Deactivated successfully. Nov 4 20:05:01.083988 containerd[1607]: time="2025-11-04T20:05:01.083911828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 20:05:01.085209 containerd[1607]: time="2025-11-04T20:05:01.084801116Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128597" Nov 4 20:05:01.086324 containerd[1607]: time="2025-11-04T20:05:01.086261956Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 20:05:01.089109 containerd[1607]: time="2025-11-04T20:05:01.089073027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 20:05:01.090331 containerd[1607]: time="2025-11-04T20:05:01.090259793Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.622610454s" Nov 4 20:05:01.090331 containerd[1607]: time="2025-11-04T20:05:01.090324043Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 4 20:05:01.090876 containerd[1607]: time="2025-11-04T20:05:01.090848156Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 4 20:05:01.643871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1747621055.mount: Deactivated successfully. Nov 4 20:05:01.650355 containerd[1607]: time="2025-11-04T20:05:01.650309246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 20:05:01.651124 containerd[1607]: time="2025-11-04T20:05:01.651093747Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 4 20:05:01.652235 containerd[1607]: time="2025-11-04T20:05:01.652196535Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 20:05:01.654575 containerd[1607]: time="2025-11-04T20:05:01.654509172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 20:05:01.655385 containerd[1607]: time="2025-11-04T20:05:01.655313090Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 564.429857ms" Nov 4 20:05:01.655422 containerd[1607]: time="2025-11-04T20:05:01.655380246Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 4 20:05:01.656032 containerd[1607]: time="2025-11-04T20:05:01.655984148Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 4 20:05:02.287222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2990809897.mount: Deactivated successfully. Nov 4 20:05:04.407108 containerd[1607]: time="2025-11-04T20:05:04.407040992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 20:05:04.407906 containerd[1607]: time="2025-11-04T20:05:04.407851141Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=56977083" Nov 4 20:05:04.409033 containerd[1607]: time="2025-11-04T20:05:04.408978836Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 20:05:04.411573 containerd[1607]: time="2025-11-04T20:05:04.411531002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 20:05:04.412822 containerd[1607]: time="2025-11-04T20:05:04.412777940Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size 
\"58938593\" in 2.756735343s" Nov 4 20:05:04.412822 containerd[1607]: time="2025-11-04T20:05:04.412814479Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 4 20:05:08.489450 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 20:05:08.489668 systemd[1]: kubelet.service: Consumed 450ms CPU time, 111.4M memory peak. Nov 4 20:05:08.492006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 20:05:08.518345 systemd[1]: Reload requested from client PID 2300 ('systemctl') (unit session-8.scope)... Nov 4 20:05:08.518362 systemd[1]: Reloading... Nov 4 20:05:08.610305 zram_generator::config[2347]: No configuration found. Nov 4 20:05:09.060286 systemd[1]: Reloading finished in 541 ms. Nov 4 20:05:09.132712 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 4 20:05:09.132809 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 4 20:05:09.133130 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 20:05:09.133172 systemd[1]: kubelet.service: Consumed 168ms CPU time, 98.2M memory peak. Nov 4 20:05:09.134669 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 20:05:09.308086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 20:05:09.313255 (kubelet)[2391]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 20:05:09.357174 kubelet[2391]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 20:05:09.357174 kubelet[2391]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Nov 4 20:05:09.357174 kubelet[2391]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 20:05:09.357577 kubelet[2391]: I1104 20:05:09.357231 2391 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 20:05:09.946272 kubelet[2391]: I1104 20:05:09.946221 2391 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 4 20:05:09.946272 kubelet[2391]: I1104 20:05:09.946250 2391 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 20:05:09.946492 kubelet[2391]: I1104 20:05:09.946467 2391 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 20:05:09.976318 kubelet[2391]: E1104 20:05:09.976268 2391 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 4 20:05:09.977220 kubelet[2391]: I1104 20:05:09.977164 2391 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 20:05:09.984052 kubelet[2391]: I1104 20:05:09.983729 2391 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 20:05:09.989945 kubelet[2391]: I1104 20:05:09.989913 2391 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 4 20:05:09.990302 kubelet[2391]: I1104 20:05:09.990243 2391 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 20:05:09.990515 kubelet[2391]: I1104 20:05:09.990278 2391 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 20:05:09.990632 kubelet[2391]: I1104 20:05:09.990535 2391 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 20:05:09.990632 
kubelet[2391]: I1104 20:05:09.990550 2391 container_manager_linux.go:303] "Creating device plugin manager" Nov 4 20:05:09.990754 kubelet[2391]: I1104 20:05:09.990731 2391 state_mem.go:36] "Initialized new in-memory state store" Nov 4 20:05:09.993206 kubelet[2391]: I1104 20:05:09.993176 2391 kubelet.go:480] "Attempting to sync node with API server" Nov 4 20:05:09.993206 kubelet[2391]: I1104 20:05:09.993201 2391 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 20:05:09.993268 kubelet[2391]: I1104 20:05:09.993244 2391 kubelet.go:386] "Adding apiserver pod source" Nov 4 20:05:09.993289 kubelet[2391]: I1104 20:05:09.993270 2391 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 20:05:10.004442 kubelet[2391]: E1104 20:05:10.003702 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 20:05:10.004442 kubelet[2391]: I1104 20:05:10.003806 2391 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 4 20:05:10.004442 kubelet[2391]: E1104 20:05:10.003878 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 20:05:10.004442 kubelet[2391]: I1104 20:05:10.004353 2391 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 20:05:10.005391 kubelet[2391]: W1104 20:05:10.005360 2391 
probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 4 20:05:10.007974 kubelet[2391]: I1104 20:05:10.007955 2391 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 4 20:05:10.008049 kubelet[2391]: I1104 20:05:10.008038 2391 server.go:1289] "Started kubelet" Nov 4 20:05:10.008897 kubelet[2391]: I1104 20:05:10.008815 2391 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 20:05:10.010788 kubelet[2391]: I1104 20:05:10.010749 2391 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 20:05:10.010846 kubelet[2391]: I1104 20:05:10.010822 2391 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 20:05:10.011834 kubelet[2391]: I1104 20:05:10.011782 2391 server.go:317] "Adding debug handlers to kubelet server" Nov 4 20:05:10.012061 kubelet[2391]: I1104 20:05:10.012044 2391 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 20:05:10.013198 kubelet[2391]: I1104 20:05:10.013169 2391 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 4 20:05:10.013432 kubelet[2391]: I1104 20:05:10.013404 2391 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 20:05:10.014053 kubelet[2391]: E1104 20:05:10.012928 2391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.80:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.80:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1874e66cfe6ad3ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-04 20:05:10.007976876 +0000 UTC m=+0.690000799,LastTimestamp:2025-11-04 20:05:10.007976876 +0000 UTC m=+0.690000799,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 4 20:05:10.014739 kubelet[2391]: E1104 20:05:10.014712 2391 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 20:05:10.014897 kubelet[2391]: E1104 20:05:10.014859 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 20:05:10.014993 kubelet[2391]: I1104 20:05:10.014960 2391 reconciler.go:26] "Reconciler: start to sync state" Nov 4 20:05:10.015187 kubelet[2391]: I1104 20:05:10.015162 2391 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 4 20:05:10.015737 kubelet[2391]: E1104 20:05:10.015712 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 20:05:10.015737 kubelet[2391]: E1104 20:05:10.015725 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="200ms" Nov 4 20:05:10.016203 kubelet[2391]: I1104 20:05:10.016185 2391 factory.go:223] Registration of the systemd container factory successfully Nov 4 20:05:10.016297 kubelet[2391]: I1104 20:05:10.016280 2391 factory.go:221] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 20:05:10.018031 kubelet[2391]: I1104 20:05:10.017250 2391 factory.go:223] Registration of the containerd container factory successfully Nov 4 20:05:10.031241 kubelet[2391]: I1104 20:05:10.031222 2391 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 20:05:10.031241 kubelet[2391]: I1104 20:05:10.031238 2391 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 20:05:10.031319 kubelet[2391]: I1104 20:05:10.031253 2391 state_mem.go:36] "Initialized new in-memory state store" Nov 4 20:05:10.032754 kubelet[2391]: I1104 20:05:10.032729 2391 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 4 20:05:10.034208 kubelet[2391]: I1104 20:05:10.034179 2391 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 4 20:05:10.034253 kubelet[2391]: I1104 20:05:10.034229 2391 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 4 20:05:10.034434 kubelet[2391]: I1104 20:05:10.034257 2391 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 4 20:05:10.034473 kubelet[2391]: I1104 20:05:10.034436 2391 kubelet.go:2436] "Starting kubelet main sync loop" Nov 4 20:05:10.034555 kubelet[2391]: E1104 20:05:10.034515 2391 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 20:05:10.035261 kubelet[2391]: E1104 20:05:10.035237 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 20:05:10.049148 kubelet[2391]: I1104 20:05:10.049108 2391 policy_none.go:49] "None policy: Start" Nov 4 20:05:10.049148 kubelet[2391]: I1104 20:05:10.049147 2391 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 4 20:05:10.049275 kubelet[2391]: I1104 20:05:10.049166 2391 state_mem.go:35] "Initializing new in-memory state store" Nov 4 20:05:10.055112 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 4 20:05:10.074337 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 4 20:05:10.080095 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 4 20:05:10.097938 kubelet[2391]: E1104 20:05:10.097910 2391 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 20:05:10.098173 kubelet[2391]: I1104 20:05:10.098147 2391 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 20:05:10.098173 kubelet[2391]: I1104 20:05:10.098167 2391 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 20:05:10.098444 kubelet[2391]: I1104 20:05:10.098414 2391 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 20:05:10.099440 kubelet[2391]: E1104 20:05:10.099394 2391 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 4 20:05:10.099558 kubelet[2391]: E1104 20:05:10.099456 2391 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 4 20:05:10.146167 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Nov 4 20:05:10.158862 kubelet[2391]: E1104 20:05:10.158819 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 20:05:10.162167 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. 
Nov 4 20:05:10.163805 kubelet[2391]: E1104 20:05:10.163784 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 20:05:10.165412 systemd[1]: Created slice kubepods-burstable-pod05ae9ee89802c057065be6d87df4bdd4.slice - libcontainer container kubepods-burstable-pod05ae9ee89802c057065be6d87df4bdd4.slice. Nov 4 20:05:10.166873 kubelet[2391]: E1104 20:05:10.166834 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 20:05:10.199819 kubelet[2391]: I1104 20:05:10.199743 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 20:05:10.200118 kubelet[2391]: E1104 20:05:10.200081 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Nov 4 20:05:10.216419 kubelet[2391]: I1104 20:05:10.216399 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05ae9ee89802c057065be6d87df4bdd4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"05ae9ee89802c057065be6d87df4bdd4\") " pod="kube-system/kube-apiserver-localhost" Nov 4 20:05:10.216471 kubelet[2391]: I1104 20:05:10.216425 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 20:05:10.216471 kubelet[2391]: I1104 20:05:10.216447 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 20:05:10.216471 kubelet[2391]: I1104 20:05:10.216464 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/05ae9ee89802c057065be6d87df4bdd4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"05ae9ee89802c057065be6d87df4bdd4\") " pod="kube-system/kube-apiserver-localhost" Nov 4 20:05:10.216549 kubelet[2391]: I1104 20:05:10.216481 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 20:05:10.216549 kubelet[2391]: I1104 20:05:10.216496 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 20:05:10.216549 kubelet[2391]: I1104 20:05:10.216512 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 20:05:10.216549 kubelet[2391]: I1104 20:05:10.216544 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 4 20:05:10.216633 kubelet[2391]: I1104 20:05:10.216594 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05ae9ee89802c057065be6d87df4bdd4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"05ae9ee89802c057065be6d87df4bdd4\") " pod="kube-system/kube-apiserver-localhost" Nov 4 20:05:10.216759 kubelet[2391]: E1104 20:05:10.216729 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="400ms" Nov 4 20:05:10.401544 kubelet[2391]: I1104 20:05:10.401509 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 20:05:10.401894 kubelet[2391]: E1104 20:05:10.401819 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Nov 4 20:05:10.459678 kubelet[2391]: E1104 20:05:10.459573 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:10.460221 containerd[1607]: time="2025-11-04T20:05:10.460165451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Nov 4 20:05:10.464428 kubelet[2391]: E1104 20:05:10.464407 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:10.464685 containerd[1607]: time="2025-11-04T20:05:10.464658216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Nov 4 20:05:10.468107 kubelet[2391]: E1104 20:05:10.468072 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:10.468529 containerd[1607]: time="2025-11-04T20:05:10.468489381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:05ae9ee89802c057065be6d87df4bdd4,Namespace:kube-system,Attempt:0,}" Nov 4 20:05:10.496747 containerd[1607]: time="2025-11-04T20:05:10.496686771Z" level=info msg="connecting to shim ab8a5ffd248e40fd33206a516099b483774ba1afa73b53720e677686b80d8b53" address="unix:///run/containerd/s/334c5e119cbfc87a09731d93a26f8d09cc2c0c7b4c612ec144890f1b3b482615" namespace=k8s.io protocol=ttrpc version=3 Nov 4 20:05:10.510054 containerd[1607]: time="2025-11-04T20:05:10.509438415Z" level=info msg="connecting to shim 125e441f223c7178b2c406cc725f0d3002d6bb60e84aca1e74be408b33b0d31d" address="unix:///run/containerd/s/333d5c146b48661079d5e72c8d5e4a4e5d5c8207c75281cce45ba592f033cc8a" namespace=k8s.io protocol=ttrpc version=3 Nov 4 20:05:10.521200 containerd[1607]: time="2025-11-04T20:05:10.521149698Z" level=info msg="connecting to shim 4154b40d3f09941e3c48383fcc4db0297545d5c20d3226ad3fa600358a013a82" address="unix:///run/containerd/s/a6b65295768b246e97e28db0b6351a0eb7e41e93f977a12d9d97edb964f387fb" namespace=k8s.io protocol=ttrpc version=3 Nov 4 20:05:10.528158 systemd[1]: Started cri-containerd-ab8a5ffd248e40fd33206a516099b483774ba1afa73b53720e677686b80d8b53.scope - libcontainer container ab8a5ffd248e40fd33206a516099b483774ba1afa73b53720e677686b80d8b53. 
Nov 4 20:05:10.531669 systemd[1]: Started cri-containerd-125e441f223c7178b2c406cc725f0d3002d6bb60e84aca1e74be408b33b0d31d.scope - libcontainer container 125e441f223c7178b2c406cc725f0d3002d6bb60e84aca1e74be408b33b0d31d. Nov 4 20:05:10.551338 systemd[1]: Started cri-containerd-4154b40d3f09941e3c48383fcc4db0297545d5c20d3226ad3fa600358a013a82.scope - libcontainer container 4154b40d3f09941e3c48383fcc4db0297545d5c20d3226ad3fa600358a013a82. Nov 4 20:05:10.582345 containerd[1607]: time="2025-11-04T20:05:10.581899225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab8a5ffd248e40fd33206a516099b483774ba1afa73b53720e677686b80d8b53\"" Nov 4 20:05:10.585024 kubelet[2391]: E1104 20:05:10.584130 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:10.590632 containerd[1607]: time="2025-11-04T20:05:10.590600974Z" level=info msg="CreateContainer within sandbox \"ab8a5ffd248e40fd33206a516099b483774ba1afa73b53720e677686b80d8b53\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 4 20:05:10.599134 containerd[1607]: time="2025-11-04T20:05:10.599098610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"125e441f223c7178b2c406cc725f0d3002d6bb60e84aca1e74be408b33b0d31d\"" Nov 4 20:05:10.599894 kubelet[2391]: E1104 20:05:10.599864 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:10.600900 containerd[1607]: time="2025-11-04T20:05:10.600876564Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:05ae9ee89802c057065be6d87df4bdd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4154b40d3f09941e3c48383fcc4db0297545d5c20d3226ad3fa600358a013a82\"" Nov 4 20:05:10.601319 kubelet[2391]: E1104 20:05:10.601298 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:10.603196 containerd[1607]: time="2025-11-04T20:05:10.603147192Z" level=info msg="Container e320073bc5d4fa1047c224623b6451e30d95cd962ad96554f8ee9c1372f737fc: CDI devices from CRI Config.CDIDevices: []" Nov 4 20:05:10.603936 containerd[1607]: time="2025-11-04T20:05:10.603882041Z" level=info msg="CreateContainer within sandbox \"125e441f223c7178b2c406cc725f0d3002d6bb60e84aca1e74be408b33b0d31d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 4 20:05:10.606372 containerd[1607]: time="2025-11-04T20:05:10.606344909Z" level=info msg="CreateContainer within sandbox \"4154b40d3f09941e3c48383fcc4db0297545d5c20d3226ad3fa600358a013a82\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 4 20:05:10.610794 containerd[1607]: time="2025-11-04T20:05:10.610763125Z" level=info msg="CreateContainer within sandbox \"ab8a5ffd248e40fd33206a516099b483774ba1afa73b53720e677686b80d8b53\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e320073bc5d4fa1047c224623b6451e30d95cd962ad96554f8ee9c1372f737fc\"" Nov 4 20:05:10.611339 containerd[1607]: time="2025-11-04T20:05:10.611296155Z" level=info msg="StartContainer for \"e320073bc5d4fa1047c224623b6451e30d95cd962ad96554f8ee9c1372f737fc\"" Nov 4 20:05:10.612441 containerd[1607]: time="2025-11-04T20:05:10.612416706Z" level=info msg="connecting to shim e320073bc5d4fa1047c224623b6451e30d95cd962ad96554f8ee9c1372f737fc" address="unix:///run/containerd/s/334c5e119cbfc87a09731d93a26f8d09cc2c0c7b4c612ec144890f1b3b482615" 
protocol=ttrpc version=3 Nov 4 20:05:10.618087 kubelet[2391]: E1104 20:05:10.618055 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="800ms" Nov 4 20:05:10.618286 containerd[1607]: time="2025-11-04T20:05:10.618249915Z" level=info msg="Container 59cbe9da642d9190ef431b6af7963d4258147c0104b145357c4731dbcc6e11d9: CDI devices from CRI Config.CDIDevices: []" Nov 4 20:05:10.620095 containerd[1607]: time="2025-11-04T20:05:10.620065100Z" level=info msg="Container f5990475ad25c31bbdf86172b95d80fa86882e558261db581c646e2df71969f3: CDI devices from CRI Config.CDIDevices: []" Nov 4 20:05:10.625349 containerd[1607]: time="2025-11-04T20:05:10.625320195Z" level=info msg="CreateContainer within sandbox \"125e441f223c7178b2c406cc725f0d3002d6bb60e84aca1e74be408b33b0d31d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"59cbe9da642d9190ef431b6af7963d4258147c0104b145357c4731dbcc6e11d9\"" Nov 4 20:05:10.625729 containerd[1607]: time="2025-11-04T20:05:10.625711298Z" level=info msg="StartContainer for \"59cbe9da642d9190ef431b6af7963d4258147c0104b145357c4731dbcc6e11d9\"" Nov 4 20:05:10.626844 containerd[1607]: time="2025-11-04T20:05:10.626655810Z" level=info msg="connecting to shim 59cbe9da642d9190ef431b6af7963d4258147c0104b145357c4731dbcc6e11d9" address="unix:///run/containerd/s/333d5c146b48661079d5e72c8d5e4a4e5d5c8207c75281cce45ba592f033cc8a" protocol=ttrpc version=3 Nov 4 20:05:10.628413 containerd[1607]: time="2025-11-04T20:05:10.628384211Z" level=info msg="CreateContainer within sandbox \"4154b40d3f09941e3c48383fcc4db0297545d5c20d3226ad3fa600358a013a82\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f5990475ad25c31bbdf86172b95d80fa86882e558261db581c646e2df71969f3\"" Nov 4 20:05:10.628779 containerd[1607]: 
time="2025-11-04T20:05:10.628746951Z" level=info msg="StartContainer for \"f5990475ad25c31bbdf86172b95d80fa86882e558261db581c646e2df71969f3\"" Nov 4 20:05:10.630022 containerd[1607]: time="2025-11-04T20:05:10.629973301Z" level=info msg="connecting to shim f5990475ad25c31bbdf86172b95d80fa86882e558261db581c646e2df71969f3" address="unix:///run/containerd/s/a6b65295768b246e97e28db0b6351a0eb7e41e93f977a12d9d97edb964f387fb" protocol=ttrpc version=3 Nov 4 20:05:10.632157 systemd[1]: Started cri-containerd-e320073bc5d4fa1047c224623b6451e30d95cd962ad96554f8ee9c1372f737fc.scope - libcontainer container e320073bc5d4fa1047c224623b6451e30d95cd962ad96554f8ee9c1372f737fc. Nov 4 20:05:10.655338 systemd[1]: Started cri-containerd-f5990475ad25c31bbdf86172b95d80fa86882e558261db581c646e2df71969f3.scope - libcontainer container f5990475ad25c31bbdf86172b95d80fa86882e558261db581c646e2df71969f3. Nov 4 20:05:10.658575 systemd[1]: Started cri-containerd-59cbe9da642d9190ef431b6af7963d4258147c0104b145357c4731dbcc6e11d9.scope - libcontainer container 59cbe9da642d9190ef431b6af7963d4258147c0104b145357c4731dbcc6e11d9. 
Nov 4 20:05:10.705375 containerd[1607]: time="2025-11-04T20:05:10.705329391Z" level=info msg="StartContainer for \"e320073bc5d4fa1047c224623b6451e30d95cd962ad96554f8ee9c1372f737fc\" returns successfully" Nov 4 20:05:10.710384 containerd[1607]: time="2025-11-04T20:05:10.710293099Z" level=info msg="StartContainer for \"f5990475ad25c31bbdf86172b95d80fa86882e558261db581c646e2df71969f3\" returns successfully" Nov 4 20:05:10.719415 containerd[1607]: time="2025-11-04T20:05:10.719052296Z" level=info msg="StartContainer for \"59cbe9da642d9190ef431b6af7963d4258147c0104b145357c4731dbcc6e11d9\" returns successfully" Nov 4 20:05:10.803094 kubelet[2391]: I1104 20:05:10.803052 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 20:05:11.048073 kubelet[2391]: E1104 20:05:11.047771 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 20:05:11.048073 kubelet[2391]: E1104 20:05:11.047992 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:11.051038 kubelet[2391]: E1104 20:05:11.050996 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 20:05:11.052030 kubelet[2391]: E1104 20:05:11.051144 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:11.053710 kubelet[2391]: E1104 20:05:11.053685 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 20:05:11.053812 kubelet[2391]: E1104 20:05:11.053792 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:12.056718 kubelet[2391]: E1104 20:05:12.056669 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 20:05:12.057267 kubelet[2391]: E1104 20:05:12.056860 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:12.057267 kubelet[2391]: E1104 20:05:12.056887 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 20:05:12.057267 kubelet[2391]: E1104 20:05:12.057115 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:12.661108 kubelet[2391]: E1104 20:05:12.661055 2391 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 4 20:05:12.754491 kubelet[2391]: I1104 20:05:12.754447 2391 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 4 20:05:12.754741 kubelet[2391]: E1104 20:05:12.754624 2391 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 4 20:05:12.816601 kubelet[2391]: I1104 20:05:12.816555 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 20:05:12.821119 kubelet[2391]: E1104 20:05:12.821093 2391 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 4 20:05:12.821119 
kubelet[2391]: I1104 20:05:12.821113 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 20:05:12.822410 kubelet[2391]: E1104 20:05:12.822389 2391 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 4 20:05:12.822410 kubelet[2391]: I1104 20:05:12.822406 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 20:05:12.823710 kubelet[2391]: E1104 20:05:12.823682 2391 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 4 20:05:13.000367 kubelet[2391]: I1104 20:05:13.000248 2391 apiserver.go:52] "Watching apiserver" Nov 4 20:05:13.015703 kubelet[2391]: I1104 20:05:13.015637 2391 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 4 20:05:13.056311 kubelet[2391]: I1104 20:05:13.056285 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 20:05:13.057831 kubelet[2391]: E1104 20:05:13.057800 2391 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 4 20:05:13.058191 kubelet[2391]: E1104 20:05:13.057947 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:14.440870 kubelet[2391]: I1104 20:05:14.440829 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 20:05:14.444970 kubelet[2391]: E1104 
20:05:14.444943 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:14.760318 systemd[1]: Reload requested from client PID 2674 ('systemctl') (unit session-8.scope)... Nov 4 20:05:14.760337 systemd[1]: Reloading... Nov 4 20:05:14.859061 zram_generator::config[2718]: No configuration found. Nov 4 20:05:15.060193 kubelet[2391]: E1104 20:05:15.060064 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:15.114545 systemd[1]: Reloading finished in 353 ms. Nov 4 20:05:15.141006 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 20:05:15.168200 systemd[1]: kubelet.service: Deactivated successfully. Nov 4 20:05:15.168524 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 20:05:15.168571 systemd[1]: kubelet.service: Consumed 1.191s CPU time, 131.4M memory peak. Nov 4 20:05:15.170366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 20:05:15.388690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 20:05:15.400396 (kubelet)[2763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 20:05:15.447097 kubelet[2763]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 20:05:15.447097 kubelet[2763]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 4 20:05:15.447097 kubelet[2763]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 20:05:15.447487 kubelet[2763]: I1104 20:05:15.447126 2763 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 20:05:15.455501 kubelet[2763]: I1104 20:05:15.455469 2763 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 4 20:05:15.455501 kubelet[2763]: I1104 20:05:15.455489 2763 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 20:05:15.455663 kubelet[2763]: I1104 20:05:15.455643 2763 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 20:05:15.456649 kubelet[2763]: I1104 20:05:15.456625 2763 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 4 20:05:15.458723 kubelet[2763]: I1104 20:05:15.458680 2763 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 20:05:15.463431 kubelet[2763]: I1104 20:05:15.463400 2763 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 20:05:15.467934 kubelet[2763]: I1104 20:05:15.467897 2763 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 4 20:05:15.468207 kubelet[2763]: I1104 20:05:15.468167 2763 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 20:05:15.468341 kubelet[2763]: I1104 20:05:15.468196 2763 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 20:05:15.468423 kubelet[2763]: I1104 20:05:15.468345 2763 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 20:05:15.468423 
kubelet[2763]: I1104 20:05:15.468354 2763 container_manager_linux.go:303] "Creating device plugin manager" Nov 4 20:05:15.468423 kubelet[2763]: I1104 20:05:15.468398 2763 state_mem.go:36] "Initialized new in-memory state store" Nov 4 20:05:15.468592 kubelet[2763]: I1104 20:05:15.468574 2763 kubelet.go:480] "Attempting to sync node with API server" Nov 4 20:05:15.468592 kubelet[2763]: I1104 20:05:15.468589 2763 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 20:05:15.468639 kubelet[2763]: I1104 20:05:15.468611 2763 kubelet.go:386] "Adding apiserver pod source" Nov 4 20:05:15.468639 kubelet[2763]: I1104 20:05:15.468628 2763 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 20:05:15.469425 kubelet[2763]: I1104 20:05:15.469403 2763 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 4 20:05:15.470038 kubelet[2763]: I1104 20:05:15.469821 2763 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 20:05:15.476817 kubelet[2763]: I1104 20:05:15.473438 2763 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 4 20:05:15.476817 kubelet[2763]: I1104 20:05:15.473487 2763 server.go:1289] "Started kubelet" Nov 4 20:05:15.476817 kubelet[2763]: I1104 20:05:15.473566 2763 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 20:05:15.476817 kubelet[2763]: I1104 20:05:15.474570 2763 server.go:317] "Adding debug handlers to kubelet server" Nov 4 20:05:15.476817 kubelet[2763]: I1104 20:05:15.476337 2763 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 20:05:15.477719 kubelet[2763]: I1104 20:05:15.477659 2763 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 20:05:15.480035 kubelet[2763]: I1104 
20:05:15.478307 2763 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 20:05:15.480035 kubelet[2763]: I1104 20:05:15.478547 2763 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 20:05:15.480656 kubelet[2763]: I1104 20:05:15.480631 2763 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 4 20:05:15.481062 kubelet[2763]: E1104 20:05:15.481038 2763 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 20:05:15.481610 kubelet[2763]: I1104 20:05:15.481589 2763 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 4 20:05:15.481752 kubelet[2763]: I1104 20:05:15.481735 2763 reconciler.go:26] "Reconciler: start to sync state" Nov 4 20:05:15.486782 kubelet[2763]: I1104 20:05:15.486766 2763 factory.go:223] Registration of the systemd container factory successfully Nov 4 20:05:15.486957 kubelet[2763]: I1104 20:05:15.486938 2763 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 20:05:15.488828 kubelet[2763]: E1104 20:05:15.488799 2763 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 20:05:15.489238 kubelet[2763]: I1104 20:05:15.489224 2763 factory.go:223] Registration of the containerd container factory successfully Nov 4 20:05:15.490261 kubelet[2763]: I1104 20:05:15.490217 2763 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 4 20:05:15.497587 kubelet[2763]: I1104 20:05:15.497557 2763 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Nov 4 20:05:15.497643 kubelet[2763]: I1104 20:05:15.497601 2763 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 4 20:05:15.497643 kubelet[2763]: I1104 20:05:15.497624 2763 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 4 20:05:15.497643 kubelet[2763]: I1104 20:05:15.497632 2763 kubelet.go:2436] "Starting kubelet main sync loop" Nov 4 20:05:15.497812 kubelet[2763]: E1104 20:05:15.497778 2763 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 20:05:15.524488 kubelet[2763]: I1104 20:05:15.524452 2763 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 20:05:15.524488 kubelet[2763]: I1104 20:05:15.524468 2763 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 20:05:15.524488 kubelet[2763]: I1104 20:05:15.524487 2763 state_mem.go:36] "Initialized new in-memory state store" Nov 4 20:05:15.524652 kubelet[2763]: I1104 20:05:15.524628 2763 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 4 20:05:15.524684 kubelet[2763]: I1104 20:05:15.524643 2763 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 4 20:05:15.524684 kubelet[2763]: I1104 20:05:15.524660 2763 policy_none.go:49] "None policy: Start" Nov 4 20:05:15.524684 kubelet[2763]: I1104 20:05:15.524669 2763 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 4 20:05:15.524684 kubelet[2763]: I1104 20:05:15.524679 2763 state_mem.go:35] "Initializing new in-memory state store" Nov 4 20:05:15.524802 kubelet[2763]: I1104 20:05:15.524761 2763 state_mem.go:75] "Updated machine memory state" Nov 4 20:05:15.531835 kubelet[2763]: E1104 20:05:15.531798 2763 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 20:05:15.531986 kubelet[2763]: I1104 20:05:15.531964 
2763 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 20:05:15.532069 kubelet[2763]: I1104 20:05:15.531983 2763 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 20:05:15.532273 kubelet[2763]: I1104 20:05:15.532258 2763 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 20:05:15.532932 kubelet[2763]: E1104 20:05:15.532915 2763 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 4 20:05:15.598830 kubelet[2763]: I1104 20:05:15.598794 2763 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 20:05:15.598917 kubelet[2763]: I1104 20:05:15.598887 2763 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 20:05:15.599056 kubelet[2763]: I1104 20:05:15.599009 2763 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 20:05:15.604695 kubelet[2763]: E1104 20:05:15.604670 2763 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 4 20:05:15.638107 kubelet[2763]: I1104 20:05:15.638086 2763 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 20:05:15.643378 kubelet[2763]: I1104 20:05:15.643295 2763 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 4 20:05:15.643378 kubelet[2763]: I1104 20:05:15.643380 2763 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 4 20:05:15.783156 kubelet[2763]: I1104 20:05:15.783102 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05ae9ee89802c057065be6d87df4bdd4-ca-certs\") pod \"kube-apiserver-localhost\" 
(UID: \"05ae9ee89802c057065be6d87df4bdd4\") " pod="kube-system/kube-apiserver-localhost" Nov 4 20:05:15.783156 kubelet[2763]: I1104 20:05:15.783140 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/05ae9ee89802c057065be6d87df4bdd4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"05ae9ee89802c057065be6d87df4bdd4\") " pod="kube-system/kube-apiserver-localhost" Nov 4 20:05:15.783271 kubelet[2763]: I1104 20:05:15.783165 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05ae9ee89802c057065be6d87df4bdd4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"05ae9ee89802c057065be6d87df4bdd4\") " pod="kube-system/kube-apiserver-localhost" Nov 4 20:05:15.783271 kubelet[2763]: I1104 20:05:15.783183 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 20:05:15.783271 kubelet[2763]: I1104 20:05:15.783201 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 20:05:15.783271 kubelet[2763]: I1104 20:05:15.783217 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 4 20:05:15.783393 kubelet[2763]: I1104 20:05:15.783282 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 20:05:15.783393 kubelet[2763]: I1104 20:05:15.783314 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 20:05:15.783446 kubelet[2763]: I1104 20:05:15.783389 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 20:05:15.904770 kubelet[2763]: E1104 20:05:15.904367 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:15.904770 kubelet[2763]: E1104 20:05:15.904655 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:15.904859 kubelet[2763]: E1104 20:05:15.904814 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:16.470072 kubelet[2763]: I1104 20:05:16.470036 2763 apiserver.go:52] "Watching apiserver" Nov 4 20:05:16.482668 kubelet[2763]: I1104 20:05:16.482626 2763 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 4 20:05:16.511796 kubelet[2763]: I1104 20:05:16.511748 2763 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 20:05:16.512144 kubelet[2763]: E1104 20:05:16.511914 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:16.512247 kubelet[2763]: I1104 20:05:16.512068 2763 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 20:05:16.519058 kubelet[2763]: E1104 20:05:16.518539 2763 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 4 20:05:16.519058 kubelet[2763]: E1104 20:05:16.518689 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:16.519058 kubelet[2763]: E1104 20:05:16.518980 2763 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 4 20:05:16.519205 kubelet[2763]: E1104 20:05:16.519118 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:16.530197 kubelet[2763]: I1104 20:05:16.530122 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.530078945 podStartE2EDuration="2.530078945s" 
podCreationTimestamp="2025-11-04 20:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 20:05:16.529619544 +0000 UTC m=+1.124317127" watchObservedRunningTime="2025-11-04 20:05:16.530078945 +0000 UTC m=+1.124776528" Nov 4 20:05:16.536594 kubelet[2763]: I1104 20:05:16.536550 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.536535333 podStartE2EDuration="1.536535333s" podCreationTimestamp="2025-11-04 20:05:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 20:05:16.536461805 +0000 UTC m=+1.131159388" watchObservedRunningTime="2025-11-04 20:05:16.536535333 +0000 UTC m=+1.131232916" Nov 4 20:05:16.543601 kubelet[2763]: I1104 20:05:16.543544 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.543521765 podStartE2EDuration="1.543521765s" podCreationTimestamp="2025-11-04 20:05:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 20:05:16.543165768 +0000 UTC m=+1.137863351" watchObservedRunningTime="2025-11-04 20:05:16.543521765 +0000 UTC m=+1.138219348" Nov 4 20:05:17.513661 kubelet[2763]: E1104 20:05:17.513618 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:17.514159 kubelet[2763]: E1104 20:05:17.514141 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:20.656673 kubelet[2763]: I1104 20:05:20.656629 2763 
kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 4 20:05:20.657219 containerd[1607]: time="2025-11-04T20:05:20.657089901Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 4 20:05:20.657448 kubelet[2763]: I1104 20:05:20.657306 2763 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 4 20:05:21.720814 systemd[1]: Created slice kubepods-besteffort-podbf58f5f2_ff90_458f_b3b1_f9f64efd4be0.slice - libcontainer container kubepods-besteffort-podbf58f5f2_ff90_458f_b3b1_f9f64efd4be0.slice. Nov 4 20:05:21.723215 kubelet[2763]: I1104 20:05:21.723173 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bf58f5f2-ff90-458f-b3b1-f9f64efd4be0-kube-proxy\") pod \"kube-proxy-hkflp\" (UID: \"bf58f5f2-ff90-458f-b3b1-f9f64efd4be0\") " pod="kube-system/kube-proxy-hkflp" Nov 4 20:05:21.723215 kubelet[2763]: I1104 20:05:21.723203 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf58f5f2-ff90-458f-b3b1-f9f64efd4be0-xtables-lock\") pod \"kube-proxy-hkflp\" (UID: \"bf58f5f2-ff90-458f-b3b1-f9f64efd4be0\") " pod="kube-system/kube-proxy-hkflp" Nov 4 20:05:21.723215 kubelet[2763]: I1104 20:05:21.723218 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf58f5f2-ff90-458f-b3b1-f9f64efd4be0-lib-modules\") pod \"kube-proxy-hkflp\" (UID: \"bf58f5f2-ff90-458f-b3b1-f9f64efd4be0\") " pod="kube-system/kube-proxy-hkflp" Nov 4 20:05:21.723533 kubelet[2763]: I1104 20:05:21.723232 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8wq9\" (UniqueName: 
\"kubernetes.io/projected/bf58f5f2-ff90-458f-b3b1-f9f64efd4be0-kube-api-access-n8wq9\") pod \"kube-proxy-hkflp\" (UID: \"bf58f5f2-ff90-458f-b3b1-f9f64efd4be0\") " pod="kube-system/kube-proxy-hkflp" Nov 4 20:05:21.922719 systemd[1]: Created slice kubepods-besteffort-podecc1c454_bae7_4774_a1cf_2e6d3d20ffee.slice - libcontainer container kubepods-besteffort-podecc1c454_bae7_4774_a1cf_2e6d3d20ffee.slice. Nov 4 20:05:21.924539 kubelet[2763]: I1104 20:05:21.924512 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ecc1c454-bae7-4774-a1cf-2e6d3d20ffee-var-lib-calico\") pod \"tigera-operator-7dcd859c48-ml82p\" (UID: \"ecc1c454-bae7-4774-a1cf-2e6d3d20ffee\") " pod="tigera-operator/tigera-operator-7dcd859c48-ml82p" Nov 4 20:05:21.925146 kubelet[2763]: I1104 20:05:21.925121 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjjfb\" (UniqueName: \"kubernetes.io/projected/ecc1c454-bae7-4774-a1cf-2e6d3d20ffee-kube-api-access-rjjfb\") pod \"tigera-operator-7dcd859c48-ml82p\" (UID: \"ecc1c454-bae7-4774-a1cf-2e6d3d20ffee\") " pod="tigera-operator/tigera-operator-7dcd859c48-ml82p" Nov 4 20:05:22.032470 kubelet[2763]: E1104 20:05:22.031906 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:22.032794 containerd[1607]: time="2025-11-04T20:05:22.032748748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hkflp,Uid:bf58f5f2-ff90-458f-b3b1-f9f64efd4be0,Namespace:kube-system,Attempt:0,}" Nov 4 20:05:22.079902 containerd[1607]: time="2025-11-04T20:05:22.079844997Z" level=info msg="connecting to shim 9289edbafb22ec278534330907d05107f554431baf9d8b72f3cd1723dcd8e197" address="unix:///run/containerd/s/cfe27bd946622f818bf7d60d95ad8b5ab41b09345d4112d53b35a69ece1372ea" 
namespace=k8s.io protocol=ttrpc version=3 Nov 4 20:05:22.126107 kubelet[2763]: E1104 20:05:22.126009 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:22.163234 systemd[1]: Started cri-containerd-9289edbafb22ec278534330907d05107f554431baf9d8b72f3cd1723dcd8e197.scope - libcontainer container 9289edbafb22ec278534330907d05107f554431baf9d8b72f3cd1723dcd8e197. Nov 4 20:05:22.195134 containerd[1607]: time="2025-11-04T20:05:22.195090349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hkflp,Uid:bf58f5f2-ff90-458f-b3b1-f9f64efd4be0,Namespace:kube-system,Attempt:0,} returns sandbox id \"9289edbafb22ec278534330907d05107f554431baf9d8b72f3cd1723dcd8e197\"" Nov 4 20:05:22.196004 kubelet[2763]: E1104 20:05:22.195966 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:22.200811 containerd[1607]: time="2025-11-04T20:05:22.200781350Z" level=info msg="CreateContainer within sandbox \"9289edbafb22ec278534330907d05107f554431baf9d8b72f3cd1723dcd8e197\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 4 20:05:22.212801 containerd[1607]: time="2025-11-04T20:05:22.212738486Z" level=info msg="Container b3e36dbf974e02552527d3715e32449c3276379ec10aec98b918d5ce16ce1f6e: CDI devices from CRI Config.CDIDevices: []" Nov 4 20:05:22.222083 containerd[1607]: time="2025-11-04T20:05:22.222030719Z" level=info msg="CreateContainer within sandbox \"9289edbafb22ec278534330907d05107f554431baf9d8b72f3cd1723dcd8e197\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b3e36dbf974e02552527d3715e32449c3276379ec10aec98b918d5ce16ce1f6e\"" Nov 4 20:05:22.222552 containerd[1607]: time="2025-11-04T20:05:22.222530402Z" level=info msg="StartContainer for 
\"b3e36dbf974e02552527d3715e32449c3276379ec10aec98b918d5ce16ce1f6e\"" Nov 4 20:05:22.223801 containerd[1607]: time="2025-11-04T20:05:22.223767892Z" level=info msg="connecting to shim b3e36dbf974e02552527d3715e32449c3276379ec10aec98b918d5ce16ce1f6e" address="unix:///run/containerd/s/cfe27bd946622f818bf7d60d95ad8b5ab41b09345d4112d53b35a69ece1372ea" protocol=ttrpc version=3 Nov 4 20:05:22.226528 containerd[1607]: time="2025-11-04T20:05:22.226486114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-ml82p,Uid:ecc1c454-bae7-4774-a1cf-2e6d3d20ffee,Namespace:tigera-operator,Attempt:0,}" Nov 4 20:05:22.247383 systemd[1]: Started cri-containerd-b3e36dbf974e02552527d3715e32449c3276379ec10aec98b918d5ce16ce1f6e.scope - libcontainer container b3e36dbf974e02552527d3715e32449c3276379ec10aec98b918d5ce16ce1f6e. Nov 4 20:05:22.250323 containerd[1607]: time="2025-11-04T20:05:22.250273360Z" level=info msg="connecting to shim ade3b8d101aa227dd851252994a88f368df2585aeaf0f10e6bf79bc37fd22dcd" address="unix:///run/containerd/s/76b6fc8d3136a3fa6017aa2aa5b8429ace387707604b122d0677d67eedbae48a" namespace=k8s.io protocol=ttrpc version=3 Nov 4 20:05:22.279138 systemd[1]: Started cri-containerd-ade3b8d101aa227dd851252994a88f368df2585aeaf0f10e6bf79bc37fd22dcd.scope - libcontainer container ade3b8d101aa227dd851252994a88f368df2585aeaf0f10e6bf79bc37fd22dcd. 
Nov 4 20:05:22.306988 containerd[1607]: time="2025-11-04T20:05:22.306567836Z" level=info msg="StartContainer for \"b3e36dbf974e02552527d3715e32449c3276379ec10aec98b918d5ce16ce1f6e\" returns successfully"
Nov 4 20:05:22.327752 containerd[1607]: time="2025-11-04T20:05:22.327700568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-ml82p,Uid:ecc1c454-bae7-4774-a1cf-2e6d3d20ffee,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ade3b8d101aa227dd851252994a88f368df2585aeaf0f10e6bf79bc37fd22dcd\""
Nov 4 20:05:22.330135 containerd[1607]: time="2025-11-04T20:05:22.330006613Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 4 20:05:22.522653 kubelet[2763]: E1104 20:05:22.522608 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 20:05:22.523180 kubelet[2763]: E1104 20:05:22.523152 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 20:05:22.534417 kubelet[2763]: I1104 20:05:22.534363 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hkflp" podStartSLOduration=1.534345395 podStartE2EDuration="1.534345395s" podCreationTimestamp="2025-11-04 20:05:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 20:05:22.534105797 +0000 UTC m=+7.128803380" watchObservedRunningTime="2025-11-04 20:05:22.534345395 +0000 UTC m=+7.129042978"
Nov 4 20:05:22.835456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2994861156.mount: Deactivated successfully.
Nov 4 20:05:23.525917 kubelet[2763]: E1104 20:05:23.525884 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 20:05:24.102771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3799172859.mount: Deactivated successfully.
Nov 4 20:05:24.436061 containerd[1607]: time="2025-11-04T20:05:24.435878830Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 20:05:24.436794 containerd[1607]: time="2025-11-04T20:05:24.436726262Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=0"
Nov 4 20:05:24.437927 containerd[1607]: time="2025-11-04T20:05:24.437889995Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 20:05:24.439850 containerd[1607]: time="2025-11-04T20:05:24.439802337Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 20:05:24.440464 containerd[1607]: time="2025-11-04T20:05:24.440414259Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.110239854s"
Nov 4 20:05:24.440464 containerd[1607]: time="2025-11-04T20:05:24.440459604Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Nov 4 20:05:24.444772 containerd[1607]: time="2025-11-04T20:05:24.444730109Z" level=info msg="CreateContainer within sandbox \"ade3b8d101aa227dd851252994a88f368df2585aeaf0f10e6bf79bc37fd22dcd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 4 20:05:24.453570 containerd[1607]: time="2025-11-04T20:05:24.453514754Z" level=info msg="Container 5cf3925d62fefca6be44fec2a571e27d55cf3aa6a9585bbe307c4dec747d9302: CDI devices from CRI Config.CDIDevices: []"
Nov 4 20:05:24.458695 containerd[1607]: time="2025-11-04T20:05:24.458660554Z" level=info msg="CreateContainer within sandbox \"ade3b8d101aa227dd851252994a88f368df2585aeaf0f10e6bf79bc37fd22dcd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5cf3925d62fefca6be44fec2a571e27d55cf3aa6a9585bbe307c4dec747d9302\""
Nov 4 20:05:24.459245 containerd[1607]: time="2025-11-04T20:05:24.459208777Z" level=info msg="StartContainer for \"5cf3925d62fefca6be44fec2a571e27d55cf3aa6a9585bbe307c4dec747d9302\""
Nov 4 20:05:24.460172 containerd[1607]: time="2025-11-04T20:05:24.460147190Z" level=info msg="connecting to shim 5cf3925d62fefca6be44fec2a571e27d55cf3aa6a9585bbe307c4dec747d9302" address="unix:///run/containerd/s/76b6fc8d3136a3fa6017aa2aa5b8429ace387707604b122d0677d67eedbae48a" protocol=ttrpc version=3
Nov 4 20:05:24.483252 systemd[1]: Started cri-containerd-5cf3925d62fefca6be44fec2a571e27d55cf3aa6a9585bbe307c4dec747d9302.scope - libcontainer container 5cf3925d62fefca6be44fec2a571e27d55cf3aa6a9585bbe307c4dec747d9302.
Nov 4 20:05:24.514122 containerd[1607]: time="2025-11-04T20:05:24.514071150Z" level=info msg="StartContainer for \"5cf3925d62fefca6be44fec2a571e27d55cf3aa6a9585bbe307c4dec747d9302\" returns successfully"
Nov 4 20:05:25.377220 kubelet[2763]: E1104 20:05:25.377187 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 20:05:25.457492 kubelet[2763]: I1104 20:05:25.457417 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-ml82p" podStartSLOduration=2.345127015 podStartE2EDuration="4.457395216s" podCreationTimestamp="2025-11-04 20:05:21 +0000 UTC" firstStartedPulling="2025-11-04 20:05:22.328952575 +0000 UTC m=+6.923650148" lastFinishedPulling="2025-11-04 20:05:24.441220766 +0000 UTC m=+9.035918349" observedRunningTime="2025-11-04 20:05:24.537939914 +0000 UTC m=+9.132637497" watchObservedRunningTime="2025-11-04 20:05:25.457395216 +0000 UTC m=+10.052092799"
Nov 4 20:05:25.532387 kubelet[2763]: E1104 20:05:25.532354 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 20:05:25.959929 kubelet[2763]: E1104 20:05:25.959879 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 20:05:26.534678 kubelet[2763]: E1104 20:05:26.534626 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 20:05:29.725481 sudo[1822]: pam_unix(sudo:session): session closed for user root
Nov 4 20:05:29.726789 sshd[1821]: Connection closed by 10.0.0.1 port 56330
Nov 4 20:05:29.729232 sshd-session[1817]: pam_unix(sshd:session): session closed for user core
Nov 4 20:05:29.736181 systemd-logind[1575]: Session 8 logged out. Waiting for processes to exit.
Nov 4 20:05:29.738124 systemd[1]: sshd@6-10.0.0.80:22-10.0.0.1:56330.service: Deactivated successfully.
Nov 4 20:05:29.743382 systemd[1]: session-8.scope: Deactivated successfully.
Nov 4 20:05:29.744231 systemd[1]: session-8.scope: Consumed 6.635s CPU time, 219.9M memory peak.
Nov 4 20:05:29.749055 systemd-logind[1575]: Removed session 8.
Nov 4 20:05:30.669332 update_engine[1579]: I20251104 20:05:30.668072 1579 update_attempter.cc:509] Updating boot flags...
Nov 4 20:05:33.868548 systemd[1]: Created slice kubepods-besteffort-pod2add14e4_53a6_4ab9_b242_346c7ba94771.slice - libcontainer container kubepods-besteffort-pod2add14e4_53a6_4ab9_b242_346c7ba94771.slice.
Nov 4 20:05:33.899408 kubelet[2763]: I1104 20:05:33.899351 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2add14e4-53a6-4ab9-b242-346c7ba94771-typha-certs\") pod \"calico-typha-6c5f75f888-wqwtb\" (UID: \"2add14e4-53a6-4ab9-b242-346c7ba94771\") " pod="calico-system/calico-typha-6c5f75f888-wqwtb"
Nov 4 20:05:33.899408 kubelet[2763]: I1104 20:05:33.899402 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2add14e4-53a6-4ab9-b242-346c7ba94771-tigera-ca-bundle\") pod \"calico-typha-6c5f75f888-wqwtb\" (UID: \"2add14e4-53a6-4ab9-b242-346c7ba94771\") " pod="calico-system/calico-typha-6c5f75f888-wqwtb"
Nov 4 20:05:33.899408 kubelet[2763]: I1104 20:05:33.899423 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sj2d\" (UniqueName: \"kubernetes.io/projected/2add14e4-53a6-4ab9-b242-346c7ba94771-kube-api-access-7sj2d\") pod \"calico-typha-6c5f75f888-wqwtb\" (UID: \"2add14e4-53a6-4ab9-b242-346c7ba94771\") " pod="calico-system/calico-typha-6c5f75f888-wqwtb"
Nov 4 20:05:33.923059 systemd[1]: Created slice kubepods-besteffort-podef10a5d9_2eb8_402b_9364_7d6a1da36750.slice - libcontainer container kubepods-besteffort-podef10a5d9_2eb8_402b_9364_7d6a1da36750.slice.
Nov 4 20:05:34.000246 kubelet[2763]: I1104 20:05:34.000155 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ef10a5d9-2eb8-402b-9364-7d6a1da36750-policysync\") pod \"calico-node-5jh27\" (UID: \"ef10a5d9-2eb8-402b-9364-7d6a1da36750\") " pod="calico-system/calico-node-5jh27"
Nov 4 20:05:34.000246 kubelet[2763]: I1104 20:05:34.000247 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef10a5d9-2eb8-402b-9364-7d6a1da36750-tigera-ca-bundle\") pod \"calico-node-5jh27\" (UID: \"ef10a5d9-2eb8-402b-9364-7d6a1da36750\") " pod="calico-system/calico-node-5jh27"
Nov 4 20:05:34.000420 kubelet[2763]: I1104 20:05:34.000285 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ef10a5d9-2eb8-402b-9364-7d6a1da36750-var-lib-calico\") pod \"calico-node-5jh27\" (UID: \"ef10a5d9-2eb8-402b-9364-7d6a1da36750\") " pod="calico-system/calico-node-5jh27"
Nov 4 20:05:34.000420 kubelet[2763]: I1104 20:05:34.000350 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ef10a5d9-2eb8-402b-9364-7d6a1da36750-var-run-calico\") pod \"calico-node-5jh27\" (UID: \"ef10a5d9-2eb8-402b-9364-7d6a1da36750\") " pod="calico-system/calico-node-5jh27"
Nov 4 20:05:34.000479 kubelet[2763]: I1104 20:05:34.000420 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnrlv\" (UniqueName: \"kubernetes.io/projected/ef10a5d9-2eb8-402b-9364-7d6a1da36750-kube-api-access-gnrlv\") pod \"calico-node-5jh27\" (UID: \"ef10a5d9-2eb8-402b-9364-7d6a1da36750\") " pod="calico-system/calico-node-5jh27"
Nov 4 20:05:34.000479 kubelet[2763]: I1104 20:05:34.000458 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ef10a5d9-2eb8-402b-9364-7d6a1da36750-node-certs\") pod \"calico-node-5jh27\" (UID: \"ef10a5d9-2eb8-402b-9364-7d6a1da36750\") " pod="calico-system/calico-node-5jh27"
Nov 4 20:05:34.000656 kubelet[2763]: I1104 20:05:34.000581 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ef10a5d9-2eb8-402b-9364-7d6a1da36750-cni-log-dir\") pod \"calico-node-5jh27\" (UID: \"ef10a5d9-2eb8-402b-9364-7d6a1da36750\") " pod="calico-system/calico-node-5jh27"
Nov 4 20:05:34.000656 kubelet[2763]: I1104 20:05:34.000654 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ef10a5d9-2eb8-402b-9364-7d6a1da36750-cni-bin-dir\") pod \"calico-node-5jh27\" (UID: \"ef10a5d9-2eb8-402b-9364-7d6a1da36750\") " pod="calico-system/calico-node-5jh27"
Nov 4 20:05:34.000713 kubelet[2763]: I1104 20:05:34.000687 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef10a5d9-2eb8-402b-9364-7d6a1da36750-lib-modules\") pod \"calico-node-5jh27\" (UID: \"ef10a5d9-2eb8-402b-9364-7d6a1da36750\") " pod="calico-system/calico-node-5jh27"
Nov 4 20:05:34.000713 kubelet[2763]: I1104 20:05:34.000703 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef10a5d9-2eb8-402b-9364-7d6a1da36750-xtables-lock\") pod \"calico-node-5jh27\" (UID: \"ef10a5d9-2eb8-402b-9364-7d6a1da36750\") " pod="calico-system/calico-node-5jh27"
Nov 4 20:05:34.000754 kubelet[2763]: I1104 20:05:34.000722 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ef10a5d9-2eb8-402b-9364-7d6a1da36750-flexvol-driver-host\") pod \"calico-node-5jh27\" (UID: \"ef10a5d9-2eb8-402b-9364-7d6a1da36750\") " pod="calico-system/calico-node-5jh27"
Nov 4 20:05:34.000754 kubelet[2763]: I1104 20:05:34.000752 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ef10a5d9-2eb8-402b-9364-7d6a1da36750-cni-net-dir\") pod \"calico-node-5jh27\" (UID: \"ef10a5d9-2eb8-402b-9364-7d6a1da36750\") " pod="calico-system/calico-node-5jh27"
Nov 4 20:05:34.110235 kubelet[2763]: E1104 20:05:34.110200 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.110686 kubelet[2763]: W1104 20:05:34.110401 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.110686 kubelet[2763]: E1104 20:05:34.110640 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.127276 kubelet[2763]: E1104 20:05:34.126729 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgkc6" podUID="89d56747-162a-4c55-bf8f-ddfe11dc9e3a"
Nov 4 20:05:34.129214 kubelet[2763]: E1104 20:05:34.129154 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.129214 kubelet[2763]: W1104 20:05:34.129180 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.129214 kubelet[2763]: E1104 20:05:34.129202 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.178413 kubelet[2763]: E1104 20:05:34.178347 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 20:05:34.179143 containerd[1607]: time="2025-11-04T20:05:34.179073272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c5f75f888-wqwtb,Uid:2add14e4-53a6-4ab9-b242-346c7ba94771,Namespace:calico-system,Attempt:0,}"
Nov 4 20:05:34.192374 kubelet[2763]: E1104 20:05:34.192335 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.192374 kubelet[2763]: W1104 20:05:34.192361 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.192374 kubelet[2763]: E1104 20:05:34.192386 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.192588 kubelet[2763]: E1104 20:05:34.192568 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.192588 kubelet[2763]: W1104 20:05:34.192575 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.192588 kubelet[2763]: E1104 20:05:34.192582 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.192826 kubelet[2763]: E1104 20:05:34.192800 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.192826 kubelet[2763]: W1104 20:05:34.192811 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.192826 kubelet[2763]: E1104 20:05:34.192819 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.193087 kubelet[2763]: E1104 20:05:34.193068 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.193087 kubelet[2763]: W1104 20:05:34.193079 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.193087 kubelet[2763]: E1104 20:05:34.193089 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.193367 kubelet[2763]: E1104 20:05:34.193295 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.193367 kubelet[2763]: W1104 20:05:34.193305 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.193367 kubelet[2763]: E1104 20:05:34.193313 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.193667 kubelet[2763]: E1104 20:05:34.193541 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.193667 kubelet[2763]: W1104 20:05:34.193548 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.193667 kubelet[2763]: E1104 20:05:34.193555 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.193758 kubelet[2763]: E1104 20:05:34.193716 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.193758 kubelet[2763]: W1104 20:05:34.193722 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.193758 kubelet[2763]: E1104 20:05:34.193730 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.193904 kubelet[2763]: E1104 20:05:34.193889 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.193904 kubelet[2763]: W1104 20:05:34.193899 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.193967 kubelet[2763]: E1104 20:05:34.193906 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.194106 kubelet[2763]: E1104 20:05:34.194091 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.194106 kubelet[2763]: W1104 20:05:34.194101 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.194167 kubelet[2763]: E1104 20:05:34.194109 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.194291 kubelet[2763]: E1104 20:05:34.194266 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.194291 kubelet[2763]: W1104 20:05:34.194278 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.194291 kubelet[2763]: E1104 20:05:34.194285 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.194444 kubelet[2763]: E1104 20:05:34.194425 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.194444 kubelet[2763]: W1104 20:05:34.194440 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.194506 kubelet[2763]: E1104 20:05:34.194447 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.197153 kubelet[2763]: E1104 20:05:34.197128 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.197153 kubelet[2763]: W1104 20:05:34.197149 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.197232 kubelet[2763]: E1104 20:05:34.197162 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.197398 kubelet[2763]: E1104 20:05:34.197379 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.197398 kubelet[2763]: W1104 20:05:34.197394 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.197472 kubelet[2763]: E1104 20:05:34.197403 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.198044 kubelet[2763]: E1104 20:05:34.197995 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.198086 kubelet[2763]: W1104 20:05:34.198063 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.198086 kubelet[2763]: E1104 20:05:34.198074 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.198285 kubelet[2763]: E1104 20:05:34.198264 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.198285 kubelet[2763]: W1104 20:05:34.198280 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.198339 kubelet[2763]: E1104 20:05:34.198289 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.198459 kubelet[2763]: E1104 20:05:34.198437 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.198459 kubelet[2763]: W1104 20:05:34.198452 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.198508 kubelet[2763]: E1104 20:05:34.198461 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.199046 kubelet[2763]: E1104 20:05:34.198615 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.199046 kubelet[2763]: W1104 20:05:34.198625 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.199046 kubelet[2763]: E1104 20:05:34.198632 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.199046 kubelet[2763]: E1104 20:05:34.198771 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.199046 kubelet[2763]: W1104 20:05:34.198778 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.199046 kubelet[2763]: E1104 20:05:34.198784 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.199046 kubelet[2763]: E1104 20:05:34.198928 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.199046 kubelet[2763]: W1104 20:05:34.198934 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.199046 kubelet[2763]: E1104 20:05:34.198942 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.199328 kubelet[2763]: E1104 20:05:34.199097 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.199328 kubelet[2763]: W1104 20:05:34.199103 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.199328 kubelet[2763]: E1104 20:05:34.199110 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.205268 kubelet[2763]: E1104 20:05:34.205220 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.205268 kubelet[2763]: W1104 20:05:34.205259 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.205382 kubelet[2763]: E1104 20:05:34.205289 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.205382 kubelet[2763]: I1104 20:05:34.205341 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trjnx\" (UniqueName: \"kubernetes.io/projected/89d56747-162a-4c55-bf8f-ddfe11dc9e3a-kube-api-access-trjnx\") pod \"csi-node-driver-lgkc6\" (UID: \"89d56747-162a-4c55-bf8f-ddfe11dc9e3a\") " pod="calico-system/csi-node-driver-lgkc6"
Nov 4 20:05:34.208620 kubelet[2763]: E1104 20:05:34.208136 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.208620 kubelet[2763]: W1104 20:05:34.208164 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.208620 kubelet[2763]: E1104 20:05:34.208186 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.208620 kubelet[2763]: I1104 20:05:34.208227 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89d56747-162a-4c55-bf8f-ddfe11dc9e3a-kubelet-dir\") pod \"csi-node-driver-lgkc6\" (UID: \"89d56747-162a-4c55-bf8f-ddfe11dc9e3a\") " pod="calico-system/csi-node-driver-lgkc6"
Nov 4 20:05:34.208776 kubelet[2763]: E1104 20:05:34.208680 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.208776 kubelet[2763]: W1104 20:05:34.208695 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.208776 kubelet[2763]: E1104 20:05:34.208709 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.210635 kubelet[2763]: E1104 20:05:34.210198 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.210635 kubelet[2763]: W1104 20:05:34.210214 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.210635 kubelet[2763]: E1104 20:05:34.210223 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.210732 containerd[1607]: time="2025-11-04T20:05:34.210219233Z" level=info msg="connecting to shim 3fa1ed9035b60e835b6ed2eb586189708dd1169b0abd534b47beddd164bd49ec" address="unix:///run/containerd/s/13dae055d159ca7417c27923c41a886b8dfddf0d9fb8d3eb958a1105aca204e9" namespace=k8s.io protocol=ttrpc version=3
Nov 4 20:05:34.210773 kubelet[2763]: E1104 20:05:34.210661 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.210773 kubelet[2763]: W1104 20:05:34.210672 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.210773 kubelet[2763]: E1104 20:05:34.210683 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.210839 kubelet[2763]: I1104 20:05:34.210825 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/89d56747-162a-4c55-bf8f-ddfe11dc9e3a-varrun\") pod \"csi-node-driver-lgkc6\" (UID: \"89d56747-162a-4c55-bf8f-ddfe11dc9e3a\") " pod="calico-system/csi-node-driver-lgkc6"
Nov 4 20:05:34.210966 kubelet[2763]: E1104 20:05:34.210945 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.210966 kubelet[2763]: W1104 20:05:34.210959 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.211045 kubelet[2763]: E1104 20:05:34.210967 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.211293 kubelet[2763]: E1104 20:05:34.211255 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.211293 kubelet[2763]: W1104 20:05:34.211289 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.211352 kubelet[2763]: E1104 20:05:34.211299 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:34.211537 kubelet[2763]: E1104 20:05:34.211519 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:34.211537 kubelet[2763]: W1104 20:05:34.211532 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:34.211589 kubelet[2763]: E1104 20:05:34.211540 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 4 20:05:34.211589 kubelet[2763]: I1104 20:05:34.211566 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/89d56747-162a-4c55-bf8f-ddfe11dc9e3a-socket-dir\") pod \"csi-node-driver-lgkc6\" (UID: \"89d56747-162a-4c55-bf8f-ddfe11dc9e3a\") " pod="calico-system/csi-node-driver-lgkc6" Nov 4 20:05:34.212136 kubelet[2763]: E1104 20:05:34.212113 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.212136 kubelet[2763]: W1104 20:05:34.212129 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.212194 kubelet[2763]: E1104 20:05:34.212140 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 20:05:34.212194 kubelet[2763]: I1104 20:05:34.212161 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/89d56747-162a-4c55-bf8f-ddfe11dc9e3a-registration-dir\") pod \"csi-node-driver-lgkc6\" (UID: \"89d56747-162a-4c55-bf8f-ddfe11dc9e3a\") " pod="calico-system/csi-node-driver-lgkc6" Nov 4 20:05:34.213675 kubelet[2763]: E1104 20:05:34.213632 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.213675 kubelet[2763]: W1104 20:05:34.213668 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.213815 kubelet[2763]: E1104 20:05:34.213701 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 20:05:34.214064 kubelet[2763]: E1104 20:05:34.214050 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.214151 kubelet[2763]: W1104 20:05:34.214140 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.214255 kubelet[2763]: E1104 20:05:34.214221 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 20:05:34.214559 kubelet[2763]: E1104 20:05:34.214547 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.214651 kubelet[2763]: W1104 20:05:34.214614 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.214651 kubelet[2763]: E1104 20:05:34.214639 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 20:05:34.215054 kubelet[2763]: E1104 20:05:34.214978 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.215054 kubelet[2763]: W1104 20:05:34.214988 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.215054 kubelet[2763]: E1104 20:05:34.214997 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 20:05:34.216214 kubelet[2763]: E1104 20:05:34.216081 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.216214 kubelet[2763]: W1104 20:05:34.216092 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.216214 kubelet[2763]: E1104 20:05:34.216101 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 20:05:34.216371 kubelet[2763]: E1104 20:05:34.216335 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.216371 kubelet[2763]: W1104 20:05:34.216346 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.216371 kubelet[2763]: E1104 20:05:34.216355 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 20:05:34.225751 kubelet[2763]: E1104 20:05:34.225735 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:34.226946 containerd[1607]: time="2025-11-04T20:05:34.226889847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5jh27,Uid:ef10a5d9-2eb8-402b-9364-7d6a1da36750,Namespace:calico-system,Attempt:0,}" Nov 4 20:05:34.249249 systemd[1]: Started cri-containerd-3fa1ed9035b60e835b6ed2eb586189708dd1169b0abd534b47beddd164bd49ec.scope - libcontainer container 3fa1ed9035b60e835b6ed2eb586189708dd1169b0abd534b47beddd164bd49ec. Nov 4 20:05:34.261482 containerd[1607]: time="2025-11-04T20:05:34.261101144Z" level=info msg="connecting to shim 147192972e08cbd11c1913cf977cd5668e78cc062c1aae63c932faef2c57046b" address="unix:///run/containerd/s/e90d05c6c24eb58b0b04d471936196bb8806f94745da60e3cc89fac3e8463b5b" namespace=k8s.io protocol=ttrpc version=3 Nov 4 20:05:34.290200 systemd[1]: Started cri-containerd-147192972e08cbd11c1913cf977cd5668e78cc062c1aae63c932faef2c57046b.scope - libcontainer container 147192972e08cbd11c1913cf977cd5668e78cc062c1aae63c932faef2c57046b. 
Nov 4 20:05:34.301183 containerd[1607]: time="2025-11-04T20:05:34.301126861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c5f75f888-wqwtb,Uid:2add14e4-53a6-4ab9-b242-346c7ba94771,Namespace:calico-system,Attempt:0,} returns sandbox id \"3fa1ed9035b60e835b6ed2eb586189708dd1169b0abd534b47beddd164bd49ec\"" Nov 4 20:05:34.302073 kubelet[2763]: E1104 20:05:34.302037 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:34.307037 containerd[1607]: time="2025-11-04T20:05:34.304810052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 4 20:05:34.312989 kubelet[2763]: E1104 20:05:34.312897 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.312989 kubelet[2763]: W1104 20:05:34.312968 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.312989 kubelet[2763]: E1104 20:05:34.312986 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 20:05:34.313299 kubelet[2763]: E1104 20:05:34.313264 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.313377 kubelet[2763]: W1104 20:05:34.313308 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.313377 kubelet[2763]: E1104 20:05:34.313317 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 20:05:34.314075 kubelet[2763]: E1104 20:05:34.313579 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.314075 kubelet[2763]: W1104 20:05:34.313591 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.314075 kubelet[2763]: E1104 20:05:34.313599 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 20:05:34.314075 kubelet[2763]: E1104 20:05:34.313819 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.314075 kubelet[2763]: W1104 20:05:34.313826 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.314075 kubelet[2763]: E1104 20:05:34.313834 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 20:05:34.314225 kubelet[2763]: E1104 20:05:34.314192 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.314225 kubelet[2763]: W1104 20:05:34.314213 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.314266 kubelet[2763]: E1104 20:05:34.314237 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 20:05:34.314529 kubelet[2763]: E1104 20:05:34.314501 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.314529 kubelet[2763]: W1104 20:05:34.314514 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.314529 kubelet[2763]: E1104 20:05:34.314523 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 20:05:34.314758 kubelet[2763]: E1104 20:05:34.314740 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.314758 kubelet[2763]: W1104 20:05:34.314751 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.314852 kubelet[2763]: E1104 20:05:34.314759 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 20:05:34.315072 kubelet[2763]: E1104 20:05:34.315044 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.315072 kubelet[2763]: W1104 20:05:34.315064 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.315072 kubelet[2763]: E1104 20:05:34.315073 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 20:05:34.316054 kubelet[2763]: E1104 20:05:34.315324 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.316054 kubelet[2763]: W1104 20:05:34.315333 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.316054 kubelet[2763]: E1104 20:05:34.315343 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 20:05:34.316054 kubelet[2763]: E1104 20:05:34.315565 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.316054 kubelet[2763]: W1104 20:05:34.315572 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.316054 kubelet[2763]: E1104 20:05:34.315581 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 20:05:34.316054 kubelet[2763]: E1104 20:05:34.315768 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.316054 kubelet[2763]: W1104 20:05:34.315775 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.316054 kubelet[2763]: E1104 20:05:34.315782 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 20:05:34.316054 kubelet[2763]: E1104 20:05:34.315991 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.316267 kubelet[2763]: W1104 20:05:34.315998 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.316267 kubelet[2763]: E1104 20:05:34.316006 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 20:05:34.316313 kubelet[2763]: E1104 20:05:34.316268 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.316313 kubelet[2763]: W1104 20:05:34.316278 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.316313 kubelet[2763]: E1104 20:05:34.316286 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 20:05:34.316637 kubelet[2763]: E1104 20:05:34.316616 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.316690 kubelet[2763]: W1104 20:05:34.316660 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.316690 kubelet[2763]: E1104 20:05:34.316671 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 20:05:34.316989 kubelet[2763]: E1104 20:05:34.316973 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.316989 kubelet[2763]: W1104 20:05:34.316984 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.317076 kubelet[2763]: E1104 20:05:34.316994 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 20:05:34.317289 kubelet[2763]: E1104 20:05:34.317271 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.317289 kubelet[2763]: W1104 20:05:34.317283 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.317425 kubelet[2763]: E1104 20:05:34.317292 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 20:05:34.317594 kubelet[2763]: E1104 20:05:34.317579 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.317594 kubelet[2763]: W1104 20:05:34.317590 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.317672 kubelet[2763]: E1104 20:05:34.317599 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 20:05:34.321826 kubelet[2763]: E1104 20:05:34.321787 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.321826 kubelet[2763]: W1104 20:05:34.321803 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.321826 kubelet[2763]: E1104 20:05:34.321814 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 20:05:34.322636 kubelet[2763]: E1104 20:05:34.322555 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.322636 kubelet[2763]: W1104 20:05:34.322568 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.322636 kubelet[2763]: E1104 20:05:34.322579 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 20:05:34.322962 kubelet[2763]: E1104 20:05:34.322939 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.323070 kubelet[2763]: W1104 20:05:34.323047 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.323184 kubelet[2763]: E1104 20:05:34.323161 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 20:05:34.323504 kubelet[2763]: E1104 20:05:34.323490 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.323685 kubelet[2763]: W1104 20:05:34.323566 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.323685 kubelet[2763]: E1104 20:05:34.323581 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 20:05:34.323814 kubelet[2763]: E1104 20:05:34.323802 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.323872 kubelet[2763]: W1104 20:05:34.323861 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.323958 kubelet[2763]: E1104 20:05:34.323943 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 20:05:34.324276 kubelet[2763]: E1104 20:05:34.324252 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.324276 kubelet[2763]: W1104 20:05:34.324268 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.324276 kubelet[2763]: E1104 20:05:34.324278 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 20:05:34.324577 kubelet[2763]: E1104 20:05:34.324557 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.324577 kubelet[2763]: W1104 20:05:34.324569 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.324577 kubelet[2763]: E1104 20:05:34.324578 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 20:05:34.324807 kubelet[2763]: E1104 20:05:34.324787 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.324807 kubelet[2763]: W1104 20:05:34.324799 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.324807 kubelet[2763]: E1104 20:05:34.324806 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 20:05:34.325893 kubelet[2763]: E1104 20:05:34.325875 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 20:05:34.325893 kubelet[2763]: W1104 20:05:34.325888 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 20:05:34.325979 kubelet[2763]: E1104 20:05:34.325898 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 20:05:34.331524 containerd[1607]: time="2025-11-04T20:05:34.331481040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5jh27,Uid:ef10a5d9-2eb8-402b-9364-7d6a1da36750,Namespace:calico-system,Attempt:0,} returns sandbox id \"147192972e08cbd11c1913cf977cd5668e78cc062c1aae63c932faef2c57046b\"" Nov 4 20:05:34.332060 kubelet[2763]: E1104 20:05:34.332008 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:35.742921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3453100112.mount: Deactivated successfully. 
Nov 4 20:05:36.498606 kubelet[2763]: E1104 20:05:36.498539 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgkc6" podUID="89d56747-162a-4c55-bf8f-ddfe11dc9e3a" Nov 4 20:05:36.519814 containerd[1607]: time="2025-11-04T20:05:36.519719113Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 20:05:36.520656 containerd[1607]: time="2025-11-04T20:05:36.520575335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Nov 4 20:05:36.522223 containerd[1607]: time="2025-11-04T20:05:36.522168166Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 20:05:36.524617 containerd[1607]: time="2025-11-04T20:05:36.524541048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 20:05:36.525101 containerd[1607]: time="2025-11-04T20:05:36.525061843Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.220164988s" Nov 4 20:05:36.525169 containerd[1607]: time="2025-11-04T20:05:36.525105665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 4 20:05:36.526947 containerd[1607]: time="2025-11-04T20:05:36.526677808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 4 20:05:36.546105 containerd[1607]: time="2025-11-04T20:05:36.546048915Z" level=info msg="CreateContainer within sandbox \"3fa1ed9035b60e835b6ed2eb586189708dd1169b0abd534b47beddd164bd49ec\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 4 20:05:36.555548 containerd[1607]: time="2025-11-04T20:05:36.555483824Z" level=info msg="Container c13372b31b02629436aa17d90936d02bc900a42b7cc30b41ddba2e8d23d8392a: CDI devices from CRI Config.CDIDevices: []" Nov 4 20:05:36.565165 containerd[1607]: time="2025-11-04T20:05:36.565095335Z" level=info msg="CreateContainer within sandbox \"3fa1ed9035b60e835b6ed2eb586189708dd1169b0abd534b47beddd164bd49ec\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c13372b31b02629436aa17d90936d02bc900a42b7cc30b41ddba2e8d23d8392a\"" Nov 4 20:05:36.565826 containerd[1607]: time="2025-11-04T20:05:36.565786027Z" level=info msg="StartContainer for \"c13372b31b02629436aa17d90936d02bc900a42b7cc30b41ddba2e8d23d8392a\"" Nov 4 20:05:36.567176 containerd[1607]: time="2025-11-04T20:05:36.567144079Z" level=info msg="connecting to shim c13372b31b02629436aa17d90936d02bc900a42b7cc30b41ddba2e8d23d8392a" address="unix:///run/containerd/s/13dae055d159ca7417c27923c41a886b8dfddf0d9fb8d3eb958a1105aca204e9" protocol=ttrpc version=3 Nov 4 20:05:36.592262 systemd[1]: Started cri-containerd-c13372b31b02629436aa17d90936d02bc900a42b7cc30b41ddba2e8d23d8392a.scope - libcontainer container c13372b31b02629436aa17d90936d02bc900a42b7cc30b41ddba2e8d23d8392a. 
Nov 4 20:05:36.653672 containerd[1607]: time="2025-11-04T20:05:36.653629165Z" level=info msg="StartContainer for \"c13372b31b02629436aa17d90936d02bc900a42b7cc30b41ddba2e8d23d8392a\" returns successfully"
Nov 4 20:05:37.561510 kubelet[2763]: E1104 20:05:37.561475 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 20:05:37.571618 kubelet[2763]: I1104 20:05:37.571472 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6c5f75f888-wqwtb" podStartSLOduration=2.349187282 podStartE2EDuration="4.571453041s" podCreationTimestamp="2025-11-04 20:05:33 +0000 UTC" firstStartedPulling="2025-11-04 20:05:34.303727086 +0000 UTC m=+18.898424669" lastFinishedPulling="2025-11-04 20:05:36.525992845 +0000 UTC m=+21.120690428" observedRunningTime="2025-11-04 20:05:37.571151336 +0000 UTC m=+22.165848909" watchObservedRunningTime="2025-11-04 20:05:37.571453041 +0000 UTC m=+22.166150624"
Nov 4 20:05:37.622560 kubelet[2763]: E1104 20:05:37.622514 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.622560 kubelet[2763]: W1104 20:05:37.622546 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.622560 kubelet[2763]: E1104 20:05:37.622574 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.622824 kubelet[2763]: E1104 20:05:37.622804 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.622824 kubelet[2763]: W1104 20:05:37.622813 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.622824 kubelet[2763]: E1104 20:05:37.622823 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.623028 kubelet[2763]: E1104 20:05:37.623005 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.623087 kubelet[2763]: W1104 20:05:37.623040 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.623087 kubelet[2763]: E1104 20:05:37.623051 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.623316 kubelet[2763]: E1104 20:05:37.623285 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.623316 kubelet[2763]: W1104 20:05:37.623299 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.623316 kubelet[2763]: E1104 20:05:37.623309 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.623497 kubelet[2763]: E1104 20:05:37.623476 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.623497 kubelet[2763]: W1104 20:05:37.623487 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.623574 kubelet[2763]: E1104 20:05:37.623497 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.623673 kubelet[2763]: E1104 20:05:37.623657 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.623673 kubelet[2763]: W1104 20:05:37.623666 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.623755 kubelet[2763]: E1104 20:05:37.623676 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.623856 kubelet[2763]: E1104 20:05:37.623836 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.623856 kubelet[2763]: W1104 20:05:37.623847 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.623856 kubelet[2763]: E1104 20:05:37.623856 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.624123 kubelet[2763]: E1104 20:05:37.624090 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.624123 kubelet[2763]: W1104 20:05:37.624104 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.624123 kubelet[2763]: E1104 20:05:37.624115 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.624357 kubelet[2763]: E1104 20:05:37.624302 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.624357 kubelet[2763]: W1104 20:05:37.624312 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.624357 kubelet[2763]: E1104 20:05:37.624321 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.624493 kubelet[2763]: E1104 20:05:37.624475 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.624493 kubelet[2763]: W1104 20:05:37.624489 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.624547 kubelet[2763]: E1104 20:05:37.624499 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.624686 kubelet[2763]: E1104 20:05:37.624668 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.624686 kubelet[2763]: W1104 20:05:37.624683 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.624686 kubelet[2763]: E1104 20:05:37.624694 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.624871 kubelet[2763]: E1104 20:05:37.624855 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.624871 kubelet[2763]: W1104 20:05:37.624867 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.624931 kubelet[2763]: E1104 20:05:37.624878 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.625122 kubelet[2763]: E1104 20:05:37.625104 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.625122 kubelet[2763]: W1104 20:05:37.625118 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.625196 kubelet[2763]: E1104 20:05:37.625130 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.625334 kubelet[2763]: E1104 20:05:37.625317 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.625334 kubelet[2763]: W1104 20:05:37.625331 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.625384 kubelet[2763]: E1104 20:05:37.625342 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.625534 kubelet[2763]: E1104 20:05:37.625516 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.625534 kubelet[2763]: W1104 20:05:37.625530 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.625588 kubelet[2763]: E1104 20:05:37.625542 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.641444 kubelet[2763]: E1104 20:05:37.641410 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.641444 kubelet[2763]: W1104 20:05:37.641436 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.641523 kubelet[2763]: E1104 20:05:37.641462 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.641698 kubelet[2763]: E1104 20:05:37.641679 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.641728 kubelet[2763]: W1104 20:05:37.641692 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.641728 kubelet[2763]: E1104 20:05:37.641714 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.641980 kubelet[2763]: E1104 20:05:37.641961 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.641980 kubelet[2763]: W1104 20:05:37.641974 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.642042 kubelet[2763]: E1104 20:05:37.641984 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.642241 kubelet[2763]: E1104 20:05:37.642226 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.642241 kubelet[2763]: W1104 20:05:37.642238 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.642297 kubelet[2763]: E1104 20:05:37.642248 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.642425 kubelet[2763]: E1104 20:05:37.642410 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.642425 kubelet[2763]: W1104 20:05:37.642420 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.642490 kubelet[2763]: E1104 20:05:37.642429 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.642612 kubelet[2763]: E1104 20:05:37.642597 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.642612 kubelet[2763]: W1104 20:05:37.642609 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.642662 kubelet[2763]: E1104 20:05:37.642619 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.642814 kubelet[2763]: E1104 20:05:37.642799 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.642814 kubelet[2763]: W1104 20:05:37.642810 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.642863 kubelet[2763]: E1104 20:05:37.642821 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.643161 kubelet[2763]: E1104 20:05:37.643138 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.643161 kubelet[2763]: W1104 20:05:37.643157 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.643221 kubelet[2763]: E1104 20:05:37.643170 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.643381 kubelet[2763]: E1104 20:05:37.643365 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.643381 kubelet[2763]: W1104 20:05:37.643378 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.643434 kubelet[2763]: E1104 20:05:37.643388 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.643597 kubelet[2763]: E1104 20:05:37.643570 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.643597 kubelet[2763]: W1104 20:05:37.643582 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.643597 kubelet[2763]: E1104 20:05:37.643592 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.643823 kubelet[2763]: E1104 20:05:37.643810 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.643823 kubelet[2763]: W1104 20:05:37.643820 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.643875 kubelet[2763]: E1104 20:05:37.643830 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.644158 kubelet[2763]: E1104 20:05:37.644141 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.644158 kubelet[2763]: W1104 20:05:37.644155 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.644208 kubelet[2763]: E1104 20:05:37.644165 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.644381 kubelet[2763]: E1104 20:05:37.644368 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.644402 kubelet[2763]: W1104 20:05:37.644379 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.644402 kubelet[2763]: E1104 20:05:37.644388 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.644585 kubelet[2763]: E1104 20:05:37.644573 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.644585 kubelet[2763]: W1104 20:05:37.644583 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.644625 kubelet[2763]: E1104 20:05:37.644592 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.644775 kubelet[2763]: E1104 20:05:37.644763 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.644796 kubelet[2763]: W1104 20:05:37.644774 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.644796 kubelet[2763]: E1104 20:05:37.644785 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.645003 kubelet[2763]: E1104 20:05:37.644991 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.645003 kubelet[2763]: W1104 20:05:37.645001 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.645071 kubelet[2763]: E1104 20:05:37.645030 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.645313 kubelet[2763]: E1104 20:05:37.645298 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.645340 kubelet[2763]: W1104 20:05:37.645312 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.645340 kubelet[2763]: E1104 20:05:37.645323 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:37.645513 kubelet[2763]: E1104 20:05:37.645500 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 20:05:37.645513 kubelet[2763]: W1104 20:05:37.645511 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 20:05:37.645556 kubelet[2763]: E1104 20:05:37.645520 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 20:05:38.386492 containerd[1607]: time="2025-11-04T20:05:38.386431469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 20:05:38.387217 containerd[1607]: time="2025-11-04T20:05:38.387180901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0"
Nov 4 20:05:38.388228 containerd[1607]: time="2025-11-04T20:05:38.388194930Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 20:05:38.390085 containerd[1607]: time="2025-11-04T20:05:38.390058208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 20:05:38.390599 containerd[1607]: time="2025-11-04T20:05:38.390546602Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.863835743s"
Nov 4 20:05:38.390599 containerd[1607]: time="2025-11-04T20:05:38.390594933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Nov 4 20:05:38.394399 containerd[1607]: time="2025-11-04T20:05:38.394363568Z" level=info msg="CreateContainer within sandbox \"147192972e08cbd11c1913cf977cd5668e78cc062c1aae63c932faef2c57046b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 4 20:05:38.402792 containerd[1607]: time="2025-11-04T20:05:38.402736513Z" level=info msg="Container fc95f35c38dab4aeeb9095df4b5587d85b4cf78fc72ec2e69ee0b0f4b0fba648: CDI devices from CRI Config.CDIDevices: []"
Nov 4 20:05:38.411292 containerd[1607]: time="2025-11-04T20:05:38.411245273Z" level=info msg="CreateContainer within sandbox \"147192972e08cbd11c1913cf977cd5668e78cc062c1aae63c932faef2c57046b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fc95f35c38dab4aeeb9095df4b5587d85b4cf78fc72ec2e69ee0b0f4b0fba648\""
Nov 4 20:05:38.411926 containerd[1607]: time="2025-11-04T20:05:38.411874141Z" level=info msg="StartContainer for \"fc95f35c38dab4aeeb9095df4b5587d85b4cf78fc72ec2e69ee0b0f4b0fba648\""
Nov 4 20:05:38.413347 containerd[1607]: time="2025-11-04T20:05:38.413320288Z" level=info msg="connecting to shim fc95f35c38dab4aeeb9095df4b5587d85b4cf78fc72ec2e69ee0b0f4b0fba648" address="unix:///run/containerd/s/e90d05c6c24eb58b0b04d471936196bb8806f94745da60e3cc89fac3e8463b5b" protocol=ttrpc version=3
Nov 4 20:05:38.439348 systemd[1]: Started cri-containerd-fc95f35c38dab4aeeb9095df4b5587d85b4cf78fc72ec2e69ee0b0f4b0fba648.scope - libcontainer container fc95f35c38dab4aeeb9095df4b5587d85b4cf78fc72ec2e69ee0b0f4b0fba648.
Nov 4 20:05:38.486465 containerd[1607]: time="2025-11-04T20:05:38.486410424Z" level=info msg="StartContainer for \"fc95f35c38dab4aeeb9095df4b5587d85b4cf78fc72ec2e69ee0b0f4b0fba648\" returns successfully"
Nov 4 20:05:38.498498 kubelet[2763]: E1104 20:05:38.498424 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgkc6" podUID="89d56747-162a-4c55-bf8f-ddfe11dc9e3a"
Nov 4 20:05:38.505233 systemd[1]: cri-containerd-fc95f35c38dab4aeeb9095df4b5587d85b4cf78fc72ec2e69ee0b0f4b0fba648.scope: Deactivated successfully.
Nov 4 20:05:38.505791 systemd[1]: cri-containerd-fc95f35c38dab4aeeb9095df4b5587d85b4cf78fc72ec2e69ee0b0f4b0fba648.scope: Consumed 43ms CPU time, 6.4M memory peak, 4.6M written to disk.
Nov 4 20:05:38.507355 containerd[1607]: time="2025-11-04T20:05:38.507306325Z" level=info msg="received exit event container_id:\"fc95f35c38dab4aeeb9095df4b5587d85b4cf78fc72ec2e69ee0b0f4b0fba648\" id:\"fc95f35c38dab4aeeb9095df4b5587d85b4cf78fc72ec2e69ee0b0f4b0fba648\" pid:3478 exited_at:{seconds:1762286738 nanos:506736228}"
Nov 4 20:05:38.538070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc95f35c38dab4aeeb9095df4b5587d85b4cf78fc72ec2e69ee0b0f4b0fba648-rootfs.mount: Deactivated successfully.
Nov 4 20:05:38.565038 kubelet[2763]: I1104 20:05:38.564970 2763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 4 20:05:38.565532 kubelet[2763]: E1104 20:05:38.565437 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 20:05:38.565688 kubelet[2763]: E1104 20:05:38.565582 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 20:05:39.568494 kubelet[2763]: E1104 20:05:39.568428 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 20:05:39.569353 containerd[1607]: time="2025-11-04T20:05:39.569057884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Nov 4 20:05:40.498764 kubelet[2763]: E1104 20:05:40.498709 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgkc6" podUID="89d56747-162a-4c55-bf8f-ddfe11dc9e3a"
Nov 4 20:05:42.498429 kubelet[2763]: E1104 20:05:42.498355 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgkc6" podUID="89d56747-162a-4c55-bf8f-ddfe11dc9e3a"
Nov 4 20:05:43.732323 containerd[1607]: time="2025-11-04T20:05:43.732263685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 20:05:43.733490 containerd[1607]: time="2025-11-04T20:05:43.733454455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291"
Nov 4 20:05:43.734569 containerd[1607]: time="2025-11-04T20:05:43.734535399Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 20:05:43.736448 containerd[1607]: time="2025-11-04T20:05:43.736406885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 20:05:43.736934 containerd[1607]: time="2025-11-04T20:05:43.736890300Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.167793433s"
Nov 4 20:05:43.736965 containerd[1607]: time="2025-11-04T20:05:43.736932189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Nov 4 20:05:43.740444 containerd[1607]: time="2025-11-04T20:05:43.740391017Z" level=info msg="CreateContainer within sandbox \"147192972e08cbd11c1913cf977cd5668e78cc062c1aae63c932faef2c57046b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 4 20:05:43.748597 containerd[1607]: time="2025-11-04T20:05:43.748556201Z" level=info msg="Container 9ef4849eba2c9abbd373ff2c60e24d0c89dd0ced965d3f0c8042ff40e7be83a8: CDI devices from CRI Config.CDIDevices: []"
Nov 4 20:05:43.758891 containerd[1607]: time="2025-11-04T20:05:43.758847497Z" level=info msg="CreateContainer within sandbox \"147192972e08cbd11c1913cf977cd5668e78cc062c1aae63c932faef2c57046b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9ef4849eba2c9abbd373ff2c60e24d0c89dd0ced965d3f0c8042ff40e7be83a8\""
Nov 4 20:05:43.759338 containerd[1607]: time="2025-11-04T20:05:43.759301517Z" level=info msg="StartContainer for \"9ef4849eba2c9abbd373ff2c60e24d0c89dd0ced965d3f0c8042ff40e7be83a8\""
Nov 4 20:05:43.764307 containerd[1607]: time="2025-11-04T20:05:43.764268530Z" level=info msg="connecting to shim 9ef4849eba2c9abbd373ff2c60e24d0c89dd0ced965d3f0c8042ff40e7be83a8" address="unix:///run/containerd/s/e90d05c6c24eb58b0b04d471936196bb8806f94745da60e3cc89fac3e8463b5b" protocol=ttrpc version=3
Nov 4 20:05:43.790201 systemd[1]: Started cri-containerd-9ef4849eba2c9abbd373ff2c60e24d0c89dd0ced965d3f0c8042ff40e7be83a8.scope - libcontainer container 9ef4849eba2c9abbd373ff2c60e24d0c89dd0ced965d3f0c8042ff40e7be83a8.
Nov 4 20:05:43.838740 containerd[1607]: time="2025-11-04T20:05:43.838697139Z" level=info msg="StartContainer for \"9ef4849eba2c9abbd373ff2c60e24d0c89dd0ced965d3f0c8042ff40e7be83a8\" returns successfully"
Nov 4 20:05:44.497978 kubelet[2763]: E1104 20:05:44.497914 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgkc6" podUID="89d56747-162a-4c55-bf8f-ddfe11dc9e3a"
Nov 4 20:05:44.580124 kubelet[2763]: E1104 20:05:44.580079 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 20:05:45.235357 systemd[1]: cri-containerd-9ef4849eba2c9abbd373ff2c60e24d0c89dd0ced965d3f0c8042ff40e7be83a8.scope: Deactivated successfully.
Nov 4 20:05:45.235704 systemd[1]: cri-containerd-9ef4849eba2c9abbd373ff2c60e24d0c89dd0ced965d3f0c8042ff40e7be83a8.scope: Consumed 731ms CPU time, 181.2M memory peak, 3.6M read from disk, 171.3M written to disk.
Nov 4 20:05:45.262634 containerd[1607]: time="2025-11-04T20:05:45.262565499Z" level=info msg="received exit event container_id:\"9ef4849eba2c9abbd373ff2c60e24d0c89dd0ced965d3f0c8042ff40e7be83a8\" id:\"9ef4849eba2c9abbd373ff2c60e24d0c89dd0ced965d3f0c8042ff40e7be83a8\" pid:3539 exited_at:{seconds:1762286745 nanos:235003687}"
Nov 4 20:05:45.286318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ef4849eba2c9abbd373ff2c60e24d0c89dd0ced965d3f0c8042ff40e7be83a8-rootfs.mount: Deactivated successfully.
Nov 4 20:05:45.316380 kubelet[2763]: I1104 20:05:45.315760 2763 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Nov 4 20:05:45.436269 systemd[1]: Created slice kubepods-besteffort-podc04ecbad_d5c2_43d0_b16a_235b2a29a278.slice - libcontainer container kubepods-besteffort-podc04ecbad_d5c2_43d0_b16a_235b2a29a278.slice.
Nov 4 20:05:45.443131 systemd[1]: Created slice kubepods-burstable-poda1c9f891_3697_476b_83d7_d7d21e81d397.slice - libcontainer container kubepods-burstable-poda1c9f891_3697_476b_83d7_d7d21e81d397.slice.
Nov 4 20:05:45.452185 systemd[1]: Created slice kubepods-besteffort-poda8754e25_0820_405c_8ad6_8e109ea21a48.slice - libcontainer container kubepods-besteffort-poda8754e25_0820_405c_8ad6_8e109ea21a48.slice.
Nov 4 20:05:45.461084 systemd[1]: Created slice kubepods-besteffort-pod4d779447_aab7_4044_9468_fe0588e362f2.slice - libcontainer container kubepods-besteffort-pod4d779447_aab7_4044_9468_fe0588e362f2.slice.
Nov 4 20:05:45.468342 systemd[1]: Created slice kubepods-besteffort-pod45ba7d0b_5883_4d92_9d1d_2bfad2cab22b.slice - libcontainer container kubepods-besteffort-pod45ba7d0b_5883_4d92_9d1d_2bfad2cab22b.slice.
Nov 4 20:05:45.477257 systemd[1]: Created slice kubepods-burstable-pod33f5b0fe_ea26_4e97_a06f_e2a58710cc60.slice - libcontainer container kubepods-burstable-pod33f5b0fe_ea26_4e97_a06f_e2a58710cc60.slice.
Nov 4 20:05:45.482106 systemd[1]: Created slice kubepods-besteffort-pod7a09dd2b_bdf2_470b_8c32_8b78634b3660.slice - libcontainer container kubepods-besteffort-pod7a09dd2b_bdf2_470b_8c32_8b78634b3660.slice.
Nov 4 20:05:45.493919 kubelet[2763]: I1104 20:05:45.492838 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a8754e25-0820-405c-8ad6-8e109ea21a48-calico-apiserver-certs\") pod \"calico-apiserver-5bf8445db8-2sm8f\" (UID: \"a8754e25-0820-405c-8ad6-8e109ea21a48\") " pod="calico-apiserver/calico-apiserver-5bf8445db8-2sm8f"
Nov 4 20:05:45.493919 kubelet[2763]: I1104 20:05:45.492884 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7a09dd2b-bdf2-470b-8c32-8b78634b3660-whisker-backend-key-pair\") pod \"whisker-77d64645-2lxwj\" (UID: \"7a09dd2b-bdf2-470b-8c32-8b78634b3660\") " pod="calico-system/whisker-77d64645-2lxwj"
Nov 4 20:05:45.493919 kubelet[2763]: I1104 20:05:45.492930 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghgnm\" (UniqueName: \"kubernetes.io/projected/7a09dd2b-bdf2-470b-8c32-8b78634b3660-kube-api-access-ghgnm\") pod \"whisker-77d64645-2lxwj\" (UID: \"7a09dd2b-bdf2-470b-8c32-8b78634b3660\") " pod="calico-system/whisker-77d64645-2lxwj"
Nov 4 20:05:45.493919 kubelet[2763]: I1104 20:05:45.492947 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d779447-aab7-4044-9468-fe0588e362f2-config\") pod \"goldmane-666569f655-z9g8r\" (UID: \"4d779447-aab7-4044-9468-fe0588e362f2\") " pod="calico-system/goldmane-666569f655-z9g8r"
Nov 4 20:05:45.493919 kubelet[2763]: I1104 20:05:45.492998 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d779447-aab7-4044-9468-fe0588e362f2-goldmane-ca-bundle\") pod \"goldmane-666569f655-z9g8r\" (UID: \"4d779447-aab7-4044-9468-fe0588e362f2\") " pod="calico-system/goldmane-666569f655-z9g8r"
Nov 4 20:05:45.494365 kubelet[2763]: I1104 20:05:45.493123 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28t4h\" (UniqueName: \"kubernetes.io/projected/a1c9f891-3697-476b-83d7-d7d21e81d397-kube-api-access-28t4h\") pod \"coredns-674b8bbfcf-85sqf\" (UID: \"a1c9f891-3697-476b-83d7-d7d21e81d397\") " pod="kube-system/coredns-674b8bbfcf-85sqf"
Nov 4 20:05:45.494365 kubelet[2763]: I1104 20:05:45.493180 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c04ecbad-d5c2-43d0-b16a-235b2a29a278-calico-apiserver-certs\") pod \"calico-apiserver-5bf8445db8-vc5nj\" (UID: \"c04ecbad-d5c2-43d0-b16a-235b2a29a278\") " pod="calico-apiserver/calico-apiserver-5bf8445db8-vc5nj"
Nov 4 20:05:45.494365 kubelet[2763]: I1104 20:05:45.493200 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1c9f891-3697-476b-83d7-d7d21e81d397-config-volume\") pod \"coredns-674b8bbfcf-85sqf\" (UID: \"a1c9f891-3697-476b-83d7-d7d21e81d397\") " pod="kube-system/coredns-674b8bbfcf-85sqf"
Nov 4 20:05:45.494365 kubelet[2763]: I1104 20:05:45.493249 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l52rm\" (UniqueName: \"kubernetes.io/projected/c04ecbad-d5c2-43d0-b16a-235b2a29a278-kube-api-access-l52rm\") pod \"calico-apiserver-5bf8445db8-vc5nj\" (UID: \"c04ecbad-d5c2-43d0-b16a-235b2a29a278\") " pod="calico-apiserver/calico-apiserver-5bf8445db8-vc5nj"
Nov 4 20:05:45.494365 kubelet[2763]: I1104 20:05:45.494043 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j78q\" (UniqueName: \"kubernetes.io/projected/45ba7d0b-5883-4d92-9d1d-2bfad2cab22b-kube-api-access-9j78q\") pod \"calico-kube-controllers-5b4cd748b4-xzkvr\" (UID: \"45ba7d0b-5883-4d92-9d1d-2bfad2cab22b\") " pod="calico-system/calico-kube-controllers-5b4cd748b4-xzkvr"
Nov 4 20:05:45.494528 kubelet[2763]: I1104 20:05:45.494070 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33f5b0fe-ea26-4e97-a06f-e2a58710cc60-config-volume\") pod \"coredns-674b8bbfcf-48stb\" (UID: \"33f5b0fe-ea26-4e97-a06f-e2a58710cc60\") " pod="kube-system/coredns-674b8bbfcf-48stb"
Nov 4 20:05:45.494528 kubelet[2763]: I1104 20:05:45.494085 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dddk9\" (UniqueName: \"kubernetes.io/projected/33f5b0fe-ea26-4e97-a06f-e2a58710cc60-kube-api-access-dddk9\") pod \"coredns-674b8bbfcf-48stb\" (UID: \"33f5b0fe-ea26-4e97-a06f-e2a58710cc60\") " pod="kube-system/coredns-674b8bbfcf-48stb"
Nov 4 20:05:45.494528 kubelet[2763]: I1104 20:05:45.494099 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55sf9\" (UniqueName: \"kubernetes.io/projected/4d779447-aab7-4044-9468-fe0588e362f2-kube-api-access-55sf9\") pod \"goldmane-666569f655-z9g8r\" (UID: \"4d779447-aab7-4044-9468-fe0588e362f2\") " pod="calico-system/goldmane-666569f655-z9g8r"
Nov 4 20:05:45.494528 kubelet[2763]: I1104 20:05:45.494112 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a09dd2b-bdf2-470b-8c32-8b78634b3660-whisker-ca-bundle\") pod \"whisker-77d64645-2lxwj\" (UID: \"7a09dd2b-bdf2-470b-8c32-8b78634b3660\") " pod="calico-system/whisker-77d64645-2lxwj"
Nov 4 20:05:45.494528 kubelet[2763]: I1104 20:05:45.494127 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45ba7d0b-5883-4d92-9d1d-2bfad2cab22b-tigera-ca-bundle\") pod \"calico-kube-controllers-5b4cd748b4-xzkvr\" (UID: \"45ba7d0b-5883-4d92-9d1d-2bfad2cab22b\") " pod="calico-system/calico-kube-controllers-5b4cd748b4-xzkvr"
Nov 4 20:05:45.494679 kubelet[2763]: I1104 20:05:45.494147 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5bck\" (UniqueName: \"kubernetes.io/projected/a8754e25-0820-405c-8ad6-8e109ea21a48-kube-api-access-w5bck\") pod \"calico-apiserver-5bf8445db8-2sm8f\" (UID: \"a8754e25-0820-405c-8ad6-8e109ea21a48\") " pod="calico-apiserver/calico-apiserver-5bf8445db8-2sm8f"
Nov 4 20:05:45.494679 kubelet[2763]: I1104 20:05:45.494161 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4d779447-aab7-4044-9468-fe0588e362f2-goldmane-key-pair\") pod \"goldmane-666569f655-z9g8r\" (UID: \"4d779447-aab7-4044-9468-fe0588e362f2\") " pod="calico-system/goldmane-666569f655-z9g8r"
Nov 4 20:05:45.585881 kubelet[2763]: E1104 20:05:45.585677 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 20:05:45.586644 containerd[1607]: time="2025-11-04T20:05:45.586480646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Nov 4 20:05:45.740333 containerd[1607]: time="2025-11-04T20:05:45.740286603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bf8445db8-vc5nj,Uid:c04ecbad-d5c2-43d0-b16a-235b2a29a278,Namespace:calico-apiserver,Attempt:0,}"
Nov 4 20:05:45.748714 kubelet[2763]: E1104 20:05:45.748596 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 20:05:45.750266 containerd[1607]: time="2025-11-04T20:05:45.750226224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-85sqf,Uid:a1c9f891-3697-476b-83d7-d7d21e81d397,Namespace:kube-system,Attempt:0,}"
Nov 4 20:05:45.758431 containerd[1607]: time="2025-11-04T20:05:45.758393483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bf8445db8-2sm8f,Uid:a8754e25-0820-405c-8ad6-8e109ea21a48,Namespace:calico-apiserver,Attempt:0,}"
Nov 4 20:05:45.770231 containerd[1607]: time="2025-11-04T20:05:45.770180154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-z9g8r,Uid:4d779447-aab7-4044-9468-fe0588e362f2,Namespace:calico-system,Attempt:0,}"
Nov 4 20:05:45.778482 containerd[1607]: time="2025-11-04T20:05:45.778332235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b4cd748b4-xzkvr,Uid:45ba7d0b-5883-4d92-9d1d-2bfad2cab22b,Namespace:calico-system,Attempt:0,}"
Nov 4 20:05:45.782999 kubelet[2763]: E1104 20:05:45.781290 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 20:05:45.784258 containerd[1607]: time="2025-11-04T20:05:45.784179318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-48stb,Uid:33f5b0fe-ea26-4e97-a06f-e2a58710cc60,Namespace:kube-system,Attempt:0,}"
Nov 4 20:05:45.786165 containerd[1607]: time="2025-11-04T20:05:45.786133990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77d64645-2lxwj,Uid:7a09dd2b-bdf2-470b-8c32-8b78634b3660,Namespace:calico-system,Attempt:0,}"
Nov 4 20:05:45.922470 containerd[1607]: time="2025-11-04T20:05:45.922414787Z" level=error msg="Failed to destroy network for sandbox \"b7a52ba08904961292738d933eaec906570ec526751406f52e7149fa5131e72d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 20:05:45.924653 containerd[1607]: time="2025-11-04T20:05:45.924618306Z" level=error msg="Failed to destroy network for sandbox \"28d868543391d8ec2246d3cba2a72f1f6fd50f2b996b9e9cf427b4311e7e3a03\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 20:05:45.925691 containerd[1607]: time="2025-11-04T20:05:45.925642284Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b4cd748b4-xzkvr,Uid:45ba7d0b-5883-4d92-9d1d-2bfad2cab22b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7a52ba08904961292738d933eaec906570ec526751406f52e7149fa5131e72d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 20:05:45.926171 kubelet[2763]: E1104 20:05:45.926117 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7a52ba08904961292738d933eaec906570ec526751406f52e7149fa5131e72d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 20:05:45.926239 kubelet[2763]: E1104 20:05:45.926211 2763 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7a52ba08904961292738d933eaec906570ec526751406f52e7149fa5131e72d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b4cd748b4-xzkvr"
Nov 4 20:05:45.926239 kubelet[2763]: E1104 20:05:45.926235 2763 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7a52ba08904961292738d933eaec906570ec526751406f52e7149fa5131e72d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b4cd748b4-xzkvr"
Nov 4 20:05:45.926380 kubelet[2763]: E1104 20:05:45.926295 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b4cd748b4-xzkvr_calico-system(45ba7d0b-5883-4d92-9d1d-2bfad2cab22b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b4cd748b4-xzkvr_calico-system(45ba7d0b-5883-4d92-9d1d-2bfad2cab22b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7a52ba08904961292738d933eaec906570ec526751406f52e7149fa5131e72d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b4cd748b4-xzkvr" podUID="45ba7d0b-5883-4d92-9d1d-2bfad2cab22b"
Nov 4 20:05:45.927027 containerd[1607]: time="2025-11-04T20:05:45.926977225Z" level=error msg="Failed to destroy network for sandbox \"f9745386f9887d67fc5fd4bab2f1d140ba09ea51db6ef9cbb83516287503a26e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 20:05:45.928710 containerd[1607]: time="2025-11-04T20:05:45.928677981Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77d64645-2lxwj,Uid:7a09dd2b-bdf2-470b-8c32-8b78634b3660,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"28d868543391d8ec2246d3cba2a72f1f6fd50f2b996b9e9cf427b4311e7e3a03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 20:05:45.929030 kubelet[2763]: E1104 20:05:45.928992 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28d868543391d8ec2246d3cba2a72f1f6fd50f2b996b9e9cf427b4311e7e3a03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 20:05:45.929116 kubelet[2763]: E1104 20:05:45.929097 2763 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28d868543391d8ec2246d3cba2a72f1f6fd50f2b996b9e9cf427b4311e7e3a03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77d64645-2lxwj"
Nov 4 20:05:45.929171 kubelet[2763]: E1104 20:05:45.929119 2763 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28d868543391d8ec2246d3cba2a72f1f6fd50f2b996b9e9cf427b4311e7e3a03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77d64645-2lxwj"
Nov 4 20:05:45.929205 kubelet[2763]: E1104 20:05:45.929176 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-77d64645-2lxwj_calico-system(7a09dd2b-bdf2-470b-8c32-8b78634b3660)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-77d64645-2lxwj_calico-system(7a09dd2b-bdf2-470b-8c32-8b78634b3660)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28d868543391d8ec2246d3cba2a72f1f6fd50f2b996b9e9cf427b4311e7e3a03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77d64645-2lxwj" podUID="7a09dd2b-bdf2-470b-8c32-8b78634b3660"
Nov 4 20:05:45.930837 containerd[1607]: time="2025-11-04T20:05:45.930794806Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bf8445db8-2sm8f,Uid:a8754e25-0820-405c-8ad6-8e109ea21a48,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9745386f9887d67fc5fd4bab2f1d140ba09ea51db6ef9cbb83516287503a26e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 20:05:45.930956 kubelet[2763]: E1104 20:05:45.930929 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9745386f9887d67fc5fd4bab2f1d140ba09ea51db6ef9cbb83516287503a26e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 20:05:45.931005 kubelet[2763]: E1104 20:05:45.930962 2763 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9745386f9887d67fc5fd4bab2f1d140ba09ea51db6ef9cbb83516287503a26e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bf8445db8-2sm8f"
Nov 4 20:05:45.931005 kubelet[2763]: E1104 20:05:45.930978 2763 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9745386f9887d67fc5fd4bab2f1d140ba09ea51db6ef9cbb83516287503a26e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bf8445db8-2sm8f"
Nov 4 20:05:45.931093 kubelet[2763]: E1104 20:05:45.931032 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bf8445db8-2sm8f_calico-apiserver(a8754e25-0820-405c-8ad6-8e109ea21a48)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bf8445db8-2sm8f_calico-apiserver(a8754e25-0820-405c-8ad6-8e109ea21a48)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9745386f9887d67fc5fd4bab2f1d140ba09ea51db6ef9cbb83516287503a26e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bf8445db8-2sm8f" podUID="a8754e25-0820-405c-8ad6-8e109ea21a48"
Nov 4 20:05:45.938000 containerd[1607]: time="2025-11-04T20:05:45.937953177Z" level=error msg="Failed to destroy network for sandbox \"f4d53438a5bb64d359d2856e5368ec2cf2ba593f69cfd00c90ab2d4a94708897\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 20:05:45.940585 containerd[1607]: time="2025-11-04T20:05:45.940528410Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bf8445db8-vc5nj,Uid:c04ecbad-d5c2-43d0-b16a-235b2a29a278,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d53438a5bb64d359d2856e5368ec2cf2ba593f69cfd00c90ab2d4a94708897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 20:05:45.941226 kubelet[2763]: E1104 20:05:45.941178 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d53438a5bb64d359d2856e5368ec2cf2ba593f69cfd00c90ab2d4a94708897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 20:05:45.941271 kubelet[2763]: E1104 20:05:45.941258 2763 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d53438a5bb64d359d2856e5368ec2cf2ba593f69cfd00c90ab2d4a94708897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bf8445db8-vc5nj"
Nov 4 20:05:45.941324 kubelet[2763]: E1104 20:05:45.941286 2763 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d53438a5bb64d359d2856e5368ec2cf2ba593f69cfd00c90ab2d4a94708897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bf8445db8-vc5nj"
Nov 4 20:05:45.941553 kubelet[2763]: E1104 20:05:45.941448 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bf8445db8-vc5nj_calico-apiserver(c04ecbad-d5c2-43d0-b16a-235b2a29a278)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bf8445db8-vc5nj_calico-apiserver(c04ecbad-d5c2-43d0-b16a-235b2a29a278)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4d53438a5bb64d359d2856e5368ec2cf2ba593f69cfd00c90ab2d4a94708897\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bf8445db8-vc5nj" podUID="c04ecbad-d5c2-43d0-b16a-235b2a29a278"
Nov 4 20:05:45.944274 containerd[1607]: time="2025-11-04T20:05:45.944224885Z" level=error msg="Failed to destroy network for sandbox \"51938bd2d292300bf681abac456688bbe3d886c2a705ba59a2fe790682eef1b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 20:05:45.944780 containerd[1607]: time="2025-11-04T20:05:45.944739419Z" level=error msg="Failed to destroy network for sandbox \"bce1630f9d644e8ed4a5e98ba60640642b31a96f4f8b813e5706feaa4d1d347f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 20:05:45.946846 containerd[1607]: time="2025-11-04T20:05:45.946642344Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-z9g8r,Uid:4d779447-aab7-4044-9468-fe0588e362f2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"51938bd2d292300bf681abac456688bbe3d886c2a705ba59a2fe790682eef1b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 20:05:45.946939 kubelet[2763]: E1104 20:05:45.946862 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51938bd2d292300bf681abac456688bbe3d886c2a705ba59a2fe790682eef1b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 20:05:45.946939 kubelet[2763]: E1104 20:05:45.946928 2763 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51938bd2d292300bf681abac456688bbe3d886c2a705ba59a2fe790682eef1b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-z9g8r"
Nov 4 20:05:45.946997 kubelet[2763]: E1104 20:05:45.946947 2763 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51938bd2d292300bf681abac456688bbe3d886c2a705ba59a2fe790682eef1b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-z9g8r"
Nov 4 20:05:45.947051 kubelet[2763]: E1104 20:05:45.946992 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-z9g8r_calico-system(4d779447-aab7-4044-9468-fe0588e362f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-z9g8r_calico-system(4d779447-aab7-4044-9468-fe0588e362f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51938bd2d292300bf681abac456688bbe3d886c2a705ba59a2fe790682eef1b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-z9g8r" podUID="4d779447-aab7-4044-9468-fe0588e362f2"
Nov 4 20:05:45.949271 containerd[1607]: time="2025-11-04T20:05:45.949243747Z" level=error msg="Failed to destroy network for sandbox \"8e195c9a5d23a594242ef81dc5151a50a47ca17e6cb5fa665a86f97146ae4ab4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 20:05:45.949578 containerd[1607]: time="2025-11-04T20:05:45.949534882Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-85sqf,Uid:a1c9f891-3697-476b-83d7-d7d21e81d397,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bce1630f9d644e8ed4a5e98ba60640642b31a96f4f8b813e5706feaa4d1d347f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 20:05:45.949716 kubelet[2763]: E1104 20:05:45.949688 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bce1630f9d644e8ed4a5e98ba60640642b31a96f4f8b813e5706feaa4d1d347f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 20:05:45.949767 kubelet[2763]: E1104 20:05:45.949727 2763 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bce1630f9d644e8ed4a5e98ba60640642b31a96f4f8b813e5706feaa4d1d347f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-85sqf"
Nov 4 20:05:45.949767 kubelet[2763]: E1104 20:05:45.949744 2763 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bce1630f9d644e8ed4a5e98ba60640642b31a96f4f8b813e5706feaa4d1d347f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-85sqf"
Nov 4 20:05:45.949831 kubelet[2763]: E1104 20:05:45.949784 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-85sqf_kube-system(a1c9f891-3697-476b-83d7-d7d21e81d397)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-85sqf_kube-system(a1c9f891-3697-476b-83d7-d7d21e81d397)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bce1630f9d644e8ed4a5e98ba60640642b31a96f4f8b813e5706feaa4d1d347f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-85sqf" podUID="a1c9f891-3697-476b-83d7-d7d21e81d397"
Nov 4 20:05:45.951476 containerd[1607]: time="2025-11-04T20:05:45.951400036Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-48stb,Uid:33f5b0fe-ea26-4e97-a06f-e2a58710cc60,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e195c9a5d23a594242ef81dc5151a50a47ca17e6cb5fa665a86f97146ae4ab4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 20:05:45.951621 kubelet[2763]: E1104 20:05:45.951549 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e195c9a5d23a594242ef81dc5151a50a47ca17e6cb5fa665a86f97146ae4ab4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 20:05:45.951621 kubelet[2763]: E1104 20:05:45.951589 2763 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e195c9a5d23a594242ef81dc5151a50a47ca17e6cb5fa665a86f97146ae4ab4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-48stb"
Nov 4 20:05:45.951621 kubelet[2763]: E1104 20:05:45.951605 2763 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e195c9a5d23a594242ef81dc5151a50a47ca17e6cb5fa665a86f97146ae4ab4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-48stb"
Nov 4 20:05:45.951724 kubelet[2763]: E1104 20:05:45.951648 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-48stb_kube-system(33f5b0fe-ea26-4e97-a06f-e2a58710cc60)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-48stb_kube-system(33f5b0fe-ea26-4e97-a06f-e2a58710cc60)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e195c9a5d23a594242ef81dc5151a50a47ca17e6cb5fa665a86f97146ae4ab4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-48stb" podUID="33f5b0fe-ea26-4e97-a06f-e2a58710cc60"
Nov 4 20:05:46.504139 systemd[1]: Created slice kubepods-besteffort-pod89d56747_162a_4c55_bf8f_ddfe11dc9e3a.slice - libcontainer container kubepods-besteffort-pod89d56747_162a_4c55_bf8f_ddfe11dc9e3a.slice.
Nov 4 20:05:46.506858 containerd[1607]: time="2025-11-04T20:05:46.506818637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgkc6,Uid:89d56747-162a-4c55-bf8f-ddfe11dc9e3a,Namespace:calico-system,Attempt:0,}"
Nov 4 20:05:46.574644 containerd[1607]: time="2025-11-04T20:05:46.574563855Z" level=error msg="Failed to destroy network for sandbox \"cc7002d5a26334d5a4f08641f7d6fa72d3c71d84074d264c4615b906f837e06f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 20:05:46.577138 systemd[1]: run-netns-cni\x2dc6b05699\x2d4604\x2da610\x2d567a\x2db02df62e8884.mount: Deactivated successfully.
Nov 4 20:05:46.579041 containerd[1607]: time="2025-11-04T20:05:46.578972544Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgkc6,Uid:89d56747-162a-4c55-bf8f-ddfe11dc9e3a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc7002d5a26334d5a4f08641f7d6fa72d3c71d84074d264c4615b906f837e06f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 20:05:46.579316 kubelet[2763]: E1104 20:05:46.579279 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc7002d5a26334d5a4f08641f7d6fa72d3c71d84074d264c4615b906f837e06f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 20:05:46.579373 kubelet[2763]: E1104 20:05:46.579345 2763 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc7002d5a26334d5a4f08641f7d6fa72d3c71d84074d264c4615b906f837e06f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lgkc6"
Nov 4 20:05:46.579373 kubelet[2763]: E1104 20:05:46.579366 2763 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc7002d5a26334d5a4f08641f7d6fa72d3c71d84074d264c4615b906f837e06f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lgkc6"
Nov 4 20:05:46.579463 kubelet[2763]: E1104 20:05:46.579432 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lgkc6_calico-system(89d56747-162a-4c55-bf8f-ddfe11dc9e3a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lgkc6_calico-system(89d56747-162a-4c55-bf8f-ddfe11dc9e3a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc7002d5a26334d5a4f08641f7d6fa72d3c71d84074d264c4615b906f837e06f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lgkc6" podUID="89d56747-162a-4c55-bf8f-ddfe11dc9e3a"
Nov 4 20:05:52.826260 kubelet[2763]: I1104 20:05:52.826215 2763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 4 20:05:52.827547 kubelet[2763]: E1104 20:05:52.827513 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 20:05:53.524036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3463566692.mount: Deactivated successfully.
Nov 4 20:05:53.605039 kubelet[2763]: E1104 20:05:53.604983 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 20:05:54.558530 containerd[1607]: time="2025-11-04T20:05:54.558456164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 20:05:54.583903 containerd[1607]: time="2025-11-04T20:05:54.583804713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025"
Nov 4 20:05:54.648319 containerd[1607]: time="2025-11-04T20:05:54.648262221Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 20:05:54.722459 containerd[1607]: time="2025-11-04T20:05:54.722392667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 20:05:54.722938 containerd[1607]: time="2025-11-04T20:05:54.722908814Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 9.136382203s"
Nov 4 20:05:54.722993 containerd[1607]: time="2025-11-04T20:05:54.722939472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\""
Nov 4 20:05:54.792839 containerd[1607]: time="2025-11-04T20:05:54.792766964Z" level=info msg="CreateContainer within sandbox \"147192972e08cbd11c1913cf977cd5668e78cc062c1aae63c932faef2c57046b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Nov 4 20:05:54.936257 containerd[1607]: time="2025-11-04T20:05:54.936197604Z" level=info msg="Container 4739404efd8c03e59a92b2ef9cd971e9b94df4719d3932ef0bc365be16094340: CDI devices from CRI Config.CDIDevices: []"
Nov 4 20:05:55.057473 containerd[1607]: time="2025-11-04T20:05:55.057425666Z" level=info msg="CreateContainer within sandbox \"147192972e08cbd11c1913cf977cd5668e78cc062c1aae63c932faef2c57046b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4739404efd8c03e59a92b2ef9cd971e9b94df4719d3932ef0bc365be16094340\""
Nov 4 20:05:55.058033 containerd[1607]: time="2025-11-04T20:05:55.057982690Z" level=info msg="StartContainer for \"4739404efd8c03e59a92b2ef9cd971e9b94df4719d3932ef0bc365be16094340\""
Nov 4 20:05:55.059641 containerd[1607]: time="2025-11-04T20:05:55.059616393Z" level=info msg="connecting to shim 4739404efd8c03e59a92b2ef9cd971e9b94df4719d3932ef0bc365be16094340" address="unix:///run/containerd/s/e90d05c6c24eb58b0b04d471936196bb8806f94745da60e3cc89fac3e8463b5b" protocol=ttrpc version=3
Nov 4 20:05:55.091175 systemd[1]: Started cri-containerd-4739404efd8c03e59a92b2ef9cd971e9b94df4719d3932ef0bc365be16094340.scope - libcontainer container 4739404efd8c03e59a92b2ef9cd971e9b94df4719d3932ef0bc365be16094340.
Nov 4 20:05:55.131292 systemd[1]: Started sshd@7-10.0.0.80:22-10.0.0.1:37522.service - OpenSSH per-connection server daemon (10.0.0.1:37522).
Nov 4 20:05:55.161350 containerd[1607]: time="2025-11-04T20:05:55.161285119Z" level=info msg="StartContainer for \"4739404efd8c03e59a92b2ef9cd971e9b94df4719d3932ef0bc365be16094340\" returns successfully"
Nov 4 20:05:55.211239 sshd[3873]: Accepted publickey for core from 10.0.0.1 port 37522 ssh2: RSA SHA256:FD/6wCOEAK2oumu7YKYZjG9k48hMKxx8xD/1LBz1+Eg
Nov 4 20:05:55.213512 sshd-session[3873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 20:05:55.220552 systemd-logind[1575]: New session 9 of user core.
Nov 4 20:05:55.227208 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 4 20:05:55.248486 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Nov 4 20:05:55.248688 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Nov 4 20:05:55.381763 sshd[3898]: Connection closed by 10.0.0.1 port 37522
Nov 4 20:05:55.384187 sshd-session[3873]: pam_unix(sshd:session): session closed for user core
Nov 4 20:05:55.388524 systemd[1]: sshd@7-10.0.0.80:22-10.0.0.1:37522.service: Deactivated successfully.
Nov 4 20:05:55.391193 systemd[1]: session-9.scope: Deactivated successfully.
Nov 4 20:05:55.393857 systemd-logind[1575]: Session 9 logged out. Waiting for processes to exit.
Nov 4 20:05:55.395329 systemd-logind[1575]: Removed session 9.
Nov 4 20:05:55.454564 kubelet[2763]: I1104 20:05:55.454515 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghgnm\" (UniqueName: \"kubernetes.io/projected/7a09dd2b-bdf2-470b-8c32-8b78634b3660-kube-api-access-ghgnm\") pod \"7a09dd2b-bdf2-470b-8c32-8b78634b3660\" (UID: \"7a09dd2b-bdf2-470b-8c32-8b78634b3660\") "
Nov 4 20:05:55.454564 kubelet[2763]: I1104 20:05:55.454569 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7a09dd2b-bdf2-470b-8c32-8b78634b3660-whisker-backend-key-pair\") pod \"7a09dd2b-bdf2-470b-8c32-8b78634b3660\" (UID: \"7a09dd2b-bdf2-470b-8c32-8b78634b3660\") "
Nov 4 20:05:55.455009 kubelet[2763]: I1104 20:05:55.454590 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a09dd2b-bdf2-470b-8c32-8b78634b3660-whisker-ca-bundle\") pod \"7a09dd2b-bdf2-470b-8c32-8b78634b3660\" (UID: \"7a09dd2b-bdf2-470b-8c32-8b78634b3660\") "
Nov 4 20:05:55.455579 kubelet[2763]: I1104 20:05:55.455558 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a09dd2b-bdf2-470b-8c32-8b78634b3660-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7a09dd2b-bdf2-470b-8c32-8b78634b3660" (UID: "7a09dd2b-bdf2-470b-8c32-8b78634b3660"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 4 20:05:55.458943 kubelet[2763]: I1104 20:05:55.458857 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a09dd2b-bdf2-470b-8c32-8b78634b3660-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7a09dd2b-bdf2-470b-8c32-8b78634b3660" (UID: "7a09dd2b-bdf2-470b-8c32-8b78634b3660"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Nov 4 20:05:55.459821 systemd[1]: var-lib-kubelet-pods-7a09dd2b\x2dbdf2\x2d470b\x2d8c32\x2d8b78634b3660-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dghgnm.mount: Deactivated successfully.
Nov 4 20:05:55.459966 systemd[1]: var-lib-kubelet-pods-7a09dd2b\x2dbdf2\x2d470b\x2d8c32\x2d8b78634b3660-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Nov 4 20:05:55.460774 kubelet[2763]: I1104 20:05:55.460731 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a09dd2b-bdf2-470b-8c32-8b78634b3660-kube-api-access-ghgnm" (OuterVolumeSpecName: "kube-api-access-ghgnm") pod "7a09dd2b-bdf2-470b-8c32-8b78634b3660" (UID: "7a09dd2b-bdf2-470b-8c32-8b78634b3660"). InnerVolumeSpecName "kube-api-access-ghgnm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 4 20:05:55.508271 systemd[1]: Removed slice kubepods-besteffort-pod7a09dd2b_bdf2_470b_8c32_8b78634b3660.slice - libcontainer container kubepods-besteffort-pod7a09dd2b_bdf2_470b_8c32_8b78634b3660.slice.
Nov 4 20:05:55.555160 kubelet[2763]: I1104 20:05:55.555111 2763 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ghgnm\" (UniqueName: \"kubernetes.io/projected/7a09dd2b-bdf2-470b-8c32-8b78634b3660-kube-api-access-ghgnm\") on node \"localhost\" DevicePath \"\""
Nov 4 20:05:55.555160 kubelet[2763]: I1104 20:05:55.555138 2763 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7a09dd2b-bdf2-470b-8c32-8b78634b3660-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\""
Nov 4 20:05:55.555160 kubelet[2763]: I1104 20:05:55.555147 2763 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a09dd2b-bdf2-470b-8c32-8b78634b3660-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\""
Nov 4 20:05:55.611414 kubelet[2763]: E1104 20:05:55.611377 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 20:05:55.648010 kubelet[2763]: I1104 20:05:55.647875 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5jh27" podStartSLOduration=2.257012188 podStartE2EDuration="22.647852727s" podCreationTimestamp="2025-11-04 20:05:33 +0000 UTC" firstStartedPulling="2025-11-04 20:05:34.332709618 +0000 UTC m=+18.927407191" lastFinishedPulling="2025-11-04 20:05:54.723550157 +0000 UTC m=+39.318247730" observedRunningTime="2025-11-04 20:05:55.647358891 +0000 UTC m=+40.242056474" watchObservedRunningTime="2025-11-04 20:05:55.647852727 +0000 UTC m=+40.242550310"
Nov 4 20:05:55.669117 systemd[1]: Created slice kubepods-besteffort-poddd2798b7_1698_43cb_8c9c_5b6836607d10.slice - libcontainer container kubepods-besteffort-poddd2798b7_1698_43cb_8c9c_5b6836607d10.slice.
Nov 4 20:05:55.757211 kubelet[2763]: I1104 20:05:55.757165 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd2798b7-1698-43cb-8c9c-5b6836607d10-whisker-ca-bundle\") pod \"whisker-65f9649fdf-5jqt6\" (UID: \"dd2798b7-1698-43cb-8c9c-5b6836607d10\") " pod="calico-system/whisker-65f9649fdf-5jqt6"
Nov 4 20:05:55.757211 kubelet[2763]: I1104 20:05:55.757211 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dd2798b7-1698-43cb-8c9c-5b6836607d10-whisker-backend-key-pair\") pod \"whisker-65f9649fdf-5jqt6\" (UID: \"dd2798b7-1698-43cb-8c9c-5b6836607d10\") " pod="calico-system/whisker-65f9649fdf-5jqt6"
Nov 4 20:05:55.757211 kubelet[2763]: I1104 20:05:55.757225 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhdd8\" (UniqueName: \"kubernetes.io/projected/dd2798b7-1698-43cb-8c9c-5b6836607d10-kube-api-access-qhdd8\") pod \"whisker-65f9649fdf-5jqt6\" (UID: \"dd2798b7-1698-43cb-8c9c-5b6836607d10\") " pod="calico-system/whisker-65f9649fdf-5jqt6"
Nov 4 20:05:55.972659 containerd[1607]: time="2025-11-04T20:05:55.972609905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65f9649fdf-5jqt6,Uid:dd2798b7-1698-43cb-8c9c-5b6836607d10,Namespace:calico-system,Attempt:0,}"
Nov 4 20:05:56.108490 systemd-networkd[1495]: calic6a7aa4917a: Link UP
Nov 4 20:05:56.108696 systemd-networkd[1495]: calic6a7aa4917a: Gained carrier
Nov 4 20:05:56.289819 containerd[1607]: 2025-11-04 20:05:55.993 [INFO][3938] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Nov 4 20:05:56.289819 containerd[1607]: 2025-11-04 20:05:56.009 [INFO][3938] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--65f9649fdf--5jqt6-eth0 whisker-65f9649fdf- calico-system dd2798b7-1698-43cb-8c9c-5b6836607d10 996 0 2025-11-04 20:05:55 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:65f9649fdf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-65f9649fdf-5jqt6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic6a7aa4917a [] [] }} ContainerID="32e2caec58615378a18cb1e0f472b635475b45eb7843a215f04334c874f256b7" Namespace="calico-system" Pod="whisker-65f9649fdf-5jqt6" WorkloadEndpoint="localhost-k8s-whisker--65f9649fdf--5jqt6-"
Nov 4 20:05:56.289819 containerd[1607]: 2025-11-04 20:05:56.009 [INFO][3938] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="32e2caec58615378a18cb1e0f472b635475b45eb7843a215f04334c874f256b7" Namespace="calico-system" Pod="whisker-65f9649fdf-5jqt6" WorkloadEndpoint="localhost-k8s-whisker--65f9649fdf--5jqt6-eth0"
Nov 4 20:05:56.289819 containerd[1607]: 2025-11-04 20:05:56.068 [INFO][3953] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32e2caec58615378a18cb1e0f472b635475b45eb7843a215f04334c874f256b7" HandleID="k8s-pod-network.32e2caec58615378a18cb1e0f472b635475b45eb7843a215f04334c874f256b7" Workload="localhost-k8s-whisker--65f9649fdf--5jqt6-eth0"
Nov 4 20:05:56.290094 containerd[1607]: 2025-11-04 20:05:56.068 [INFO][3953] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="32e2caec58615378a18cb1e0f472b635475b45eb7843a215f04334c874f256b7" HandleID="k8s-pod-network.32e2caec58615378a18cb1e0f472b635475b45eb7843a215f04334c874f256b7" Workload="localhost-k8s-whisker--65f9649fdf--5jqt6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000395ad0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-65f9649fdf-5jqt6", "timestamp":"2025-11-04 20:05:56.068235731 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 4 20:05:56.290094 containerd[1607]: 2025-11-04 20:05:56.068 [INFO][3953] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 4 20:05:56.290094 containerd[1607]: 2025-11-04 20:05:56.069 [INFO][3953] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 4 20:05:56.290094 containerd[1607]: 2025-11-04 20:05:56.069 [INFO][3953] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Nov 4 20:05:56.290094 containerd[1607]: 2025-11-04 20:05:56.075 [INFO][3953] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.32e2caec58615378a18cb1e0f472b635475b45eb7843a215f04334c874f256b7" host="localhost"
Nov 4 20:05:56.290094 containerd[1607]: 2025-11-04 20:05:56.080 [INFO][3953] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Nov 4 20:05:56.290094 containerd[1607]: 2025-11-04 20:05:56.083 [INFO][3953] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Nov 4 20:05:56.290094 containerd[1607]: 2025-11-04 20:05:56.084 [INFO][3953] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Nov 4 20:05:56.290094 containerd[1607]: 2025-11-04 20:05:56.086 [INFO][3953] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Nov 4 20:05:56.290094 containerd[1607]: 2025-11-04 20:05:56.086 [INFO][3953] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.32e2caec58615378a18cb1e0f472b635475b45eb7843a215f04334c874f256b7" host="localhost"
Nov 4 20:05:56.290503 containerd[1607]: 2025-11-04 20:05:56.088 [INFO][3953] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.32e2caec58615378a18cb1e0f472b635475b45eb7843a215f04334c874f256b7
Nov 4 20:05:56.290503 containerd[1607]: 2025-11-04 20:05:56.092 [INFO][3953] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.32e2caec58615378a18cb1e0f472b635475b45eb7843a215f04334c874f256b7" host="localhost"
Nov 4 20:05:56.290503 containerd[1607]: 2025-11-04 20:05:56.097 [INFO][3953] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.32e2caec58615378a18cb1e0f472b635475b45eb7843a215f04334c874f256b7" host="localhost"
Nov 4 20:05:56.290503 containerd[1607]: 2025-11-04 20:05:56.097 [INFO][3953] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.32e2caec58615378a18cb1e0f472b635475b45eb7843a215f04334c874f256b7" host="localhost"
Nov 4 20:05:56.290503 containerd[1607]: 2025-11-04 20:05:56.097 [INFO][3953] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 4 20:05:56.290503 containerd[1607]: 2025-11-04 20:05:56.097 [INFO][3953] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="32e2caec58615378a18cb1e0f472b635475b45eb7843a215f04334c874f256b7" HandleID="k8s-pod-network.32e2caec58615378a18cb1e0f472b635475b45eb7843a215f04334c874f256b7" Workload="localhost-k8s-whisker--65f9649fdf--5jqt6-eth0"
Nov 4 20:05:56.290626 containerd[1607]: 2025-11-04 20:05:56.100 [INFO][3938] cni-plugin/k8s.go 418: Populated endpoint ContainerID="32e2caec58615378a18cb1e0f472b635475b45eb7843a215f04334c874f256b7" Namespace="calico-system" Pod="whisker-65f9649fdf-5jqt6" WorkloadEndpoint="localhost-k8s-whisker--65f9649fdf--5jqt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--65f9649fdf--5jqt6-eth0", GenerateName:"whisker-65f9649fdf-", Namespace:"calico-system", SelfLink:"", UID:"dd2798b7-1698-43cb-8c9c-5b6836607d10", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 20, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65f9649fdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-65f9649fdf-5jqt6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic6a7aa4917a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 4 20:05:56.290626 containerd[1607]: 2025-11-04 20:05:56.101 [INFO][3938] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="32e2caec58615378a18cb1e0f472b635475b45eb7843a215f04334c874f256b7" Namespace="calico-system" Pod="whisker-65f9649fdf-5jqt6" WorkloadEndpoint="localhost-k8s-whisker--65f9649fdf--5jqt6-eth0"
Nov 4 20:05:56.290703 containerd[1607]: 2025-11-04 20:05:56.101 [INFO][3938] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6a7aa4917a ContainerID="32e2caec58615378a18cb1e0f472b635475b45eb7843a215f04334c874f256b7" Namespace="calico-system" Pod="whisker-65f9649fdf-5jqt6" WorkloadEndpoint="localhost-k8s-whisker--65f9649fdf--5jqt6-eth0"
Nov 4 20:05:56.290703 containerd[1607]: 2025-11-04 20:05:56.109 [INFO][3938] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32e2caec58615378a18cb1e0f472b635475b45eb7843a215f04334c874f256b7" Namespace="calico-system" Pod="whisker-65f9649fdf-5jqt6" WorkloadEndpoint="localhost-k8s-whisker--65f9649fdf--5jqt6-eth0"
Nov 4 20:05:56.290772 containerd[1607]: 2025-11-04 20:05:56.110 [INFO][3938] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="32e2caec58615378a18cb1e0f472b635475b45eb7843a215f04334c874f256b7" Namespace="calico-system" Pod="whisker-65f9649fdf-5jqt6" WorkloadEndpoint="localhost-k8s-whisker--65f9649fdf--5jqt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--65f9649fdf--5jqt6-eth0", GenerateName:"whisker-65f9649fdf-", Namespace:"calico-system", SelfLink:"", UID:"dd2798b7-1698-43cb-8c9c-5b6836607d10", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 20, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65f9649fdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"32e2caec58615378a18cb1e0f472b635475b45eb7843a215f04334c874f256b7", Pod:"whisker-65f9649fdf-5jqt6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic6a7aa4917a", MAC:"aa:4d:0b:34:6e:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 4 20:05:56.290822 containerd[1607]: 2025-11-04 20:05:56.286 [INFO][3938] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="32e2caec58615378a18cb1e0f472b635475b45eb7843a215f04334c874f256b7" Namespace="calico-system" Pod="whisker-65f9649fdf-5jqt6" WorkloadEndpoint="localhost-k8s-whisker--65f9649fdf--5jqt6-eth0"
Nov 4 20:05:56.430242 containerd[1607]: time="2025-11-04T20:05:56.430193879Z" level=info msg="connecting to shim 32e2caec58615378a18cb1e0f472b635475b45eb7843a215f04334c874f256b7" address="unix:///run/containerd/s/15f85dfe79aac12c32f9c5ef4bfde02a74ab0bc368aa830d37913429e94b8030" namespace=k8s.io protocol=ttrpc version=3
Nov 4 20:05:56.460194 systemd[1]: Started cri-containerd-32e2caec58615378a18cb1e0f472b635475b45eb7843a215f04334c874f256b7.scope - libcontainer container 32e2caec58615378a18cb1e0f472b635475b45eb7843a215f04334c874f256b7.
Nov 4 20:05:56.474146 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 4 20:05:56.499072 kubelet[2763]: E1104 20:05:56.498709 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 20:05:56.499529 containerd[1607]: time="2025-11-04T20:05:56.499223115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b4cd748b4-xzkvr,Uid:45ba7d0b-5883-4d92-9d1d-2bfad2cab22b,Namespace:calico-system,Attempt:0,}"
Nov 4 20:05:56.500209 containerd[1607]: time="2025-11-04T20:05:56.500163087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-85sqf,Uid:a1c9f891-3697-476b-83d7-d7d21e81d397,Namespace:kube-system,Attempt:0,}"
Nov 4 20:05:56.511259 containerd[1607]: time="2025-11-04T20:05:56.511214140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65f9649fdf-5jqt6,Uid:dd2798b7-1698-43cb-8c9c-5b6836607d10,Namespace:calico-system,Attempt:0,} returns sandbox id \"32e2caec58615378a18cb1e0f472b635475b45eb7843a215f04334c874f256b7\""
Nov 4 20:05:56.519974 containerd[1607]: time="2025-11-04T20:05:56.518731630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 4 20:05:56.623447 kubelet[2763]: I1104 20:05:56.623394 2763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 4 20:05:56.625274 kubelet[2763]: E1104 20:05:56.625233 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 20:05:56.719621 systemd-networkd[1495]: calie569260d9d6: Link UP
Nov 4 20:05:56.721257 systemd-networkd[1495]: calie569260d9d6: Gained carrier
Nov 4 20:05:56.738453 containerd[1607]: 2025-11-04 20:05:56.588 [INFO][4033] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Nov 4 20:05:56.738453 containerd[1607]: 2025-11-04 20:05:56.601 [INFO][4033] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--85sqf-eth0 coredns-674b8bbfcf- kube-system a1c9f891-3697-476b-83d7-d7d21e81d397 882 0 2025-11-04 20:05:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-85sqf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie569260d9d6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43" Namespace="kube-system" Pod="coredns-674b8bbfcf-85sqf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--85sqf-"
Nov 4 20:05:56.738453 containerd[1607]: 2025-11-04 20:05:56.601 [INFO][4033] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43" Namespace="kube-system" Pod="coredns-674b8bbfcf-85sqf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--85sqf-eth0"
Nov 4 20:05:56.738453 containerd[1607]: 2025-11-04 20:05:56.657 [INFO][4138] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43" HandleID="k8s-pod-network.ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43" Workload="localhost-k8s-coredns--674b8bbfcf--85sqf-eth0"
Nov 4 20:05:56.738683 containerd[1607]: 2025-11-04 20:05:56.658 [INFO][4138] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43" HandleID="k8s-pod-network.ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43" Workload="localhost-k8s-coredns--674b8bbfcf--85sqf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f760), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-85sqf", "timestamp":"2025-11-04 20:05:56.657327849 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 4 20:05:56.738683 containerd[1607]: 2025-11-04 20:05:56.658 [INFO][4138] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 4 20:05:56.738683 containerd[1607]: 2025-11-04 20:05:56.658 [INFO][4138] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 4 20:05:56.738683 containerd[1607]: 2025-11-04 20:05:56.658 [INFO][4138] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Nov 4 20:05:56.738683 containerd[1607]: 2025-11-04 20:05:56.670 [INFO][4138] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43" host="localhost"
Nov 4 20:05:56.738683 containerd[1607]: 2025-11-04 20:05:56.675 [INFO][4138] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Nov 4 20:05:56.738683 containerd[1607]: 2025-11-04 20:05:56.680 [INFO][4138] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Nov 4 20:05:56.738683 containerd[1607]: 2025-11-04 20:05:56.682 [INFO][4138] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Nov 4 20:05:56.738683 containerd[1607]: 2025-11-04 20:05:56.685 [INFO][4138] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Nov 4 20:05:56.738683 containerd[1607]: 2025-11-04 20:05:56.685 [INFO][4138] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43" host="localhost"
Nov 4 20:05:56.738909 containerd[1607]: 2025-11-04 20:05:56.688 [INFO][4138] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43
Nov 4 20:05:56.738909 containerd[1607]: 2025-11-04 20:05:56.701 [INFO][4138] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43" host="localhost"
Nov 4 20:05:56.738909 containerd[1607]: 2025-11-04 20:05:56.708 [INFO][4138] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26
handle="k8s-pod-network.ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43" host="localhost" Nov 4 20:05:56.738909 containerd[1607]: 2025-11-04 20:05:56.708 [INFO][4138] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43" host="localhost" Nov 4 20:05:56.738909 containerd[1607]: 2025-11-04 20:05:56.708 [INFO][4138] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 20:05:56.738909 containerd[1607]: 2025-11-04 20:05:56.708 [INFO][4138] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43" HandleID="k8s-pod-network.ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43" Workload="localhost-k8s-coredns--674b8bbfcf--85sqf-eth0" Nov 4 20:05:56.739050 containerd[1607]: 2025-11-04 20:05:56.711 [INFO][4033] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43" Namespace="kube-system" Pod="coredns-674b8bbfcf-85sqf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--85sqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--85sqf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a1c9f891-3697-476b-83d7-d7d21e81d397", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 20, 5, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-85sqf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie569260d9d6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 20:05:56.739117 containerd[1607]: 2025-11-04 20:05:56.712 [INFO][4033] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43" Namespace="kube-system" Pod="coredns-674b8bbfcf-85sqf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--85sqf-eth0" Nov 4 20:05:56.739117 containerd[1607]: 2025-11-04 20:05:56.712 [INFO][4033] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie569260d9d6 ContainerID="ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43" Namespace="kube-system" Pod="coredns-674b8bbfcf-85sqf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--85sqf-eth0" Nov 4 20:05:56.739117 containerd[1607]: 2025-11-04 20:05:56.722 [INFO][4033] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43" Namespace="kube-system" Pod="coredns-674b8bbfcf-85sqf" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--85sqf-eth0" Nov 4 20:05:56.739201 containerd[1607]: 2025-11-04 20:05:56.724 [INFO][4033] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43" Namespace="kube-system" Pod="coredns-674b8bbfcf-85sqf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--85sqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--85sqf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a1c9f891-3697-476b-83d7-d7d21e81d397", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 20, 5, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43", Pod:"coredns-674b8bbfcf-85sqf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie569260d9d6", MAC:"9e:52:d4:c6:26:95", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 20:05:56.739201 containerd[1607]: 2025-11-04 20:05:56.732 [INFO][4033] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43" Namespace="kube-system" Pod="coredns-674b8bbfcf-85sqf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--85sqf-eth0" Nov 4 20:05:56.775434 containerd[1607]: time="2025-11-04T20:05:56.775317745Z" level=info msg="connecting to shim ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43" address="unix:///run/containerd/s/7610e7f5fb359c4172599cf9045839971ca3f6a8f6af764f19d394b2f2526f69" namespace=k8s.io protocol=ttrpc version=3 Nov 4 20:05:56.814322 systemd[1]: Started cri-containerd-ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43.scope - libcontainer container ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43. 
Nov 4 20:05:56.828498 systemd-networkd[1495]: calie048da87462: Link UP Nov 4 20:05:56.828709 systemd-networkd[1495]: calie048da87462: Gained carrier Nov 4 20:05:56.834543 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 20:05:56.845853 containerd[1607]: 2025-11-04 20:05:56.566 [INFO][4034] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 20:05:56.845853 containerd[1607]: 2025-11-04 20:05:56.587 [INFO][4034] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5b4cd748b4--xzkvr-eth0 calico-kube-controllers-5b4cd748b4- calico-system 45ba7d0b-5883-4d92-9d1d-2bfad2cab22b 887 0 2025-11-04 20:05:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5b4cd748b4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5b4cd748b4-xzkvr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie048da87462 [] [] }} ContainerID="2d53647e9d9c00778d6cd0c09d16e5f3be9ac125dbfcdb1c65e1c6120c936e9d" Namespace="calico-system" Pod="calico-kube-controllers-5b4cd748b4-xzkvr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b4cd748b4--xzkvr-" Nov 4 20:05:56.845853 containerd[1607]: 2025-11-04 20:05:56.587 [INFO][4034] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2d53647e9d9c00778d6cd0c09d16e5f3be9ac125dbfcdb1c65e1c6120c936e9d" Namespace="calico-system" Pod="calico-kube-controllers-5b4cd748b4-xzkvr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b4cd748b4--xzkvr-eth0" Nov 4 20:05:56.845853 containerd[1607]: 2025-11-04 20:05:56.710 [INFO][4130] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="2d53647e9d9c00778d6cd0c09d16e5f3be9ac125dbfcdb1c65e1c6120c936e9d" HandleID="k8s-pod-network.2d53647e9d9c00778d6cd0c09d16e5f3be9ac125dbfcdb1c65e1c6120c936e9d" Workload="localhost-k8s-calico--kube--controllers--5b4cd748b4--xzkvr-eth0" Nov 4 20:05:56.845853 containerd[1607]: 2025-11-04 20:05:56.711 [INFO][4130] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2d53647e9d9c00778d6cd0c09d16e5f3be9ac125dbfcdb1c65e1c6120c936e9d" HandleID="k8s-pod-network.2d53647e9d9c00778d6cd0c09d16e5f3be9ac125dbfcdb1c65e1c6120c936e9d" Workload="localhost-k8s-calico--kube--controllers--5b4cd748b4--xzkvr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000302940), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5b4cd748b4-xzkvr", "timestamp":"2025-11-04 20:05:56.710004674 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 20:05:56.845853 containerd[1607]: 2025-11-04 20:05:56.711 [INFO][4130] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 20:05:56.845853 containerd[1607]: 2025-11-04 20:05:56.711 [INFO][4130] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 20:05:56.845853 containerd[1607]: 2025-11-04 20:05:56.711 [INFO][4130] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 20:05:56.845853 containerd[1607]: 2025-11-04 20:05:56.774 [INFO][4130] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2d53647e9d9c00778d6cd0c09d16e5f3be9ac125dbfcdb1c65e1c6120c936e9d" host="localhost" Nov 4 20:05:56.845853 containerd[1607]: 2025-11-04 20:05:56.780 [INFO][4130] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 20:05:56.845853 containerd[1607]: 2025-11-04 20:05:56.790 [INFO][4130] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 20:05:56.845853 containerd[1607]: 2025-11-04 20:05:56.792 [INFO][4130] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 20:05:56.845853 containerd[1607]: 2025-11-04 20:05:56.797 [INFO][4130] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 20:05:56.845853 containerd[1607]: 2025-11-04 20:05:56.797 [INFO][4130] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2d53647e9d9c00778d6cd0c09d16e5f3be9ac125dbfcdb1c65e1c6120c936e9d" host="localhost" Nov 4 20:05:56.845853 containerd[1607]: 2025-11-04 20:05:56.801 [INFO][4130] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2d53647e9d9c00778d6cd0c09d16e5f3be9ac125dbfcdb1c65e1c6120c936e9d Nov 4 20:05:56.845853 containerd[1607]: 2025-11-04 20:05:56.807 [INFO][4130] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2d53647e9d9c00778d6cd0c09d16e5f3be9ac125dbfcdb1c65e1c6120c936e9d" host="localhost" Nov 4 20:05:56.845853 containerd[1607]: 2025-11-04 20:05:56.816 [INFO][4130] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.2d53647e9d9c00778d6cd0c09d16e5f3be9ac125dbfcdb1c65e1c6120c936e9d" host="localhost" Nov 4 20:05:56.845853 containerd[1607]: 2025-11-04 20:05:56.818 [INFO][4130] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2d53647e9d9c00778d6cd0c09d16e5f3be9ac125dbfcdb1c65e1c6120c936e9d" host="localhost" Nov 4 20:05:56.845853 containerd[1607]: 2025-11-04 20:05:56.819 [INFO][4130] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 20:05:56.845853 containerd[1607]: 2025-11-04 20:05:56.821 [INFO][4130] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2d53647e9d9c00778d6cd0c09d16e5f3be9ac125dbfcdb1c65e1c6120c936e9d" HandleID="k8s-pod-network.2d53647e9d9c00778d6cd0c09d16e5f3be9ac125dbfcdb1c65e1c6120c936e9d" Workload="localhost-k8s-calico--kube--controllers--5b4cd748b4--xzkvr-eth0" Nov 4 20:05:56.846806 containerd[1607]: 2025-11-04 20:05:56.824 [INFO][4034] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2d53647e9d9c00778d6cd0c09d16e5f3be9ac125dbfcdb1c65e1c6120c936e9d" Namespace="calico-system" Pod="calico-kube-controllers-5b4cd748b4-xzkvr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b4cd748b4--xzkvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b4cd748b4--xzkvr-eth0", GenerateName:"calico-kube-controllers-5b4cd748b4-", Namespace:"calico-system", SelfLink:"", UID:"45ba7d0b-5883-4d92-9d1d-2bfad2cab22b", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 20, 5, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b4cd748b4", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5b4cd748b4-xzkvr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie048da87462", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 20:05:56.846806 containerd[1607]: 2025-11-04 20:05:56.825 [INFO][4034] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2d53647e9d9c00778d6cd0c09d16e5f3be9ac125dbfcdb1c65e1c6120c936e9d" Namespace="calico-system" Pod="calico-kube-controllers-5b4cd748b4-xzkvr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b4cd748b4--xzkvr-eth0" Nov 4 20:05:56.846806 containerd[1607]: 2025-11-04 20:05:56.825 [INFO][4034] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie048da87462 ContainerID="2d53647e9d9c00778d6cd0c09d16e5f3be9ac125dbfcdb1c65e1c6120c936e9d" Namespace="calico-system" Pod="calico-kube-controllers-5b4cd748b4-xzkvr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b4cd748b4--xzkvr-eth0" Nov 4 20:05:56.846806 containerd[1607]: 2025-11-04 20:05:56.828 [INFO][4034] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d53647e9d9c00778d6cd0c09d16e5f3be9ac125dbfcdb1c65e1c6120c936e9d" Namespace="calico-system" Pod="calico-kube-controllers-5b4cd748b4-xzkvr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b4cd748b4--xzkvr-eth0" Nov 4 20:05:56.846806 containerd[1607]: 2025-11-04 
20:05:56.829 [INFO][4034] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2d53647e9d9c00778d6cd0c09d16e5f3be9ac125dbfcdb1c65e1c6120c936e9d" Namespace="calico-system" Pod="calico-kube-controllers-5b4cd748b4-xzkvr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b4cd748b4--xzkvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b4cd748b4--xzkvr-eth0", GenerateName:"calico-kube-controllers-5b4cd748b4-", Namespace:"calico-system", SelfLink:"", UID:"45ba7d0b-5883-4d92-9d1d-2bfad2cab22b", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 20, 5, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b4cd748b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d53647e9d9c00778d6cd0c09d16e5f3be9ac125dbfcdb1c65e1c6120c936e9d", Pod:"calico-kube-controllers-5b4cd748b4-xzkvr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie048da87462", MAC:"be:50:5f:72:b3:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 20:05:56.846806 containerd[1607]: 2025-11-04 
20:05:56.841 [INFO][4034] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2d53647e9d9c00778d6cd0c09d16e5f3be9ac125dbfcdb1c65e1c6120c936e9d" Namespace="calico-system" Pod="calico-kube-controllers-5b4cd748b4-xzkvr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b4cd748b4--xzkvr-eth0" Nov 4 20:05:56.873327 containerd[1607]: time="2025-11-04T20:05:56.873252839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-85sqf,Uid:a1c9f891-3697-476b-83d7-d7d21e81d397,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43\"" Nov 4 20:05:56.874418 kubelet[2763]: E1104 20:05:56.874151 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:56.878406 containerd[1607]: time="2025-11-04T20:05:56.878328803Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 20:05:56.879858 containerd[1607]: time="2025-11-04T20:05:56.879805712Z" level=info msg="connecting to shim 2d53647e9d9c00778d6cd0c09d16e5f3be9ac125dbfcdb1c65e1c6120c936e9d" address="unix:///run/containerd/s/91ed55b97d574cc3c31181d069768aedd94d62bae2e2dd1e06a5ac23a1ce4c7b" namespace=k8s.io protocol=ttrpc version=3 Nov 4 20:05:56.880351 containerd[1607]: time="2025-11-04T20:05:56.880304566Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 20:05:56.880403 containerd[1607]: time="2025-11-04T20:05:56.880361994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 4 20:05:56.881295 kubelet[2763]: E1104 20:05:56.881089 2763 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 20:05:56.881295 kubelet[2763]: E1104 20:05:56.881129 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 20:05:56.882840 containerd[1607]: time="2025-11-04T20:05:56.882807879Z" level=info msg="CreateContainer within sandbox \"ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 20:05:56.886702 kubelet[2763]: E1104 20:05:56.886651 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:420df1ec3eda4f75af4e03716b882180,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qhdd8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAs
NonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65f9649fdf-5jqt6_calico-system(dd2798b7-1698-43cb-8c9c-5b6836607d10): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 20:05:56.889209 containerd[1607]: time="2025-11-04T20:05:56.889173500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 20:05:56.906996 containerd[1607]: time="2025-11-04T20:05:56.906917499Z" level=info msg="Container baa54070d9f30999b2c1b9e970afbd1cb779344c6eabeab36b4636ef31db37ee: CDI devices from CRI Config.CDIDevices: []" Nov 4 20:05:56.926330 systemd[1]: Started cri-containerd-2d53647e9d9c00778d6cd0c09d16e5f3be9ac125dbfcdb1c65e1c6120c936e9d.scope - libcontainer container 2d53647e9d9c00778d6cd0c09d16e5f3be9ac125dbfcdb1c65e1c6120c936e9d. 
Nov 4 20:05:56.934754 containerd[1607]: time="2025-11-04T20:05:56.933753029Z" level=info msg="CreateContainer within sandbox \"ca733c7ba531f5cddbacae91fa43c3b84fd9d02a23af378886b2759369f59a43\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"baa54070d9f30999b2c1b9e970afbd1cb779344c6eabeab36b4636ef31db37ee\"" Nov 4 20:05:56.935857 containerd[1607]: time="2025-11-04T20:05:56.935803963Z" level=info msg="StartContainer for \"baa54070d9f30999b2c1b9e970afbd1cb779344c6eabeab36b4636ef31db37ee\"" Nov 4 20:05:56.937524 containerd[1607]: time="2025-11-04T20:05:56.937472279Z" level=info msg="connecting to shim baa54070d9f30999b2c1b9e970afbd1cb779344c6eabeab36b4636ef31db37ee" address="unix:///run/containerd/s/7610e7f5fb359c4172599cf9045839971ca3f6a8f6af764f19d394b2f2526f69" protocol=ttrpc version=3 Nov 4 20:05:56.950982 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 20:05:56.965257 systemd[1]: Started cri-containerd-baa54070d9f30999b2c1b9e970afbd1cb779344c6eabeab36b4636ef31db37ee.scope - libcontainer container baa54070d9f30999b2c1b9e970afbd1cb779344c6eabeab36b4636ef31db37ee. 
Nov 4 20:05:57.009336 containerd[1607]: time="2025-11-04T20:05:57.009290564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b4cd748b4-xzkvr,Uid:45ba7d0b-5883-4d92-9d1d-2bfad2cab22b,Namespace:calico-system,Attempt:0,} returns sandbox id \"2d53647e9d9c00778d6cd0c09d16e5f3be9ac125dbfcdb1c65e1c6120c936e9d\"" Nov 4 20:05:57.009722 containerd[1607]: time="2025-11-04T20:05:57.009533969Z" level=info msg="StartContainer for \"baa54070d9f30999b2c1b9e970afbd1cb779344c6eabeab36b4636ef31db37ee\" returns successfully" Nov 4 20:05:57.163855 systemd-networkd[1495]: vxlan.calico: Link UP Nov 4 20:05:57.163864 systemd-networkd[1495]: vxlan.calico: Gained carrier Nov 4 20:05:57.309334 containerd[1607]: time="2025-11-04T20:05:57.309267115Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 20:05:57.313464 containerd[1607]: time="2025-11-04T20:05:57.313383771Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 20:05:57.313464 containerd[1607]: time="2025-11-04T20:05:57.313445056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 4 20:05:57.313725 kubelet[2763]: E1104 20:05:57.313658 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 20:05:57.313725 kubelet[2763]: E1104 20:05:57.313711 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 20:05:57.314161 containerd[1607]: time="2025-11-04T20:05:57.314127756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 20:05:57.314211 kubelet[2763]: E1104 20:05:57.314137 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qhdd8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&Secco
mpProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65f9649fdf-5jqt6_calico-system(dd2798b7-1698-43cb-8c9c-5b6836607d10): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 20:05:57.315541 kubelet[2763]: E1104 20:05:57.315491 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65f9649fdf-5jqt6" podUID="dd2798b7-1698-43cb-8c9c-5b6836607d10" Nov 4 20:05:57.499521 containerd[1607]: time="2025-11-04T20:05:57.499378233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgkc6,Uid:89d56747-162a-4c55-bf8f-ddfe11dc9e3a,Namespace:calico-system,Attempt:0,}" Nov 4 20:05:57.501614 kubelet[2763]: I1104 20:05:57.501581 2763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a09dd2b-bdf2-470b-8c32-8b78634b3660" path="/var/lib/kubelet/pods/7a09dd2b-bdf2-470b-8c32-8b78634b3660/volumes" Nov 4 20:05:57.604312 systemd-networkd[1495]: calibc0b8483f76: Link UP Nov 4 20:05:57.604514 systemd-networkd[1495]: calibc0b8483f76: Gained carrier 
Nov 4 20:05:57.615708 containerd[1607]: 2025-11-04 20:05:57.539 [INFO][4399] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--lgkc6-eth0 csi-node-driver- calico-system 89d56747-162a-4c55-bf8f-ddfe11dc9e3a 768 0 2025-11-04 20:05:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-lgkc6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibc0b8483f76 [] [] }} ContainerID="14a6de49f47bf7087175c5dbc311058f5dd036ce551c9e57ac1e02d429160637" Namespace="calico-system" Pod="csi-node-driver-lgkc6" WorkloadEndpoint="localhost-k8s-csi--node--driver--lgkc6-" Nov 4 20:05:57.615708 containerd[1607]: 2025-11-04 20:05:57.539 [INFO][4399] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="14a6de49f47bf7087175c5dbc311058f5dd036ce551c9e57ac1e02d429160637" Namespace="calico-system" Pod="csi-node-driver-lgkc6" WorkloadEndpoint="localhost-k8s-csi--node--driver--lgkc6-eth0" Nov 4 20:05:57.615708 containerd[1607]: 2025-11-04 20:05:57.564 [INFO][4415] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14a6de49f47bf7087175c5dbc311058f5dd036ce551c9e57ac1e02d429160637" HandleID="k8s-pod-network.14a6de49f47bf7087175c5dbc311058f5dd036ce551c9e57ac1e02d429160637" Workload="localhost-k8s-csi--node--driver--lgkc6-eth0" Nov 4 20:05:57.615708 containerd[1607]: 2025-11-04 20:05:57.565 [INFO][4415] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="14a6de49f47bf7087175c5dbc311058f5dd036ce551c9e57ac1e02d429160637" HandleID="k8s-pod-network.14a6de49f47bf7087175c5dbc311058f5dd036ce551c9e57ac1e02d429160637" 
Workload="localhost-k8s-csi--node--driver--lgkc6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003cd590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-lgkc6", "timestamp":"2025-11-04 20:05:57.564937969 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 20:05:57.615708 containerd[1607]: 2025-11-04 20:05:57.565 [INFO][4415] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 20:05:57.615708 containerd[1607]: 2025-11-04 20:05:57.565 [INFO][4415] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 20:05:57.615708 containerd[1607]: 2025-11-04 20:05:57.565 [INFO][4415] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 20:05:57.615708 containerd[1607]: 2025-11-04 20:05:57.571 [INFO][4415] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.14a6de49f47bf7087175c5dbc311058f5dd036ce551c9e57ac1e02d429160637" host="localhost" Nov 4 20:05:57.615708 containerd[1607]: 2025-11-04 20:05:57.579 [INFO][4415] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 20:05:57.615708 containerd[1607]: 2025-11-04 20:05:57.583 [INFO][4415] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 20:05:57.615708 containerd[1607]: 2025-11-04 20:05:57.584 [INFO][4415] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 20:05:57.615708 containerd[1607]: 2025-11-04 20:05:57.586 [INFO][4415] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 20:05:57.615708 containerd[1607]: 2025-11-04 20:05:57.586 [INFO][4415] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.14a6de49f47bf7087175c5dbc311058f5dd036ce551c9e57ac1e02d429160637" host="localhost" Nov 4 20:05:57.615708 containerd[1607]: 2025-11-04 20:05:57.588 [INFO][4415] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.14a6de49f47bf7087175c5dbc311058f5dd036ce551c9e57ac1e02d429160637 Nov 4 20:05:57.615708 containerd[1607]: 2025-11-04 20:05:57.591 [INFO][4415] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.14a6de49f47bf7087175c5dbc311058f5dd036ce551c9e57ac1e02d429160637" host="localhost" Nov 4 20:05:57.615708 containerd[1607]: 2025-11-04 20:05:57.597 [INFO][4415] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.14a6de49f47bf7087175c5dbc311058f5dd036ce551c9e57ac1e02d429160637" host="localhost" Nov 4 20:05:57.615708 containerd[1607]: 2025-11-04 20:05:57.597 [INFO][4415] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.14a6de49f47bf7087175c5dbc311058f5dd036ce551c9e57ac1e02d429160637" host="localhost" Nov 4 20:05:57.615708 containerd[1607]: 2025-11-04 20:05:57.598 [INFO][4415] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 20:05:57.615708 containerd[1607]: 2025-11-04 20:05:57.598 [INFO][4415] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="14a6de49f47bf7087175c5dbc311058f5dd036ce551c9e57ac1e02d429160637" HandleID="k8s-pod-network.14a6de49f47bf7087175c5dbc311058f5dd036ce551c9e57ac1e02d429160637" Workload="localhost-k8s-csi--node--driver--lgkc6-eth0" Nov 4 20:05:57.616421 containerd[1607]: 2025-11-04 20:05:57.601 [INFO][4399] cni-plugin/k8s.go 418: Populated endpoint ContainerID="14a6de49f47bf7087175c5dbc311058f5dd036ce551c9e57ac1e02d429160637" Namespace="calico-system" Pod="csi-node-driver-lgkc6" WorkloadEndpoint="localhost-k8s-csi--node--driver--lgkc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lgkc6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"89d56747-162a-4c55-bf8f-ddfe11dc9e3a", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 20, 5, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-lgkc6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"calibc0b8483f76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 20:05:57.616421 containerd[1607]: 2025-11-04 20:05:57.601 [INFO][4399] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="14a6de49f47bf7087175c5dbc311058f5dd036ce551c9e57ac1e02d429160637" Namespace="calico-system" Pod="csi-node-driver-lgkc6" WorkloadEndpoint="localhost-k8s-csi--node--driver--lgkc6-eth0" Nov 4 20:05:57.616421 containerd[1607]: 2025-11-04 20:05:57.601 [INFO][4399] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc0b8483f76 ContainerID="14a6de49f47bf7087175c5dbc311058f5dd036ce551c9e57ac1e02d429160637" Namespace="calico-system" Pod="csi-node-driver-lgkc6" WorkloadEndpoint="localhost-k8s-csi--node--driver--lgkc6-eth0" Nov 4 20:05:57.616421 containerd[1607]: 2025-11-04 20:05:57.603 [INFO][4399] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14a6de49f47bf7087175c5dbc311058f5dd036ce551c9e57ac1e02d429160637" Namespace="calico-system" Pod="csi-node-driver-lgkc6" WorkloadEndpoint="localhost-k8s-csi--node--driver--lgkc6-eth0" Nov 4 20:05:57.616421 containerd[1607]: 2025-11-04 20:05:57.603 [INFO][4399] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="14a6de49f47bf7087175c5dbc311058f5dd036ce551c9e57ac1e02d429160637" Namespace="calico-system" Pod="csi-node-driver-lgkc6" WorkloadEndpoint="localhost-k8s-csi--node--driver--lgkc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lgkc6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"89d56747-162a-4c55-bf8f-ddfe11dc9e3a", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 20, 5, 34, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14a6de49f47bf7087175c5dbc311058f5dd036ce551c9e57ac1e02d429160637", Pod:"csi-node-driver-lgkc6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibc0b8483f76", MAC:"7e:cf:8a:cb:40:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 20:05:57.616421 containerd[1607]: 2025-11-04 20:05:57.612 [INFO][4399] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="14a6de49f47bf7087175c5dbc311058f5dd036ce551c9e57ac1e02d429160637" Namespace="calico-system" Pod="csi-node-driver-lgkc6" WorkloadEndpoint="localhost-k8s-csi--node--driver--lgkc6-eth0" Nov 4 20:05:57.626325 kubelet[2763]: E1104 20:05:57.626185 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:57.631653 kubelet[2763]: E1104 20:05:57.631601 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65f9649fdf-5jqt6" podUID="dd2798b7-1698-43cb-8c9c-5b6836607d10" Nov 4 20:05:57.640653 kubelet[2763]: I1104 20:05:57.640536 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-85sqf" podStartSLOduration=36.640500815 podStartE2EDuration="36.640500815s" podCreationTimestamp="2025-11-04 20:05:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 20:05:57.640323463 +0000 UTC m=+42.235021046" watchObservedRunningTime="2025-11-04 20:05:57.640500815 +0000 UTC m=+42.235198398" Nov 4 20:05:57.645575 containerd[1607]: time="2025-11-04T20:05:57.645502280Z" level=info msg="connecting to shim 14a6de49f47bf7087175c5dbc311058f5dd036ce551c9e57ac1e02d429160637" address="unix:///run/containerd/s/1c99c4b353e8774e96dc3274608368079f1d40de741b6646c687c2ebbf1d6100" namespace=k8s.io protocol=ttrpc version=3 Nov 4 20:05:57.650285 containerd[1607]: time="2025-11-04T20:05:57.650229531Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 20:05:57.652132 containerd[1607]: time="2025-11-04T20:05:57.652059631Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 20:05:57.652244 containerd[1607]: time="2025-11-04T20:05:57.652071864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 4 20:05:57.652510 kubelet[2763]: E1104 20:05:57.652453 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 20:05:57.652556 kubelet[2763]: E1104 20:05:57.652509 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 20:05:57.652826 kubelet[2763]: E1104 20:05:57.652646 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9j78q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5b4cd748b4-xzkvr_calico-system(45ba7d0b-5883-4d92-9d1d-2bfad2cab22b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 20:05:57.656546 kubelet[2763]: E1104 20:05:57.656485 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b4cd748b4-xzkvr" podUID="45ba7d0b-5883-4d92-9d1d-2bfad2cab22b" Nov 4 20:05:57.684506 systemd[1]: Started cri-containerd-14a6de49f47bf7087175c5dbc311058f5dd036ce551c9e57ac1e02d429160637.scope - libcontainer container 14a6de49f47bf7087175c5dbc311058f5dd036ce551c9e57ac1e02d429160637. 
Nov 4 20:05:57.700186 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 20:05:57.719525 containerd[1607]: time="2025-11-04T20:05:57.719468965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgkc6,Uid:89d56747-162a-4c55-bf8f-ddfe11dc9e3a,Namespace:calico-system,Attempt:0,} returns sandbox id \"14a6de49f47bf7087175c5dbc311058f5dd036ce551c9e57ac1e02d429160637\"" Nov 4 20:05:57.721218 containerd[1607]: time="2025-11-04T20:05:57.721170774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 20:05:58.019176 systemd-networkd[1495]: calie569260d9d6: Gained IPv6LL Nov 4 20:05:58.019551 systemd-networkd[1495]: calie048da87462: Gained IPv6LL Nov 4 20:05:58.078756 containerd[1607]: time="2025-11-04T20:05:58.078680192Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 20:05:58.079819 containerd[1607]: time="2025-11-04T20:05:58.079782558Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 20:05:58.079888 containerd[1607]: time="2025-11-04T20:05:58.079868940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 4 20:05:58.080095 kubelet[2763]: E1104 20:05:58.080056 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 20:05:58.080147 kubelet[2763]: E1104 20:05:58.080108 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 20:05:58.080306 kubelet[2763]: E1104 20:05:58.080263 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-trjnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePo
licy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lgkc6_calico-system(89d56747-162a-4c55-bf8f-ddfe11dc9e3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 20:05:58.082428 containerd[1607]: time="2025-11-04T20:05:58.082349709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 20:05:58.084162 systemd-networkd[1495]: calic6a7aa4917a: Gained IPv6LL Nov 4 20:05:58.419651 containerd[1607]: time="2025-11-04T20:05:58.419585029Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 20:05:58.421120 containerd[1607]: time="2025-11-04T20:05:58.421085902Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 20:05:58.421193 containerd[1607]: time="2025-11-04T20:05:58.421138982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 4 20:05:58.421368 kubelet[2763]: E1104 20:05:58.421322 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 20:05:58.421424 kubelet[2763]: E1104 20:05:58.421380 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 20:05:58.421582 kubelet[2763]: E1104 20:05:58.421527 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-trjnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Volu
meDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lgkc6_calico-system(89d56747-162a-4c55-bf8f-ddfe11dc9e3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 20:05:58.422824 kubelet[2763]: E1104 20:05:58.422734 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lgkc6" podUID="89d56747-162a-4c55-bf8f-ddfe11dc9e3a" Nov 4 20:05:58.499048 kubelet[2763]: E1104 20:05:58.498812 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:58.499197 containerd[1607]: time="2025-11-04T20:05:58.499093245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bf8445db8-vc5nj,Uid:c04ecbad-d5c2-43d0-b16a-235b2a29a278,Namespace:calico-apiserver,Attempt:0,}" Nov 4 20:05:58.499422 containerd[1607]: time="2025-11-04T20:05:58.499369082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-48stb,Uid:33f5b0fe-ea26-4e97-a06f-e2a58710cc60,Namespace:kube-system,Attempt:0,}" Nov 4 20:05:58.499507 containerd[1607]: 
time="2025-11-04T20:05:58.499478678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bf8445db8-2sm8f,Uid:a8754e25-0820-405c-8ad6-8e109ea21a48,Namespace:calico-apiserver,Attempt:0,}" Nov 4 20:05:58.633999 kubelet[2763]: E1104 20:05:58.633926 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:58.641041 kubelet[2763]: E1104 20:05:58.639909 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b4cd748b4-xzkvr" podUID="45ba7d0b-5883-4d92-9d1d-2bfad2cab22b" Nov 4 20:05:58.646327 kubelet[2763]: E1104 20:05:58.645205 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lgkc6" 
podUID="89d56747-162a-4c55-bf8f-ddfe11dc9e3a" Nov 4 20:05:58.660141 systemd-networkd[1495]: cali2457c3c6fc7: Link UP Nov 4 20:05:58.660425 systemd-networkd[1495]: cali2457c3c6fc7: Gained carrier Nov 4 20:05:58.682407 containerd[1607]: 2025-11-04 20:05:58.561 [INFO][4493] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5bf8445db8--2sm8f-eth0 calico-apiserver-5bf8445db8- calico-apiserver a8754e25-0820-405c-8ad6-8e109ea21a48 885 0 2025-11-04 20:05:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bf8445db8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5bf8445db8-2sm8f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2457c3c6fc7 [] [] }} ContainerID="0aa797549a11155d4381817894cd1bc6cca66d01a678f87e03ced86cb2a5b69b" Namespace="calico-apiserver" Pod="calico-apiserver-5bf8445db8-2sm8f" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bf8445db8--2sm8f-" Nov 4 20:05:58.682407 containerd[1607]: 2025-11-04 20:05:58.561 [INFO][4493] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0aa797549a11155d4381817894cd1bc6cca66d01a678f87e03ced86cb2a5b69b" Namespace="calico-apiserver" Pod="calico-apiserver-5bf8445db8-2sm8f" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bf8445db8--2sm8f-eth0" Nov 4 20:05:58.682407 containerd[1607]: 2025-11-04 20:05:58.602 [INFO][4528] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0aa797549a11155d4381817894cd1bc6cca66d01a678f87e03ced86cb2a5b69b" HandleID="k8s-pod-network.0aa797549a11155d4381817894cd1bc6cca66d01a678f87e03ced86cb2a5b69b" Workload="localhost-k8s-calico--apiserver--5bf8445db8--2sm8f-eth0" Nov 4 20:05:58.682407 containerd[1607]: 2025-11-04 20:05:58.602 
[INFO][4528] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0aa797549a11155d4381817894cd1bc6cca66d01a678f87e03ced86cb2a5b69b" HandleID="k8s-pod-network.0aa797549a11155d4381817894cd1bc6cca66d01a678f87e03ced86cb2a5b69b" Workload="localhost-k8s-calico--apiserver--5bf8445db8--2sm8f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f450), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5bf8445db8-2sm8f", "timestamp":"2025-11-04 20:05:58.602327376 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 20:05:58.682407 containerd[1607]: 2025-11-04 20:05:58.602 [INFO][4528] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 20:05:58.682407 containerd[1607]: 2025-11-04 20:05:58.602 [INFO][4528] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 20:05:58.682407 containerd[1607]: 2025-11-04 20:05:58.602 [INFO][4528] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 20:05:58.682407 containerd[1607]: 2025-11-04 20:05:58.608 [INFO][4528] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0aa797549a11155d4381817894cd1bc6cca66d01a678f87e03ced86cb2a5b69b" host="localhost" Nov 4 20:05:58.682407 containerd[1607]: 2025-11-04 20:05:58.612 [INFO][4528] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 20:05:58.682407 containerd[1607]: 2025-11-04 20:05:58.615 [INFO][4528] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 20:05:58.682407 containerd[1607]: 2025-11-04 20:05:58.617 [INFO][4528] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 20:05:58.682407 containerd[1607]: 2025-11-04 20:05:58.619 [INFO][4528] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 20:05:58.682407 containerd[1607]: 2025-11-04 20:05:58.619 [INFO][4528] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0aa797549a11155d4381817894cd1bc6cca66d01a678f87e03ced86cb2a5b69b" host="localhost" Nov 4 20:05:58.682407 containerd[1607]: 2025-11-04 20:05:58.620 [INFO][4528] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0aa797549a11155d4381817894cd1bc6cca66d01a678f87e03ced86cb2a5b69b Nov 4 20:05:58.682407 containerd[1607]: 2025-11-04 20:05:58.626 [INFO][4528] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0aa797549a11155d4381817894cd1bc6cca66d01a678f87e03ced86cb2a5b69b" host="localhost" Nov 4 20:05:58.682407 containerd[1607]: 2025-11-04 20:05:58.633 [INFO][4528] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.0aa797549a11155d4381817894cd1bc6cca66d01a678f87e03ced86cb2a5b69b" host="localhost" Nov 4 20:05:58.682407 containerd[1607]: 2025-11-04 20:05:58.634 [INFO][4528] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.0aa797549a11155d4381817894cd1bc6cca66d01a678f87e03ced86cb2a5b69b" host="localhost" Nov 4 20:05:58.682407 containerd[1607]: 2025-11-04 20:05:58.638 [INFO][4528] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 20:05:58.682407 containerd[1607]: 2025-11-04 20:05:58.638 [INFO][4528] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="0aa797549a11155d4381817894cd1bc6cca66d01a678f87e03ced86cb2a5b69b" HandleID="k8s-pod-network.0aa797549a11155d4381817894cd1bc6cca66d01a678f87e03ced86cb2a5b69b" Workload="localhost-k8s-calico--apiserver--5bf8445db8--2sm8f-eth0" Nov 4 20:05:58.682976 containerd[1607]: 2025-11-04 20:05:58.651 [INFO][4493] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0aa797549a11155d4381817894cd1bc6cca66d01a678f87e03ced86cb2a5b69b" Namespace="calico-apiserver" Pod="calico-apiserver-5bf8445db8-2sm8f" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bf8445db8--2sm8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bf8445db8--2sm8f-eth0", GenerateName:"calico-apiserver-5bf8445db8-", Namespace:"calico-apiserver", SelfLink:"", UID:"a8754e25-0820-405c-8ad6-8e109ea21a48", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 20, 5, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bf8445db8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5bf8445db8-2sm8f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2457c3c6fc7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 20:05:58.682976 containerd[1607]: 2025-11-04 20:05:58.651 [INFO][4493] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="0aa797549a11155d4381817894cd1bc6cca66d01a678f87e03ced86cb2a5b69b" Namespace="calico-apiserver" Pod="calico-apiserver-5bf8445db8-2sm8f" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bf8445db8--2sm8f-eth0" Nov 4 20:05:58.682976 containerd[1607]: 2025-11-04 20:05:58.651 [INFO][4493] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2457c3c6fc7 ContainerID="0aa797549a11155d4381817894cd1bc6cca66d01a678f87e03ced86cb2a5b69b" Namespace="calico-apiserver" Pod="calico-apiserver-5bf8445db8-2sm8f" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bf8445db8--2sm8f-eth0" Nov 4 20:05:58.682976 containerd[1607]: 2025-11-04 20:05:58.666 [INFO][4493] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0aa797549a11155d4381817894cd1bc6cca66d01a678f87e03ced86cb2a5b69b" Namespace="calico-apiserver" Pod="calico-apiserver-5bf8445db8-2sm8f" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bf8445db8--2sm8f-eth0" Nov 4 20:05:58.682976 containerd[1607]: 2025-11-04 20:05:58.666 [INFO][4493] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="0aa797549a11155d4381817894cd1bc6cca66d01a678f87e03ced86cb2a5b69b" Namespace="calico-apiserver" Pod="calico-apiserver-5bf8445db8-2sm8f" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bf8445db8--2sm8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bf8445db8--2sm8f-eth0", GenerateName:"calico-apiserver-5bf8445db8-", Namespace:"calico-apiserver", SelfLink:"", UID:"a8754e25-0820-405c-8ad6-8e109ea21a48", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 20, 5, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bf8445db8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0aa797549a11155d4381817894cd1bc6cca66d01a678f87e03ced86cb2a5b69b", Pod:"calico-apiserver-5bf8445db8-2sm8f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2457c3c6fc7", MAC:"ca:76:70:5d:fc:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 20:05:58.682976 containerd[1607]: 2025-11-04 20:05:58.674 [INFO][4493] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="0aa797549a11155d4381817894cd1bc6cca66d01a678f87e03ced86cb2a5b69b" Namespace="calico-apiserver" Pod="calico-apiserver-5bf8445db8-2sm8f" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bf8445db8--2sm8f-eth0" Nov 4 20:05:58.713977 containerd[1607]: time="2025-11-04T20:05:58.713923089Z" level=info msg="connecting to shim 0aa797549a11155d4381817894cd1bc6cca66d01a678f87e03ced86cb2a5b69b" address="unix:///run/containerd/s/f9438633ec8a96e08fdd660e57b498c3349f3085abf1f086418977590ed00c75" namespace=k8s.io protocol=ttrpc version=3 Nov 4 20:05:58.741827 systemd-networkd[1495]: calif2325265fd6: Link UP Nov 4 20:05:58.744593 systemd-networkd[1495]: calif2325265fd6: Gained carrier Nov 4 20:05:58.750267 systemd[1]: Started cri-containerd-0aa797549a11155d4381817894cd1bc6cca66d01a678f87e03ced86cb2a5b69b.scope - libcontainer container 0aa797549a11155d4381817894cd1bc6cca66d01a678f87e03ced86cb2a5b69b. Nov 4 20:05:58.758399 containerd[1607]: 2025-11-04 20:05:58.559 [INFO][4483] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--48stb-eth0 coredns-674b8bbfcf- kube-system 33f5b0fe-ea26-4e97-a06f-e2a58710cc60 883 0 2025-11-04 20:05:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-48stb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif2325265fd6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34" Namespace="kube-system" Pod="coredns-674b8bbfcf-48stb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--48stb-" Nov 4 20:05:58.758399 containerd[1607]: 2025-11-04 20:05:58.560 [INFO][4483] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34" Namespace="kube-system" Pod="coredns-674b8bbfcf-48stb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--48stb-eth0" Nov 4 20:05:58.758399 containerd[1607]: 2025-11-04 20:05:58.602 [INFO][4531] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34" HandleID="k8s-pod-network.c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34" Workload="localhost-k8s-coredns--674b8bbfcf--48stb-eth0" Nov 4 20:05:58.758399 containerd[1607]: 2025-11-04 20:05:58.602 [INFO][4531] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34" HandleID="k8s-pod-network.c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34" Workload="localhost-k8s-coredns--674b8bbfcf--48stb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e930), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-48stb", "timestamp":"2025-11-04 20:05:58.602505078 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 20:05:58.758399 containerd[1607]: 2025-11-04 20:05:58.602 [INFO][4531] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 20:05:58.758399 containerd[1607]: 2025-11-04 20:05:58.638 [INFO][4531] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 20:05:58.758399 containerd[1607]: 2025-11-04 20:05:58.638 [INFO][4531] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 20:05:58.758399 containerd[1607]: 2025-11-04 20:05:58.709 [INFO][4531] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34" host="localhost" Nov 4 20:05:58.758399 containerd[1607]: 2025-11-04 20:05:58.713 [INFO][4531] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 20:05:58.758399 containerd[1607]: 2025-11-04 20:05:58.717 [INFO][4531] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 20:05:58.758399 containerd[1607]: 2025-11-04 20:05:58.718 [INFO][4531] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 20:05:58.758399 containerd[1607]: 2025-11-04 20:05:58.721 [INFO][4531] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 20:05:58.758399 containerd[1607]: 2025-11-04 20:05:58.721 [INFO][4531] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34" host="localhost" Nov 4 20:05:58.758399 containerd[1607]: 2025-11-04 20:05:58.722 [INFO][4531] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34 Nov 4 20:05:58.758399 containerd[1607]: 2025-11-04 20:05:58.726 [INFO][4531] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34" host="localhost" Nov 4 20:05:58.758399 containerd[1607]: 2025-11-04 20:05:58.734 [INFO][4531] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34" host="localhost" Nov 4 20:05:58.758399 containerd[1607]: 2025-11-04 20:05:58.734 [INFO][4531] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34" host="localhost" Nov 4 20:05:58.758399 containerd[1607]: 2025-11-04 20:05:58.734 [INFO][4531] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 20:05:58.758399 containerd[1607]: 2025-11-04 20:05:58.734 [INFO][4531] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34" HandleID="k8s-pod-network.c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34" Workload="localhost-k8s-coredns--674b8bbfcf--48stb-eth0" Nov 4 20:05:58.758948 containerd[1607]: 2025-11-04 20:05:58.737 [INFO][4483] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34" Namespace="kube-system" Pod="coredns-674b8bbfcf-48stb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--48stb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--48stb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"33f5b0fe-ea26-4e97-a06f-e2a58710cc60", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 20, 5, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-48stb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2325265fd6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 20:05:58.758948 containerd[1607]: 2025-11-04 20:05:58.738 [INFO][4483] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34" Namespace="kube-system" Pod="coredns-674b8bbfcf-48stb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--48stb-eth0" Nov 4 20:05:58.758948 containerd[1607]: 2025-11-04 20:05:58.738 [INFO][4483] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2325265fd6 ContainerID="c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34" Namespace="kube-system" Pod="coredns-674b8bbfcf-48stb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--48stb-eth0" Nov 4 20:05:58.758948 containerd[1607]: 2025-11-04 20:05:58.745 [INFO][4483] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34" Namespace="kube-system" Pod="coredns-674b8bbfcf-48stb" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--48stb-eth0" Nov 4 20:05:58.758948 containerd[1607]: 2025-11-04 20:05:58.745 [INFO][4483] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34" Namespace="kube-system" Pod="coredns-674b8bbfcf-48stb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--48stb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--48stb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"33f5b0fe-ea26-4e97-a06f-e2a58710cc60", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 20, 5, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34", Pod:"coredns-674b8bbfcf-48stb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2325265fd6", MAC:"c6:f4:a1:ed:d9:d2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 20:05:58.758948 containerd[1607]: 2025-11-04 20:05:58.755 [INFO][4483] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34" Namespace="kube-system" Pod="coredns-674b8bbfcf-48stb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--48stb-eth0" Nov 4 20:05:58.767187 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 20:05:58.781589 containerd[1607]: time="2025-11-04T20:05:58.781526661Z" level=info msg="connecting to shim c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34" address="unix:///run/containerd/s/5ceede180ec0a54b67ec1c80cade5cf78656b960f191d618025d4a2b310f318c" namespace=k8s.io protocol=ttrpc version=3 Nov 4 20:05:58.813185 systemd[1]: Started cri-containerd-c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34.scope - libcontainer container c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34. 
Nov 4 20:05:58.818169 containerd[1607]: time="2025-11-04T20:05:58.818132868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bf8445db8-2sm8f,Uid:a8754e25-0820-405c-8ad6-8e109ea21a48,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0aa797549a11155d4381817894cd1bc6cca66d01a678f87e03ced86cb2a5b69b\"" Nov 4 20:05:58.820450 containerd[1607]: time="2025-11-04T20:05:58.820410096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 20:05:58.831466 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 20:05:58.845407 systemd-networkd[1495]: califda4079f4bb: Link UP Nov 4 20:05:58.846185 systemd-networkd[1495]: califda4079f4bb: Gained carrier Nov 4 20:05:58.867717 containerd[1607]: 2025-11-04 20:05:58.561 [INFO][4504] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5bf8445db8--vc5nj-eth0 calico-apiserver-5bf8445db8- calico-apiserver c04ecbad-d5c2-43d0-b16a-235b2a29a278 877 0 2025-11-04 20:05:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bf8445db8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5bf8445db8-vc5nj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califda4079f4bb [] [] }} ContainerID="35db9fa65dff5ddeae39fd1de23d622fe4dd8ec1e94463eca70bc9e6414c575a" Namespace="calico-apiserver" Pod="calico-apiserver-5bf8445db8-vc5nj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bf8445db8--vc5nj-" Nov 4 20:05:58.867717 containerd[1607]: 2025-11-04 20:05:58.561 [INFO][4504] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35db9fa65dff5ddeae39fd1de23d622fe4dd8ec1e94463eca70bc9e6414c575a" 
Namespace="calico-apiserver" Pod="calico-apiserver-5bf8445db8-vc5nj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bf8445db8--vc5nj-eth0" Nov 4 20:05:58.867717 containerd[1607]: 2025-11-04 20:05:58.603 [INFO][4529] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35db9fa65dff5ddeae39fd1de23d622fe4dd8ec1e94463eca70bc9e6414c575a" HandleID="k8s-pod-network.35db9fa65dff5ddeae39fd1de23d622fe4dd8ec1e94463eca70bc9e6414c575a" Workload="localhost-k8s-calico--apiserver--5bf8445db8--vc5nj-eth0" Nov 4 20:05:58.867717 containerd[1607]: 2025-11-04 20:05:58.604 [INFO][4529] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="35db9fa65dff5ddeae39fd1de23d622fe4dd8ec1e94463eca70bc9e6414c575a" HandleID="k8s-pod-network.35db9fa65dff5ddeae39fd1de23d622fe4dd8ec1e94463eca70bc9e6414c575a" Workload="localhost-k8s-calico--apiserver--5bf8445db8--vc5nj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00050db10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5bf8445db8-vc5nj", "timestamp":"2025-11-04 20:05:58.603864336 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 20:05:58.867717 containerd[1607]: 2025-11-04 20:05:58.604 [INFO][4529] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 20:05:58.867717 containerd[1607]: 2025-11-04 20:05:58.734 [INFO][4529] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 20:05:58.867717 containerd[1607]: 2025-11-04 20:05:58.734 [INFO][4529] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 20:05:58.867717 containerd[1607]: 2025-11-04 20:05:58.810 [INFO][4529] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.35db9fa65dff5ddeae39fd1de23d622fe4dd8ec1e94463eca70bc9e6414c575a" host="localhost" Nov 4 20:05:58.867717 containerd[1607]: 2025-11-04 20:05:58.815 [INFO][4529] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 20:05:58.867717 containerd[1607]: 2025-11-04 20:05:58.821 [INFO][4529] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 20:05:58.867717 containerd[1607]: 2025-11-04 20:05:58.825 [INFO][4529] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 20:05:58.867717 containerd[1607]: 2025-11-04 20:05:58.827 [INFO][4529] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 20:05:58.867717 containerd[1607]: 2025-11-04 20:05:58.827 [INFO][4529] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.35db9fa65dff5ddeae39fd1de23d622fe4dd8ec1e94463eca70bc9e6414c575a" host="localhost" Nov 4 20:05:58.867717 containerd[1607]: 2025-11-04 20:05:58.828 [INFO][4529] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.35db9fa65dff5ddeae39fd1de23d622fe4dd8ec1e94463eca70bc9e6414c575a Nov 4 20:05:58.867717 containerd[1607]: 2025-11-04 20:05:58.831 [INFO][4529] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.35db9fa65dff5ddeae39fd1de23d622fe4dd8ec1e94463eca70bc9e6414c575a" host="localhost" Nov 4 20:05:58.867717 containerd[1607]: 2025-11-04 20:05:58.838 [INFO][4529] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.35db9fa65dff5ddeae39fd1de23d622fe4dd8ec1e94463eca70bc9e6414c575a" host="localhost" Nov 4 20:05:58.867717 containerd[1607]: 2025-11-04 20:05:58.838 [INFO][4529] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.35db9fa65dff5ddeae39fd1de23d622fe4dd8ec1e94463eca70bc9e6414c575a" host="localhost" Nov 4 20:05:58.867717 containerd[1607]: 2025-11-04 20:05:58.838 [INFO][4529] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 20:05:58.867717 containerd[1607]: 2025-11-04 20:05:58.838 [INFO][4529] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="35db9fa65dff5ddeae39fd1de23d622fe4dd8ec1e94463eca70bc9e6414c575a" HandleID="k8s-pod-network.35db9fa65dff5ddeae39fd1de23d622fe4dd8ec1e94463eca70bc9e6414c575a" Workload="localhost-k8s-calico--apiserver--5bf8445db8--vc5nj-eth0" Nov 4 20:05:58.868283 containerd[1607]: 2025-11-04 20:05:58.841 [INFO][4504] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35db9fa65dff5ddeae39fd1de23d622fe4dd8ec1e94463eca70bc9e6414c575a" Namespace="calico-apiserver" Pod="calico-apiserver-5bf8445db8-vc5nj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bf8445db8--vc5nj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bf8445db8--vc5nj-eth0", GenerateName:"calico-apiserver-5bf8445db8-", Namespace:"calico-apiserver", SelfLink:"", UID:"c04ecbad-d5c2-43d0-b16a-235b2a29a278", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 20, 5, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bf8445db8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5bf8445db8-vc5nj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califda4079f4bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 20:05:58.868283 containerd[1607]: 2025-11-04 20:05:58.841 [INFO][4504] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="35db9fa65dff5ddeae39fd1de23d622fe4dd8ec1e94463eca70bc9e6414c575a" Namespace="calico-apiserver" Pod="calico-apiserver-5bf8445db8-vc5nj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bf8445db8--vc5nj-eth0" Nov 4 20:05:58.868283 containerd[1607]: 2025-11-04 20:05:58.842 [INFO][4504] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califda4079f4bb ContainerID="35db9fa65dff5ddeae39fd1de23d622fe4dd8ec1e94463eca70bc9e6414c575a" Namespace="calico-apiserver" Pod="calico-apiserver-5bf8445db8-vc5nj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bf8445db8--vc5nj-eth0" Nov 4 20:05:58.868283 containerd[1607]: 2025-11-04 20:05:58.846 [INFO][4504] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35db9fa65dff5ddeae39fd1de23d622fe4dd8ec1e94463eca70bc9e6414c575a" Namespace="calico-apiserver" Pod="calico-apiserver-5bf8445db8-vc5nj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bf8445db8--vc5nj-eth0" Nov 4 20:05:58.868283 containerd[1607]: 2025-11-04 20:05:58.846 [INFO][4504] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="35db9fa65dff5ddeae39fd1de23d622fe4dd8ec1e94463eca70bc9e6414c575a" Namespace="calico-apiserver" Pod="calico-apiserver-5bf8445db8-vc5nj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bf8445db8--vc5nj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bf8445db8--vc5nj-eth0", GenerateName:"calico-apiserver-5bf8445db8-", Namespace:"calico-apiserver", SelfLink:"", UID:"c04ecbad-d5c2-43d0-b16a-235b2a29a278", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 20, 5, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bf8445db8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"35db9fa65dff5ddeae39fd1de23d622fe4dd8ec1e94463eca70bc9e6414c575a", Pod:"calico-apiserver-5bf8445db8-vc5nj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califda4079f4bb", MAC:"46:01:f1:c4:d4:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 20:05:58.868283 containerd[1607]: 2025-11-04 20:05:58.863 [INFO][4504] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="35db9fa65dff5ddeae39fd1de23d622fe4dd8ec1e94463eca70bc9e6414c575a" Namespace="calico-apiserver" Pod="calico-apiserver-5bf8445db8-vc5nj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bf8445db8--vc5nj-eth0" Nov 4 20:05:58.874683 containerd[1607]: time="2025-11-04T20:05:58.874605855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-48stb,Uid:33f5b0fe-ea26-4e97-a06f-e2a58710cc60,Namespace:kube-system,Attempt:0,} returns sandbox id \"c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34\"" Nov 4 20:05:58.875802 kubelet[2763]: E1104 20:05:58.875770 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:58.885279 containerd[1607]: time="2025-11-04T20:05:58.885233576Z" level=info msg="CreateContainer within sandbox \"c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 20:05:58.899734 containerd[1607]: time="2025-11-04T20:05:58.899298642Z" level=info msg="Container 569fd49b31fbdecdb4fe17c0bc8ec509e3a0e6b3d219e9fd1c6d61c8bfd052c3: CDI devices from CRI Config.CDIDevices: []" Nov 4 20:05:58.904353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1060073730.mount: Deactivated successfully. 
Nov 4 20:05:58.907389 containerd[1607]: time="2025-11-04T20:05:58.907324796Z" level=info msg="connecting to shim 35db9fa65dff5ddeae39fd1de23d622fe4dd8ec1e94463eca70bc9e6414c575a" address="unix:///run/containerd/s/8f512f1f8704c248ab7d8daadd7b428af8c0d1be604c6c851364c4cba31c8a83" namespace=k8s.io protocol=ttrpc version=3 Nov 4 20:05:58.912294 containerd[1607]: time="2025-11-04T20:05:58.912253334Z" level=info msg="CreateContainer within sandbox \"c870241dcb28174faf96d0fecccf16c54de2a132a244083ac77f3d838ce7eb34\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"569fd49b31fbdecdb4fe17c0bc8ec509e3a0e6b3d219e9fd1c6d61c8bfd052c3\"" Nov 4 20:05:58.913221 containerd[1607]: time="2025-11-04T20:05:58.913185131Z" level=info msg="StartContainer for \"569fd49b31fbdecdb4fe17c0bc8ec509e3a0e6b3d219e9fd1c6d61c8bfd052c3\"" Nov 4 20:05:58.914431 containerd[1607]: time="2025-11-04T20:05:58.914347800Z" level=info msg="connecting to shim 569fd49b31fbdecdb4fe17c0bc8ec509e3a0e6b3d219e9fd1c6d61c8bfd052c3" address="unix:///run/containerd/s/5ceede180ec0a54b67ec1c80cade5cf78656b960f191d618025d4a2b310f318c" protocol=ttrpc version=3 Nov 4 20:05:58.915306 systemd-networkd[1495]: calibc0b8483f76: Gained IPv6LL Nov 4 20:05:58.930210 systemd[1]: Started cri-containerd-35db9fa65dff5ddeae39fd1de23d622fe4dd8ec1e94463eca70bc9e6414c575a.scope - libcontainer container 35db9fa65dff5ddeae39fd1de23d622fe4dd8ec1e94463eca70bc9e6414c575a. Nov 4 20:05:58.933539 systemd[1]: Started cri-containerd-569fd49b31fbdecdb4fe17c0bc8ec509e3a0e6b3d219e9fd1c6d61c8bfd052c3.scope - libcontainer container 569fd49b31fbdecdb4fe17c0bc8ec509e3a0e6b3d219e9fd1c6d61c8bfd052c3. 
Nov 4 20:05:58.945706 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 20:05:58.972113 containerd[1607]: time="2025-11-04T20:05:58.972054821Z" level=info msg="StartContainer for \"569fd49b31fbdecdb4fe17c0bc8ec509e3a0e6b3d219e9fd1c6d61c8bfd052c3\" returns successfully" Nov 4 20:05:58.983305 containerd[1607]: time="2025-11-04T20:05:58.983269052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bf8445db8-vc5nj,Uid:c04ecbad-d5c2-43d0-b16a-235b2a29a278,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"35db9fa65dff5ddeae39fd1de23d622fe4dd8ec1e94463eca70bc9e6414c575a\"" Nov 4 20:05:59.174240 systemd-networkd[1495]: vxlan.calico: Gained IPv6LL Nov 4 20:05:59.179688 containerd[1607]: time="2025-11-04T20:05:59.179648740Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 20:05:59.252076 containerd[1607]: time="2025-11-04T20:05:59.251912171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 20:05:59.252076 containerd[1607]: time="2025-11-04T20:05:59.251971863Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 20:05:59.252330 kubelet[2763]: E1104 20:05:59.252285 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 20:05:59.252390 kubelet[2763]: E1104 20:05:59.252335 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 20:05:59.252849 kubelet[2763]: E1104 20:05:59.252637 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5bck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bf8445db8-2sm8f_calico-apiserver(a8754e25-0820-405c-8ad6-8e109ea21a48): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 20:05:59.252984 containerd[1607]: time="2025-11-04T20:05:59.252693105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 20:05:59.253953 kubelet[2763]: E1104 20:05:59.253907 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bf8445db8-2sm8f" podUID="a8754e25-0820-405c-8ad6-8e109ea21a48" Nov 4 20:05:59.635878 kubelet[2763]: E1104 20:05:59.635844 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:59.637591 kubelet[2763]: E1104 20:05:59.637569 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:05:59.637728 containerd[1607]: time="2025-11-04T20:05:59.637560100Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 20:05:59.639234 kubelet[2763]: E1104 20:05:59.639203 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bf8445db8-2sm8f" podUID="a8754e25-0820-405c-8ad6-8e109ea21a48" Nov 4 20:05:59.639951 kubelet[2763]: E1104 20:05:59.639882 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lgkc6" podUID="89d56747-162a-4c55-bf8f-ddfe11dc9e3a" Nov 
4 20:05:59.686481 containerd[1607]: time="2025-11-04T20:05:59.686349959Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 20:05:59.686481 containerd[1607]: time="2025-11-04T20:05:59.686380416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 20:05:59.686662 kubelet[2763]: E1104 20:05:59.686592 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 20:05:59.686662 kubelet[2763]: E1104 20:05:59.686636 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 20:05:59.686825 kubelet[2763]: E1104 20:05:59.686780 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l52rm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bf8445db8-vc5nj_calico-apiserver(c04ecbad-d5c2-43d0-b16a-235b2a29a278): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 20:05:59.688032 kubelet[2763]: E1104 20:05:59.687959 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bf8445db8-vc5nj" podUID="c04ecbad-d5c2-43d0-b16a-235b2a29a278" Nov 4 20:05:59.828795 kubelet[2763]: I1104 20:05:59.828720 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-48stb" podStartSLOduration=38.828703555 podStartE2EDuration="38.828703555s" podCreationTimestamp="2025-11-04 20:05:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 20:05:59.828414092 +0000 UTC m=+44.423111675" watchObservedRunningTime="2025-11-04 20:05:59.828703555 +0000 UTC m=+44.423401138" Nov 4 20:06:00.387200 systemd-networkd[1495]: calif2325265fd6: Gained IPv6LL Nov 4 20:06:00.402402 systemd[1]: Started sshd@8-10.0.0.80:22-10.0.0.1:58334.service - OpenSSH per-connection server daemon (10.0.0.1:58334). 
Nov 4 20:06:00.490769 sshd[4756]: Accepted publickey for core from 10.0.0.1 port 58334 ssh2: RSA SHA256:FD/6wCOEAK2oumu7YKYZjG9k48hMKxx8xD/1LBz1+Eg Nov 4 20:06:00.493535 sshd-session[4756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 20:06:00.499824 containerd[1607]: time="2025-11-04T20:06:00.499689205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-z9g8r,Uid:4d779447-aab7-4044-9468-fe0588e362f2,Namespace:calico-system,Attempt:0,}" Nov 4 20:06:00.504207 systemd-logind[1575]: New session 10 of user core. Nov 4 20:06:00.508354 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 4 20:06:00.579624 systemd-networkd[1495]: cali2457c3c6fc7: Gained IPv6LL Nov 4 20:06:00.641514 kubelet[2763]: E1104 20:06:00.641382 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:06:00.643834 systemd-networkd[1495]: califda4079f4bb: Gained IPv6LL Nov 4 20:06:00.647820 kubelet[2763]: E1104 20:06:00.647708 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bf8445db8-2sm8f" podUID="a8754e25-0820-405c-8ad6-8e109ea21a48" Nov 4 20:06:00.648463 kubelet[2763]: E1104 20:06:00.648371 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bf8445db8-vc5nj" podUID="c04ecbad-d5c2-43d0-b16a-235b2a29a278" Nov 4 20:06:00.651044 sshd[4760]: Connection closed by 10.0.0.1 port 58334 Nov 4 20:06:00.649617 sshd-session[4756]: pam_unix(sshd:session): session closed for user core Nov 4 20:06:00.654938 systemd[1]: sshd@8-10.0.0.80:22-10.0.0.1:58334.service: Deactivated successfully. Nov 4 20:06:00.658355 systemd[1]: session-10.scope: Deactivated successfully. Nov 4 20:06:00.661335 systemd-networkd[1495]: calic6ad6c34c56: Link UP Nov 4 20:06:00.662584 systemd-networkd[1495]: calic6ad6c34c56: Gained carrier Nov 4 20:06:00.664693 systemd-logind[1575]: Session 10 logged out. Waiting for processes to exit. Nov 4 20:06:00.669834 systemd-logind[1575]: Removed session 10. Nov 4 20:06:00.680272 containerd[1607]: 2025-11-04 20:06:00.576 [INFO][4770] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--z9g8r-eth0 goldmane-666569f655- calico-system 4d779447-aab7-4044-9468-fe0588e362f2 886 0 2025-11-04 20:05:32 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-z9g8r eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic6ad6c34c56 [] [] }} ContainerID="f91f85d53afd5c7165a0ab5d8c5ae0ca06f4bb49657ec6c459e2450d10320dc9" Namespace="calico-system" Pod="goldmane-666569f655-z9g8r" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--z9g8r-" Nov 4 20:06:00.680272 containerd[1607]: 2025-11-04 20:06:00.576 [INFO][4770] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f91f85d53afd5c7165a0ab5d8c5ae0ca06f4bb49657ec6c459e2450d10320dc9" 
Namespace="calico-system" Pod="goldmane-666569f655-z9g8r" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--z9g8r-eth0" Nov 4 20:06:00.680272 containerd[1607]: 2025-11-04 20:06:00.606 [INFO][4785] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f91f85d53afd5c7165a0ab5d8c5ae0ca06f4bb49657ec6c459e2450d10320dc9" HandleID="k8s-pod-network.f91f85d53afd5c7165a0ab5d8c5ae0ca06f4bb49657ec6c459e2450d10320dc9" Workload="localhost-k8s-goldmane--666569f655--z9g8r-eth0" Nov 4 20:06:00.680272 containerd[1607]: 2025-11-04 20:06:00.606 [INFO][4785] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f91f85d53afd5c7165a0ab5d8c5ae0ca06f4bb49657ec6c459e2450d10320dc9" HandleID="k8s-pod-network.f91f85d53afd5c7165a0ab5d8c5ae0ca06f4bb49657ec6c459e2450d10320dc9" Workload="localhost-k8s-goldmane--666569f655--z9g8r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7750), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-z9g8r", "timestamp":"2025-11-04 20:06:00.606719381 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 20:06:00.680272 containerd[1607]: 2025-11-04 20:06:00.606 [INFO][4785] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 20:06:00.680272 containerd[1607]: 2025-11-04 20:06:00.606 [INFO][4785] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 20:06:00.680272 containerd[1607]: 2025-11-04 20:06:00.607 [INFO][4785] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 20:06:00.680272 containerd[1607]: 2025-11-04 20:06:00.618 [INFO][4785] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f91f85d53afd5c7165a0ab5d8c5ae0ca06f4bb49657ec6c459e2450d10320dc9" host="localhost" Nov 4 20:06:00.680272 containerd[1607]: 2025-11-04 20:06:00.624 [INFO][4785] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 20:06:00.680272 containerd[1607]: 2025-11-04 20:06:00.630 [INFO][4785] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 20:06:00.680272 containerd[1607]: 2025-11-04 20:06:00.632 [INFO][4785] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 20:06:00.680272 containerd[1607]: 2025-11-04 20:06:00.636 [INFO][4785] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 20:06:00.680272 containerd[1607]: 2025-11-04 20:06:00.636 [INFO][4785] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f91f85d53afd5c7165a0ab5d8c5ae0ca06f4bb49657ec6c459e2450d10320dc9" host="localhost" Nov 4 20:06:00.680272 containerd[1607]: 2025-11-04 20:06:00.637 [INFO][4785] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f91f85d53afd5c7165a0ab5d8c5ae0ca06f4bb49657ec6c459e2450d10320dc9 Nov 4 20:06:00.680272 containerd[1607]: 2025-11-04 20:06:00.641 [INFO][4785] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f91f85d53afd5c7165a0ab5d8c5ae0ca06f4bb49657ec6c459e2450d10320dc9" host="localhost" Nov 4 20:06:00.680272 containerd[1607]: 2025-11-04 20:06:00.649 [INFO][4785] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.f91f85d53afd5c7165a0ab5d8c5ae0ca06f4bb49657ec6c459e2450d10320dc9" host="localhost" Nov 4 20:06:00.680272 containerd[1607]: 2025-11-04 20:06:00.649 [INFO][4785] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.f91f85d53afd5c7165a0ab5d8c5ae0ca06f4bb49657ec6c459e2450d10320dc9" host="localhost" Nov 4 20:06:00.680272 containerd[1607]: 2025-11-04 20:06:00.649 [INFO][4785] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 20:06:00.680272 containerd[1607]: 2025-11-04 20:06:00.649 [INFO][4785] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="f91f85d53afd5c7165a0ab5d8c5ae0ca06f4bb49657ec6c459e2450d10320dc9" HandleID="k8s-pod-network.f91f85d53afd5c7165a0ab5d8c5ae0ca06f4bb49657ec6c459e2450d10320dc9" Workload="localhost-k8s-goldmane--666569f655--z9g8r-eth0" Nov 4 20:06:00.680867 containerd[1607]: 2025-11-04 20:06:00.653 [INFO][4770] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f91f85d53afd5c7165a0ab5d8c5ae0ca06f4bb49657ec6c459e2450d10320dc9" Namespace="calico-system" Pod="goldmane-666569f655-z9g8r" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--z9g8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--z9g8r-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"4d779447-aab7-4044-9468-fe0588e362f2", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 20, 5, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-z9g8r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic6ad6c34c56", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 20:06:00.680867 containerd[1607]: 2025-11-04 20:06:00.653 [INFO][4770] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="f91f85d53afd5c7165a0ab5d8c5ae0ca06f4bb49657ec6c459e2450d10320dc9" Namespace="calico-system" Pod="goldmane-666569f655-z9g8r" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--z9g8r-eth0" Nov 4 20:06:00.680867 containerd[1607]: 2025-11-04 20:06:00.653 [INFO][4770] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6ad6c34c56 ContainerID="f91f85d53afd5c7165a0ab5d8c5ae0ca06f4bb49657ec6c459e2450d10320dc9" Namespace="calico-system" Pod="goldmane-666569f655-z9g8r" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--z9g8r-eth0" Nov 4 20:06:00.680867 containerd[1607]: 2025-11-04 20:06:00.662 [INFO][4770] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f91f85d53afd5c7165a0ab5d8c5ae0ca06f4bb49657ec6c459e2450d10320dc9" Namespace="calico-system" Pod="goldmane-666569f655-z9g8r" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--z9g8r-eth0" Nov 4 20:06:00.680867 containerd[1607]: 2025-11-04 20:06:00.663 [INFO][4770] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f91f85d53afd5c7165a0ab5d8c5ae0ca06f4bb49657ec6c459e2450d10320dc9" Namespace="calico-system" Pod="goldmane-666569f655-z9g8r" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--z9g8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--z9g8r-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"4d779447-aab7-4044-9468-fe0588e362f2", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 20, 5, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f91f85d53afd5c7165a0ab5d8c5ae0ca06f4bb49657ec6c459e2450d10320dc9", Pod:"goldmane-666569f655-z9g8r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic6ad6c34c56", MAC:"56:b4:65:62:d3:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 20:06:00.680867 containerd[1607]: 2025-11-04 20:06:00.675 [INFO][4770] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f91f85d53afd5c7165a0ab5d8c5ae0ca06f4bb49657ec6c459e2450d10320dc9" Namespace="calico-system" Pod="goldmane-666569f655-z9g8r" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--z9g8r-eth0" Nov 4 20:06:00.717704 containerd[1607]: time="2025-11-04T20:06:00.717611097Z" level=info msg="connecting to shim 
f91f85d53afd5c7165a0ab5d8c5ae0ca06f4bb49657ec6c459e2450d10320dc9" address="unix:///run/containerd/s/9384f83d539b5e1ceae28780cbc2607fcee601dc83f10bca9918ed70c93f8dbb" namespace=k8s.io protocol=ttrpc version=3 Nov 4 20:06:00.752244 systemd[1]: Started cri-containerd-f91f85d53afd5c7165a0ab5d8c5ae0ca06f4bb49657ec6c459e2450d10320dc9.scope - libcontainer container f91f85d53afd5c7165a0ab5d8c5ae0ca06f4bb49657ec6c459e2450d10320dc9. Nov 4 20:06:00.771929 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 20:06:00.847249 containerd[1607]: time="2025-11-04T20:06:00.847198139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-z9g8r,Uid:4d779447-aab7-4044-9468-fe0588e362f2,Namespace:calico-system,Attempt:0,} returns sandbox id \"f91f85d53afd5c7165a0ab5d8c5ae0ca06f4bb49657ec6c459e2450d10320dc9\"" Nov 4 20:06:00.849144 containerd[1607]: time="2025-11-04T20:06:00.849102930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 20:06:01.215548 containerd[1607]: time="2025-11-04T20:06:01.215457102Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 20:06:01.216803 containerd[1607]: time="2025-11-04T20:06:01.216763110Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 20:06:01.216993 containerd[1607]: time="2025-11-04T20:06:01.216844893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 4 20:06:01.217088 kubelet[2763]: E1104 20:06:01.217039 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 20:06:01.217127 kubelet[2763]: E1104 20:06:01.217102 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 20:06:01.217324 kubelet[2763]: E1104 20:06:01.217283 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-55sf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recursive
ReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-z9g8r_calico-system(4d779447-aab7-4044-9468-fe0588e362f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 20:06:01.218493 kubelet[2763]: E1104 20:06:01.218437 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z9g8r" podUID="4d779447-aab7-4044-9468-fe0588e362f2" Nov 4 20:06:01.511631 
kubelet[2763]: I1104 20:06:01.511475 2763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 20:06:01.512092 kubelet[2763]: E1104 20:06:01.512066 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:06:01.643743 kubelet[2763]: E1104 20:06:01.643482 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:06:01.644241 kubelet[2763]: E1104 20:06:01.644086 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:06:01.646842 kubelet[2763]: E1104 20:06:01.646801 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z9g8r" podUID="4d779447-aab7-4044-9468-fe0588e362f2" Nov 4 20:06:01.859205 systemd-networkd[1495]: calic6ad6c34c56: Gained IPv6LL Nov 4 20:06:02.645649 kubelet[2763]: E1104 20:06:02.645606 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z9g8r" 
podUID="4d779447-aab7-4044-9468-fe0588e362f2" Nov 4 20:06:05.661606 systemd[1]: Started sshd@9-10.0.0.80:22-10.0.0.1:58340.service - OpenSSH per-connection server daemon (10.0.0.1:58340). Nov 4 20:06:05.722912 sshd[4920]: Accepted publickey for core from 10.0.0.1 port 58340 ssh2: RSA SHA256:FD/6wCOEAK2oumu7YKYZjG9k48hMKxx8xD/1LBz1+Eg Nov 4 20:06:05.725288 sshd-session[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 20:06:05.729506 systemd-logind[1575]: New session 11 of user core. Nov 4 20:06:05.736193 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 4 20:06:05.802873 sshd[4927]: Connection closed by 10.0.0.1 port 58340 Nov 4 20:06:05.803225 sshd-session[4920]: pam_unix(sshd:session): session closed for user core Nov 4 20:06:05.809265 systemd[1]: sshd@9-10.0.0.80:22-10.0.0.1:58340.service: Deactivated successfully. Nov 4 20:06:05.811515 systemd[1]: session-11.scope: Deactivated successfully. Nov 4 20:06:05.812548 systemd-logind[1575]: Session 11 logged out. Waiting for processes to exit. Nov 4 20:06:05.813980 systemd-logind[1575]: Removed session 11. Nov 4 20:06:10.817334 systemd[1]: Started sshd@10-10.0.0.80:22-10.0.0.1:46686.service - OpenSSH per-connection server daemon (10.0.0.1:46686). Nov 4 20:06:10.866783 sshd[4950]: Accepted publickey for core from 10.0.0.1 port 46686 ssh2: RSA SHA256:FD/6wCOEAK2oumu7YKYZjG9k48hMKxx8xD/1LBz1+Eg Nov 4 20:06:10.869246 sshd-session[4950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 20:06:10.873927 systemd-logind[1575]: New session 12 of user core. Nov 4 20:06:10.884160 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 4 20:06:10.990934 sshd[4954]: Connection closed by 10.0.0.1 port 46686 Nov 4 20:06:10.991302 sshd-session[4950]: pam_unix(sshd:session): session closed for user core Nov 4 20:06:11.006225 systemd[1]: sshd@10-10.0.0.80:22-10.0.0.1:46686.service: Deactivated successfully. 
Nov 4 20:06:11.008214 systemd[1]: session-12.scope: Deactivated successfully. Nov 4 20:06:11.008995 systemd-logind[1575]: Session 12 logged out. Waiting for processes to exit. Nov 4 20:06:11.012051 systemd[1]: Started sshd@11-10.0.0.80:22-10.0.0.1:46694.service - OpenSSH per-connection server daemon (10.0.0.1:46694). Nov 4 20:06:11.012720 systemd-logind[1575]: Removed session 12. Nov 4 20:06:11.079947 sshd[4968]: Accepted publickey for core from 10.0.0.1 port 46694 ssh2: RSA SHA256:FD/6wCOEAK2oumu7YKYZjG9k48hMKxx8xD/1LBz1+Eg Nov 4 20:06:11.082570 sshd-session[4968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 20:06:11.087174 systemd-logind[1575]: New session 13 of user core. Nov 4 20:06:11.094161 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 4 20:06:11.203946 sshd[4973]: Connection closed by 10.0.0.1 port 46694 Nov 4 20:06:11.205457 sshd-session[4968]: pam_unix(sshd:session): session closed for user core Nov 4 20:06:11.218073 systemd[1]: sshd@11-10.0.0.80:22-10.0.0.1:46694.service: Deactivated successfully. Nov 4 20:06:11.222755 systemd[1]: session-13.scope: Deactivated successfully. Nov 4 20:06:11.225508 systemd-logind[1575]: Session 13 logged out. Waiting for processes to exit. Nov 4 20:06:11.230735 systemd[1]: Started sshd@12-10.0.0.80:22-10.0.0.1:46698.service - OpenSSH per-connection server daemon (10.0.0.1:46698). Nov 4 20:06:11.231754 systemd-logind[1575]: Removed session 13. Nov 4 20:06:11.309499 sshd[4984]: Accepted publickey for core from 10.0.0.1 port 46698 ssh2: RSA SHA256:FD/6wCOEAK2oumu7YKYZjG9k48hMKxx8xD/1LBz1+Eg Nov 4 20:06:11.311943 sshd-session[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 20:06:11.316861 systemd-logind[1575]: New session 14 of user core. Nov 4 20:06:11.326146 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 4 20:06:11.421905 sshd[4988]: Connection closed by 10.0.0.1 port 46698 Nov 4 20:06:11.422272 sshd-session[4984]: pam_unix(sshd:session): session closed for user core Nov 4 20:06:11.428210 systemd[1]: sshd@12-10.0.0.80:22-10.0.0.1:46698.service: Deactivated successfully. Nov 4 20:06:11.430296 systemd[1]: session-14.scope: Deactivated successfully. Nov 4 20:06:11.431370 systemd-logind[1575]: Session 14 logged out. Waiting for processes to exit. Nov 4 20:06:11.433065 systemd-logind[1575]: Removed session 14. Nov 4 20:06:11.500953 containerd[1607]: time="2025-11-04T20:06:11.500832971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 20:06:11.858554 containerd[1607]: time="2025-11-04T20:06:11.858498476Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 20:06:11.859636 containerd[1607]: time="2025-11-04T20:06:11.859593188Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 20:06:11.859731 containerd[1607]: time="2025-11-04T20:06:11.859691613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 4 20:06:11.859860 kubelet[2763]: E1104 20:06:11.859815 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 20:06:11.860297 kubelet[2763]: E1104 20:06:11.859869 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 20:06:11.860297 kubelet[2763]: E1104 20:06:11.860055 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9j78q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5b4cd748b4-xzkvr_calico-system(45ba7d0b-5883-4d92-9d1d-2bfad2cab22b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 20:06:11.861275 kubelet[2763]: E1104 20:06:11.861243 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b4cd748b4-xzkvr" podUID="45ba7d0b-5883-4d92-9d1d-2bfad2cab22b" Nov 4 20:06:12.499224 containerd[1607]: time="2025-11-04T20:06:12.499186563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 20:06:12.924696 containerd[1607]: time="2025-11-04T20:06:12.924645810Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 
20:06:12.925790 containerd[1607]: time="2025-11-04T20:06:12.925751814Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 20:06:12.925841 containerd[1607]: time="2025-11-04T20:06:12.925807598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 4 20:06:12.926040 kubelet[2763]: E1104 20:06:12.925974 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 20:06:12.926040 kubelet[2763]: E1104 20:06:12.926036 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 20:06:12.926371 kubelet[2763]: E1104 20:06:12.926234 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:420df1ec3eda4f75af4e03716b882180,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qhdd8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65f9649fdf-5jqt6_calico-system(dd2798b7-1698-43cb-8c9c-5b6836607d10): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 20:06:12.926619 containerd[1607]: time="2025-11-04T20:06:12.926599372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 20:06:13.269378 containerd[1607]: 
time="2025-11-04T20:06:13.269225613Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 20:06:13.270589 containerd[1607]: time="2025-11-04T20:06:13.270516663Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 20:06:13.270768 containerd[1607]: time="2025-11-04T20:06:13.270603817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 20:06:13.270848 kubelet[2763]: E1104 20:06:13.270780 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 20:06:13.270912 kubelet[2763]: E1104 20:06:13.270853 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 20:06:13.271260 containerd[1607]: time="2025-11-04T20:06:13.271215844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 20:06:13.271335 kubelet[2763]: E1104 20:06:13.271232 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l52rm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bf8445db8-vc5nj_calico-apiserver(c04ecbad-d5c2-43d0-b16a-235b2a29a278): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 20:06:13.272556 kubelet[2763]: E1104 20:06:13.272501 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bf8445db8-vc5nj" podUID="c04ecbad-d5c2-43d0-b16a-235b2a29a278" Nov 4 20:06:13.663288 containerd[1607]: time="2025-11-04T20:06:13.663218268Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 20:06:13.665077 containerd[1607]: time="2025-11-04T20:06:13.665003486Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 20:06:13.665184 containerd[1607]: time="2025-11-04T20:06:13.665037820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 4 20:06:13.665361 kubelet[2763]: E1104 20:06:13.665285 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 20:06:13.665425 kubelet[2763]: E1104 20:06:13.665358 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 20:06:13.665571 kubelet[2763]: E1104 20:06:13.665522 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qhdd8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupPr
obe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65f9649fdf-5jqt6_calico-system(dd2798b7-1698-43cb-8c9c-5b6836607d10): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 20:06:13.666818 kubelet[2763]: E1104 20:06:13.666763 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65f9649fdf-5jqt6" podUID="dd2798b7-1698-43cb-8c9c-5b6836607d10" Nov 4 20:06:14.499942 containerd[1607]: time="2025-11-04T20:06:14.499887052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 20:06:14.831417 containerd[1607]: time="2025-11-04T20:06:14.831281762Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 20:06:14.832911 containerd[1607]: time="2025-11-04T20:06:14.832851164Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 20:06:14.832965 containerd[1607]: time="2025-11-04T20:06:14.832940181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 4 20:06:14.833190 
kubelet[2763]: E1104 20:06:14.833135 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 20:06:14.833569 kubelet[2763]: E1104 20:06:14.833198 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 20:06:14.833569 kubelet[2763]: E1104 20:06:14.833341 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-trjnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termi
nation-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lgkc6_calico-system(89d56747-162a-4c55-bf8f-ddfe11dc9e3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 20:06:14.836048 containerd[1607]: time="2025-11-04T20:06:14.836028371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 20:06:15.175717 containerd[1607]: time="2025-11-04T20:06:15.175652316Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 20:06:15.176846 containerd[1607]: time="2025-11-04T20:06:15.176806469Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 20:06:15.176916 containerd[1607]: time="2025-11-04T20:06:15.176884085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 4 20:06:15.177130 kubelet[2763]: E1104 20:06:15.177079 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 20:06:15.177199 kubelet[2763]: E1104 20:06:15.177138 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 20:06:15.177356 kubelet[2763]: E1104 20:06:15.177297 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-trjnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,Im
agePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lgkc6_calico-system(89d56747-162a-4c55-bf8f-ddfe11dc9e3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 20:06:15.178533 kubelet[2763]: E1104 20:06:15.178482 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lgkc6" podUID="89d56747-162a-4c55-bf8f-ddfe11dc9e3a" Nov 4 20:06:16.446789 systemd[1]: Started sshd@13-10.0.0.80:22-10.0.0.1:54028.service - OpenSSH per-connection server daemon (10.0.0.1:54028). 
Nov 4 20:06:16.500134 containerd[1607]: time="2025-11-04T20:06:16.500086243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 4 20:06:16.518947 sshd[5007]: Accepted publickey for core from 10.0.0.1 port 54028 ssh2: RSA SHA256:FD/6wCOEAK2oumu7YKYZjG9k48hMKxx8xD/1LBz1+Eg
Nov 4 20:06:16.521941 sshd-session[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 20:06:16.527157 systemd-logind[1575]: New session 15 of user core.
Nov 4 20:06:16.535232 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 4 20:06:16.613158 sshd[5011]: Connection closed by 10.0.0.1 port 54028
Nov 4 20:06:16.613450 sshd-session[5007]: pam_unix(sshd:session): session closed for user core
Nov 4 20:06:16.617809 systemd[1]: sshd@13-10.0.0.80:22-10.0.0.1:54028.service: Deactivated successfully.
Nov 4 20:06:16.619851 systemd[1]: session-15.scope: Deactivated successfully.
Nov 4 20:06:16.620664 systemd-logind[1575]: Session 15 logged out. Waiting for processes to exit.
Nov 4 20:06:16.621911 systemd-logind[1575]: Removed session 15.
Nov 4 20:06:16.864921 containerd[1607]: time="2025-11-04T20:06:16.864846518Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 20:06:16.866161 containerd[1607]: time="2025-11-04T20:06:16.866121859Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 20:06:16.866288 containerd[1607]: time="2025-11-04T20:06:16.866174508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 20:06:16.867026 kubelet[2763]: E1104 20:06:16.866313 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 20:06:16.867385 kubelet[2763]: E1104 20:06:16.867096 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 20:06:16.867443 kubelet[2763]: E1104 20:06:16.867351 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5bck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bf8445db8-2sm8f_calico-apiserver(a8754e25-0820-405c-8ad6-8e109ea21a48): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 20:06:16.868648 kubelet[2763]: E1104 20:06:16.868610 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bf8445db8-2sm8f" podUID="a8754e25-0820-405c-8ad6-8e109ea21a48" Nov 4 20:06:18.500261 containerd[1607]: time="2025-11-04T20:06:18.500207547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 20:06:18.854706 containerd[1607]: time="2025-11-04T20:06:18.854573664Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 20:06:18.856052 containerd[1607]: time="2025-11-04T20:06:18.855976956Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 20:06:18.856114 containerd[1607]: time="2025-11-04T20:06:18.856043681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 4 20:06:18.856291 kubelet[2763]: E1104 20:06:18.856255 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 20:06:18.856610 kubelet[2763]: E1104 20:06:18.856307 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 20:06:18.856610 kubelet[2763]: E1104 20:06:18.856449 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-55sf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-z9g8r_calico-system(4d779447-aab7-4044-9468-fe0588e362f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 20:06:18.857640 kubelet[2763]: E1104 20:06:18.857602 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z9g8r" podUID="4d779447-aab7-4044-9468-fe0588e362f2" Nov 4 20:06:21.642303 systemd[1]: Started sshd@14-10.0.0.80:22-10.0.0.1:54030.service - OpenSSH per-connection server 
daemon (10.0.0.1:54030). Nov 4 20:06:21.702720 sshd[5032]: Accepted publickey for core from 10.0.0.1 port 54030 ssh2: RSA SHA256:FD/6wCOEAK2oumu7YKYZjG9k48hMKxx8xD/1LBz1+Eg Nov 4 20:06:21.704737 sshd-session[5032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 20:06:21.708861 systemd-logind[1575]: New session 16 of user core. Nov 4 20:06:21.715141 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 4 20:06:21.789726 sshd[5036]: Connection closed by 10.0.0.1 port 54030 Nov 4 20:06:21.790080 sshd-session[5032]: pam_unix(sshd:session): session closed for user core Nov 4 20:06:21.794165 systemd[1]: sshd@14-10.0.0.80:22-10.0.0.1:54030.service: Deactivated successfully. Nov 4 20:06:21.796161 systemd[1]: session-16.scope: Deactivated successfully. Nov 4 20:06:21.797006 systemd-logind[1575]: Session 16 logged out. Waiting for processes to exit. Nov 4 20:06:21.798044 systemd-logind[1575]: Removed session 16. Nov 4 20:06:25.499762 kubelet[2763]: E1104 20:06:25.499698 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b4cd748b4-xzkvr" podUID="45ba7d0b-5883-4d92-9d1d-2bfad2cab22b" Nov 4 20:06:25.500895 kubelet[2763]: E1104 20:06:25.500725 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lgkc6" podUID="89d56747-162a-4c55-bf8f-ddfe11dc9e3a" Nov 4 20:06:25.501804 kubelet[2763]: E1104 20:06:25.501744 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65f9649fdf-5jqt6" podUID="dd2798b7-1698-43cb-8c9c-5b6836607d10" Nov 4 20:06:26.499003 kubelet[2763]: E1104 20:06:26.498958 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:06:26.802813 systemd[1]: Started sshd@15-10.0.0.80:22-10.0.0.1:35752.service - OpenSSH per-connection server daemon (10.0.0.1:35752). 
Nov 4 20:06:26.857035 sshd[5053]: Accepted publickey for core from 10.0.0.1 port 35752 ssh2: RSA SHA256:FD/6wCOEAK2oumu7YKYZjG9k48hMKxx8xD/1LBz1+Eg Nov 4 20:06:26.858978 sshd-session[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 20:06:26.863771 systemd-logind[1575]: New session 17 of user core. Nov 4 20:06:26.871161 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 4 20:06:26.950358 sshd[5057]: Connection closed by 10.0.0.1 port 35752 Nov 4 20:06:26.950693 sshd-session[5053]: pam_unix(sshd:session): session closed for user core Nov 4 20:06:26.954801 systemd[1]: sshd@15-10.0.0.80:22-10.0.0.1:35752.service: Deactivated successfully. Nov 4 20:06:26.956972 systemd[1]: session-17.scope: Deactivated successfully. Nov 4 20:06:26.959641 systemd-logind[1575]: Session 17 logged out. Waiting for processes to exit. Nov 4 20:06:26.960580 systemd-logind[1575]: Removed session 17. Nov 4 20:06:27.499695 kubelet[2763]: E1104 20:06:27.499648 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bf8445db8-vc5nj" podUID="c04ecbad-d5c2-43d0-b16a-235b2a29a278" Nov 4 20:06:29.500220 kubelet[2763]: E1104 20:06:29.500144 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-5bf8445db8-2sm8f" podUID="a8754e25-0820-405c-8ad6-8e109ea21a48" Nov 4 20:06:30.499317 kubelet[2763]: E1104 20:06:30.499268 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:06:31.970320 systemd[1]: Started sshd@16-10.0.0.80:22-10.0.0.1:35768.service - OpenSSH per-connection server daemon (10.0.0.1:35768). Nov 4 20:06:32.061576 sshd[5101]: Accepted publickey for core from 10.0.0.1 port 35768 ssh2: RSA SHA256:FD/6wCOEAK2oumu7YKYZjG9k48hMKxx8xD/1LBz1+Eg Nov 4 20:06:32.063888 sshd-session[5101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 20:06:32.068489 systemd-logind[1575]: New session 18 of user core. Nov 4 20:06:32.084155 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 4 20:06:32.165655 sshd[5105]: Connection closed by 10.0.0.1 port 35768 Nov 4 20:06:32.166164 sshd-session[5101]: pam_unix(sshd:session): session closed for user core Nov 4 20:06:32.177210 systemd[1]: sshd@16-10.0.0.80:22-10.0.0.1:35768.service: Deactivated successfully. Nov 4 20:06:32.179411 systemd[1]: session-18.scope: Deactivated successfully. Nov 4 20:06:32.180444 systemd-logind[1575]: Session 18 logged out. Waiting for processes to exit. Nov 4 20:06:32.183974 systemd[1]: Started sshd@17-10.0.0.80:22-10.0.0.1:35784.service - OpenSSH per-connection server daemon (10.0.0.1:35784). Nov 4 20:06:32.185524 systemd-logind[1575]: Removed session 18. Nov 4 20:06:32.238948 sshd[5119]: Accepted publickey for core from 10.0.0.1 port 35784 ssh2: RSA SHA256:FD/6wCOEAK2oumu7YKYZjG9k48hMKxx8xD/1LBz1+Eg Nov 4 20:06:32.241095 sshd-session[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 20:06:32.245264 systemd-logind[1575]: New session 19 of user core. 
Nov 4 20:06:32.255146 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 4 20:06:32.499693 kubelet[2763]: E1104 20:06:32.499532 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z9g8r" podUID="4d779447-aab7-4044-9468-fe0588e362f2"
Nov 4 20:06:32.517894 sshd[5123]: Connection closed by 10.0.0.1 port 35784
Nov 4 20:06:32.518230 sshd-session[5119]: pam_unix(sshd:session): session closed for user core
Nov 4 20:06:32.526756 systemd[1]: sshd@17-10.0.0.80:22-10.0.0.1:35784.service: Deactivated successfully.
Nov 4 20:06:32.528658 systemd[1]: session-19.scope: Deactivated successfully.
Nov 4 20:06:32.529361 systemd-logind[1575]: Session 19 logged out. Waiting for processes to exit.
Nov 4 20:06:32.532106 systemd[1]: Started sshd@18-10.0.0.80:22-10.0.0.1:35790.service - OpenSSH per-connection server daemon (10.0.0.1:35790).
Nov 4 20:06:32.532740 systemd-logind[1575]: Removed session 19.
Nov 4 20:06:32.589647 sshd[5134]: Accepted publickey for core from 10.0.0.1 port 35790 ssh2: RSA SHA256:FD/6wCOEAK2oumu7YKYZjG9k48hMKxx8xD/1LBz1+Eg
Nov 4 20:06:32.591652 sshd-session[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 20:06:32.596337 systemd-logind[1575]: New session 20 of user core.
Nov 4 20:06:32.610205 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 4 20:06:33.186079 sshd[5138]: Connection closed by 10.0.0.1 port 35790
Nov 4 20:06:33.186580 sshd-session[5134]: pam_unix(sshd:session): session closed for user core
Nov 4 20:06:33.199400 systemd[1]: sshd@18-10.0.0.80:22-10.0.0.1:35790.service: Deactivated successfully.
Nov 4 20:06:33.203198 systemd[1]: session-20.scope: Deactivated successfully.
Nov 4 20:06:33.207301 systemd-logind[1575]: Session 20 logged out. Waiting for processes to exit.
Nov 4 20:06:33.210509 systemd[1]: Started sshd@19-10.0.0.80:22-10.0.0.1:35800.service - OpenSSH per-connection server daemon (10.0.0.1:35800).
Nov 4 20:06:33.211584 systemd-logind[1575]: Removed session 20.
Nov 4 20:06:33.265417 sshd[5160]: Accepted publickey for core from 10.0.0.1 port 35800 ssh2: RSA SHA256:FD/6wCOEAK2oumu7YKYZjG9k48hMKxx8xD/1LBz1+Eg
Nov 4 20:06:33.267573 sshd-session[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 20:06:33.272008 systemd-logind[1575]: New session 21 of user core.
Nov 4 20:06:33.282168 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 4 20:06:33.441286 sshd[5164]: Connection closed by 10.0.0.1 port 35800
Nov 4 20:06:33.443615 sshd-session[5160]: pam_unix(sshd:session): session closed for user core
Nov 4 20:06:33.452933 systemd[1]: sshd@19-10.0.0.80:22-10.0.0.1:35800.service: Deactivated successfully.
Nov 4 20:06:33.455753 systemd[1]: session-21.scope: Deactivated successfully.
Nov 4 20:06:33.457392 systemd-logind[1575]: Session 21 logged out. Waiting for processes to exit.
Nov 4 20:06:33.460247 systemd[1]: Started sshd@20-10.0.0.80:22-10.0.0.1:35816.service - OpenSSH per-connection server daemon (10.0.0.1:35816).
Nov 4 20:06:33.461411 systemd-logind[1575]: Removed session 21.
Nov 4 20:06:33.516964 sshd[5176]: Accepted publickey for core from 10.0.0.1 port 35816 ssh2: RSA SHA256:FD/6wCOEAK2oumu7YKYZjG9k48hMKxx8xD/1LBz1+Eg Nov 4 20:06:33.518956 sshd-session[5176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 20:06:33.523321 systemd-logind[1575]: New session 22 of user core. Nov 4 20:06:33.531170 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 4 20:06:33.609520 sshd[5180]: Connection closed by 10.0.0.1 port 35816 Nov 4 20:06:33.609834 sshd-session[5176]: pam_unix(sshd:session): session closed for user core Nov 4 20:06:33.615357 systemd[1]: sshd@20-10.0.0.80:22-10.0.0.1:35816.service: Deactivated successfully. Nov 4 20:06:33.618073 systemd[1]: session-22.scope: Deactivated successfully. Nov 4 20:06:33.618963 systemd-logind[1575]: Session 22 logged out. Waiting for processes to exit. Nov 4 20:06:33.620492 systemd-logind[1575]: Removed session 22. Nov 4 20:06:34.499276 kubelet[2763]: E1104 20:06:34.499238 2763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 20:06:36.499283 containerd[1607]: time="2025-11-04T20:06:36.499223514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 20:06:36.868256 containerd[1607]: time="2025-11-04T20:06:36.868184266Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 20:06:36.869681 containerd[1607]: time="2025-11-04T20:06:36.869634053Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 20:06:36.869768 containerd[1607]: time="2025-11-04T20:06:36.869673527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 4 
20:06:36.869974 kubelet[2763]: E1104 20:06:36.869909 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 20:06:36.869974 kubelet[2763]: E1104 20:06:36.869977 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 20:06:36.870571 kubelet[2763]: E1104 20:06:36.870226 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-trjnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessag
ePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lgkc6_calico-system(89d56747-162a-4c55-bf8f-ddfe11dc9e3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 20:06:36.870775 containerd[1607]: time="2025-11-04T20:06:36.870742291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 20:06:37.228420 containerd[1607]: time="2025-11-04T20:06:37.228248060Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 20:06:37.230499 containerd[1607]: time="2025-11-04T20:06:37.230434960Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 20:06:37.230624 containerd[1607]: time="2025-11-04T20:06:37.230535909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 4 20:06:37.230836 kubelet[2763]: E1104 20:06:37.230783 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 20:06:37.230953 kubelet[2763]: E1104 20:06:37.230855 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 20:06:37.231220 kubelet[2763]: E1104 20:06:37.231178 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:420df1ec3eda4f75af4e03716b882180,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qhdd8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Container
ResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65f9649fdf-5jqt6_calico-system(dd2798b7-1698-43cb-8c9c-5b6836607d10): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 20:06:37.231524 containerd[1607]: time="2025-11-04T20:06:37.231473346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 20:06:37.583438 containerd[1607]: time="2025-11-04T20:06:37.583271292Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 20:06:37.584690 containerd[1607]: time="2025-11-04T20:06:37.584605703Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 20:06:37.584766 containerd[1607]: time="2025-11-04T20:06:37.584704057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 4 20:06:37.585008 kubelet[2763]: E1104 20:06:37.584958 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 20:06:37.585090 kubelet[2763]: E1104 20:06:37.585034 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 20:06:37.585793 kubelet[2763]: E1104 20:06:37.585341 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-trjnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Contai
nerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lgkc6_calico-system(89d56747-162a-4c55-bf8f-ddfe11dc9e3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 20:06:37.585950 containerd[1607]: time="2025-11-04T20:06:37.585419398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 20:06:37.587287 kubelet[2763]: E1104 20:06:37.587235 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lgkc6" podUID="89d56747-162a-4c55-bf8f-ddfe11dc9e3a" Nov 4 20:06:37.973010 containerd[1607]: time="2025-11-04T20:06:37.972939778Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 20:06:37.974365 containerd[1607]: time="2025-11-04T20:06:37.974307532Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 20:06:37.974655 kubelet[2763]: E1104 20:06:37.974587 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 20:06:37.975154 kubelet[2763]: E1104 20:06:37.974662 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 20:06:37.975154 kubelet[2763]: E1104 20:06:37.974871 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qhdd8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Cap
abilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65f9649fdf-5jqt6_calico-system(dd2798b7-1698-43cb-8c9c-5b6836607d10): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 20:06:37.976474 kubelet[2763]: E1104 20:06:37.976424 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65f9649fdf-5jqt6" podUID="dd2798b7-1698-43cb-8c9c-5b6836607d10" Nov 4 20:06:37.981319 containerd[1607]: time="2025-11-04T20:06:37.974349881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 4 20:06:38.499808 containerd[1607]: time="2025-11-04T20:06:38.499758154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 20:06:38.622963 systemd[1]: Started 
sshd@21-10.0.0.80:22-10.0.0.1:58672.service - OpenSSH per-connection server daemon (10.0.0.1:58672). Nov 4 20:06:38.685369 sshd[5202]: Accepted publickey for core from 10.0.0.1 port 58672 ssh2: RSA SHA256:FD/6wCOEAK2oumu7YKYZjG9k48hMKxx8xD/1LBz1+Eg Nov 4 20:06:38.687413 sshd-session[5202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 20:06:38.692208 systemd-logind[1575]: New session 23 of user core. Nov 4 20:06:38.704229 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 4 20:06:38.783698 sshd[5206]: Connection closed by 10.0.0.1 port 58672 Nov 4 20:06:38.784269 sshd-session[5202]: pam_unix(sshd:session): session closed for user core Nov 4 20:06:38.790179 systemd[1]: sshd@21-10.0.0.80:22-10.0.0.1:58672.service: Deactivated successfully. Nov 4 20:06:38.792658 systemd[1]: session-23.scope: Deactivated successfully. Nov 4 20:06:38.794758 systemd-logind[1575]: Session 23 logged out. Waiting for processes to exit. Nov 4 20:06:38.796554 systemd-logind[1575]: Removed session 23. 
Nov 4 20:06:38.852673 containerd[1607]: time="2025-11-04T20:06:38.852609675Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 20:06:38.857194 containerd[1607]: time="2025-11-04T20:06:38.857126341Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 20:06:38.857362 containerd[1607]: time="2025-11-04T20:06:38.857198226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 20:06:38.857452 kubelet[2763]: E1104 20:06:38.857390 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 20:06:38.857452 kubelet[2763]: E1104 20:06:38.857445 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 20:06:38.857734 kubelet[2763]: E1104 20:06:38.857591 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l52rm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bf8445db8-vc5nj_calico-apiserver(c04ecbad-d5c2-43d0-b16a-235b2a29a278): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 20:06:38.859168 kubelet[2763]: E1104 20:06:38.859115 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bf8445db8-vc5nj" podUID="c04ecbad-d5c2-43d0-b16a-235b2a29a278" Nov 4 20:06:40.499450 containerd[1607]: time="2025-11-04T20:06:40.499385206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 20:06:40.855621 containerd[1607]: time="2025-11-04T20:06:40.855480535Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 20:06:40.856738 containerd[1607]: time="2025-11-04T20:06:40.856691926Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 20:06:40.856813 containerd[1607]: time="2025-11-04T20:06:40.856746388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 4 20:06:40.856936 kubelet[2763]: E1104 20:06:40.856898 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 20:06:40.857313 kubelet[2763]: E1104 20:06:40.856951 2763 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 20:06:40.857313 kubelet[2763]: E1104 20:06:40.857123 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9j78q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5b4cd748b4-xzkvr_calico-system(45ba7d0b-5883-4d92-9d1d-2bfad2cab22b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 20:06:40.858335 kubelet[2763]: E1104 20:06:40.858301 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b4cd748b4-xzkvr" podUID="45ba7d0b-5883-4d92-9d1d-2bfad2cab22b" Nov 4 20:06:43.499557 
containerd[1607]: time="2025-11-04T20:06:43.499506086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 20:06:43.801003 systemd[1]: Started sshd@22-10.0.0.80:22-10.0.0.1:58678.service - OpenSSH per-connection server daemon (10.0.0.1:58678). Nov 4 20:06:43.855199 containerd[1607]: time="2025-11-04T20:06:43.855150556Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 20:06:43.856495 containerd[1607]: time="2025-11-04T20:06:43.856459370Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 20:06:43.856582 containerd[1607]: time="2025-11-04T20:06:43.856527508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 20:06:43.856786 kubelet[2763]: E1104 20:06:43.856724 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 20:06:43.856786 kubelet[2763]: E1104 20:06:43.856785 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 20:06:43.857236 kubelet[2763]: E1104 20:06:43.856957 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5bck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bf8445db8-2sm8f_calico-apiserver(a8754e25-0820-405c-8ad6-8e109ea21a48): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 20:06:43.858153 kubelet[2763]: E1104 20:06:43.858124 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bf8445db8-2sm8f" podUID="a8754e25-0820-405c-8ad6-8e109ea21a48" Nov 4 20:06:43.872617 sshd[5221]: Accepted publickey for core from 10.0.0.1 port 58678 ssh2: RSA SHA256:FD/6wCOEAK2oumu7YKYZjG9k48hMKxx8xD/1LBz1+Eg Nov 4 20:06:43.875106 sshd-session[5221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 20:06:43.879716 systemd-logind[1575]: New session 24 of user core. Nov 4 20:06:43.891167 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 4 20:06:43.957957 sshd[5225]: Connection closed by 10.0.0.1 port 58678 Nov 4 20:06:43.960037 sshd-session[5221]: pam_unix(sshd:session): session closed for user core Nov 4 20:06:43.963817 systemd[1]: sshd@22-10.0.0.80:22-10.0.0.1:58678.service: Deactivated successfully. Nov 4 20:06:43.966317 systemd[1]: session-24.scope: Deactivated successfully. Nov 4 20:06:43.967825 systemd-logind[1575]: Session 24 logged out. Waiting for processes to exit. Nov 4 20:06:43.969605 systemd-logind[1575]: Removed session 24. 
Nov 4 20:06:45.500330 containerd[1607]: time="2025-11-04T20:06:45.500190948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 20:06:45.871777 containerd[1607]: time="2025-11-04T20:06:45.871711965Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 20:06:45.872999 containerd[1607]: time="2025-11-04T20:06:45.872951349Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 20:06:45.873069 containerd[1607]: time="2025-11-04T20:06:45.872999268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 4 20:06:45.873260 kubelet[2763]: E1104 20:06:45.873209 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 20:06:45.873647 kubelet[2763]: E1104 20:06:45.873266 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 20:06:45.873647 kubelet[2763]: E1104 20:06:45.873411 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-55sf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-z9g8r_calico-system(4d779447-aab7-4044-9468-fe0588e362f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 20:06:45.874648 kubelet[2763]: E1104 20:06:45.874597 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z9g8r" podUID="4d779447-aab7-4044-9468-fe0588e362f2" Nov 4 20:06:48.500613 kubelet[2763]: E1104 20:06:48.500390 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lgkc6" podUID="89d56747-162a-4c55-bf8f-ddfe11dc9e3a" Nov 4 20:06:48.971247 systemd[1]: Started sshd@23-10.0.0.80:22-10.0.0.1:33680.service - OpenSSH per-connection server daemon (10.0.0.1:33680). Nov 4 20:06:49.025812 sshd[5238]: Accepted publickey for core from 10.0.0.1 port 33680 ssh2: RSA SHA256:FD/6wCOEAK2oumu7YKYZjG9k48hMKxx8xD/1LBz1+Eg Nov 4 20:06:49.028364 sshd-session[5238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 20:06:49.032736 systemd-logind[1575]: New session 25 of user core. Nov 4 20:06:49.041166 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 4 20:06:49.107632 sshd[5242]: Connection closed by 10.0.0.1 port 33680 Nov 4 20:06:49.108197 sshd-session[5238]: pam_unix(sshd:session): session closed for user core Nov 4 20:06:49.113274 systemd[1]: sshd@23-10.0.0.80:22-10.0.0.1:33680.service: Deactivated successfully. Nov 4 20:06:49.115470 systemd[1]: session-25.scope: Deactivated successfully. Nov 4 20:06:49.116256 systemd-logind[1575]: Session 25 logged out. Waiting for processes to exit. Nov 4 20:06:49.117446 systemd-logind[1575]: Removed session 25. 
Nov 4 20:06:50.500038 kubelet[2763]: E1104 20:06:50.499776 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bf8445db8-vc5nj" podUID="c04ecbad-d5c2-43d0-b16a-235b2a29a278"