Nov 4 23:55:57.672576 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 4 22:00:22 -00 2025
Nov 4 23:55:57.672617 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:55:57.672632 kernel: BIOS-provided physical RAM map:
Nov 4 23:55:57.672639 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 4 23:55:57.672646 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 4 23:55:57.672652 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 4 23:55:57.672661 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 4 23:55:57.672668 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 4 23:55:57.672677 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 4 23:55:57.672688 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 4 23:55:57.672695 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 4 23:55:57.672702 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 4 23:55:57.672708 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 4 23:55:57.672716 kernel: NX (Execute Disable) protection: active
Nov 4 23:55:57.672726 kernel: APIC: Static calls initialized
Nov 4 23:55:57.672734 kernel: SMBIOS 2.8 present.
Nov 4 23:55:57.672744 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 4 23:55:57.672751 kernel: DMI: Memory slots populated: 1/1
Nov 4 23:55:57.672760 kernel: Hypervisor detected: KVM
Nov 4 23:55:57.672769 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 4 23:55:57.672779 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 4 23:55:57.672789 kernel: kvm-clock: using sched offset of 4540713271 cycles
Nov 4 23:55:57.672800 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 4 23:55:57.672817 kernel: tsc: Detected 2794.750 MHz processor
Nov 4 23:55:57.672828 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 4 23:55:57.672839 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 4 23:55:57.672848 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 4 23:55:57.672859 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 4 23:55:57.672870 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 4 23:55:57.672880 kernel: Using GB pages for direct mapping
Nov 4 23:55:57.672890 kernel: ACPI: Early table checksum verification disabled
Nov 4 23:55:57.672904 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 4 23:55:57.672915 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:55:57.672924 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:55:57.672932 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:55:57.672940 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 4 23:55:57.672980 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:55:57.672988 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:55:57.673003 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:55:57.673011 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:55:57.673022 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Nov 4 23:55:57.673030 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Nov 4 23:55:57.673038 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 4 23:55:57.673049 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Nov 4 23:55:57.673057 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Nov 4 23:55:57.673065 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Nov 4 23:55:57.673072 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Nov 4 23:55:57.673080 kernel: No NUMA configuration found
Nov 4 23:55:57.673088 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 4 23:55:57.673099 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Nov 4 23:55:57.673107 kernel: Zone ranges:
Nov 4 23:55:57.673115 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 4 23:55:57.673123 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 4 23:55:57.673131 kernel: Normal empty
Nov 4 23:55:57.673139 kernel: Device empty
Nov 4 23:55:57.673147 kernel: Movable zone start for each node
Nov 4 23:55:57.673155 kernel: Early memory node ranges
Nov 4 23:55:57.673168 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 4 23:55:57.673176 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 4 23:55:57.673184 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 4 23:55:57.673192 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 4 23:55:57.673200 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 4 23:55:57.673208 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 4 23:55:57.673218 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 4 23:55:57.673229 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 4 23:55:57.673237 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 4 23:55:57.673245 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 4 23:55:57.673255 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 4 23:55:57.673263 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 4 23:55:57.673279 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 4 23:55:57.673287 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 4 23:55:57.673300 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 4 23:55:57.673309 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 4 23:55:57.673319 kernel: TSC deadline timer available
Nov 4 23:55:57.673327 kernel: CPU topo: Max. logical packages: 1
Nov 4 23:55:57.673335 kernel: CPU topo: Max. logical dies: 1
Nov 4 23:55:57.673343 kernel: CPU topo: Max. dies per package: 1
Nov 4 23:55:57.673351 kernel: CPU topo: Max. threads per core: 1
Nov 4 23:55:57.673359 kernel: CPU topo: Num. cores per package: 4
Nov 4 23:55:57.673369 kernel: CPU topo: Num. threads per package: 4
Nov 4 23:55:57.673377 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 4 23:55:57.673385 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 4 23:55:57.673393 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 4 23:55:57.673401 kernel: kvm-guest: setup PV sched yield
Nov 4 23:55:57.673409 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 4 23:55:57.673417 kernel: Booting paravirtualized kernel on KVM
Nov 4 23:55:57.673430 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 4 23:55:57.673439 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 4 23:55:57.673447 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 4 23:55:57.673455 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 4 23:55:57.673463 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 4 23:55:57.673470 kernel: kvm-guest: PV spinlocks enabled
Nov 4 23:55:57.673478 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 4 23:55:57.673492 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:55:57.673501 kernel: random: crng init done
Nov 4 23:55:57.673509 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 4 23:55:57.673517 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 4 23:55:57.673525 kernel: Fallback order for Node 0: 0
Nov 4 23:55:57.673533 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Nov 4 23:55:57.673541 kernel: Policy zone: DMA32
Nov 4 23:55:57.673551 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 4 23:55:57.673559 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 4 23:55:57.673567 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 4 23:55:57.673575 kernel: ftrace: allocated 157 pages with 5 groups
Nov 4 23:55:57.673583 kernel: Dynamic Preempt: voluntary
Nov 4 23:55:57.673591 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 4 23:55:57.673600 kernel: rcu: RCU event tracing is enabled.
Nov 4 23:55:57.673611 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 4 23:55:57.673619 kernel: Trampoline variant of Tasks RCU enabled.
Nov 4 23:55:57.673629 kernel: Rude variant of Tasks RCU enabled.
Nov 4 23:55:57.673637 kernel: Tracing variant of Tasks RCU enabled.
Nov 4 23:55:57.673645 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 4 23:55:57.673653 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 4 23:55:57.673661 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 23:55:57.673670 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 23:55:57.673681 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 23:55:57.673690 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 4 23:55:57.673700 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 4 23:55:57.673724 kernel: Console: colour VGA+ 80x25
Nov 4 23:55:57.673736 kernel: printk: legacy console [ttyS0] enabled
Nov 4 23:55:57.673747 kernel: ACPI: Core revision 20240827
Nov 4 23:55:57.673758 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 4 23:55:57.673769 kernel: APIC: Switch to symmetric I/O mode setup
Nov 4 23:55:57.673779 kernel: x2apic enabled
Nov 4 23:55:57.673793 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 4 23:55:57.673807 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 4 23:55:57.673818 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 4 23:55:57.673829 kernel: kvm-guest: setup PV IPIs
Nov 4 23:55:57.673847 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 4 23:55:57.673858 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Nov 4 23:55:57.673869 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Nov 4 23:55:57.673880 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 4 23:55:57.673891 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 4 23:55:57.673902 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 4 23:55:57.673912 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 4 23:55:57.673926 kernel: Spectre V2 : Mitigation: Retpolines
Nov 4 23:55:57.673937 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 4 23:55:57.673966 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 4 23:55:57.673978 kernel: active return thunk: retbleed_return_thunk
Nov 4 23:55:57.673989 kernel: RETBleed: Mitigation: untrained return thunk
Nov 4 23:55:57.673997 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 4 23:55:57.674006 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 4 23:55:57.674022 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 4 23:55:57.674031 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 4 23:55:57.674039 kernel: active return thunk: srso_return_thunk
Nov 4 23:55:57.674048 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 4 23:55:57.674056 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 4 23:55:57.674065 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 4 23:55:57.674073 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 4 23:55:57.674084 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 4 23:55:57.674092 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 4 23:55:57.674101 kernel: Freeing SMP alternatives memory: 32K
Nov 4 23:55:57.674109 kernel: pid_max: default: 32768 minimum: 301
Nov 4 23:55:57.674118 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 4 23:55:57.674127 kernel: landlock: Up and running.
Nov 4 23:55:57.674137 kernel: SELinux: Initializing.
Nov 4 23:55:57.674154 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 4 23:55:57.674164 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 4 23:55:57.674175 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 4 23:55:57.674187 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 4 23:55:57.674202 kernel: ... version: 0
Nov 4 23:55:57.674215 kernel: ... bit width: 48
Nov 4 23:55:57.674225 kernel: ... generic registers: 6
Nov 4 23:55:57.674246 kernel: ... value mask: 0000ffffffffffff
Nov 4 23:55:57.674256 kernel: ... max period: 00007fffffffffff
Nov 4 23:55:57.674267 kernel: ... fixed-purpose events: 0
Nov 4 23:55:57.674285 kernel: ... event mask: 000000000000003f
Nov 4 23:55:57.674293 kernel: signal: max sigframe size: 1776
Nov 4 23:55:57.674301 kernel: rcu: Hierarchical SRCU implementation.
Nov 4 23:55:57.674310 kernel: rcu: Max phase no-delay instances is 400.
Nov 4 23:55:57.674321 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 4 23:55:57.674330 kernel: smp: Bringing up secondary CPUs ...
Nov 4 23:55:57.674339 kernel: smpboot: x86: Booting SMP configuration:
Nov 4 23:55:57.674348 kernel: .... node #0, CPUs: #1 #2 #3
Nov 4 23:55:57.674356 kernel: smp: Brought up 1 node, 4 CPUs
Nov 4 23:55:57.674364 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Nov 4 23:55:57.674373 kernel: Memory: 2451440K/2571752K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15936K init, 2108K bss, 114376K reserved, 0K cma-reserved)
Nov 4 23:55:57.674386 kernel: devtmpfs: initialized
Nov 4 23:55:57.674395 kernel: x86/mm: Memory block size: 128MB
Nov 4 23:55:57.674403 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 4 23:55:57.674412 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 4 23:55:57.674420 kernel: pinctrl core: initialized pinctrl subsystem
Nov 4 23:55:57.674428 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 4 23:55:57.674437 kernel: audit: initializing netlink subsys (disabled)
Nov 4 23:55:57.674447 kernel: audit: type=2000 audit(1762300554.076:1): state=initialized audit_enabled=0 res=1
Nov 4 23:55:57.674456 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 4 23:55:57.674464 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 4 23:55:57.674472 kernel: cpuidle: using governor menu
Nov 4 23:55:57.674480 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 4 23:55:57.674489 kernel: dca service started, version 1.12.1
Nov 4 23:55:57.674497 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 4 23:55:57.674506 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 4 23:55:57.674519 kernel: PCI: Using configuration type 1 for base access
Nov 4 23:55:57.674527 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 4 23:55:57.674536 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 4 23:55:57.674544 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 4 23:55:57.674553 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 4 23:55:57.674561 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 4 23:55:57.674569 kernel: ACPI: Added _OSI(Module Device)
Nov 4 23:55:57.674582 kernel: ACPI: Added _OSI(Processor Device)
Nov 4 23:55:57.674590 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 4 23:55:57.674599 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 4 23:55:57.674607 kernel: ACPI: Interpreter enabled
Nov 4 23:55:57.674615 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 4 23:55:57.674623 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 4 23:55:57.674632 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 4 23:55:57.674644 kernel: PCI: Using E820 reservations for host bridge windows
Nov 4 23:55:57.674653 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 4 23:55:57.674661 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 4 23:55:57.674940 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 4 23:55:57.675149 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 4 23:55:57.675337 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 4 23:55:57.675357 kernel: PCI host bridge to bus 0000:00
Nov 4 23:55:57.675538 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 4 23:55:57.675698 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 4 23:55:57.675972 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 4 23:55:57.676152 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 4 23:55:57.676324 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 4 23:55:57.676504 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 4 23:55:57.676664 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 4 23:55:57.676860 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 4 23:55:57.677099 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 4 23:55:57.677315 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Nov 4 23:55:57.677557 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Nov 4 23:55:57.677734 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Nov 4 23:55:57.677922 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 4 23:55:57.678145 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 4 23:55:57.678336 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Nov 4 23:55:57.678520 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Nov 4 23:55:57.678695 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 4 23:55:57.678907 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 4 23:55:57.679124 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Nov 4 23:55:57.679313 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Nov 4 23:55:57.679523 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 4 23:55:57.679723 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 4 23:55:57.679899 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Nov 4 23:55:57.680094 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Nov 4 23:55:57.680279 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 4 23:55:57.680457 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Nov 4 23:55:57.680640 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 4 23:55:57.680823 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 4 23:55:57.681053 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 4 23:55:57.681252 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Nov 4 23:55:57.681440 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Nov 4 23:55:57.681647 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 4 23:55:57.681832 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Nov 4 23:55:57.681844 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 4 23:55:57.681853 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 4 23:55:57.681862 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 4 23:55:57.681871 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 4 23:55:57.681880 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 4 23:55:57.681888 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 4 23:55:57.681904 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 4 23:55:57.681913 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 4 23:55:57.681921 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 4 23:55:57.681930 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 4 23:55:57.681939 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 4 23:55:57.681970 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 4 23:55:57.681979 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 4 23:55:57.681990 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 4 23:55:57.681999 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 4 23:55:57.682008 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 4 23:55:57.682016 kernel: iommu: Default domain type: Translated
Nov 4 23:55:57.682025 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 4 23:55:57.682033 kernel: PCI: Using ACPI for IRQ routing
Nov 4 23:55:57.682042 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 4 23:55:57.682055 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 4 23:55:57.682064 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 4 23:55:57.682241 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 4 23:55:57.682427 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 4 23:55:57.682598 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 4 23:55:57.682610 kernel: vgaarb: loaded
Nov 4 23:55:57.682618 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 4 23:55:57.682631 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 4 23:55:57.682639 kernel: clocksource: Switched to clocksource kvm-clock
Nov 4 23:55:57.682648 kernel: VFS: Disk quotas dquot_6.6.0
Nov 4 23:55:57.682656 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 4 23:55:57.682665 kernel: pnp: PnP ACPI init
Nov 4 23:55:57.682872 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 4 23:55:57.682886 kernel: pnp: PnP ACPI: found 6 devices
Nov 4 23:55:57.682909 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 4 23:55:57.682920 kernel: NET: Registered PF_INET protocol family
Nov 4 23:55:57.682932 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 4 23:55:57.682944 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 4 23:55:57.682971 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 4 23:55:57.682991 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 4 23:55:57.683009 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 4 23:55:57.683034 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 4 23:55:57.683043 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 4 23:55:57.683052 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 4 23:55:57.683061 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 4 23:55:57.683070 kernel: NET: Registered PF_XDP protocol family
Nov 4 23:55:57.683246 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 4 23:55:57.683424 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 4 23:55:57.683595 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 4 23:55:57.683754 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 4 23:55:57.683914 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 4 23:55:57.684091 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 4 23:55:57.684103 kernel: PCI: CLS 0 bytes, default 64
Nov 4 23:55:57.684112 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Nov 4 23:55:57.684128 kernel: Initialise system trusted keyrings
Nov 4 23:55:57.684137 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 4 23:55:57.684146 kernel: Key type asymmetric registered
Nov 4 23:55:57.684154 kernel: Asymmetric key parser 'x509' registered
Nov 4 23:55:57.684163 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 4 23:55:57.684172 kernel: io scheduler mq-deadline registered
Nov 4 23:55:57.684180 kernel: io scheduler kyber registered
Nov 4 23:55:57.684193 kernel: io scheduler bfq registered
Nov 4 23:55:57.684202 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 4 23:55:57.684212 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 4 23:55:57.684220 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 4 23:55:57.684229 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 4 23:55:57.684237 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 4 23:55:57.684246 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 4 23:55:57.684260 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 4 23:55:57.684269 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 4 23:55:57.684285 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 4 23:55:57.684491 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 4 23:55:57.684505 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 4 23:55:57.684685 kernel: rtc_cmos 00:04: registered as rtc0
Nov 4 23:55:57.684852 kernel: rtc_cmos 00:04: setting system clock to 2025-11-04T23:55:55 UTC (1762300555)
Nov 4 23:55:57.685049 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 4 23:55:57.685061 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 4 23:55:57.685071 kernel: NET: Registered PF_INET6 protocol family
Nov 4 23:55:57.685079 kernel: Segment Routing with IPv6
Nov 4 23:55:57.685087 kernel: In-situ OAM (IOAM) with IPv6
Nov 4 23:55:57.685096 kernel: NET: Registered PF_PACKET protocol family
Nov 4 23:55:57.685105 kernel: Key type dns_resolver registered
Nov 4 23:55:57.685117 kernel: IPI shorthand broadcast: enabled
Nov 4 23:55:57.685126 kernel: sched_clock: Marking stable (1468004418, 227561473)->(1751017247, -55451356)
Nov 4 23:55:57.685135 kernel: registered taskstats version 1
Nov 4 23:55:57.685143 kernel: Loading compiled-in X.509 certificates
Nov 4 23:55:57.685152 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: ace064fb6689a15889f35c6439909c760a72ef44'
Nov 4 23:55:57.685160 kernel: Demotion targets for Node 0: null
Nov 4 23:55:57.685168 kernel: Key type .fscrypt registered
Nov 4 23:55:57.685182 kernel: Key type fscrypt-provisioning registered
Nov 4 23:55:57.685191 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 4 23:55:57.685199 kernel: ima: Allocated hash algorithm: sha1
Nov 4 23:55:57.685208 kernel: ima: No architecture policies found
Nov 4 23:55:57.685216 kernel: clk: Disabling unused clocks
Nov 4 23:55:57.685225 kernel: Freeing unused kernel image (initmem) memory: 15936K
Nov 4 23:55:57.685233 kernel: Write protecting the kernel read-only data: 40960k
Nov 4 23:55:57.685247 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 4 23:55:57.685255 kernel: Run /init as init process
Nov 4 23:55:57.685264 kernel: with arguments:
Nov 4 23:55:57.685280 kernel: /init
Nov 4 23:55:57.685288 kernel: with environment:
Nov 4 23:55:57.685296 kernel: HOME=/
Nov 4 23:55:57.685305 kernel: TERM=linux
Nov 4 23:55:57.685313 kernel: SCSI subsystem initialized
Nov 4 23:55:57.685327 kernel: libata version 3.00 loaded.
Nov 4 23:55:57.685505 kernel: ahci 0000:00:1f.2: version 3.0
Nov 4 23:55:57.685550 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 4 23:55:57.685724 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 4 23:55:57.685896 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 4 23:55:57.686161 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 4 23:55:57.686376 kernel: scsi host0: ahci
Nov 4 23:55:57.686578 kernel: scsi host1: ahci
Nov 4 23:55:57.686766 kernel: scsi host2: ahci
Nov 4 23:55:57.686966 kernel: scsi host3: ahci
Nov 4 23:55:57.687155 kernel: scsi host4: ahci
Nov 4 23:55:57.687358 kernel: scsi host5: ahci
Nov 4 23:55:57.687371 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Nov 4 23:55:57.687380 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Nov 4 23:55:57.687390 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Nov 4 23:55:57.687398 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Nov 4 23:55:57.687407 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Nov 4 23:55:57.687420 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Nov 4 23:55:57.687429 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 4 23:55:57.687438 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 4 23:55:57.687447 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 4 23:55:57.687462 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 4 23:55:57.687471 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 4 23:55:57.687480 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 4 23:55:57.687494 kernel: ata3.00: LPM support broken, forcing max_power
Nov 4 23:55:57.687503 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 4 23:55:57.687511 kernel: ata3.00: applying bridge limits
Nov 4 23:55:57.687520 kernel: ata3.00: LPM support broken, forcing max_power
Nov 4 23:55:57.687529 kernel: ata3.00: configured for UDMA/100
Nov 4 23:55:57.687743 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 4 23:55:57.687934 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 4 23:55:57.688133 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Nov 4 23:55:57.688146 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 4 23:55:57.688155 kernel: GPT:16515071 != 27000831
Nov 4 23:55:57.688164 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 4 23:55:57.688172 kernel: GPT:16515071 != 27000831
Nov 4 23:55:57.688181 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 4 23:55:57.688197 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 4 23:55:57.688400 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 4 23:55:57.688413 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 4 23:55:57.688602 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 4 23:55:57.688615 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 4 23:55:57.688624 kernel: device-mapper: uevent: version 1.0.3
Nov 4 23:55:57.688633 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 4 23:55:57.688646 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 4 23:55:57.688657 kernel: raid6: avx2x4 gen() 28126 MB/s
Nov 4 23:55:57.688665 kernel: raid6: avx2x2 gen() 30335 MB/s
Nov 4 23:55:57.688674 kernel: raid6: avx2x1 gen() 21672 MB/s
Nov 4 23:55:57.688685 kernel: raid6: using algorithm avx2x2 gen() 30335 MB/s
Nov 4 23:55:57.688694 kernel: raid6: .... xor() 18799 MB/s, rmw enabled
Nov 4 23:55:57.688703 kernel: raid6: using avx2x2 recovery algorithm
Nov 4 23:55:57.688712 kernel: xor: automatically using best checksumming function avx
Nov 4 23:55:57.688721 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 4 23:55:57.688730 kernel: BTRFS: device fsid f719dc90-1cf7-4f08-a80f-0dda441372cc devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (182)
Nov 4 23:55:57.688739 kernel: BTRFS info (device dm-0): first mount of filesystem f719dc90-1cf7-4f08-a80f-0dda441372cc
Nov 4 23:55:57.688750 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:55:57.688759 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 4 23:55:57.688768 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 4 23:55:57.688777 kernel: loop: module loaded
Nov 4 23:55:57.688786 kernel: loop0: detected capacity change from 0 to 100120
Nov 4 23:55:57.688794 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 4 23:55:57.688804 systemd[1]: Successfully made /usr/ read-only.
Nov 4 23:55:57.688819 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 4 23:55:57.688829 systemd[1]: Detected virtualization kvm.
Nov 4 23:55:57.688838 systemd[1]: Detected architecture x86-64.
Nov 4 23:55:57.688848 systemd[1]: Running in initrd.
Nov 4 23:55:57.688857 systemd[1]: No hostname configured, using default hostname.
Nov 4 23:55:57.688867 systemd[1]: Hostname set to .
Nov 4 23:55:57.688883 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 4 23:55:57.688892 systemd[1]: Queued start job for default target initrd.target.
Nov 4 23:55:57.688902 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 4 23:55:57.688911 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 23:55:57.688921 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 23:55:57.688931 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 4 23:55:57.688941 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 4 23:55:57.688972 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 4 23:55:57.688995 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 4 23:55:57.689005 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 23:55:57.689015 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 4 23:55:57.689025 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 4 23:55:57.689038 systemd[1]: Reached target paths.target - Path Units.
Nov 4 23:55:57.689048 systemd[1]: Reached target slices.target - Slice Units.
Nov 4 23:55:57.689057 systemd[1]: Reached target swap.target - Swaps.
Nov 4 23:55:57.689067 systemd[1]: Reached target timers.target - Timer Units.
Nov 4 23:55:57.689076 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 4 23:55:57.689085 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 4 23:55:57.689095 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 4 23:55:57.689107 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 4 23:55:57.689121 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 23:55:57.689131 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 4 23:55:57.689141 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 23:55:57.689151 systemd[1]: Reached target sockets.target - Socket Units.
Nov 4 23:55:57.689160 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 4 23:55:57.689170 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 4 23:55:57.689182 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 4 23:55:57.689192 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 4 23:55:57.689201 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 4 23:55:57.689212 systemd[1]: Starting systemd-fsck-usr.service...
Nov 4 23:55:57.689221 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 4 23:55:57.689231 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 4 23:55:57.689243 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:55:57.689253 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 4 23:55:57.689262 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 23:55:57.689318 systemd-journald[317]: Collecting audit messages is disabled.
Nov 4 23:55:57.689345 systemd[1]: Finished systemd-fsck-usr.service.
Nov 4 23:55:57.689356 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 4 23:55:57.689366 systemd-journald[317]: Journal started
Nov 4 23:55:57.689388 systemd-journald[317]: Runtime Journal (/run/log/journal/681de4f0e95848de820323f2627a4070) is 6M, max 48.3M, 42.2M free.
Nov 4 23:55:57.695966 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 4 23:55:57.703198 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 4 23:55:57.717976 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 4 23:55:57.720762 systemd-modules-load[318]: Inserted module 'br_netfilter'
Nov 4 23:55:57.790695 kernel: Bridge firewalling registered
Nov 4 23:55:57.722346 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 23:55:57.730278 systemd-tmpfiles[332]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 4 23:55:57.791708 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 4 23:55:57.795444 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:55:57.801589 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 23:55:57.806187 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 4 23:55:57.811196 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 4 23:55:57.815833 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 4 23:55:57.829239 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 23:55:57.833788 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 4 23:55:57.836488 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 4 23:55:57.852137 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 4 23:55:57.856005 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 4 23:55:57.890784 dracut-cmdline[360]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:55:57.920482 systemd-resolved[350]: Positive Trust Anchors:
Nov 4 23:55:57.920510 systemd-resolved[350]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 4 23:55:57.920520 systemd-resolved[350]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 4 23:55:57.920575 systemd-resolved[350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 4 23:55:57.961109 systemd-resolved[350]: Defaulting to hostname 'linux'.
Nov 4 23:55:57.962751 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 4 23:55:57.963080 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 4 23:55:58.068987 kernel: Loading iSCSI transport class v2.0-870.
Nov 4 23:55:58.085981 kernel: iscsi: registered transport (tcp)
Nov 4 23:55:58.118309 kernel: iscsi: registered transport (qla4xxx)
Nov 4 23:55:58.118361 kernel: QLogic iSCSI HBA Driver
Nov 4 23:55:58.153647 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 4 23:55:58.173939 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 23:55:58.200763 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 4 23:55:58.268499 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 4 23:55:58.270407 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 4 23:55:58.275599 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 4 23:55:58.330377 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 4 23:55:58.334614 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 23:55:58.371298 systemd-udevd[602]: Using default interface naming scheme 'v257'.
Nov 4 23:55:58.392509 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 23:55:58.399881 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 4 23:55:58.434668 dracut-pre-trigger[673]: rd.md=0: removing MD RAID activation
Nov 4 23:55:58.444997 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 4 23:55:58.447539 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 4 23:55:58.478680 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 4 23:55:58.482969 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 4 23:55:58.588132 systemd-networkd[713]: lo: Link UP
Nov 4 23:55:58.588518 systemd-networkd[713]: lo: Gained carrier
Nov 4 23:55:58.589356 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 4 23:55:58.591895 systemd[1]: Reached target network.target - Network.
Nov 4 23:55:58.613332 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 23:55:58.615564 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 4 23:55:58.698217 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 4 23:55:58.735540 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 4 23:55:58.751042 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 4 23:55:58.755336 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 4 23:55:58.800201 disk-uuid[769]: Primary Header is updated.
Nov 4 23:55:58.800201 disk-uuid[769]: Secondary Entries is updated.
Nov 4 23:55:58.800201 disk-uuid[769]: Secondary Header is updated.
Nov 4 23:55:58.800215 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 4 23:55:58.862294 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 23:55:58.866279 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:55:58.872330 kernel: cryptd: max_cpu_qlen set to 1000
Nov 4 23:55:58.868509 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:55:58.877970 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 4 23:55:58.881787 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 23:55:58.908989 kernel: AES CTR mode by8 optimization enabled
Nov 4 23:55:58.947603 systemd-networkd[713]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 23:55:58.947614 systemd-networkd[713]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 4 23:55:58.948073 systemd-networkd[713]: eth0: Link UP
Nov 4 23:55:58.948497 systemd-networkd[713]: eth0: Gained carrier
Nov 4 23:55:58.948507 systemd-networkd[713]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 23:55:58.963021 systemd-networkd[713]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 4 23:55:58.998185 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 4 23:55:59.026837 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 4 23:55:59.027424 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 23:55:59.027963 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 4 23:55:59.029929 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 4 23:55:59.059447 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:55:59.075608 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 4 23:55:59.274268 systemd-resolved[350]: Detected conflict on linux IN A 10.0.0.112
Nov 4 23:55:59.274290 systemd-resolved[350]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
Nov 4 23:55:59.873290 disk-uuid[771]: Warning: The kernel is still using the old partition table.
Nov 4 23:55:59.873290 disk-uuid[771]: The new table will be used at the next reboot or after you
Nov 4 23:55:59.873290 disk-uuid[771]: run partprobe(8) or kpartx(8)
Nov 4 23:55:59.873290 disk-uuid[771]: The operation has completed successfully.
Nov 4 23:55:59.887894 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 4 23:55:59.888101 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 4 23:55:59.893310 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 4 23:55:59.930003 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (867)
Nov 4 23:55:59.933612 kernel: BTRFS info (device vda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:55:59.933649 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:55:59.937730 kernel: BTRFS info (device vda6): turning on async discard
Nov 4 23:55:59.937766 kernel: BTRFS info (device vda6): enabling free space tree
Nov 4 23:55:59.945969 kernel: BTRFS info (device vda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:55:59.947223 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 4 23:55:59.951735 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 4 23:56:00.165335 systemd-networkd[713]: eth0: Gained IPv6LL
Nov 4 23:56:00.226516 ignition[886]: Ignition 2.22.0
Nov 4 23:56:00.226532 ignition[886]: Stage: fetch-offline
Nov 4 23:56:00.226579 ignition[886]: no configs at "/usr/lib/ignition/base.d"
Nov 4 23:56:00.226591 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 23:56:00.226723 ignition[886]: parsed url from cmdline: ""
Nov 4 23:56:00.226728 ignition[886]: no config URL provided
Nov 4 23:56:00.226738 ignition[886]: reading system config file "/usr/lib/ignition/user.ign"
Nov 4 23:56:00.226751 ignition[886]: no config at "/usr/lib/ignition/user.ign"
Nov 4 23:56:00.226808 ignition[886]: op(1): [started] loading QEMU firmware config module
Nov 4 23:56:00.226813 ignition[886]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 4 23:56:00.240224 ignition[886]: op(1): [finished] loading QEMU firmware config module
Nov 4 23:56:00.240249 ignition[886]: QEMU firmware config was not found. Ignoring...
Nov 4 23:56:00.322152 ignition[886]: parsing config with SHA512: 67f2ba7c9588f59f15106963f08a183ccf146d892bf1be093b59a958d3b44a04b74cedcdc5f6eb580159bbf2750572a46c225203d7529f24ebf74559c5d3decf
Nov 4 23:56:00.365976 unknown[886]: fetched base config from "system"
Nov 4 23:56:00.366020 unknown[886]: fetched user config from "qemu"
Nov 4 23:56:00.366470 ignition[886]: fetch-offline: fetch-offline passed
Nov 4 23:56:00.366569 ignition[886]: Ignition finished successfully
Nov 4 23:56:00.373875 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 4 23:56:00.374179 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 4 23:56:00.377708 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 4 23:56:00.539718 ignition[897]: Ignition 2.22.0
Nov 4 23:56:00.539739 ignition[897]: Stage: kargs
Nov 4 23:56:00.539982 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Nov 4 23:56:00.539996 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 23:56:00.566382 ignition[897]: kargs: kargs passed
Nov 4 23:56:00.566460 ignition[897]: Ignition finished successfully
Nov 4 23:56:00.571763 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 4 23:56:00.576405 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 4 23:56:00.617325 ignition[905]: Ignition 2.22.0
Nov 4 23:56:00.617340 ignition[905]: Stage: disks
Nov 4 23:56:00.617571 ignition[905]: no configs at "/usr/lib/ignition/base.d"
Nov 4 23:56:00.617585 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 23:56:00.646692 ignition[905]: disks: disks passed
Nov 4 23:56:00.650047 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 4 23:56:00.646799 ignition[905]: Ignition finished successfully
Nov 4 23:56:00.663862 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 4 23:56:00.664852 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 4 23:56:00.670388 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 4 23:56:00.670729 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 4 23:56:00.674776 systemd[1]: Reached target basic.target - Basic System.
Nov 4 23:56:00.683021 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 4 23:56:00.750732 systemd-fsck[915]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Nov 4 23:56:01.062751 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 4 23:56:01.066248 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 4 23:56:01.278979 kernel: EXT4-fs (vda9): mounted filesystem cfb29ed0-6faf-41a8-b421-3abc514e4975 r/w with ordered data mode. Quota mode: none.
Nov 4 23:56:01.279341 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 4 23:56:01.281415 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 4 23:56:01.286126 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 4 23:56:01.302503 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 4 23:56:01.304776 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 4 23:56:01.304831 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 4 23:56:01.304864 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 4 23:56:01.336032 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 4 23:56:01.337824 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 4 23:56:01.400446 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (923)
Nov 4 23:56:01.404213 kernel: BTRFS info (device vda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:56:01.404307 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:56:01.408584 kernel: BTRFS info (device vda6): turning on async discard
Nov 4 23:56:01.408643 kernel: BTRFS info (device vda6): enabling free space tree
Nov 4 23:56:01.410559 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 4 23:56:01.422888 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory
Nov 4 23:56:01.428542 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory
Nov 4 23:56:01.435505 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory
Nov 4 23:56:01.441600 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 4 23:56:01.569253 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 4 23:56:01.573280 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 4 23:56:01.577563 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 4 23:56:01.607097 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 4 23:56:01.609351 kernel: BTRFS info (device vda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:56:01.628115 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 4 23:56:01.713817 ignition[1037]: INFO : Ignition 2.22.0
Nov 4 23:56:01.713817 ignition[1037]: INFO : Stage: mount
Nov 4 23:56:01.716811 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 23:56:01.716811 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 23:56:01.716811 ignition[1037]: INFO : mount: mount passed
Nov 4 23:56:01.716811 ignition[1037]: INFO : Ignition finished successfully
Nov 4 23:56:01.717842 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 4 23:56:01.721357 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 4 23:56:02.281163 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 4 23:56:02.319347 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1049)
Nov 4 23:56:02.319390 kernel: BTRFS info (device vda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:56:02.319402 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:56:02.324672 kernel: BTRFS info (device vda6): turning on async discard
Nov 4 23:56:02.324696 kernel: BTRFS info (device vda6): enabling free space tree
Nov 4 23:56:02.326718 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 4 23:56:02.372441 ignition[1066]: INFO : Ignition 2.22.0
Nov 4 23:56:02.372441 ignition[1066]: INFO : Stage: files
Nov 4 23:56:02.375395 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 23:56:02.375395 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 23:56:02.375395 ignition[1066]: DEBUG : files: compiled without relabeling support, skipping
Nov 4 23:56:02.381707 ignition[1066]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 4 23:56:02.381707 ignition[1066]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 4 23:56:02.390792 ignition[1066]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 4 23:56:02.393521 ignition[1066]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 4 23:56:02.396441 unknown[1066]: wrote ssh authorized keys file for user: core
Nov 4 23:56:02.398198 ignition[1066]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 4 23:56:02.400577 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 4 23:56:02.400577 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 4 23:56:02.462399 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 4 23:56:02.767777 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 4 23:56:02.767777 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 4 23:56:02.776207 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 4 23:56:02.779553 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 4 23:56:02.782653 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 4 23:56:02.786698 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 4 23:56:02.789874 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 4 23:56:02.792755 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 4 23:56:02.795773 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 4 23:56:02.882152 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 4 23:56:02.885601 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 4 23:56:02.885601 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 23:56:03.122895 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 23:56:03.122895 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 23:56:03.139930 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Nov 4 23:56:03.584148 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 4 23:56:04.360482 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 23:56:04.360482 ignition[1066]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 4 23:56:04.366578 ignition[1066]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 23:56:04.840664 ignition[1066]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 23:56:04.840664 ignition[1066]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 4 23:56:04.840664 ignition[1066]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 4 23:56:04.840664 ignition[1066]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 4 23:56:04.877176 ignition[1066]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 4 23:56:04.877176 ignition[1066]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 4 23:56:04.877176 ignition[1066]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 4 23:56:04.894042 ignition[1066]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 4 23:56:04.911179 ignition[1066]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 4 23:56:04.914010 ignition[1066]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 4 23:56:04.914010 ignition[1066]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 4 23:56:04.914010 ignition[1066]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 4 23:56:04.914010 ignition[1066]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 4 23:56:04.914010 ignition[1066]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 4 23:56:04.914010 ignition[1066]: INFO : files: files passed
Nov 4 23:56:04.914010 ignition[1066]: INFO : Ignition finished successfully
Nov 4 23:56:04.918169 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 4 23:56:04.920856 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 4 23:56:04.940742 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 4 23:56:04.972434 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 4 23:56:04.972580 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 4 23:56:04.984719 initrd-setup-root-after-ignition[1097]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 4 23:56:04.989636 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 23:56:04.989636 initrd-setup-root-after-ignition[1099]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 23:56:04.996305 initrd-setup-root-after-ignition[1103]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 23:56:05.000474 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 23:56:05.000794 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 4 23:56:05.008464 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 4 23:56:05.091798 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 4 23:56:05.091939 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 4 23:56:05.093729 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 4 23:56:05.097523 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 4 23:56:05.101314 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 4 23:56:05.102311 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 4 23:56:05.144250 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 23:56:05.146079 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 4 23:56:05.177532 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 4 23:56:05.177681 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 4 23:56:05.211480 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 23:56:05.211700 systemd[1]: Stopped target timers.target - Timer Units.
Nov 4 23:56:05.215397 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 4 23:56:05.215547 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 23:56:05.221837 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 4 23:56:05.225582 systemd[1]: Stopped target basic.target - Basic System.
Nov 4 23:56:05.228720 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 4 23:56:05.232002 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 4 23:56:05.233995 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 4 23:56:05.234961 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 4 23:56:05.243034 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 4 23:56:05.244997 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 4 23:56:05.248476 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 4 23:56:05.252740 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 4 23:56:05.257889 systemd[1]: Stopped target swap.target - Swaps.
Nov 4 23:56:05.261314 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 4 23:56:05.261493 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 4 23:56:05.267985 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 4 23:56:05.269897 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 23:56:05.271847 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 4 23:56:05.277265 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 23:56:05.281748 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 4 23:56:05.281969 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 4 23:56:05.287090 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 4 23:56:05.287267 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 4 23:56:05.289144 systemd[1]: Stopped target paths.target - Path Units.
Nov 4 23:56:05.292694 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 4 23:56:05.349125 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 23:56:05.353874 systemd[1]: Stopped target slices.target - Slice Units.
Nov 4 23:56:05.355495 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 4 23:56:05.355901 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 4 23:56:05.356042 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 4 23:56:05.361189 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 4 23:56:05.361280 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 4 23:56:05.364253 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 4 23:56:05.364390 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 23:56:05.367711 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 4 23:56:05.367826 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 4 23:56:05.374236 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 4 23:56:05.377077 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 4 23:56:05.379325 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 4 23:56:05.379518 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 23:56:05.379990 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 4 23:56:05.380114 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 23:56:05.438622 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 4 23:56:05.438847 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 4 23:56:05.464180 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 4 23:56:05.464309 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 4 23:56:05.480144 ignition[1123]: INFO : Ignition 2.22.0
Nov 4 23:56:05.480144 ignition[1123]: INFO : Stage: umount
Nov 4 23:56:05.482824 ignition[1123]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 23:56:05.482824 ignition[1123]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 23:56:05.482824 ignition[1123]: INFO : umount: umount passed
Nov 4 23:56:05.482824 ignition[1123]: INFO : Ignition finished successfully
Nov 4 23:56:05.484845 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 4 23:56:05.485034 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 4 23:56:05.489211 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 4 23:56:05.489714 systemd[1]: Stopped target network.target - Network.
Nov 4 23:56:05.492559 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 4 23:56:05.492620 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 4 23:56:05.496064 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 4 23:56:05.496122 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 4 23:56:05.499168 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 4 23:56:05.499226 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 4 23:56:05.500707 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 4 23:56:05.500760 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 4 23:56:05.503940 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 4 23:56:05.507526 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 4 23:56:05.549235 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 4 23:56:05.549401 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 4 23:56:05.555843 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 4 23:56:05.556094 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 4 23:56:05.562437 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 4 23:56:05.596509 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 4 23:56:05.596602 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 23:56:05.603714 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 4 23:56:05.615788 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 4 23:56:05.615870 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 4 23:56:05.618346 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 4 23:56:05.618416 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 4 23:56:05.623916 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 4 23:56:05.624001 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 4 23:56:05.626383 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 23:56:05.632642 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 4 23:56:05.635515 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 4 23:56:05.639663 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 4 23:56:05.639774 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 4 23:56:05.651062 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 4 23:56:05.651302 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 23:56:05.656998 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 4 23:56:05.657185 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 4 23:56:05.661067 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 4 23:56:05.661123 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 23:56:05.663125 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 4 23:56:05.663227 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 4 23:56:05.671283 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 4 23:56:05.671366 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 4 23:56:05.677769 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 4 23:56:05.677854 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 4 23:56:05.684655 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 4 23:56:05.686429 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 4 23:56:05.686516 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 23:56:05.690341 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 4 23:56:05.690410 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 23:56:05.692359 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 4 23:56:05.692420 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 23:56:05.698248 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 4 23:56:05.698316 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 23:56:05.702327 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 23:56:05.702394 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:56:05.721643 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 4 23:56:05.727294 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 4 23:56:05.732863 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 4 23:56:05.733068 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 4 23:56:05.736571 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 4 23:56:05.741648 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 4 23:56:05.767912 systemd[1]: Switching root.
Nov 4 23:56:05.814734 systemd-journald[317]: Journal stopped
Nov 4 23:56:07.598024 systemd-journald[317]: Received SIGTERM from PID 1 (systemd).
Nov 4 23:56:07.598115 kernel: SELinux: policy capability network_peer_controls=1
Nov 4 23:56:07.598135 kernel: SELinux: policy capability open_perms=1
Nov 4 23:56:07.598151 kernel: SELinux: policy capability extended_socket_class=1
Nov 4 23:56:07.598173 kernel: SELinux: policy capability always_check_network=0
Nov 4 23:56:07.598205 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 4 23:56:07.598222 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 4 23:56:07.598238 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 4 23:56:07.598256 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 4 23:56:07.598272 kernel: SELinux: policy capability userspace_initial_context=0
Nov 4 23:56:07.598294 kernel: audit: type=1403 audit(1762300566.508:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 4 23:56:07.598318 systemd[1]: Successfully loaded SELinux policy in 78.026ms.
Nov 4 23:56:07.598348 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.224ms.
Nov 4 23:56:07.598371 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 4 23:56:07.598395 systemd[1]: Detected virtualization kvm.
Nov 4 23:56:07.598411 systemd[1]: Detected architecture x86-64.
Nov 4 23:56:07.598427 systemd[1]: Detected first boot.
Nov 4 23:56:07.598444 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 4 23:56:07.598460 zram_generator::config[1168]: No configuration found.
Nov 4 23:56:07.598480 kernel: Guest personality initialized and is inactive
Nov 4 23:56:07.598496 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 4 23:56:07.598512 kernel: Initialized host personality
Nov 4 23:56:07.598528 kernel: NET: Registered PF_VSOCK protocol family
Nov 4 23:56:07.598544 systemd[1]: Populated /etc with preset unit settings.
Nov 4 23:56:07.598561 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 4 23:56:07.598577 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 4 23:56:07.598601 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 4 23:56:07.598619 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 4 23:56:07.598635 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 4 23:56:07.598655 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 4 23:56:07.598673 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 4 23:56:07.598690 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 4 23:56:07.598708 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 4 23:56:07.598728 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 4 23:56:07.598744 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 4 23:56:07.598760 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 23:56:07.598778 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 23:56:07.598794 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 4 23:56:07.598811 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 4 23:56:07.598828 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 4 23:56:07.598852 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 4 23:56:07.598868 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 4 23:56:07.598885 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 23:56:07.598903 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 4 23:56:07.598919 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 4 23:56:07.598935 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 4 23:56:07.599062 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 4 23:56:07.599080 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 4 23:56:07.599097 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 23:56:07.599114 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 4 23:56:07.599131 systemd[1]: Reached target slices.target - Slice Units.
Nov 4 23:56:07.599149 systemd[1]: Reached target swap.target - Swaps.
Nov 4 23:56:07.599165 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 4 23:56:07.599186 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 4 23:56:07.599203 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 4 23:56:07.599220 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 23:56:07.599237 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 4 23:56:07.599254 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 23:56:07.599270 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 4 23:56:07.599287 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 4 23:56:07.599311 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 4 23:56:07.599328 systemd[1]: Mounting media.mount - External Media Directory...
Nov 4 23:56:07.599358 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:56:07.599378 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 4 23:56:07.599395 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 4 23:56:07.599412 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 4 23:56:07.599429 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 4 23:56:07.599453 systemd[1]: Reached target machines.target - Containers.
Nov 4 23:56:07.599470 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 4 23:56:07.599487 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 23:56:07.599504 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 4 23:56:07.599520 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 4 23:56:07.599538 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 23:56:07.599554 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 4 23:56:07.599577 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 23:56:07.599594 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 4 23:56:07.599610 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 23:56:07.599627 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 4 23:56:07.599644 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 4 23:56:07.599661 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 4 23:56:07.599681 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 4 23:56:07.599697 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 4 23:56:07.599715 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 23:56:07.599732 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 4 23:56:07.599749 kernel: fuse: init (API version 7.41)
Nov 4 23:56:07.599766 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 4 23:56:07.599783 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 4 23:56:07.599806 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 4 23:56:07.599823 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 4 23:56:07.599839 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 4 23:56:07.599857 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 23:56:07.599879 kernel: ACPI: bus type drm_connector registered
Nov 4 23:56:07.599895 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 4 23:56:07.599911 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 4 23:56:07.599928 systemd[1]: Mounted media.mount - External Media Directory.
Nov 4 23:56:07.599967 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 4 23:56:07.599986 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 4 23:56:07.600003 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 4 23:56:07.600053 systemd-journald[1253]: Collecting audit messages is disabled.
Nov 4 23:56:07.600091 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 4 23:56:07.600108 systemd-journald[1253]: Journal started
Nov 4 23:56:07.600137 systemd-journald[1253]: Runtime Journal (/run/log/journal/681de4f0e95848de820323f2627a4070) is 6M, max 48.3M, 42.2M free.
Nov 4 23:56:07.232685 systemd[1]: Queued start job for default target multi-user.target.
Nov 4 23:56:07.246662 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 4 23:56:07.247320 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 4 23:56:07.603984 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 4 23:56:07.607410 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 23:56:07.610127 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 4 23:56:07.610443 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 4 23:56:07.613022 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 23:56:07.613261 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 23:56:07.615799 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 4 23:56:07.616142 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 4 23:56:07.618353 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 23:56:07.618636 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 23:56:07.621062 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 4 23:56:07.621294 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 4 23:56:07.623650 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 23:56:07.623874 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 23:56:07.626437 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 4 23:56:07.628809 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 23:56:07.632060 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 4 23:56:07.634662 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 4 23:56:07.657051 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 4 23:56:07.659767 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 4 23:56:07.662058 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 4 23:56:07.662093 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 4 23:56:07.665107 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 4 23:56:07.667576 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 23:56:07.669250 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 4 23:56:07.672513 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 4 23:56:07.674629 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 4 23:56:07.685713 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 4 23:56:07.688284 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 4 23:56:07.691489 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 4 23:56:07.692050 systemd-journald[1253]: Time spent on flushing to /var/log/journal/681de4f0e95848de820323f2627a4070 is 14.403ms for 965 entries.
Nov 4 23:56:07.692050 systemd-journald[1253]: System Journal (/var/log/journal/681de4f0e95848de820323f2627a4070) is 8M, max 163.5M, 155.5M free.
Nov 4 23:56:07.718115 systemd-journald[1253]: Received client request to flush runtime journal.
Nov 4 23:56:07.696459 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 4 23:56:07.702158 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 4 23:56:07.706445 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 23:56:07.714038 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 4 23:56:07.717401 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 4 23:56:07.721770 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 4 23:56:07.724628 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 4 23:56:07.726979 kernel: loop1: detected capacity change from 0 to 229808
Nov 4 23:56:07.746674 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 4 23:56:07.751551 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Nov 4 23:56:07.751578 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Nov 4 23:56:07.759242 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 23:56:07.762994 kernel: loop2: detected capacity change from 0 to 128048
Nov 4 23:56:07.766433 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 4 23:56:07.770475 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 4 23:56:07.808003 kernel: loop3: detected capacity change from 0 to 110984
Nov 4 23:56:07.820303 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 4 23:56:07.828755 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 4 23:56:07.835575 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 4 23:56:07.854314 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 4 23:56:07.860993 kernel: loop4: detected capacity change from 0 to 229808
Nov 4 23:56:07.871200 kernel: loop5: detected capacity change from 0 to 128048
Nov 4 23:56:07.871752 systemd-tmpfiles[1307]: ACLs are not supported, ignoring.
Nov 4 23:56:07.871781 systemd-tmpfiles[1307]: ACLs are not supported, ignoring.
Nov 4 23:56:07.880511 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 23:56:07.884985 kernel: loop6: detected capacity change from 0 to 110984
Nov 4 23:56:07.905300 (sd-merge)[1309]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Nov 4 23:56:07.908060 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 4 23:56:07.913116 (sd-merge)[1309]: Merged extensions into '/usr'.
Nov 4 23:56:07.920801 systemd[1]: Reload requested from client PID 1285 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 4 23:56:07.920821 systemd[1]: Reloading...
Nov 4 23:56:07.999798 systemd-resolved[1306]: Positive Trust Anchors:
Nov 4 23:56:07.999826 systemd-resolved[1306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 4 23:56:07.999832 systemd-resolved[1306]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 4 23:56:07.999874 systemd-resolved[1306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 4 23:56:08.075016 zram_generator::config[1375]: No configuration found.
Nov 4 23:56:08.078765 systemd-resolved[1306]: Defaulting to hostname 'linux'.
Nov 4 23:56:08.276938 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 4 23:56:08.277355 systemd[1]: Reloading finished in 356 ms.
Nov 4 23:56:08.314301 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 4 23:56:08.316832 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 4 23:56:08.322484 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 4 23:56:08.326876 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 4 23:56:08.330470 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 4 23:56:08.351790 systemd[1]: Starting ensure-sysext.service...
Nov 4 23:56:08.407631 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 4 23:56:08.422193 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 4 23:56:08.424668 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 4 23:56:08.429732 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 4 23:56:08.429776 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 4 23:56:08.430147 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 4 23:56:08.430509 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 4 23:56:08.431574 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 4 23:56:08.431856 systemd-tmpfiles[1384]: ACLs are not supported, ignoring.
Nov 4 23:56:08.431977 systemd-tmpfiles[1384]: ACLs are not supported, ignoring.
Nov 4 23:56:08.440555 systemd[1]: Reload requested from client PID 1383 ('systemctl') (unit ensure-sysext.service)...
Nov 4 23:56:08.440578 systemd[1]: Reloading...
Nov 4 23:56:08.480624 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot.
Nov 4 23:56:08.480643 systemd-tmpfiles[1384]: Skipping /boot
Nov 4 23:56:08.498269 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot.
Nov 4 23:56:08.498462 systemd-tmpfiles[1384]: Skipping /boot Nov 4 23:56:08.519979 zram_generator::config[1416]: No configuration found. Nov 4 23:56:08.890301 systemd[1]: Reloading finished in 449 ms. Nov 4 23:56:08.905161 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 4 23:56:08.933543 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 23:56:08.945043 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 23:56:08.947861 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 4 23:56:08.951269 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 4 23:56:08.966393 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 4 23:56:08.970229 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 23:56:08.975183 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 4 23:56:08.980479 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:56:08.981811 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 23:56:09.022575 augenrules[1480]: No rules Nov 4 23:56:09.145576 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 23:56:09.151230 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 23:56:09.156030 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 23:56:09.158229 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Nov 4 23:56:09.158367 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 23:56:09.158489 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:56:09.160174 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 23:56:09.160496 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 23:56:09.165408 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 23:56:09.165671 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 23:56:09.169398 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 23:56:09.169749 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 23:56:09.185167 systemd-udevd[1459]: Using default interface naming scheme 'v257'. Nov 4 23:56:09.187628 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 23:56:09.187882 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 23:56:09.190917 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 4 23:56:09.197916 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 4 23:56:09.204134 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:56:09.207255 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 23:56:09.209424 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 23:56:09.211065 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Nov 4 23:56:09.217159 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 23:56:09.221304 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 23:56:09.229472 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 23:56:09.231681 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 23:56:09.231720 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 23:56:09.235273 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:56:09.235754 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 23:56:09.239640 systemd[1]: Finished ensure-sysext.service. Nov 4 23:56:09.241788 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 23:56:09.242115 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 23:56:09.246704 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 23:56:09.247324 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 23:56:09.250230 augenrules[1494]: /sbin/augenrules: No change Nov 4 23:56:09.252974 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 23:56:09.253791 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 23:56:09.256584 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 23:56:09.257069 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 23:56:09.270147 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Nov 4 23:56:09.271977 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 23:56:09.272046 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 23:56:09.272226 augenrules[1528]: No rules Nov 4 23:56:09.273839 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 4 23:56:09.276131 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 23:56:09.276387 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 23:56:09.306354 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 4 23:56:09.308911 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 4 23:56:09.346649 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 4 23:56:09.381836 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 4 23:56:09.388527 systemd[1]: Reached target time-set.target - System Time Set. Nov 4 23:56:09.399417 systemd-networkd[1533]: lo: Link UP Nov 4 23:56:09.399430 systemd-networkd[1533]: lo: Gained carrier Nov 4 23:56:09.401806 systemd-networkd[1533]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:56:09.401819 systemd-networkd[1533]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 4 23:56:09.402722 systemd-networkd[1533]: eth0: Link UP Nov 4 23:56:09.403038 systemd-networkd[1533]: eth0: Gained carrier Nov 4 23:56:09.403052 systemd-networkd[1533]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:56:09.403971 kernel: mousedev: PS/2 mouse device common for all mice Nov 4 23:56:09.404860 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 23:56:09.407522 systemd[1]: Reached target network.target - Network. Nov 4 23:56:09.410907 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 4 23:56:09.414392 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 4 23:56:09.423022 systemd-networkd[1533]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 4 23:56:09.423856 systemd-timesyncd[1537]: Network configuration changed, trying to establish connection. Nov 4 23:56:09.424884 systemd-timesyncd[1537]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 4 23:56:09.424973 systemd-timesyncd[1537]: Initial clock synchronization to Tue 2025-11-04 23:56:09.569343 UTC. Nov 4 23:56:09.444830 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 4 23:56:09.449978 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 4 23:56:09.452331 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 4 23:56:09.456267 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Nov 4 23:56:09.461978 kernel: ACPI: button: Power Button [PWRF] Nov 4 23:56:09.482542 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 4 23:56:09.482924 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 4 23:56:09.488920 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 4 23:56:09.584325 ldconfig[1457]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 4 23:56:09.599912 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 4 23:56:09.609230 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 4 23:56:09.631286 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:56:09.682760 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 4 23:56:09.701659 kernel: kvm_amd: TSC scaling supported Nov 4 23:56:09.701792 kernel: kvm_amd: Nested Virtualization enabled Nov 4 23:56:09.701813 kernel: kvm_amd: Nested Paging enabled Nov 4 23:56:09.702553 kernel: kvm_amd: LBR virtualization supported Nov 4 23:56:09.703619 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 4 23:56:09.704828 kernel: kvm_amd: Virtual GIF supported Nov 4 23:56:09.735975 kernel: EDAC MC: Ver: 3.0.0 Nov 4 23:56:09.824784 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:56:09.829034 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 23:56:09.831056 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 4 23:56:09.833215 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 4 23:56:09.835384 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 4 23:56:09.837594 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Nov 4 23:56:09.839701 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 4 23:56:09.841941 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 4 23:56:09.844210 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 4 23:56:09.844278 systemd[1]: Reached target paths.target - Path Units. Nov 4 23:56:09.845965 systemd[1]: Reached target timers.target - Timer Units. Nov 4 23:56:09.848628 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 4 23:56:09.853059 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 4 23:56:09.857903 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 4 23:56:09.860544 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 4 23:56:09.863033 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 4 23:56:09.870272 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 4 23:56:09.872868 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 4 23:56:09.876134 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 4 23:56:09.879385 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 23:56:09.881160 systemd[1]: Reached target basic.target - Basic System. Nov 4 23:56:09.883028 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 4 23:56:09.883077 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 4 23:56:09.885059 systemd[1]: Starting containerd.service - containerd container runtime... Nov 4 23:56:09.888743 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Nov 4 23:56:09.891637 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 4 23:56:09.911666 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 4 23:56:09.915813 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 4 23:56:09.917656 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 4 23:56:09.918972 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 4 23:56:09.923594 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 4 23:56:09.939980 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 4 23:56:09.944814 jq[1599]: false Nov 4 23:56:09.945306 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 4 23:56:09.954129 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Refreshing passwd entry cache Nov 4 23:56:09.953996 oslogin_cache_refresh[1601]: Refreshing passwd entry cache Nov 4 23:56:09.954371 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 4 23:56:09.959730 extend-filesystems[1600]: Found /dev/vda6 Nov 4 23:56:09.961913 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 4 23:56:09.964015 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 4 23:56:09.964761 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 4 23:56:09.966054 systemd[1]: Starting update-engine.service - Update Engine... 
Nov 4 23:56:09.966698 extend-filesystems[1600]: Found /dev/vda9 Nov 4 23:56:09.969285 extend-filesystems[1600]: Checking size of /dev/vda9 Nov 4 23:56:09.974866 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Failure getting users, quitting Nov 4 23:56:09.974866 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 4 23:56:09.974866 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Refreshing group entry cache Nov 4 23:56:09.974304 oslogin_cache_refresh[1601]: Failure getting users, quitting Nov 4 23:56:09.974335 oslogin_cache_refresh[1601]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 4 23:56:09.974403 oslogin_cache_refresh[1601]: Refreshing group entry cache Nov 4 23:56:09.975195 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 4 23:56:09.979097 extend-filesystems[1600]: Resized partition /dev/vda9 Nov 4 23:56:09.983687 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 4 23:56:09.985305 extend-filesystems[1625]: resize2fs 1.47.3 (8-Jul-2025) Nov 4 23:56:09.996756 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Nov 4 23:56:09.996837 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Failure getting groups, quitting Nov 4 23:56:09.996837 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 4 23:56:09.987674 oslogin_cache_refresh[1601]: Failure getting groups, quitting Nov 4 23:56:09.986833 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 4 23:56:09.987693 oslogin_cache_refresh[1601]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 4 23:56:09.987209 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Nov 4 23:56:10.000016 update_engine[1616]: I20251104 23:56:09.997001 1616 main.cc:92] Flatcar Update Engine starting Nov 4 23:56:09.987557 systemd[1]: motdgen.service: Deactivated successfully. Nov 4 23:56:09.988194 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 4 23:56:09.997462 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 4 23:56:09.998196 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 4 23:56:10.003601 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 4 23:56:10.003878 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 4 23:56:10.004426 jq[1619]: true Nov 4 23:56:10.023427 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Nov 4 23:56:10.028200 (ntainerd)[1636]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 4 23:56:10.030151 jq[1635]: true Nov 4 23:56:10.064491 extend-filesystems[1625]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 4 23:56:10.064491 extend-filesystems[1625]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 4 23:56:10.064491 extend-filesystems[1625]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Nov 4 23:56:10.075351 extend-filesystems[1600]: Resized filesystem in /dev/vda9 Nov 4 23:56:10.082131 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 4 23:56:10.082468 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 4 23:56:10.085160 tar[1633]: linux-amd64/LICENSE Nov 4 23:56:10.085702 tar[1633]: linux-amd64/helm Nov 4 23:56:10.209164 systemd-logind[1614]: Watching system buttons on /dev/input/event2 (Power Button) Nov 4 23:56:10.209614 dbus-daemon[1597]: [system] SELinux support is enabled Nov 4 23:56:10.209886 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Nov 4 23:56:10.216979 update_engine[1616]: I20251104 23:56:10.213372 1616 update_check_scheduler.cc:74] Next update check in 11m27s Nov 4 23:56:10.214058 systemd-logind[1614]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 4 23:56:10.214355 systemd-logind[1614]: New seat seat0. Nov 4 23:56:10.214833 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 4 23:56:10.214857 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 4 23:56:10.217590 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 4 23:56:10.217619 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 4 23:56:10.220226 systemd[1]: Started systemd-logind.service - User Login Management. Nov 4 23:56:10.224711 systemd[1]: Started update-engine.service - Update Engine. Nov 4 23:56:10.229186 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 4 23:56:10.417265 sshd_keygen[1634]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 4 23:56:10.463400 bash[1667]: Updated "/home/core/.ssh/authorized_keys" Nov 4 23:56:10.463408 locksmithd[1668]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 4 23:56:10.469163 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 4 23:56:10.472523 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 4 23:56:10.482376 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 4 23:56:10.487344 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Nov 4 23:56:10.517676 systemd[1]: issuegen.service: Deactivated successfully. Nov 4 23:56:10.519195 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 4 23:56:10.524323 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 4 23:56:10.563569 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 4 23:56:10.570171 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 4 23:56:10.576252 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 4 23:56:10.579656 systemd[1]: Reached target getty.target - Login Prompts. Nov 4 23:56:10.708195 systemd-networkd[1533]: eth0: Gained IPv6LL Nov 4 23:56:10.715078 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 4 23:56:10.717838 systemd[1]: Reached target network-online.target - Network is Online. Nov 4 23:56:10.721654 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 4 23:56:10.727208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:56:10.731067 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 4 23:56:10.798727 tar[1633]: linux-amd64/README.md Nov 4 23:56:10.815313 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 4 23:56:10.831751 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Nov 4 23:56:10.833891 containerd[1636]: time="2025-11-04T23:56:10Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 4 23:56:10.835079 containerd[1636]: time="2025-11-04T23:56:10.835017113Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 4 23:56:10.854261 containerd[1636]: time="2025-11-04T23:56:10.854190551Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="17.246µs" Nov 4 23:56:10.854261 containerd[1636]: time="2025-11-04T23:56:10.854232979Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 4 23:56:10.854261 containerd[1636]: time="2025-11-04T23:56:10.854253107Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 4 23:56:10.854533 containerd[1636]: time="2025-11-04T23:56:10.854477948Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 4 23:56:10.854533 containerd[1636]: time="2025-11-04T23:56:10.854510460Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 4 23:56:10.854621 containerd[1636]: time="2025-11-04T23:56:10.854537947Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 23:56:10.854715 containerd[1636]: time="2025-11-04T23:56:10.854661057Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 23:56:10.854715 containerd[1636]: time="2025-11-04T23:56:10.854680023Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 23:56:10.855062 
containerd[1636]: time="2025-11-04T23:56:10.855009403Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 23:56:10.855062 containerd[1636]: time="2025-11-04T23:56:10.855024021Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 23:56:10.855062 containerd[1636]: time="2025-11-04T23:56:10.855035192Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 23:56:10.855062 containerd[1636]: time="2025-11-04T23:56:10.855043582Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 4 23:56:10.855202 containerd[1636]: time="2025-11-04T23:56:10.855179643Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 4 23:56:10.855480 containerd[1636]: time="2025-11-04T23:56:10.855452261Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 23:56:10.855515 containerd[1636]: time="2025-11-04T23:56:10.855486855Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 23:56:10.855515 containerd[1636]: time="2025-11-04T23:56:10.855506952Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 4 23:56:10.855592 containerd[1636]: time="2025-11-04T23:56:10.855569852Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 4 23:56:10.856046 containerd[1636]: 
time="2025-11-04T23:56:10.855969310Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 4 23:56:10.856080 containerd[1636]: time="2025-11-04T23:56:10.856060728Z" level=info msg="metadata content store policy set" policy=shared Nov 4 23:56:10.857885 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 4 23:56:10.858287 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 4 23:56:10.862012 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 4 23:56:10.864776 containerd[1636]: time="2025-11-04T23:56:10.864709740Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 4 23:56:10.864830 containerd[1636]: time="2025-11-04T23:56:10.864776260Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 4 23:56:10.864830 containerd[1636]: time="2025-11-04T23:56:10.864797448Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 4 23:56:10.864885 containerd[1636]: time="2025-11-04T23:56:10.864868295Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 4 23:56:10.864926 containerd[1636]: time="2025-11-04T23:56:10.864888068Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 4 23:56:10.864926 containerd[1636]: time="2025-11-04T23:56:10.864901998Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 4 23:56:10.864926 containerd[1636]: time="2025-11-04T23:56:10.864918638Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 4 23:56:10.865047 containerd[1636]: time="2025-11-04T23:56:10.864939555Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 4 23:56:10.865047 containerd[1636]: time="2025-11-04T23:56:10.864977060Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 4 23:56:10.865047 containerd[1636]: time="2025-11-04T23:56:10.864990697Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 4 23:56:10.865047 containerd[1636]: time="2025-11-04T23:56:10.865002243Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 4 23:56:10.865047 containerd[1636]: time="2025-11-04T23:56:10.865018488Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 4 23:56:10.865282 containerd[1636]: time="2025-11-04T23:56:10.865244673Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 4 23:56:10.865340 containerd[1636]: time="2025-11-04T23:56:10.865282694Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 4 23:56:10.865340 containerd[1636]: time="2025-11-04T23:56:10.865302731Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 4 23:56:10.865340 containerd[1636]: time="2025-11-04T23:56:10.865320129Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 4 23:56:10.865340 containerd[1636]: time="2025-11-04T23:56:10.865333675Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 4 23:56:10.865454 containerd[1636]: time="2025-11-04T23:56:10.865346393Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 4 23:56:10.865454 containerd[1636]: time="2025-11-04T23:56:10.865361516Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection 
type=io.containerd.grpc.v1 Nov 4 23:56:10.865454 containerd[1636]: time="2025-11-04T23:56:10.865383019Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 4 23:56:10.865454 containerd[1636]: time="2025-11-04T23:56:10.865415035Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 4 23:56:10.865454 containerd[1636]: time="2025-11-04T23:56:10.865444686Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 4 23:56:10.865799 containerd[1636]: time="2025-11-04T23:56:10.865460718Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 4 23:56:10.865799 containerd[1636]: time="2025-11-04T23:56:10.865564268Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 4 23:56:10.865799 containerd[1636]: time="2025-11-04T23:56:10.865595860Z" level=info msg="Start snapshots syncer" Nov 4 23:56:10.865799 containerd[1636]: time="2025-11-04T23:56:10.865625045Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 4 23:56:10.866107 containerd[1636]: time="2025-11-04T23:56:10.866055175Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 4 23:56:10.866278 containerd[1636]: time="2025-11-04T23:56:10.866132693Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 4 23:56:10.866278 containerd[1636]: time="2025-11-04T23:56:10.866241267Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 4 23:56:10.866420 containerd[1636]: time="2025-11-04T23:56:10.866384727Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 4 23:56:10.866472 containerd[1636]: time="2025-11-04T23:56:10.866433910Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 4 23:56:10.866472 containerd[1636]: time="2025-11-04T23:56:10.866451773Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 4 23:56:10.866472 containerd[1636]: time="2025-11-04T23:56:10.866466431Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 4 23:56:10.866553 containerd[1636]: time="2025-11-04T23:56:10.866484851Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 4 23:56:10.866553 containerd[1636]: time="2025-11-04T23:56:10.866499964Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 4 23:56:10.866553 containerd[1636]: time="2025-11-04T23:56:10.866514511Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 4 23:56:10.866553 containerd[1636]: time="2025-11-04T23:56:10.866546325Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 4 23:56:10.866686 containerd[1636]: time="2025-11-04T23:56:10.866563632Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 4 23:56:10.866686 containerd[1636]: time="2025-11-04T23:56:10.866580605Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 4 23:56:10.866686 containerd[1636]: time="2025-11-04T23:56:10.866669537Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 23:56:10.866779 containerd[1636]: time="2025-11-04T23:56:10.866691596Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 23:56:10.866779 containerd[1636]: time="2025-11-04T23:56:10.866704030Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 23:56:10.866779 containerd[1636]: time="2025-11-04T23:56:10.866717566Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 23:56:10.866779 containerd[1636]: time="2025-11-04T23:56:10.866727928Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 4 23:56:10.866779 containerd[1636]: time="2025-11-04T23:56:10.866739756Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 4 23:56:10.866779 containerd[1636]: time="2025-11-04T23:56:10.866757528Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 4 23:56:10.866938 containerd[1636]: time="2025-11-04T23:56:10.866793255Z" level=info msg="runtime interface created" Nov 4 23:56:10.866938 containerd[1636]: time="2025-11-04T23:56:10.866801816Z" level=info msg="created NRI interface" Nov 4 23:56:10.866938 containerd[1636]: time="2025-11-04T23:56:10.866812391Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 4 23:56:10.866938 containerd[1636]: time="2025-11-04T23:56:10.866825695Z" level=info msg="Connect containerd service" Nov 4 23:56:10.866938 containerd[1636]: time="2025-11-04T23:56:10.866853990Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 4 23:56:10.869292 containerd[1636]: 
time="2025-11-04T23:56:10.869259163Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 23:56:11.185631 containerd[1636]: time="2025-11-04T23:56:11.185452679Z" level=info msg="Start subscribing containerd event" Nov 4 23:56:11.185631 containerd[1636]: time="2025-11-04T23:56:11.185600031Z" level=info msg="Start recovering state" Nov 4 23:56:11.185859 containerd[1636]: time="2025-11-04T23:56:11.185836304Z" level=info msg="Start event monitor" Nov 4 23:56:11.185912 containerd[1636]: time="2025-11-04T23:56:11.185897693Z" level=info msg="Start cni network conf syncer for default" Nov 4 23:56:11.185941 containerd[1636]: time="2025-11-04T23:56:11.185918143Z" level=info msg="Start streaming server" Nov 4 23:56:11.186037 containerd[1636]: time="2025-11-04T23:56:11.186015300Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 4 23:56:11.186037 containerd[1636]: time="2025-11-04T23:56:11.186032305Z" level=info msg="runtime interface starting up..." Nov 4 23:56:11.186197 containerd[1636]: time="2025-11-04T23:56:11.186041105Z" level=info msg="starting plugins..." Nov 4 23:56:11.186443 containerd[1636]: time="2025-11-04T23:56:11.186408584Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 4 23:56:11.186552 containerd[1636]: time="2025-11-04T23:56:11.186220131Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 4 23:56:11.186610 containerd[1636]: time="2025-11-04T23:56:11.186592409Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 4 23:56:11.187074 containerd[1636]: time="2025-11-04T23:56:11.186672824Z" level=info msg="containerd successfully booted in 0.355101s" Nov 4 23:56:11.186924 systemd[1]: Started containerd.service - containerd container runtime. 
Nov 4 23:56:12.659824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:56:12.662533 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 4 23:56:12.697351 systemd[1]: Startup finished in 3.075s (kernel) + 9.299s (initrd) + 6.265s (userspace) = 18.640s. Nov 4 23:56:12.718378 (kubelet)[1739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:56:13.512262 kubelet[1739]: E1104 23:56:13.512164 1739 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:56:13.517792 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:56:13.518092 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:56:13.518891 systemd[1]: kubelet.service: Consumed 2.402s CPU time, 265.7M memory peak. Nov 4 23:56:19.538270 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 4 23:56:19.539668 systemd[1]: Started sshd@0-10.0.0.112:22-10.0.0.1:43836.service - OpenSSH per-connection server daemon (10.0.0.1:43836). Nov 4 23:56:19.619725 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 43836 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:56:19.622106 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:19.630148 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 4 23:56:19.631420 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 4 23:56:19.638302 systemd-logind[1614]: New session 1 of user core. 
Nov 4 23:56:19.655491 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 4 23:56:19.658863 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 4 23:56:19.678014 (systemd)[1758]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 4 23:56:19.680802 systemd-logind[1614]: New session c1 of user core. Nov 4 23:56:19.830226 systemd[1758]: Queued start job for default target default.target. Nov 4 23:56:19.839499 systemd[1758]: Created slice app.slice - User Application Slice. Nov 4 23:56:19.839553 systemd[1758]: Reached target paths.target - Paths. Nov 4 23:56:19.839599 systemd[1758]: Reached target timers.target - Timers. Nov 4 23:56:19.842836 systemd[1758]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 4 23:56:19.855193 systemd[1758]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 4 23:56:19.855343 systemd[1758]: Reached target sockets.target - Sockets. Nov 4 23:56:19.855394 systemd[1758]: Reached target basic.target - Basic System. Nov 4 23:56:19.855437 systemd[1758]: Reached target default.target - Main User Target. Nov 4 23:56:19.855470 systemd[1758]: Startup finished in 167ms. Nov 4 23:56:19.855905 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 4 23:56:19.857967 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 4 23:56:19.933082 systemd[1]: Started sshd@1-10.0.0.112:22-10.0.0.1:43850.service - OpenSSH per-connection server daemon (10.0.0.1:43850). Nov 4 23:56:19.982518 sshd[1769]: Accepted publickey for core from 10.0.0.1 port 43850 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:56:19.984631 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:19.990053 systemd-logind[1614]: New session 2 of user core. Nov 4 23:56:19.999214 systemd[1]: Started session-2.scope - Session 2 of User core. 
Nov 4 23:56:20.056234 sshd[1772]: Connection closed by 10.0.0.1 port 43850 Nov 4 23:56:20.056664 sshd-session[1769]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:20.067040 systemd[1]: sshd@1-10.0.0.112:22-10.0.0.1:43850.service: Deactivated successfully. Nov 4 23:56:20.069616 systemd[1]: session-2.scope: Deactivated successfully. Nov 4 23:56:20.070672 systemd-logind[1614]: Session 2 logged out. Waiting for processes to exit. Nov 4 23:56:20.074156 systemd[1]: Started sshd@2-10.0.0.112:22-10.0.0.1:43866.service - OpenSSH per-connection server daemon (10.0.0.1:43866). Nov 4 23:56:20.075141 systemd-logind[1614]: Removed session 2. Nov 4 23:56:20.139060 sshd[1778]: Accepted publickey for core from 10.0.0.1 port 43866 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:56:20.140562 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:20.145712 systemd-logind[1614]: New session 3 of user core. Nov 4 23:56:20.159124 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 4 23:56:20.210927 sshd[1781]: Connection closed by 10.0.0.1 port 43866 Nov 4 23:56:20.211263 sshd-session[1778]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:20.226751 systemd[1]: sshd@2-10.0.0.112:22-10.0.0.1:43866.service: Deactivated successfully. Nov 4 23:56:20.228539 systemd[1]: session-3.scope: Deactivated successfully. Nov 4 23:56:20.229352 systemd-logind[1614]: Session 3 logged out. Waiting for processes to exit. Nov 4 23:56:20.232826 systemd[1]: Started sshd@3-10.0.0.112:22-10.0.0.1:43868.service - OpenSSH per-connection server daemon (10.0.0.1:43868). Nov 4 23:56:20.233564 systemd-logind[1614]: Removed session 3. 
Nov 4 23:56:20.286393 sshd[1787]: Accepted publickey for core from 10.0.0.1 port 43868 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:56:20.288200 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:20.293490 systemd-logind[1614]: New session 4 of user core. Nov 4 23:56:20.300082 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 4 23:56:20.355113 sshd[1791]: Connection closed by 10.0.0.1 port 43868 Nov 4 23:56:20.355454 sshd-session[1787]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:20.370203 systemd[1]: sshd@3-10.0.0.112:22-10.0.0.1:43868.service: Deactivated successfully. Nov 4 23:56:20.372219 systemd[1]: session-4.scope: Deactivated successfully. Nov 4 23:56:20.373053 systemd-logind[1614]: Session 4 logged out. Waiting for processes to exit. Nov 4 23:56:20.376295 systemd[1]: Started sshd@4-10.0.0.112:22-10.0.0.1:43870.service - OpenSSH per-connection server daemon (10.0.0.1:43870). Nov 4 23:56:20.377029 systemd-logind[1614]: Removed session 4. Nov 4 23:56:20.442430 sshd[1797]: Accepted publickey for core from 10.0.0.1 port 43870 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:56:20.444402 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:20.450184 systemd-logind[1614]: New session 5 of user core. Nov 4 23:56:20.460234 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 4 23:56:20.530068 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 4 23:56:20.530496 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:56:20.552033 sudo[1801]: pam_unix(sudo:session): session closed for user root Nov 4 23:56:20.554584 sshd[1800]: Connection closed by 10.0.0.1 port 43870 Nov 4 23:56:20.555068 sshd-session[1797]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:20.579422 systemd[1]: sshd@4-10.0.0.112:22-10.0.0.1:43870.service: Deactivated successfully. Nov 4 23:56:20.582118 systemd[1]: session-5.scope: Deactivated successfully. Nov 4 23:56:20.583226 systemd-logind[1614]: Session 5 logged out. Waiting for processes to exit. Nov 4 23:56:20.585842 systemd-logind[1614]: Removed session 5. Nov 4 23:56:20.587910 systemd[1]: Started sshd@5-10.0.0.112:22-10.0.0.1:43876.service - OpenSSH per-connection server daemon (10.0.0.1:43876). Nov 4 23:56:20.657795 sshd[1807]: Accepted publickey for core from 10.0.0.1 port 43876 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:56:20.659804 sshd-session[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:20.665020 systemd-logind[1614]: New session 6 of user core. Nov 4 23:56:20.675241 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 4 23:56:20.734158 sudo[1813]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 4 23:56:20.734596 sudo[1813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:56:21.001738 sudo[1813]: pam_unix(sudo:session): session closed for user root Nov 4 23:56:21.010083 sudo[1812]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 4 23:56:21.010487 sudo[1812]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:56:21.022242 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 23:56:21.087570 augenrules[1835]: No rules Nov 4 23:56:21.089086 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 23:56:21.089617 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 23:56:21.091091 sudo[1812]: pam_unix(sudo:session): session closed for user root Nov 4 23:56:21.092995 sshd[1811]: Connection closed by 10.0.0.1 port 43876 Nov 4 23:56:21.093356 sshd-session[1807]: pam_unix(sshd:session): session closed for user core Nov 4 23:56:21.110350 systemd[1]: sshd@5-10.0.0.112:22-10.0.0.1:43876.service: Deactivated successfully. Nov 4 23:56:21.112919 systemd[1]: session-6.scope: Deactivated successfully. Nov 4 23:56:21.113866 systemd-logind[1614]: Session 6 logged out. Waiting for processes to exit. Nov 4 23:56:21.117654 systemd[1]: Started sshd@6-10.0.0.112:22-10.0.0.1:43892.service - OpenSSH per-connection server daemon (10.0.0.1:43892). Nov 4 23:56:21.118282 systemd-logind[1614]: Removed session 6. Nov 4 23:56:21.184499 sshd[1844]: Accepted publickey for core from 10.0.0.1 port 43892 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:56:21.186061 sshd-session[1844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:56:21.191004 systemd-logind[1614]: New session 7 of user core. 
Nov 4 23:56:21.202107 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 4 23:56:21.257796 sudo[1848]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 4 23:56:21.258185 sudo[1848]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 23:56:22.230237 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 4 23:56:22.260278 (dockerd)[1868]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 4 23:56:22.759551 dockerd[1868]: time="2025-11-04T23:56:22.759464690Z" level=info msg="Starting up" Nov 4 23:56:22.760434 dockerd[1868]: time="2025-11-04T23:56:22.760394954Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 4 23:56:22.788580 dockerd[1868]: time="2025-11-04T23:56:22.788518929Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 4 23:56:23.537548 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 4 23:56:23.539809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:56:23.574650 dockerd[1868]: time="2025-11-04T23:56:23.573913210Z" level=info msg="Loading containers: start." Nov 4 23:56:23.613989 kernel: Initializing XFRM netlink socket Nov 4 23:56:23.898838 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 4 23:56:23.923241 (kubelet)[2026]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:56:24.847021 kubelet[2026]: E1104 23:56:24.846913 2026 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:56:24.853926 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:56:24.854226 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:56:24.854681 systemd[1]: kubelet.service: Consumed 463ms CPU time, 112.9M memory peak. Nov 4 23:56:24.958751 systemd-networkd[1533]: docker0: Link UP Nov 4 23:56:24.964884 dockerd[1868]: time="2025-11-04T23:56:24.964808415Z" level=info msg="Loading containers: done." Nov 4 23:56:24.983404 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2149308266-merged.mount: Deactivated successfully. 
Nov 4 23:56:24.987811 dockerd[1868]: time="2025-11-04T23:56:24.987733677Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 4 23:56:24.988649 dockerd[1868]: time="2025-11-04T23:56:24.988595509Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 4 23:56:24.988794 dockerd[1868]: time="2025-11-04T23:56:24.988762722Z" level=info msg="Initializing buildkit" Nov 4 23:56:25.024644 dockerd[1868]: time="2025-11-04T23:56:25.024561264Z" level=info msg="Completed buildkit initialization" Nov 4 23:56:25.031333 dockerd[1868]: time="2025-11-04T23:56:25.031267968Z" level=info msg="Daemon has completed initialization" Nov 4 23:56:25.031503 dockerd[1868]: time="2025-11-04T23:56:25.031386928Z" level=info msg="API listen on /run/docker.sock" Nov 4 23:56:25.031718 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 4 23:56:26.047092 containerd[1636]: time="2025-11-04T23:56:26.046995805Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 4 23:56:27.354915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2904551359.mount: Deactivated successfully. 
Nov 4 23:56:30.882645 containerd[1636]: time="2025-11-04T23:56:30.882493499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:30.922820 containerd[1636]: time="2025-11-04T23:56:30.922694046Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Nov 4 23:56:30.924890 containerd[1636]: time="2025-11-04T23:56:30.924727671Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:30.930245 containerd[1636]: time="2025-11-04T23:56:30.930103026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:30.931870 containerd[1636]: time="2025-11-04T23:56:30.931803833Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 4.884724047s" Nov 4 23:56:30.931870 containerd[1636]: time="2025-11-04T23:56:30.931861283Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 4 23:56:30.943154 containerd[1636]: time="2025-11-04T23:56:30.943040952Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 4 23:56:32.763635 containerd[1636]: time="2025-11-04T23:56:32.763550340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:32.764658 containerd[1636]: time="2025-11-04T23:56:32.764623326Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Nov 4 23:56:32.766107 containerd[1636]: time="2025-11-04T23:56:32.766022081Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:32.769565 containerd[1636]: time="2025-11-04T23:56:32.769282455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:32.770795 containerd[1636]: time="2025-11-04T23:56:32.770726303Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.827613223s" Nov 4 23:56:32.770795 containerd[1636]: time="2025-11-04T23:56:32.770786426Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 4 23:56:32.772003 containerd[1636]: time="2025-11-04T23:56:32.771924978Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 4 23:56:35.060763 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 4 23:56:35.064896 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:56:35.440676 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 4 23:56:35.456304 (kubelet)[2179]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:56:35.631807 kubelet[2179]: E1104 23:56:35.631677 2179 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:56:35.637291 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:56:35.637565 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:56:35.638254 systemd[1]: kubelet.service: Consumed 425ms CPU time, 111.3M memory peak. Nov 4 23:56:37.981647 containerd[1636]: time="2025-11-04T23:56:37.981549146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:37.984569 containerd[1636]: time="2025-11-04T23:56:37.984508259Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Nov 4 23:56:37.986229 containerd[1636]: time="2025-11-04T23:56:37.986179877Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:37.990818 containerd[1636]: time="2025-11-04T23:56:37.990762593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:37.991894 containerd[1636]: time="2025-11-04T23:56:37.991824247Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id 
\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 5.219814422s" Nov 4 23:56:37.991894 containerd[1636]: time="2025-11-04T23:56:37.991891940Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 4 23:56:37.992646 containerd[1636]: time="2025-11-04T23:56:37.992585203Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 4 23:56:40.211404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount203967009.mount: Deactivated successfully. Nov 4 23:56:41.763488 containerd[1636]: time="2025-11-04T23:56:41.763410416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:41.765261 containerd[1636]: time="2025-11-04T23:56:41.765205227Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Nov 4 23:56:41.766855 containerd[1636]: time="2025-11-04T23:56:41.766810812Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:41.769968 containerd[1636]: time="2025-11-04T23:56:41.769905816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:41.771149 containerd[1636]: time="2025-11-04T23:56:41.771036783Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag 
\"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 3.778407844s" Nov 4 23:56:41.771149 containerd[1636]: time="2025-11-04T23:56:41.771113400Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 4 23:56:41.771804 containerd[1636]: time="2025-11-04T23:56:41.771743921Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 4 23:56:42.394867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3783890268.mount: Deactivated successfully. Nov 4 23:56:43.979238 containerd[1636]: time="2025-11-04T23:56:43.979118715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:43.986635 containerd[1636]: time="2025-11-04T23:56:43.986573563Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Nov 4 23:56:44.001373 containerd[1636]: time="2025-11-04T23:56:44.001288635Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:44.050252 containerd[1636]: time="2025-11-04T23:56:44.050110621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:44.051444 containerd[1636]: time="2025-11-04T23:56:44.051392000Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.279615802s" Nov 4 23:56:44.051534 containerd[1636]: time="2025-11-04T23:56:44.051438172Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 4 23:56:44.052071 containerd[1636]: time="2025-11-04T23:56:44.052024851Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 4 23:56:45.024535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount917965728.mount: Deactivated successfully. Nov 4 23:56:45.394927 containerd[1636]: time="2025-11-04T23:56:45.394841738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:56:45.473818 containerd[1636]: time="2025-11-04T23:56:45.473721792Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 4 23:56:45.603416 containerd[1636]: time="2025-11-04T23:56:45.603308523Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:56:45.683691 containerd[1636]: time="2025-11-04T23:56:45.683479517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 23:56:45.684712 containerd[1636]: time="2025-11-04T23:56:45.684660543Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.632604171s" Nov 4 23:56:45.684784 containerd[1636]: time="2025-11-04T23:56:45.684711225Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 4 23:56:45.685433 containerd[1636]: time="2025-11-04T23:56:45.685365909Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 4 23:56:45.787925 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 4 23:56:45.790161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:56:46.021289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:56:46.025751 (kubelet)[2260]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 23:56:46.420816 kubelet[2260]: E1104 23:56:46.420730 2260 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 23:56:46.425674 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 23:56:46.425914 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 23:56:46.426439 systemd[1]: kubelet.service: Consumed 463ms CPU time, 109.3M memory peak. Nov 4 23:56:49.678902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1588447120.mount: Deactivated successfully. 
Nov 4 23:56:52.794757 containerd[1636]: time="2025-11-04T23:56:52.794686075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:52.795528 containerd[1636]: time="2025-11-04T23:56:52.795491377Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Nov 4 23:56:52.796969 containerd[1636]: time="2025-11-04T23:56:52.796857096Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:52.802098 containerd[1636]: time="2025-11-04T23:56:52.802020933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:52.803602 containerd[1636]: time="2025-11-04T23:56:52.803551079Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 7.118138421s" Nov 4 23:56:52.803602 containerd[1636]: time="2025-11-04T23:56:52.803593974Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 4 23:56:55.088656 update_engine[1616]: I20251104 23:56:55.088486 1616 update_attempter.cc:509] Updating boot flags... Nov 4 23:56:56.162079 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:56:56.162335 systemd[1]: kubelet.service: Consumed 463ms CPU time, 109.3M memory peak. Nov 4 23:56:56.165514 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 4 23:56:56.202852 systemd[1]: Reload requested from client PID 2371 ('systemctl') (unit session-7.scope)... Nov 4 23:56:56.202873 systemd[1]: Reloading... Nov 4 23:56:56.309026 zram_generator::config[2415]: No configuration found. Nov 4 23:56:56.812585 systemd[1]: Reloading finished in 609 ms. Nov 4 23:56:56.898135 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 4 23:56:56.898257 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 4 23:56:56.898618 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:56:56.898673 systemd[1]: kubelet.service: Consumed 183ms CPU time, 98.4M memory peak. Nov 4 23:56:56.900675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:56:57.098716 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:56:57.103443 (kubelet)[2463]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 23:56:57.287532 kubelet[2463]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 23:56:57.287532 kubelet[2463]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 23:56:57.287532 kubelet[2463]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 4 23:56:57.288058 kubelet[2463]: I1104 23:56:57.287615 2463 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 23:56:57.590397 kubelet[2463]: I1104 23:56:57.590351 2463 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 4 23:56:57.590397 kubelet[2463]: I1104 23:56:57.590387 2463 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 23:56:57.590783 kubelet[2463]: I1104 23:56:57.590753 2463 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 23:56:57.622993 kubelet[2463]: I1104 23:56:57.622691 2463 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 23:56:57.622993 kubelet[2463]: E1104 23:56:57.622789 2463 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 4 23:56:57.630305 kubelet[2463]: I1104 23:56:57.630275 2463 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 23:56:57.636693 kubelet[2463]: I1104 23:56:57.636658 2463 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 4 23:56:57.637045 kubelet[2463]: I1104 23:56:57.637008 2463 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 23:56:57.637284 kubelet[2463]: I1104 23:56:57.637040 2463 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 23:56:57.637464 kubelet[2463]: I1104 23:56:57.637291 2463 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 23:56:57.637464 
kubelet[2463]: I1104 23:56:57.637302 2463 container_manager_linux.go:303] "Creating device plugin manager" Nov 4 23:56:57.637512 kubelet[2463]: I1104 23:56:57.637474 2463 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:56:57.640196 kubelet[2463]: I1104 23:56:57.640159 2463 kubelet.go:480] "Attempting to sync node with API server" Nov 4 23:56:57.640196 kubelet[2463]: I1104 23:56:57.640183 2463 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 23:56:57.640259 kubelet[2463]: I1104 23:56:57.640214 2463 kubelet.go:386] "Adding apiserver pod source" Nov 4 23:56:57.640303 kubelet[2463]: I1104 23:56:57.640263 2463 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 23:56:57.651375 kubelet[2463]: E1104 23:56:57.651161 2463 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 23:56:57.651375 kubelet[2463]: E1104 23:56:57.651312 2463 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 23:56:57.651375 kubelet[2463]: I1104 23:56:57.651380 2463 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 4 23:56:57.652053 kubelet[2463]: I1104 23:56:57.652023 2463 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 23:56:57.652712 kubelet[2463]: W1104 23:56:57.652674 2463 
probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 4 23:56:57.655796 kubelet[2463]: I1104 23:56:57.655762 2463 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 4 23:56:57.655876 kubelet[2463]: I1104 23:56:57.655833 2463 server.go:1289] "Started kubelet" Nov 4 23:56:57.657052 kubelet[2463]: I1104 23:56:57.656995 2463 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 23:56:57.662361 kubelet[2463]: I1104 23:56:57.662321 2463 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 23:56:57.662536 kubelet[2463]: I1104 23:56:57.662502 2463 server.go:317] "Adding debug handlers to kubelet server" Nov 4 23:56:57.670993 kubelet[2463]: I1104 23:56:57.663575 2463 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 23:56:57.670993 kubelet[2463]: E1104 23:56:57.661699 2463 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.112:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.112:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1874f3131ed3227f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-04 23:56:57.655788159 +0000 UTC m=+0.547736857,LastTimestamp:2025-11-04 23:56:57.655788159 +0000 UTC m=+0.547736857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 4 23:56:57.670993 kubelet[2463]: I1104 23:56:57.664649 2463 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 23:56:57.670993 kubelet[2463]: I1104 23:56:57.664671 
2463 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 23:56:57.670993 kubelet[2463]: E1104 23:56:57.666391 2463 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 23:56:57.670993 kubelet[2463]: E1104 23:56:57.666522 2463 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 23:56:57.670993 kubelet[2463]: I1104 23:56:57.666611 2463 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 4 23:56:57.670993 kubelet[2463]: I1104 23:56:57.667396 2463 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 4 23:56:57.670993 kubelet[2463]: I1104 23:56:57.667507 2463 reconciler.go:26] "Reconciler: start to sync state" Nov 4 23:56:57.671391 kubelet[2463]: E1104 23:56:57.668458 2463 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 23:56:57.671391 kubelet[2463]: E1104 23:56:57.668591 2463 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="200ms" Nov 4 23:56:57.671391 kubelet[2463]: I1104 23:56:57.669179 2463 factory.go:223] Registration of the systemd container factory successfully Nov 4 23:56:57.671391 kubelet[2463]: I1104 23:56:57.669284 2463 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: 
no such file or directory Nov 4 23:56:57.671391 kubelet[2463]: I1104 23:56:57.670434 2463 factory.go:223] Registration of the containerd container factory successfully Nov 4 23:56:57.687405 kubelet[2463]: I1104 23:56:57.687367 2463 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 23:56:57.687405 kubelet[2463]: I1104 23:56:57.687392 2463 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 23:56:57.687405 kubelet[2463]: I1104 23:56:57.687411 2463 state_mem.go:36] "Initialized new in-memory state store" Nov 4 23:56:57.767570 kubelet[2463]: E1104 23:56:57.767488 2463 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 23:56:57.868168 kubelet[2463]: E1104 23:56:57.868001 2463 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 23:56:57.869619 kubelet[2463]: E1104 23:56:57.869588 2463 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="400ms" Nov 4 23:56:57.968996 kubelet[2463]: E1104 23:56:57.968929 2463 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 23:56:58.069515 kubelet[2463]: E1104 23:56:58.069440 2463 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 23:56:58.170571 kubelet[2463]: E1104 23:56:58.170374 2463 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 23:56:58.270357 kubelet[2463]: E1104 23:56:58.270313 2463 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: 
connection refused" interval="800ms" Nov 4 23:56:58.271338 kubelet[2463]: E1104 23:56:58.271310 2463 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 23:56:58.372089 kubelet[2463]: E1104 23:56:58.372003 2463 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 23:56:58.416083 kubelet[2463]: I1104 23:56:58.416011 2463 policy_none.go:49] "None policy: Start" Nov 4 23:56:58.416083 kubelet[2463]: I1104 23:56:58.416064 2463 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 4 23:56:58.416083 kubelet[2463]: I1104 23:56:58.416090 2463 state_mem.go:35] "Initializing new in-memory state store" Nov 4 23:56:58.422579 kubelet[2463]: I1104 23:56:58.422441 2463 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 4 23:56:58.424343 kubelet[2463]: I1104 23:56:58.424299 2463 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 4 23:56:58.424343 kubelet[2463]: I1104 23:56:58.424344 2463 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 4 23:56:58.428107 kubelet[2463]: I1104 23:56:58.424378 2463 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 4 23:56:58.428107 kubelet[2463]: I1104 23:56:58.424391 2463 kubelet.go:2436] "Starting kubelet main sync loop" Nov 4 23:56:58.428107 kubelet[2463]: E1104 23:56:58.424513 2463 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 23:56:58.428107 kubelet[2463]: E1104 23:56:58.425300 2463 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 23:56:58.440133 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 4 23:56:58.460016 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 4 23:56:58.463730 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 4 23:56:58.472839 kubelet[2463]: E1104 23:56:58.472785 2463 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 23:56:58.484289 kubelet[2463]: E1104 23:56:58.484233 2463 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 23:56:58.484662 kubelet[2463]: I1104 23:56:58.484637 2463 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 23:56:58.484743 kubelet[2463]: I1104 23:56:58.484655 2463 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 23:56:58.485296 kubelet[2463]: I1104 23:56:58.485273 2463 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 23:56:58.488787 kubelet[2463]: E1104 23:56:58.488712 2463 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 4 23:56:58.488787 kubelet[2463]: E1104 23:56:58.488782 2463 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 4 23:56:58.538534 systemd[1]: Created slice kubepods-burstable-pod2809c4dc817367a7570c776f02dba2e9.slice - libcontainer container kubepods-burstable-pod2809c4dc817367a7570c776f02dba2e9.slice. Nov 4 23:56:58.549211 kubelet[2463]: E1104 23:56:58.549151 2463 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:56:58.551413 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. 
Nov 4 23:56:58.563494 kubelet[2463]: E1104 23:56:58.563443 2463 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:56:58.567064 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. Nov 4 23:56:58.569532 kubelet[2463]: E1104 23:56:58.569510 2463 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 23:56:58.572665 kubelet[2463]: I1104 23:56:58.572626 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2809c4dc817367a7570c776f02dba2e9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2809c4dc817367a7570c776f02dba2e9\") " pod="kube-system/kube-apiserver-localhost" Nov 4 23:56:58.572722 kubelet[2463]: I1104 23:56:58.572664 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:56:58.572722 kubelet[2463]: I1104 23:56:58.572687 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:56:58.572722 kubelet[2463]: I1104 23:56:58.572707 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:56:58.572802 kubelet[2463]: I1104 23:56:58.572730 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2809c4dc817367a7570c776f02dba2e9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2809c4dc817367a7570c776f02dba2e9\") " pod="kube-system/kube-apiserver-localhost" Nov 4 23:56:58.572802 kubelet[2463]: I1104 23:56:58.572769 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2809c4dc817367a7570c776f02dba2e9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2809c4dc817367a7570c776f02dba2e9\") " pod="kube-system/kube-apiserver-localhost" Nov 4 23:56:58.572847 kubelet[2463]: I1104 23:56:58.572797 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:56:58.572847 kubelet[2463]: I1104 23:56:58.572818 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 23:56:58.572896 kubelet[2463]: I1104 23:56:58.572849 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 4 23:56:58.587139 kubelet[2463]: I1104 23:56:58.587097 2463 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 23:56:58.587559 kubelet[2463]: E1104 23:56:58.587507 2463 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Nov 4 23:56:58.644620 kubelet[2463]: E1104 23:56:58.644552 2463 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 23:56:58.728991 kubelet[2463]: E1104 23:56:58.728784 2463 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 23:56:58.789537 kubelet[2463]: I1104 23:56:58.789483 2463 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 23:56:58.790119 kubelet[2463]: E1104 23:56:58.790053 2463 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Nov 4 23:56:58.850855 kubelet[2463]: E1104 23:56:58.850750 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:56:58.851684 containerd[1636]: time="2025-11-04T23:56:58.851637747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2809c4dc817367a7570c776f02dba2e9,Namespace:kube-system,Attempt:0,}" Nov 4 23:56:58.865079 kubelet[2463]: E1104 23:56:58.865022 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:56:58.865786 containerd[1636]: time="2025-11-04T23:56:58.865559965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Nov 4 23:56:58.874212 kubelet[2463]: E1104 23:56:58.874169 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:56:58.874856 containerd[1636]: time="2025-11-04T23:56:58.874761402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Nov 4 23:56:58.876751 containerd[1636]: time="2025-11-04T23:56:58.875993695Z" level=info msg="connecting to shim 1fd29b1afe785b46dd974a4d88f0fb1df1be20ee4f7a229cf3a899241a5fdcee" address="unix:///run/containerd/s/cbefdab4b7bc1aa32299dc2a3434dd16480093f78b63fda9f232c3b9e2f04b89" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:58.904861 containerd[1636]: time="2025-11-04T23:56:58.904798043Z" level=info msg="connecting to shim 19332b0710c6e8eccaab249afb2fc91c2e7b206d0c19409c0e100be810aa6916" address="unix:///run/containerd/s/43a749ae6fdab7f55591a003c7576989c7765552cf0c67e200d1aeb5165673e4" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:58.966253 systemd[1]: Started cri-containerd-1fd29b1afe785b46dd974a4d88f0fb1df1be20ee4f7a229cf3a899241a5fdcee.scope - libcontainer 
container 1fd29b1afe785b46dd974a4d88f0fb1df1be20ee4f7a229cf3a899241a5fdcee. Nov 4 23:56:58.978321 kubelet[2463]: E1104 23:56:58.978263 2463 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 23:56:58.982411 containerd[1636]: time="2025-11-04T23:56:58.982271257Z" level=info msg="connecting to shim 41c5791688e420a8f16ea834c95f445db165d49a6908ef352fb22a756e5ca10f" address="unix:///run/containerd/s/31859dd9d70a6fedcdb12aa094b616ef1966fd64a0a297d40afe8bc181718342" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:58.989201 systemd[1]: Started cri-containerd-19332b0710c6e8eccaab249afb2fc91c2e7b206d0c19409c0e100be810aa6916.scope - libcontainer container 19332b0710c6e8eccaab249afb2fc91c2e7b206d0c19409c0e100be810aa6916. Nov 4 23:56:59.051148 systemd[1]: Started cri-containerd-41c5791688e420a8f16ea834c95f445db165d49a6908ef352fb22a756e5ca10f.scope - libcontainer container 41c5791688e420a8f16ea834c95f445db165d49a6908ef352fb22a756e5ca10f. 
Nov 4 23:56:59.071321 kubelet[2463]: E1104 23:56:59.071260 2463 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="1.6s" Nov 4 23:56:59.088007 containerd[1636]: time="2025-11-04T23:56:59.087960338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2809c4dc817367a7570c776f02dba2e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fd29b1afe785b46dd974a4d88f0fb1df1be20ee4f7a229cf3a899241a5fdcee\"" Nov 4 23:56:59.089526 kubelet[2463]: E1104 23:56:59.089502 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:56:59.092896 containerd[1636]: time="2025-11-04T23:56:59.092851034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"19332b0710c6e8eccaab249afb2fc91c2e7b206d0c19409c0e100be810aa6916\"" Nov 4 23:56:59.093811 kubelet[2463]: E1104 23:56:59.093788 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:56:59.094697 containerd[1636]: time="2025-11-04T23:56:59.094616216Z" level=info msg="CreateContainer within sandbox \"1fd29b1afe785b46dd974a4d88f0fb1df1be20ee4f7a229cf3a899241a5fdcee\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 4 23:56:59.120499 containerd[1636]: time="2025-11-04T23:56:59.120459421Z" level=info msg="CreateContainer within sandbox \"19332b0710c6e8eccaab249afb2fc91c2e7b206d0c19409c0e100be810aa6916\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 4 23:56:59.122454 
containerd[1636]: time="2025-11-04T23:56:59.122427623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"41c5791688e420a8f16ea834c95f445db165d49a6908ef352fb22a756e5ca10f\""
Nov 4 23:56:59.123023 kubelet[2463]: E1104 23:56:59.122998 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:56:59.136458 containerd[1636]: time="2025-11-04T23:56:59.136423384Z" level=info msg="CreateContainer within sandbox \"41c5791688e420a8f16ea834c95f445db165d49a6908ef352fb22a756e5ca10f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 4 23:56:59.139199 containerd[1636]: time="2025-11-04T23:56:59.139155017Z" level=info msg="Container 124d5a782582b3b1be18176e7e8756e45d19ffdd78071e9c37ca1845e8677a86: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:56:59.146868 containerd[1636]: time="2025-11-04T23:56:59.146818821Z" level=info msg="Container be3ba75d79be481b03e3e36b570d573a8a04ddbd051ef27e8745006b8937f023: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:56:59.149246 containerd[1636]: time="2025-11-04T23:56:59.149216861Z" level=info msg="Container 345e24f9369cfc6d8c85dd5e6f7bc3b67f6e9bd04db9b97a5e860331c54d35d6: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:56:59.155317 containerd[1636]: time="2025-11-04T23:56:59.155274600Z" level=info msg="CreateContainer within sandbox \"1fd29b1afe785b46dd974a4d88f0fb1df1be20ee4f7a229cf3a899241a5fdcee\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"124d5a782582b3b1be18176e7e8756e45d19ffdd78071e9c37ca1845e8677a86\""
Nov 4 23:56:59.155884 containerd[1636]: time="2025-11-04T23:56:59.155857506Z" level=info msg="StartContainer for \"124d5a782582b3b1be18176e7e8756e45d19ffdd78071e9c37ca1845e8677a86\""
Nov 4 23:56:59.157097
containerd[1636]: time="2025-11-04T23:56:59.157066524Z" level=info msg="connecting to shim 124d5a782582b3b1be18176e7e8756e45d19ffdd78071e9c37ca1845e8677a86" address="unix:///run/containerd/s/cbefdab4b7bc1aa32299dc2a3434dd16480093f78b63fda9f232c3b9e2f04b89" protocol=ttrpc version=3
Nov 4 23:56:59.160500 containerd[1636]: time="2025-11-04T23:56:59.160458517Z" level=info msg="CreateContainer within sandbox \"19332b0710c6e8eccaab249afb2fc91c2e7b206d0c19409c0e100be810aa6916\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"be3ba75d79be481b03e3e36b570d573a8a04ddbd051ef27e8745006b8937f023\""
Nov 4 23:56:59.161109 containerd[1636]: time="2025-11-04T23:56:59.161083478Z" level=info msg="StartContainer for \"be3ba75d79be481b03e3e36b570d573a8a04ddbd051ef27e8745006b8937f023\""
Nov 4 23:56:59.162351 containerd[1636]: time="2025-11-04T23:56:59.162314576Z" level=info msg="connecting to shim be3ba75d79be481b03e3e36b570d573a8a04ddbd051ef27e8745006b8937f023" address="unix:///run/containerd/s/43a749ae6fdab7f55591a003c7576989c7765552cf0c67e200d1aeb5165673e4" protocol=ttrpc version=3
Nov 4 23:56:59.166982 containerd[1636]: time="2025-11-04T23:56:59.165981229Z" level=info msg="CreateContainer within sandbox \"41c5791688e420a8f16ea834c95f445db165d49a6908ef352fb22a756e5ca10f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"345e24f9369cfc6d8c85dd5e6f7bc3b67f6e9bd04db9b97a5e860331c54d35d6\""
Nov 4 23:56:59.166982 containerd[1636]: time="2025-11-04T23:56:59.166709503Z" level=info msg="StartContainer for \"345e24f9369cfc6d8c85dd5e6f7bc3b67f6e9bd04db9b97a5e860331c54d35d6\""
Nov 4 23:56:59.168282 containerd[1636]: time="2025-11-04T23:56:59.168253176Z" level=info msg="connecting to shim 345e24f9369cfc6d8c85dd5e6f7bc3b67f6e9bd04db9b97a5e860331c54d35d6" address="unix:///run/containerd/s/31859dd9d70a6fedcdb12aa094b616ef1966fd64a0a297d40afe8bc181718342" protocol=ttrpc version=3
Nov 4 23:56:59.181183 systemd[1]: Started
cri-containerd-124d5a782582b3b1be18176e7e8756e45d19ffdd78071e9c37ca1845e8677a86.scope - libcontainer container 124d5a782582b3b1be18176e7e8756e45d19ffdd78071e9c37ca1845e8677a86.
Nov 4 23:56:59.189117 systemd[1]: Started cri-containerd-345e24f9369cfc6d8c85dd5e6f7bc3b67f6e9bd04db9b97a5e860331c54d35d6.scope - libcontainer container 345e24f9369cfc6d8c85dd5e6f7bc3b67f6e9bd04db9b97a5e860331c54d35d6.
Nov 4 23:56:59.191765 systemd[1]: Started cri-containerd-be3ba75d79be481b03e3e36b570d573a8a04ddbd051ef27e8745006b8937f023.scope - libcontainer container be3ba75d79be481b03e3e36b570d573a8a04ddbd051ef27e8745006b8937f023.
Nov 4 23:56:59.193643 kubelet[2463]: I1104 23:56:59.193619 2463 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 4 23:56:59.194253 kubelet[2463]: E1104 23:56:59.194176 2463 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost"
Nov 4 23:56:59.281436 containerd[1636]: time="2025-11-04T23:56:59.281298991Z" level=info msg="StartContainer for \"124d5a782582b3b1be18176e7e8756e45d19ffdd78071e9c37ca1845e8677a86\" returns successfully"
Nov 4 23:56:59.291446 containerd[1636]: time="2025-11-04T23:56:59.291391554Z" level=info msg="StartContainer for \"345e24f9369cfc6d8c85dd5e6f7bc3b67f6e9bd04db9b97a5e860331c54d35d6\" returns successfully"
Nov 4 23:56:59.400473 containerd[1636]: time="2025-11-04T23:56:59.400414881Z" level=info msg="StartContainer for \"be3ba75d79be481b03e3e36b570d573a8a04ddbd051ef27e8745006b8937f023\" returns successfully"
Nov 4 23:56:59.435882 kubelet[2463]: E1104 23:56:59.435843 2463 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 4 23:56:59.436267 kubelet[2463]: E1104 23:56:59.435995 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:56:59.441967 kubelet[2463]: E1104 23:56:59.441263 2463 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 4 23:56:59.441967 kubelet[2463]: E1104 23:56:59.441386 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:56:59.442045 kubelet[2463]: E1104 23:56:59.441997 2463 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 4 23:56:59.442118 kubelet[2463]: E1104 23:56:59.442088 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:56:59.996585 kubelet[2463]: I1104 23:56:59.996536 2463 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 4 23:57:00.444993 kubelet[2463]: E1104 23:57:00.444841 2463 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 4 23:57:00.445418 kubelet[2463]: E1104 23:57:00.445057 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:57:00.445418 kubelet[2463]: E1104 23:57:00.445135 2463 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 4 23:57:00.445418 kubelet[2463]: E1104 23:57:00.445318 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1
1.0.0.1 8.8.8.8"
Nov 4 23:57:00.904017 kubelet[2463]: E1104 23:57:00.903964 2463 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Nov 4 23:57:01.101099 kubelet[2463]: I1104 23:57:01.101032 2463 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 4 23:57:01.167737 kubelet[2463]: I1104 23:57:01.167565 2463 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 4 23:57:01.510118 kubelet[2463]: E1104 23:57:01.509578 2463 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Nov 4 23:57:01.510118 kubelet[2463]: I1104 23:57:01.509622 2463 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 4 23:57:01.522153 kubelet[2463]: E1104 23:57:01.520765 2463 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Nov 4 23:57:01.522153 kubelet[2463]: I1104 23:57:01.520803 2463 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 4 23:57:01.524262 kubelet[2463]: E1104 23:57:01.524204 2463 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Nov 4 23:57:01.653604 kubelet[2463]: I1104 23:57:01.653526 2463 apiserver.go:52] "Watching apiserver"
Nov 4 23:57:01.667566 kubelet[2463]: I1104 23:57:01.667520 2463 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 4 23:57:02.332940 kubelet[2463]: I1104 23:57:02.332892 2463
kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 4 23:57:02.337369 kubelet[2463]: E1104 23:57:02.337324 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:57:02.446574 kubelet[2463]: E1104 23:57:02.446529 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:57:03.180777 systemd[1]: Reload requested from client PID 2750 ('systemctl') (unit session-7.scope)...
Nov 4 23:57:03.180797 systemd[1]: Reloading...
Nov 4 23:57:03.276045 zram_generator::config[2795]: No configuration found.
Nov 4 23:57:03.522125 systemd[1]: Reloading finished in 340 ms.
Nov 4 23:57:03.551200 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:57:03.577702 systemd[1]: kubelet.service: Deactivated successfully.
Nov 4 23:57:03.578099 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:57:03.578167 systemd[1]: kubelet.service: Consumed 1.181s CPU time, 131.8M memory peak.
Nov 4 23:57:03.580387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:57:03.840499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:57:03.848687 (kubelet)[2839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 4 23:57:03.917509 kubelet[2839]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 4 23:57:03.917509 kubelet[2839]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35.
Image garbage collector will get sandbox image information from CRI.
Nov 4 23:57:03.917509 kubelet[2839]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 4 23:57:03.918102 kubelet[2839]: I1104 23:57:03.917561 2839 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 4 23:57:03.926502 kubelet[2839]: I1104 23:57:03.926446 2839 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 4 23:57:03.926502 kubelet[2839]: I1104 23:57:03.926487 2839 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 4 23:57:03.926832 kubelet[2839]: I1104 23:57:03.926804 2839 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 4 23:57:03.928578 kubelet[2839]: I1104 23:57:03.928555 2839 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Nov 4 23:57:03.932041 kubelet[2839]: I1104 23:57:03.931977 2839 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 4 23:57:03.938636 kubelet[2839]: I1104 23:57:03.938599 2839 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 4 23:57:03.944556 kubelet[2839]: I1104 23:57:03.944510 2839 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified.
defaulting to /"
Nov 4 23:57:03.944873 kubelet[2839]: I1104 23:57:03.944814 2839 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 4 23:57:03.945082 kubelet[2839]: I1104 23:57:03.944855 2839 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 4 23:57:03.945216 kubelet[2839]: I1104 23:57:03.945091 2839 topology_manager.go:138] "Creating topology manager with none policy"
Nov 4 23:57:03.945216
kubelet[2839]: I1104 23:57:03.945105 2839 container_manager_linux.go:303] "Creating device plugin manager"
Nov 4 23:57:03.945216 kubelet[2839]: I1104 23:57:03.945160 2839 state_mem.go:36] "Initialized new in-memory state store"
Nov 4 23:57:03.945399 kubelet[2839]: I1104 23:57:03.945374 2839 kubelet.go:480] "Attempting to sync node with API server"
Nov 4 23:57:03.945399 kubelet[2839]: I1104 23:57:03.945392 2839 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 4 23:57:03.945399 kubelet[2839]: I1104 23:57:03.945419 2839 kubelet.go:386] "Adding apiserver pod source"
Nov 4 23:57:03.945399 kubelet[2839]: I1104 23:57:03.945456 2839 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 4 23:57:03.947438 kubelet[2839]: I1104 23:57:03.947351 2839 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 4 23:57:03.950354 kubelet[2839]: I1104 23:57:03.948323 2839 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 4 23:57:03.954804 kubelet[2839]: I1104 23:57:03.954741 2839 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 4 23:57:03.954804 kubelet[2839]: I1104 23:57:03.954808 2839 server.go:1289] "Started kubelet"
Nov 4 23:57:03.955048 kubelet[2839]: I1104 23:57:03.955017 2839 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 4 23:57:03.956086 kubelet[2839]: I1104 23:57:03.956066 2839 server.go:317] "Adding debug handlers to kubelet server"
Nov 4 23:57:03.960378 kubelet[2839]: I1104 23:57:03.960080 2839 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 4 23:57:03.967289 kubelet[2839]: E1104 23:57:03.967240 2839 kubelet.go:1600] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 4 23:57:03.967562 kubelet[2839]: I1104 23:57:03.967495 2839 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 4 23:57:03.968971 kubelet[2839]: I1104 23:57:03.968038 2839 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 4 23:57:03.968971 kubelet[2839]: I1104 23:57:03.968417 2839 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 4 23:57:03.969822 kubelet[2839]: I1104 23:57:03.969766 2839 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 4 23:57:03.970749 kubelet[2839]: I1104 23:57:03.970691 2839 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 4 23:57:03.971021 kubelet[2839]: I1104 23:57:03.970996 2839 reconciler.go:26] "Reconciler: start to sync state"
Nov 4 23:57:03.975361 kubelet[2839]: I1104 23:57:03.975328 2839 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 4 23:57:03.978301 kubelet[2839]: I1104 23:57:03.978274 2839 factory.go:223] Registration of the containerd container factory successfully
Nov 4 23:57:03.978448 kubelet[2839]: I1104 23:57:03.978425 2839 factory.go:223] Registration of the systemd container factory successfully
Nov 4 23:57:03.986992 kubelet[2839]: I1104 23:57:03.986867 2839 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 4 23:57:03.988691 kubelet[2839]: I1104 23:57:03.988651 2839 kubelet_network_linux.go:49] "Initialized iptables rules."
protocol="IPv6"
Nov 4 23:57:03.988691 kubelet[2839]: I1104 23:57:03.988687 2839 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 4 23:57:03.988813 kubelet[2839]: I1104 23:57:03.988717 2839 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 4 23:57:03.988813 kubelet[2839]: I1104 23:57:03.988727 2839 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 4 23:57:03.988813 kubelet[2839]: E1104 23:57:03.988776 2839 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 4 23:57:04.027098 kubelet[2839]: I1104 23:57:04.027059 2839 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 4 23:57:04.027098 kubelet[2839]: I1104 23:57:04.027078 2839 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 4 23:57:04.027098 kubelet[2839]: I1104 23:57:04.027098 2839 state_mem.go:36] "Initialized new in-memory state store"
Nov 4 23:57:04.027346 kubelet[2839]: I1104 23:57:04.027235 2839 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 4 23:57:04.027346 kubelet[2839]: I1104 23:57:04.027247 2839 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 4 23:57:04.027346 kubelet[2839]: I1104 23:57:04.027262 2839 policy_none.go:49] "None policy: Start"
Nov 4 23:57:04.027346 kubelet[2839]: I1104 23:57:04.027271 2839 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 4 23:57:04.027346 kubelet[2839]: I1104 23:57:04.027281 2839 state_mem.go:35] "Initializing new in-memory state store"
Nov 4 23:57:04.027490 kubelet[2839]: I1104 23:57:04.027360 2839 state_mem.go:75] "Updated machine memory state"
Nov 4 23:57:04.031531 kubelet[2839]: E1104 23:57:04.031506 2839 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 4 23:57:04.031708 kubelet[2839]: I1104 23:57:04.031685
2839 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 4 23:57:04.031797 kubelet[2839]: I1104 23:57:04.031702 2839 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 4 23:57:04.031891 kubelet[2839]: I1104 23:57:04.031867 2839 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 4 23:57:04.039524 kubelet[2839]: E1104 23:57:04.039487 2839 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 4 23:57:04.089828 kubelet[2839]: I1104 23:57:04.089757 2839 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 4 23:57:04.090070 kubelet[2839]: I1104 23:57:04.089888 2839 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 4 23:57:04.090070 kubelet[2839]: I1104 23:57:04.089773 2839 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 4 23:57:04.142862 kubelet[2839]: I1104 23:57:04.142683 2839 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 4 23:57:04.172435 kubelet[2839]: I1104 23:57:04.172365 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 23:57:04.172435 kubelet[2839]: I1104 23:57:04.172419 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") "
pod="kube-system/kube-controller-manager-localhost"
Nov 4 23:57:04.172435 kubelet[2839]: I1104 23:57:04.172438 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 23:57:04.172932 kubelet[2839]: I1104 23:57:04.172461 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2809c4dc817367a7570c776f02dba2e9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2809c4dc817367a7570c776f02dba2e9\") " pod="kube-system/kube-apiserver-localhost"
Nov 4 23:57:04.172932 kubelet[2839]: I1104 23:57:04.172479 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2809c4dc817367a7570c776f02dba2e9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2809c4dc817367a7570c776f02dba2e9\") " pod="kube-system/kube-apiserver-localhost"
Nov 4 23:57:04.172932 kubelet[2839]: I1104 23:57:04.172563 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 23:57:04.172932 kubelet[2839]: I1104 23:57:04.172659 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") "
pod="kube-system/kube-controller-manager-localhost"
Nov 4 23:57:04.172932 kubelet[2839]: I1104 23:57:04.172699 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost"
Nov 4 23:57:04.173095 kubelet[2839]: I1104 23:57:04.172728 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2809c4dc817367a7570c776f02dba2e9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2809c4dc817367a7570c776f02dba2e9\") " pod="kube-system/kube-apiserver-localhost"
Nov 4 23:57:04.254523 kubelet[2839]: E1104 23:57:04.254470 2839 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Nov 4 23:57:04.258124 kubelet[2839]: I1104 23:57:04.258079 2839 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Nov 4 23:57:04.258261 kubelet[2839]: I1104 23:57:04.258180 2839 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 4 23:57:04.543594 kubelet[2839]: E1104 23:57:04.543523 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:57:04.543594 kubelet[2839]: E1104 23:57:04.543601 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:57:04.555908 kubelet[2839]: E1104 23:57:04.555855 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:57:04.947037 kubelet[2839]: I1104 23:57:04.946825 2839 apiserver.go:52] "Watching apiserver"
Nov 4 23:57:04.971656 kubelet[2839]: I1104 23:57:04.971607 2839 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 4 23:57:05.006566 kubelet[2839]: I1104 23:57:05.006535 2839 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 4 23:57:05.006749 kubelet[2839]: I1104 23:57:05.006631 2839 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 4 23:57:05.006749 kubelet[2839]: E1104 23:57:05.006714 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:57:05.240614 kubelet[2839]: E1104 23:57:05.239387 2839 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Nov 4 23:57:05.240614 kubelet[2839]: E1104 23:57:05.239442 2839 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Nov 4 23:57:05.240614 kubelet[2839]: E1104 23:57:05.239638 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:57:05.240614 kubelet[2839]: E1104 23:57:05.239643 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:57:05.240614 kubelet[2839]: I1104 23:57:05.239758 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost"
podStartSLOduration=3.239728773 podStartE2EDuration="3.239728773s" podCreationTimestamp="2025-11-04 23:57:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:57:05.238715711 +0000 UTC m=+1.383205116" watchObservedRunningTime="2025-11-04 23:57:05.239728773 +0000 UTC m=+1.384218178"
Nov 4 23:57:05.558849 kubelet[2839]: I1104 23:57:05.558738 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.5587200079999999 podStartE2EDuration="1.558720008s" podCreationTimestamp="2025-11-04 23:57:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:57:05.522940042 +0000 UTC m=+1.667429447" watchObservedRunningTime="2025-11-04 23:57:05.558720008 +0000 UTC m=+1.703209413"
Nov 4 23:57:05.559101 kubelet[2839]: I1104 23:57:05.558900 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.558895613 podStartE2EDuration="1.558895613s" podCreationTimestamp="2025-11-04 23:57:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:57:05.558523419 +0000 UTC m=+1.703012824" watchObservedRunningTime="2025-11-04 23:57:05.558895613 +0000 UTC m=+1.703385018"
Nov 4 23:57:06.008768 kubelet[2839]: E1104 23:57:06.008596 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:57:06.009276 kubelet[2839]: E1104 23:57:06.008864 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:57:06.009276
kubelet[2839]: E1104 23:57:06.008892 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:57:07.011679 kubelet[2839]: E1104 23:57:07.011636 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:57:07.012269 kubelet[2839]: E1104 23:57:07.011869 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:57:09.092195 kubelet[2839]: I1104 23:57:09.092046 2839 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 4 23:57:09.092676 kubelet[2839]: I1104 23:57:09.092591 2839 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 4 23:57:09.092708 containerd[1636]: time="2025-11-04T23:57:09.092400048Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 4 23:57:10.103679 systemd[1]: Created slice kubepods-besteffort-pod03984661_3e45_409b_a00b_e9aa985e3665.slice - libcontainer container kubepods-besteffort-pod03984661_3e45_409b_a00b_e9aa985e3665.slice.
Nov 4 23:57:10.108992 kubelet[2839]: I1104 23:57:10.108925 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frmzd\" (UniqueName: \"kubernetes.io/projected/03984661-3e45-409b-a00b-e9aa985e3665-kube-api-access-frmzd\") pod \"kube-proxy-gf5qn\" (UID: \"03984661-3e45-409b-a00b-e9aa985e3665\") " pod="kube-system/kube-proxy-gf5qn" Nov 4 23:57:10.109414 kubelet[2839]: I1104 23:57:10.109001 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/03984661-3e45-409b-a00b-e9aa985e3665-kube-proxy\") pod \"kube-proxy-gf5qn\" (UID: \"03984661-3e45-409b-a00b-e9aa985e3665\") " pod="kube-system/kube-proxy-gf5qn" Nov 4 23:57:10.109414 kubelet[2839]: I1104 23:57:10.109027 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03984661-3e45-409b-a00b-e9aa985e3665-xtables-lock\") pod \"kube-proxy-gf5qn\" (UID: \"03984661-3e45-409b-a00b-e9aa985e3665\") " pod="kube-system/kube-proxy-gf5qn" Nov 4 23:57:10.109414 kubelet[2839]: I1104 23:57:10.109051 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03984661-3e45-409b-a00b-e9aa985e3665-lib-modules\") pod \"kube-proxy-gf5qn\" (UID: \"03984661-3e45-409b-a00b-e9aa985e3665\") " pod="kube-system/kube-proxy-gf5qn" Nov 4 23:57:10.265545 systemd[1]: Created slice kubepods-besteffort-pod075a9221_6895_4852_9242_6a1acf13492b.slice - libcontainer container kubepods-besteffort-pod075a9221_6895_4852_9242_6a1acf13492b.slice. 
Nov 4 23:57:10.310622 kubelet[2839]: I1104 23:57:10.310515 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/075a9221-6895-4852-9242-6a1acf13492b-var-lib-calico\") pod \"tigera-operator-7dcd859c48-srjrj\" (UID: \"075a9221-6895-4852-9242-6a1acf13492b\") " pod="tigera-operator/tigera-operator-7dcd859c48-srjrj" Nov 4 23:57:10.310622 kubelet[2839]: I1104 23:57:10.310624 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8km52\" (UniqueName: \"kubernetes.io/projected/075a9221-6895-4852-9242-6a1acf13492b-kube-api-access-8km52\") pod \"tigera-operator-7dcd859c48-srjrj\" (UID: \"075a9221-6895-4852-9242-6a1acf13492b\") " pod="tigera-operator/tigera-operator-7dcd859c48-srjrj" Nov 4 23:57:10.413614 kubelet[2839]: E1104 23:57:10.413379 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:57:10.414502 containerd[1636]: time="2025-11-04T23:57:10.414460353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gf5qn,Uid:03984661-3e45-409b-a00b-e9aa985e3665,Namespace:kube-system,Attempt:0,}" Nov 4 23:57:10.460991 containerd[1636]: time="2025-11-04T23:57:10.459463315Z" level=info msg="connecting to shim 4316455f9dcf0eaea85c1401ecfe3a9ddf3ac6a3ac5646cf1f3f9933b9c1e2e5" address="unix:///run/containerd/s/c95106ae5b12852856fb7df342af80e47098384a25600f6e20a5a024033817a4" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:57:10.533276 systemd[1]: Started cri-containerd-4316455f9dcf0eaea85c1401ecfe3a9ddf3ac6a3ac5646cf1f3f9933b9c1e2e5.scope - libcontainer container 4316455f9dcf0eaea85c1401ecfe3a9ddf3ac6a3ac5646cf1f3f9933b9c1e2e5. 
Nov 4 23:57:10.569879 containerd[1636]: time="2025-11-04T23:57:10.569828785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-srjrj,Uid:075a9221-6895-4852-9242-6a1acf13492b,Namespace:tigera-operator,Attempt:0,}" Nov 4 23:57:10.626471 containerd[1636]: time="2025-11-04T23:57:10.626418997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gf5qn,Uid:03984661-3e45-409b-a00b-e9aa985e3665,Namespace:kube-system,Attempt:0,} returns sandbox id \"4316455f9dcf0eaea85c1401ecfe3a9ddf3ac6a3ac5646cf1f3f9933b9c1e2e5\"" Nov 4 23:57:10.627180 kubelet[2839]: E1104 23:57:10.627152 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:57:10.651623 containerd[1636]: time="2025-11-04T23:57:10.651568020Z" level=info msg="CreateContainer within sandbox \"4316455f9dcf0eaea85c1401ecfe3a9ddf3ac6a3ac5646cf1f3f9933b9c1e2e5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 4 23:57:10.667224 containerd[1636]: time="2025-11-04T23:57:10.666032214Z" level=info msg="Container 3d0cfd90a98ff6cee6795137dd6b6cc88bded41c9c08b0a949a84a9fb6c63585: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:57:10.677404 containerd[1636]: time="2025-11-04T23:57:10.677354207Z" level=info msg="CreateContainer within sandbox \"4316455f9dcf0eaea85c1401ecfe3a9ddf3ac6a3ac5646cf1f3f9933b9c1e2e5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3d0cfd90a98ff6cee6795137dd6b6cc88bded41c9c08b0a949a84a9fb6c63585\"" Nov 4 23:57:10.678276 containerd[1636]: time="2025-11-04T23:57:10.678186663Z" level=info msg="StartContainer for \"3d0cfd90a98ff6cee6795137dd6b6cc88bded41c9c08b0a949a84a9fb6c63585\"" Nov 4 23:57:10.679836 containerd[1636]: time="2025-11-04T23:57:10.679794700Z" level=info msg="connecting to shim 10393a4e7e075b28f0929a5c009321c9e7e6996f85aa6380bded4625e1d55a80" 
address="unix:///run/containerd/s/4f7f727a37bd80fc4c1c3e0420455fa80cbce4eaa69f64d1a8b2a24f19814cb8" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:57:10.680410 containerd[1636]: time="2025-11-04T23:57:10.680378584Z" level=info msg="connecting to shim 3d0cfd90a98ff6cee6795137dd6b6cc88bded41c9c08b0a949a84a9fb6c63585" address="unix:///run/containerd/s/c95106ae5b12852856fb7df342af80e47098384a25600f6e20a5a024033817a4" protocol=ttrpc version=3 Nov 4 23:57:10.714134 systemd[1]: Started cri-containerd-10393a4e7e075b28f0929a5c009321c9e7e6996f85aa6380bded4625e1d55a80.scope - libcontainer container 10393a4e7e075b28f0929a5c009321c9e7e6996f85aa6380bded4625e1d55a80. Nov 4 23:57:10.715826 systemd[1]: Started cri-containerd-3d0cfd90a98ff6cee6795137dd6b6cc88bded41c9c08b0a949a84a9fb6c63585.scope - libcontainer container 3d0cfd90a98ff6cee6795137dd6b6cc88bded41c9c08b0a949a84a9fb6c63585. Nov 4 23:57:10.763395 containerd[1636]: time="2025-11-04T23:57:10.763344650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-srjrj,Uid:075a9221-6895-4852-9242-6a1acf13492b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"10393a4e7e075b28f0929a5c009321c9e7e6996f85aa6380bded4625e1d55a80\"" Nov 4 23:57:10.766725 containerd[1636]: time="2025-11-04T23:57:10.766686403Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 4 23:57:10.771670 containerd[1636]: time="2025-11-04T23:57:10.771587041Z" level=info msg="StartContainer for \"3d0cfd90a98ff6cee6795137dd6b6cc88bded41c9c08b0a949a84a9fb6c63585\" returns successfully" Nov 4 23:57:11.019364 kubelet[2839]: E1104 23:57:11.018994 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:57:11.029876 kubelet[2839]: I1104 23:57:11.029785 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gf5qn" 
podStartSLOduration=1.029760846 podStartE2EDuration="1.029760846s" podCreationTimestamp="2025-11-04 23:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:57:11.029307676 +0000 UTC m=+7.173797081" watchObservedRunningTime="2025-11-04 23:57:11.029760846 +0000 UTC m=+7.174250251" Nov 4 23:57:11.386517 kubelet[2839]: E1104 23:57:11.386470 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:57:12.020014 kubelet[2839]: E1104 23:57:12.019941 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:57:12.070266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1481571966.mount: Deactivated successfully. Nov 4 23:57:12.420891 containerd[1636]: time="2025-11-04T23:57:12.420828663Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:57:12.421515 containerd[1636]: time="2025-11-04T23:57:12.421488208Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 4 23:57:12.422572 containerd[1636]: time="2025-11-04T23:57:12.422540364Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:57:12.424539 containerd[1636]: time="2025-11-04T23:57:12.424507247Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:57:12.425142 containerd[1636]: time="2025-11-04T23:57:12.425096468Z" level=info 
msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.658371848s" Nov 4 23:57:12.425178 containerd[1636]: time="2025-11-04T23:57:12.425141129Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 4 23:57:12.429888 containerd[1636]: time="2025-11-04T23:57:12.429842720Z" level=info msg="CreateContainer within sandbox \"10393a4e7e075b28f0929a5c009321c9e7e6996f85aa6380bded4625e1d55a80\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 4 23:57:12.439148 containerd[1636]: time="2025-11-04T23:57:12.439099032Z" level=info msg="Container c7d6d148af1be61c303e4264efde3a44a2c632d90e85a138bddd4d784d34842d: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:57:12.445107 containerd[1636]: time="2025-11-04T23:57:12.445039990Z" level=info msg="CreateContainer within sandbox \"10393a4e7e075b28f0929a5c009321c9e7e6996f85aa6380bded4625e1d55a80\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c7d6d148af1be61c303e4264efde3a44a2c632d90e85a138bddd4d784d34842d\"" Nov 4 23:57:12.445674 containerd[1636]: time="2025-11-04T23:57:12.445647550Z" level=info msg="StartContainer for \"c7d6d148af1be61c303e4264efde3a44a2c632d90e85a138bddd4d784d34842d\"" Nov 4 23:57:12.446674 containerd[1636]: time="2025-11-04T23:57:12.446621748Z" level=info msg="connecting to shim c7d6d148af1be61c303e4264efde3a44a2c632d90e85a138bddd4d784d34842d" address="unix:///run/containerd/s/4f7f727a37bd80fc4c1c3e0420455fa80cbce4eaa69f64d1a8b2a24f19814cb8" protocol=ttrpc version=3 Nov 4 23:57:12.480231 systemd[1]: Started 
cri-containerd-c7d6d148af1be61c303e4264efde3a44a2c632d90e85a138bddd4d784d34842d.scope - libcontainer container c7d6d148af1be61c303e4264efde3a44a2c632d90e85a138bddd4d784d34842d. Nov 4 23:57:12.518498 containerd[1636]: time="2025-11-04T23:57:12.518384324Z" level=info msg="StartContainer for \"c7d6d148af1be61c303e4264efde3a44a2c632d90e85a138bddd4d784d34842d\" returns successfully" Nov 4 23:57:13.032551 kubelet[2839]: I1104 23:57:13.032478 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-srjrj" podStartSLOduration=1.373103583 podStartE2EDuration="3.032461899s" podCreationTimestamp="2025-11-04 23:57:10 +0000 UTC" firstStartedPulling="2025-11-04 23:57:10.766368127 +0000 UTC m=+6.910857532" lastFinishedPulling="2025-11-04 23:57:12.425726443 +0000 UTC m=+8.570215848" observedRunningTime="2025-11-04 23:57:13.032421637 +0000 UTC m=+9.176911062" watchObservedRunningTime="2025-11-04 23:57:13.032461899 +0000 UTC m=+9.176951304" Nov 4 23:57:15.526217 kubelet[2839]: E1104 23:57:15.526157 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:57:16.027393 kubelet[2839]: E1104 23:57:16.027357 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:57:16.638792 kubelet[2839]: E1104 23:57:16.638744 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:57:17.029214 kubelet[2839]: E1104 23:57:17.029075 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:57:17.926538 sudo[1848]: 
pam_unix(sudo:session): session closed for user root Nov 4 23:57:17.930325 sshd[1847]: Connection closed by 10.0.0.1 port 43892 Nov 4 23:57:17.931659 sshd-session[1844]: pam_unix(sshd:session): session closed for user core Nov 4 23:57:17.937397 systemd[1]: sshd@6-10.0.0.112:22-10.0.0.1:43892.service: Deactivated successfully. Nov 4 23:57:17.940624 systemd[1]: session-7.scope: Deactivated successfully. Nov 4 23:57:17.940874 systemd[1]: session-7.scope: Consumed 6.647s CPU time, 217.7M memory peak. Nov 4 23:57:17.943863 systemd-logind[1614]: Session 7 logged out. Waiting for processes to exit. Nov 4 23:57:17.946381 systemd-logind[1614]: Removed session 7. Nov 4 23:57:22.234143 systemd[1]: Created slice kubepods-besteffort-pod0d7cddc0_6a5b_4f1c_b8c2_0b6cf90b96d2.slice - libcontainer container kubepods-besteffort-pod0d7cddc0_6a5b_4f1c_b8c2_0b6cf90b96d2.slice. Nov 4 23:57:22.291487 kubelet[2839]: I1104 23:57:22.291440 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn7k5\" (UniqueName: \"kubernetes.io/projected/0d7cddc0-6a5b-4f1c-b8c2-0b6cf90b96d2-kube-api-access-hn7k5\") pod \"calico-typha-68dbf5fdfb-89snq\" (UID: \"0d7cddc0-6a5b-4f1c-b8c2-0b6cf90b96d2\") " pod="calico-system/calico-typha-68dbf5fdfb-89snq" Nov 4 23:57:22.291487 kubelet[2839]: I1104 23:57:22.291483 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d7cddc0-6a5b-4f1c-b8c2-0b6cf90b96d2-tigera-ca-bundle\") pod \"calico-typha-68dbf5fdfb-89snq\" (UID: \"0d7cddc0-6a5b-4f1c-b8c2-0b6cf90b96d2\") " pod="calico-system/calico-typha-68dbf5fdfb-89snq" Nov 4 23:57:22.291487 kubelet[2839]: I1104 23:57:22.291509 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0d7cddc0-6a5b-4f1c-b8c2-0b6cf90b96d2-typha-certs\") pod 
\"calico-typha-68dbf5fdfb-89snq\" (UID: \"0d7cddc0-6a5b-4f1c-b8c2-0b6cf90b96d2\") " pod="calico-system/calico-typha-68dbf5fdfb-89snq" Nov 4 23:57:22.334474 systemd[1]: Created slice kubepods-besteffort-podbcbd8737_95cf_4971_a218_519d23c86edf.slice - libcontainer container kubepods-besteffort-podbcbd8737_95cf_4971_a218_519d23c86edf.slice. Nov 4 23:57:22.392081 kubelet[2839]: I1104 23:57:22.392010 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bcbd8737-95cf-4971-a218-519d23c86edf-var-lib-calico\") pod \"calico-node-9m5mc\" (UID: \"bcbd8737-95cf-4971-a218-519d23c86edf\") " pod="calico-system/calico-node-9m5mc" Nov 4 23:57:22.392081 kubelet[2839]: I1104 23:57:22.392062 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bcbd8737-95cf-4971-a218-519d23c86edf-node-certs\") pod \"calico-node-9m5mc\" (UID: \"bcbd8737-95cf-4971-a218-519d23c86edf\") " pod="calico-system/calico-node-9m5mc" Nov 4 23:57:22.392081 kubelet[2839]: I1104 23:57:22.392079 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcbd8737-95cf-4971-a218-519d23c86edf-tigera-ca-bundle\") pod \"calico-node-9m5mc\" (UID: \"bcbd8737-95cf-4971-a218-519d23c86edf\") " pod="calico-system/calico-node-9m5mc" Nov 4 23:57:22.392346 kubelet[2839]: I1104 23:57:22.392104 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bcbd8737-95cf-4971-a218-519d23c86edf-cni-bin-dir\") pod \"calico-node-9m5mc\" (UID: \"bcbd8737-95cf-4971-a218-519d23c86edf\") " pod="calico-system/calico-node-9m5mc" Nov 4 23:57:22.392346 kubelet[2839]: I1104 23:57:22.392120 2839 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bcbd8737-95cf-4971-a218-519d23c86edf-lib-modules\") pod \"calico-node-9m5mc\" (UID: \"bcbd8737-95cf-4971-a218-519d23c86edf\") " pod="calico-system/calico-node-9m5mc" Nov 4 23:57:22.392346 kubelet[2839]: I1104 23:57:22.392135 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bcbd8737-95cf-4971-a218-519d23c86edf-policysync\") pod \"calico-node-9m5mc\" (UID: \"bcbd8737-95cf-4971-a218-519d23c86edf\") " pod="calico-system/calico-node-9m5mc" Nov 4 23:57:22.392346 kubelet[2839]: I1104 23:57:22.392152 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bcbd8737-95cf-4971-a218-519d23c86edf-flexvol-driver-host\") pod \"calico-node-9m5mc\" (UID: \"bcbd8737-95cf-4971-a218-519d23c86edf\") " pod="calico-system/calico-node-9m5mc" Nov 4 23:57:22.392346 kubelet[2839]: I1104 23:57:22.392171 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bcbd8737-95cf-4971-a218-519d23c86edf-var-run-calico\") pod \"calico-node-9m5mc\" (UID: \"bcbd8737-95cf-4971-a218-519d23c86edf\") " pod="calico-system/calico-node-9m5mc" Nov 4 23:57:22.392537 kubelet[2839]: I1104 23:57:22.392186 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bcbd8737-95cf-4971-a218-519d23c86edf-cni-net-dir\") pod \"calico-node-9m5mc\" (UID: \"bcbd8737-95cf-4971-a218-519d23c86edf\") " pod="calico-system/calico-node-9m5mc" Nov 4 23:57:22.392537 kubelet[2839]: I1104 23:57:22.392233 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bcbd8737-95cf-4971-a218-519d23c86edf-xtables-lock\") pod \"calico-node-9m5mc\" (UID: \"bcbd8737-95cf-4971-a218-519d23c86edf\") " pod="calico-system/calico-node-9m5mc" Nov 4 23:57:22.392537 kubelet[2839]: I1104 23:57:22.392250 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bcbd8737-95cf-4971-a218-519d23c86edf-cni-log-dir\") pod \"calico-node-9m5mc\" (UID: \"bcbd8737-95cf-4971-a218-519d23c86edf\") " pod="calico-system/calico-node-9m5mc" Nov 4 23:57:22.392537 kubelet[2839]: I1104 23:57:22.392267 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf79j\" (UniqueName: \"kubernetes.io/projected/bcbd8737-95cf-4971-a218-519d23c86edf-kube-api-access-kf79j\") pod \"calico-node-9m5mc\" (UID: \"bcbd8737-95cf-4971-a218-519d23c86edf\") " pod="calico-system/calico-node-9m5mc" Nov 4 23:57:22.498647 kubelet[2839]: E1104 23:57:22.495998 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.498647 kubelet[2839]: W1104 23:57:22.496055 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.498647 kubelet[2839]: E1104 23:57:22.496090 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:22.502595 kubelet[2839]: E1104 23:57:22.502518 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.502595 kubelet[2839]: W1104 23:57:22.502557 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.502595 kubelet[2839]: E1104 23:57:22.502582 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:22.505647 kubelet[2839]: E1104 23:57:22.505621 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.505647 kubelet[2839]: W1104 23:57:22.505641 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.505760 kubelet[2839]: E1104 23:57:22.505659 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:22.531742 kubelet[2839]: E1104 23:57:22.531687 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7k7w8" podUID="9096f1c3-7da9-48d9-beff-7b6f2057f511" Nov 4 23:57:22.539424 kubelet[2839]: E1104 23:57:22.539045 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:57:22.539852 containerd[1636]: time="2025-11-04T23:57:22.539808372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68dbf5fdfb-89snq,Uid:0d7cddc0-6a5b-4f1c-b8c2-0b6cf90b96d2,Namespace:calico-system,Attempt:0,}" Nov 4 23:57:22.563745 containerd[1636]: time="2025-11-04T23:57:22.563559636Z" level=info msg="connecting to shim ff8be8769a8bc66118e8976b84acee96860c20c9e99e1416c0a15c4cbc2a8e03" address="unix:///run/containerd/s/ffd66d4b55768a396e0181cc71734775c286574336f3e22aabad784759172b0c" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:57:22.584683 kubelet[2839]: E1104 23:57:22.584636 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.584928 kubelet[2839]: W1104 23:57:22.584771 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.584928 kubelet[2839]: E1104 23:57:22.584793 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:22.591048 kubelet[2839]: E1104 23:57:22.590991 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.591048 kubelet[2839]: W1104 23:57:22.591005 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.591048 kubelet[2839]: E1104 23:57:22.591018 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:22.591271 kubelet[2839]: E1104 23:57:22.591246 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.591329 kubelet[2839]: W1104 23:57:22.591262 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.591329 kubelet[2839]: E1104 23:57:22.591302 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:22.591414 systemd[1]: Started cri-containerd-ff8be8769a8bc66118e8976b84acee96860c20c9e99e1416c0a15c4cbc2a8e03.scope - libcontainer container ff8be8769a8bc66118e8976b84acee96860c20c9e99e1416c0a15c4cbc2a8e03. 
Nov 4 23:57:22.591807 kubelet[2839]: E1104 23:57:22.591515 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.591807 kubelet[2839]: W1104 23:57:22.591525 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.591807 kubelet[2839]: E1104 23:57:22.591554 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:22.595982 kubelet[2839]: E1104 23:57:22.595371 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.596165 kubelet[2839]: W1104 23:57:22.596141 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.596285 kubelet[2839]: E1104 23:57:22.596255 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:22.596399 kubelet[2839]: I1104 23:57:22.596365 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9096f1c3-7da9-48d9-beff-7b6f2057f511-kubelet-dir\") pod \"csi-node-driver-7k7w8\" (UID: \"9096f1c3-7da9-48d9-beff-7b6f2057f511\") " pod="calico-system/csi-node-driver-7k7w8" Nov 4 23:57:22.596897 kubelet[2839]: E1104 23:57:22.596882 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.597035 kubelet[2839]: W1104 23:57:22.596993 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.597035 kubelet[2839]: E1104 23:57:22.597008 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:22.597747 kubelet[2839]: E1104 23:57:22.597676 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.597865 kubelet[2839]: W1104 23:57:22.597834 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.597865 kubelet[2839]: E1104 23:57:22.597851 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:22.598085 kubelet[2839]: I1104 23:57:22.598001 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9096f1c3-7da9-48d9-beff-7b6f2057f511-socket-dir\") pod \"csi-node-driver-7k7w8\" (UID: \"9096f1c3-7da9-48d9-beff-7b6f2057f511\") " pod="calico-system/csi-node-driver-7k7w8" Nov 4 23:57:22.598728 kubelet[2839]: E1104 23:57:22.598576 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.598728 kubelet[2839]: W1104 23:57:22.598698 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.598728 kubelet[2839]: E1104 23:57:22.598710 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:22.599370 kubelet[2839]: E1104 23:57:22.599304 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.599370 kubelet[2839]: W1104 23:57:22.599321 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.599370 kubelet[2839]: E1104 23:57:22.599333 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:22.600816 kubelet[2839]: E1104 23:57:22.600757 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.600816 kubelet[2839]: W1104 23:57:22.600780 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.601012 kubelet[2839]: E1104 23:57:22.600796 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:22.601235 kubelet[2839]: I1104 23:57:22.601179 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9096f1c3-7da9-48d9-beff-7b6f2057f511-varrun\") pod \"csi-node-driver-7k7w8\" (UID: \"9096f1c3-7da9-48d9-beff-7b6f2057f511\") " pod="calico-system/csi-node-driver-7k7w8" Nov 4 23:57:22.601845 kubelet[2839]: E1104 23:57:22.601792 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.601845 kubelet[2839]: W1104 23:57:22.601809 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.601845 kubelet[2839]: E1104 23:57:22.601825 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:22.602269 kubelet[2839]: E1104 23:57:22.602253 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.602434 kubelet[2839]: W1104 23:57:22.602390 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.602434 kubelet[2839]: E1104 23:57:22.602410 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:22.602846 kubelet[2839]: E1104 23:57:22.602801 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.602846 kubelet[2839]: W1104 23:57:22.602818 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.602846 kubelet[2839]: E1104 23:57:22.602830 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:22.603542 kubelet[2839]: I1104 23:57:22.603039 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt84z\" (UniqueName: \"kubernetes.io/projected/9096f1c3-7da9-48d9-beff-7b6f2057f511-kube-api-access-gt84z\") pod \"csi-node-driver-7k7w8\" (UID: \"9096f1c3-7da9-48d9-beff-7b6f2057f511\") " pod="calico-system/csi-node-driver-7k7w8" Nov 4 23:57:22.603796 kubelet[2839]: E1104 23:57:22.603748 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.604018 kubelet[2839]: W1104 23:57:22.603960 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.604018 kubelet[2839]: E1104 23:57:22.604003 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:22.604521 kubelet[2839]: E1104 23:57:22.604502 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.604618 kubelet[2839]: W1104 23:57:22.604601 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.604807 kubelet[2839]: E1104 23:57:22.604726 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:22.605176 kubelet[2839]: E1104 23:57:22.605149 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.605309 kubelet[2839]: W1104 23:57:22.605269 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.605309 kubelet[2839]: E1104 23:57:22.605284 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:22.605888 kubelet[2839]: E1104 23:57:22.605826 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.605888 kubelet[2839]: W1104 23:57:22.605843 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.605888 kubelet[2839]: E1104 23:57:22.605855 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:22.606454 kubelet[2839]: I1104 23:57:22.606389 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9096f1c3-7da9-48d9-beff-7b6f2057f511-registration-dir\") pod \"csi-node-driver-7k7w8\" (UID: \"9096f1c3-7da9-48d9-beff-7b6f2057f511\") " pod="calico-system/csi-node-driver-7k7w8" Nov 4 23:57:22.606882 kubelet[2839]: E1104 23:57:22.606810 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.606882 kubelet[2839]: W1104 23:57:22.606823 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.606882 kubelet[2839]: E1104 23:57:22.606835 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:22.607391 kubelet[2839]: E1104 23:57:22.607345 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.607443 kubelet[2839]: W1104 23:57:22.607389 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.607475 kubelet[2839]: E1104 23:57:22.607425 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:22.639003 kubelet[2839]: E1104 23:57:22.638918 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:57:22.641328 containerd[1636]: time="2025-11-04T23:57:22.641255325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9m5mc,Uid:bcbd8737-95cf-4971-a218-519d23c86edf,Namespace:calico-system,Attempt:0,}" Nov 4 23:57:22.669529 containerd[1636]: time="2025-11-04T23:57:22.669485131Z" level=info msg="connecting to shim ee71c1be839484c97c458654bb0c704f5d3b5046419b0a767365916e40b67edf" address="unix:///run/containerd/s/3819276a7716797166f43539df7c0e731888c8de9003867db46fb33e689775ef" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:57:22.692003 containerd[1636]: time="2025-11-04T23:57:22.691925434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68dbf5fdfb-89snq,Uid:0d7cddc0-6a5b-4f1c-b8c2-0b6cf90b96d2,Namespace:calico-system,Attempt:0,} returns sandbox id \"ff8be8769a8bc66118e8976b84acee96860c20c9e99e1416c0a15c4cbc2a8e03\"" Nov 4 23:57:22.693171 kubelet[2839]: E1104 23:57:22.693128 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:57:22.694782 containerd[1636]: time="2025-11-04T23:57:22.694744273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 4 23:57:22.707118 kubelet[2839]: E1104 23:57:22.707075 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.707195 kubelet[2839]: W1104 23:57:22.707108 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 
23:57:22.707195 kubelet[2839]: E1104 23:57:22.707163 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:22.707675 kubelet[2839]: E1104 23:57:22.707623 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.707675 kubelet[2839]: W1104 23:57:22.707670 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.707737 kubelet[2839]: E1104 23:57:22.707686 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:22.708204 kubelet[2839]: E1104 23:57:22.708182 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.708204 kubelet[2839]: W1104 23:57:22.708200 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.708275 kubelet[2839]: E1104 23:57:22.708214 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:22.708605 kubelet[2839]: E1104 23:57:22.708583 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.708605 kubelet[2839]: W1104 23:57:22.708600 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.708663 kubelet[2839]: E1104 23:57:22.708613 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:22.709118 kubelet[2839]: E1104 23:57:22.709087 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.709118 kubelet[2839]: W1104 23:57:22.709108 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.709180 kubelet[2839]: E1104 23:57:22.709121 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:22.709507 kubelet[2839]: E1104 23:57:22.709487 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.709507 kubelet[2839]: W1104 23:57:22.709503 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.709573 kubelet[2839]: E1104 23:57:22.709517 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:22.709927 kubelet[2839]: E1104 23:57:22.709906 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.709927 kubelet[2839]: W1104 23:57:22.709924 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.710010 kubelet[2839]: E1104 23:57:22.709938 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:22.710380 kubelet[2839]: E1104 23:57:22.710333 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.710380 kubelet[2839]: W1104 23:57:22.710378 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.710443 kubelet[2839]: E1104 23:57:22.710393 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:22.710808 kubelet[2839]: E1104 23:57:22.710786 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.710808 kubelet[2839]: W1104 23:57:22.710804 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.710865 kubelet[2839]: E1104 23:57:22.710819 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:22.711250 kubelet[2839]: E1104 23:57:22.711227 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.711250 kubelet[2839]: W1104 23:57:22.711244 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.711315 kubelet[2839]: E1104 23:57:22.711258 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:22.711633 kubelet[2839]: E1104 23:57:22.711609 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.711633 kubelet[2839]: W1104 23:57:22.711627 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.711698 kubelet[2839]: E1104 23:57:22.711641 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:22.711931 kubelet[2839]: E1104 23:57:22.711910 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.711931 kubelet[2839]: W1104 23:57:22.711927 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.712020 kubelet[2839]: E1104 23:57:22.711939 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:22.712209 systemd[1]: Started cri-containerd-ee71c1be839484c97c458654bb0c704f5d3b5046419b0a767365916e40b67edf.scope - libcontainer container ee71c1be839484c97c458654bb0c704f5d3b5046419b0a767365916e40b67edf. Nov 4 23:57:22.712479 kubelet[2839]: E1104 23:57:22.712219 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.712479 kubelet[2839]: W1104 23:57:22.712232 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.712479 kubelet[2839]: E1104 23:57:22.712245 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:22.713378 kubelet[2839]: E1104 23:57:22.712632 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.713378 kubelet[2839]: W1104 23:57:22.712653 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.713378 kubelet[2839]: E1104 23:57:22.712667 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:22.713378 kubelet[2839]: E1104 23:57:22.713146 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.713378 kubelet[2839]: W1104 23:57:22.713186 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.713378 kubelet[2839]: E1104 23:57:22.713271 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:22.713850 kubelet[2839]: E1104 23:57:22.713819 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.713903 kubelet[2839]: W1104 23:57:22.713871 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.713903 kubelet[2839]: E1104 23:57:22.713888 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:22.714405 kubelet[2839]: E1104 23:57:22.714381 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:22.714405 kubelet[2839]: W1104 23:57:22.714399 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:22.714500 kubelet[2839]: E1104 23:57:22.714413 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 4 23:57:22.715163 kubelet[2839]: E1104 23:57:22.715130 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:22.715163 kubelet[2839]: W1104 23:57:22.715151 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:22.715241 kubelet[2839]: E1104 23:57:22.715177 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:22.715704 kubelet[2839]: E1104 23:57:22.715672 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:22.715704 kubelet[2839]: W1104 23:57:22.715695 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:22.715802 kubelet[2839]: E1104 23:57:22.715736 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:22.716455 kubelet[2839]: E1104 23:57:22.716418 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:22.716585 kubelet[2839]: W1104 23:57:22.716564 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:22.716666 kubelet[2839]: E1104 23:57:22.716604 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:22.717168 kubelet[2839]: E1104 23:57:22.717131 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:22.717168 kubelet[2839]: W1104 23:57:22.717151 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:22.717168 kubelet[2839]: E1104 23:57:22.717166 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:22.719671 kubelet[2839]: E1104 23:57:22.719622 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:22.719727 kubelet[2839]: W1104 23:57:22.719682 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:22.719727 kubelet[2839]: E1104 23:57:22.719707 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:22.720776 kubelet[2839]: E1104 23:57:22.720238 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:22.720776 kubelet[2839]: W1104 23:57:22.720288 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:22.720776 kubelet[2839]: E1104 23:57:22.720304 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:22.720904 kubelet[2839]: E1104 23:57:22.720820 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:22.720904 kubelet[2839]: W1104 23:57:22.720842 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:22.720904 kubelet[2839]: E1104 23:57:22.720871 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:22.721319 kubelet[2839]: E1104 23:57:22.721295 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:22.721319 kubelet[2839]: W1104 23:57:22.721313 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:22.721405 kubelet[2839]: E1104 23:57:22.721356 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:22.728792 kubelet[2839]: E1104 23:57:22.728749 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:22.728792 kubelet[2839]: W1104 23:57:22.728780 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:22.728792 kubelet[2839]: E1104 23:57:22.728802 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:22.750495 containerd[1636]: time="2025-11-04T23:57:22.750301206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9m5mc,Uid:bcbd8737-95cf-4971-a218-519d23c86edf,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee71c1be839484c97c458654bb0c704f5d3b5046419b0a767365916e40b67edf\""
Nov 4 23:57:22.753065 kubelet[2839]: E1104 23:57:22.753031 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:57:23.989484 kubelet[2839]: E1104 23:57:23.989408 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7k7w8" podUID="9096f1c3-7da9-48d9-beff-7b6f2057f511"
Nov 4 23:57:24.610685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4264070344.mount: Deactivated successfully.
Nov 4 23:57:25.725925 containerd[1636]: time="2025-11-04T23:57:25.725847023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:57:25.726685 containerd[1636]: time="2025-11-04T23:57:25.726663619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Nov 4 23:57:25.727981 containerd[1636]: time="2025-11-04T23:57:25.727939014Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:57:25.730030 containerd[1636]: time="2025-11-04T23:57:25.729999923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:57:25.730672 containerd[1636]: time="2025-11-04T23:57:25.730643383Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.035849029s"
Nov 4 23:57:25.730707 containerd[1636]: time="2025-11-04T23:57:25.730672470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Nov 4 23:57:25.731583 containerd[1636]: time="2025-11-04T23:57:25.731504527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 4 23:57:25.745713 containerd[1636]: time="2025-11-04T23:57:25.745655879Z" level=info msg="CreateContainer within sandbox \"ff8be8769a8bc66118e8976b84acee96860c20c9e99e1416c0a15c4cbc2a8e03\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 4 23:57:25.755173 containerd[1636]: time="2025-11-04T23:57:25.755129170Z" level=info msg="Container cce392a4fa9db6af45f1281615bdee9e9fcc11b5e0eb979461c6d5188d2b9a2d: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:57:25.763018 containerd[1636]: time="2025-11-04T23:57:25.762937654Z" level=info msg="CreateContainer within sandbox \"ff8be8769a8bc66118e8976b84acee96860c20c9e99e1416c0a15c4cbc2a8e03\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cce392a4fa9db6af45f1281615bdee9e9fcc11b5e0eb979461c6d5188d2b9a2d\""
Nov 4 23:57:25.763817 containerd[1636]: time="2025-11-04T23:57:25.763783087Z" level=info msg="StartContainer for \"cce392a4fa9db6af45f1281615bdee9e9fcc11b5e0eb979461c6d5188d2b9a2d\""
Nov 4 23:57:25.765345 containerd[1636]: time="2025-11-04T23:57:25.765315768Z" level=info msg="connecting to shim cce392a4fa9db6af45f1281615bdee9e9fcc11b5e0eb979461c6d5188d2b9a2d" address="unix:///run/containerd/s/ffd66d4b55768a396e0181cc71734775c286574336f3e22aabad784759172b0c" protocol=ttrpc version=3
Nov 4 23:57:25.791103 systemd[1]: Started cri-containerd-cce392a4fa9db6af45f1281615bdee9e9fcc11b5e0eb979461c6d5188d2b9a2d.scope - libcontainer container cce392a4fa9db6af45f1281615bdee9e9fcc11b5e0eb979461c6d5188d2b9a2d.
Nov 4 23:57:25.854234 containerd[1636]: time="2025-11-04T23:57:25.854188012Z" level=info msg="StartContainer for \"cce392a4fa9db6af45f1281615bdee9e9fcc11b5e0eb979461c6d5188d2b9a2d\" returns successfully"
Nov 4 23:57:25.991246 kubelet[2839]: E1104 23:57:25.990019 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7k7w8" podUID="9096f1c3-7da9-48d9-beff-7b6f2057f511"
Nov 4 23:57:26.055736 kubelet[2839]: E1104 23:57:26.055689 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:57:26.081091 kubelet[2839]: I1104 23:57:26.081029 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-68dbf5fdfb-89snq" podStartSLOduration=1.043847325 podStartE2EDuration="4.081005945s" podCreationTimestamp="2025-11-04 23:57:22 +0000 UTC" firstStartedPulling="2025-11-04 23:57:22.69412233 +0000 UTC m=+18.838611735" lastFinishedPulling="2025-11-04 23:57:25.73128096 +0000 UTC m=+21.875770355" observedRunningTime="2025-11-04 23:57:26.070415293 +0000 UTC m=+22.214904698" watchObservedRunningTime="2025-11-04 23:57:26.081005945 +0000 UTC m=+22.225495340"
Nov 4 23:57:26.112603 kubelet[2839]: E1104 23:57:26.112547 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.112603 kubelet[2839]: W1104 23:57:26.112579 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.112603 kubelet[2839]: E1104 23:57:26.112609 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.113003 kubelet[2839]: E1104 23:57:26.112987 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.113003 kubelet[2839]: W1104 23:57:26.112999 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.113079 kubelet[2839]: E1104 23:57:26.113010 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.113285 kubelet[2839]: E1104 23:57:26.113260 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.113285 kubelet[2839]: W1104 23:57:26.113275 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.113336 kubelet[2839]: E1104 23:57:26.113287 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.113583 kubelet[2839]: E1104 23:57:26.113563 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.113583 kubelet[2839]: W1104 23:57:26.113576 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.113630 kubelet[2839]: E1104 23:57:26.113587 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.113832 kubelet[2839]: E1104 23:57:26.113813 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.113832 kubelet[2839]: W1104 23:57:26.113824 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.113906 kubelet[2839]: E1104 23:57:26.113832 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.114134 kubelet[2839]: E1104 23:57:26.114109 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.114134 kubelet[2839]: W1104 23:57:26.114128 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.114251 kubelet[2839]: E1104 23:57:26.114142 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.114576 kubelet[2839]: E1104 23:57:26.114548 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.114576 kubelet[2839]: W1104 23:57:26.114565 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.114576 kubelet[2839]: E1104 23:57:26.114575 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.114877 kubelet[2839]: E1104 23:57:26.114854 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.114877 kubelet[2839]: W1104 23:57:26.114867 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.114877 kubelet[2839]: E1104 23:57:26.114877 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.115137 kubelet[2839]: E1104 23:57:26.115123 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.115137 kubelet[2839]: W1104 23:57:26.115134 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.115189 kubelet[2839]: E1104 23:57:26.115142 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.115352 kubelet[2839]: E1104 23:57:26.115339 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.115352 kubelet[2839]: W1104 23:57:26.115349 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.115398 kubelet[2839]: E1104 23:57:26.115357 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.115563 kubelet[2839]: E1104 23:57:26.115551 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.115589 kubelet[2839]: W1104 23:57:26.115561 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.115589 kubelet[2839]: E1104 23:57:26.115570 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.115764 kubelet[2839]: E1104 23:57:26.115750 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.115764 kubelet[2839]: W1104 23:57:26.115760 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.115820 kubelet[2839]: E1104 23:57:26.115768 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.116016 kubelet[2839]: E1104 23:57:26.116002 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.116016 kubelet[2839]: W1104 23:57:26.116014 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.116072 kubelet[2839]: E1104 23:57:26.116023 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.116211 kubelet[2839]: E1104 23:57:26.116197 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.116211 kubelet[2839]: W1104 23:57:26.116207 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.116323 kubelet[2839]: E1104 23:57:26.116215 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.116449 kubelet[2839]: E1104 23:57:26.116413 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.116449 kubelet[2839]: W1104 23:57:26.116423 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.116449 kubelet[2839]: E1104 23:57:26.116431 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.130079 kubelet[2839]: E1104 23:57:26.130051 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.130079 kubelet[2839]: W1104 23:57:26.130070 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.130079 kubelet[2839]: E1104 23:57:26.130082 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.130314 kubelet[2839]: E1104 23:57:26.130300 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.130314 kubelet[2839]: W1104 23:57:26.130311 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.130371 kubelet[2839]: E1104 23:57:26.130320 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.130568 kubelet[2839]: E1104 23:57:26.130556 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.130568 kubelet[2839]: W1104 23:57:26.130566 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.130627 kubelet[2839]: E1104 23:57:26.130574 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.130916 kubelet[2839]: E1104 23:57:26.130860 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.130916 kubelet[2839]: W1104 23:57:26.130903 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.130988 kubelet[2839]: E1104 23:57:26.130927 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.131173 kubelet[2839]: E1104 23:57:26.131156 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.131173 kubelet[2839]: W1104 23:57:26.131168 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.131235 kubelet[2839]: E1104 23:57:26.131178 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.131394 kubelet[2839]: E1104 23:57:26.131377 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.131394 kubelet[2839]: W1104 23:57:26.131388 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.131452 kubelet[2839]: E1104 23:57:26.131397 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.131633 kubelet[2839]: E1104 23:57:26.131618 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.131633 kubelet[2839]: W1104 23:57:26.131629 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.131681 kubelet[2839]: E1104 23:57:26.131639 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.131870 kubelet[2839]: E1104 23:57:26.131852 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.131870 kubelet[2839]: W1104 23:57:26.131866 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.131930 kubelet[2839]: E1104 23:57:26.131876 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.132140 kubelet[2839]: E1104 23:57:26.132112 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.132140 kubelet[2839]: W1104 23:57:26.132124 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.132140 kubelet[2839]: E1104 23:57:26.132132 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.132347 kubelet[2839]: E1104 23:57:26.132322 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.132347 kubelet[2839]: W1104 23:57:26.132331 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.132347 kubelet[2839]: E1104 23:57:26.132339 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.132587 kubelet[2839]: E1104 23:57:26.132556 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.132587 kubelet[2839]: W1104 23:57:26.132567 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.132587 kubelet[2839]: E1104 23:57:26.132575 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.132820 kubelet[2839]: E1104 23:57:26.132795 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.132820 kubelet[2839]: W1104 23:57:26.132806 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.132820 kubelet[2839]: E1104 23:57:26.132815 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.133304 kubelet[2839]: E1104 23:57:26.133262 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.133359 kubelet[2839]: W1104 23:57:26.133304 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.133359 kubelet[2839]: E1104 23:57:26.133336 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.133611 kubelet[2839]: E1104 23:57:26.133593 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.133611 kubelet[2839]: W1104 23:57:26.133607 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.133667 kubelet[2839]: E1104 23:57:26.133618 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.133848 kubelet[2839]: E1104 23:57:26.133832 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.133848 kubelet[2839]: W1104 23:57:26.133844 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.133907 kubelet[2839]: E1104 23:57:26.133854 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.134152 kubelet[2839]: E1104 23:57:26.134136 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.134152 kubelet[2839]: W1104 23:57:26.134149 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.134213 kubelet[2839]: E1104 23:57:26.134159 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.134452 kubelet[2839]: E1104 23:57:26.134437 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.134452 kubelet[2839]: W1104 23:57:26.134450 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.134508 kubelet[2839]: E1104 23:57:26.134462 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:26.134766 kubelet[2839]: E1104 23:57:26.134748 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:26.134766 kubelet[2839]: W1104 23:57:26.134762 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:26.134825 kubelet[2839]: E1104 23:57:26.134772 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:27.047271 containerd[1636]: time="2025-11-04T23:57:27.047211204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:57:27.048035 containerd[1636]: time="2025-11-04T23:57:27.047988216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Nov 4 23:57:27.049265 containerd[1636]: time="2025-11-04T23:57:27.049205968Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:57:27.051536 containerd[1636]: time="2025-11-04T23:57:27.051490319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:57:27.052135 containerd[1636]: time="2025-11-04T23:57:27.052096309Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.320564888s"
Nov 4 23:57:27.052135 containerd[1636]: time="2025-11-04T23:57:27.052133012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Nov 4 23:57:27.055926 containerd[1636]: time="2025-11-04T23:57:27.055870676Z" level=info msg="CreateContainer within sandbox \"ee71c1be839484c97c458654bb0c704f5d3b5046419b0a767365916e40b67edf\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 4 23:57:27.056287 kubelet[2839]: E1104 23:57:27.056259 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:57:27.066477 containerd[1636]: time="2025-11-04T23:57:27.066418937Z" level=info msg="Container b0ae7dad3d58da3526057b4e96a44f9fe9a05406f9c107c509e3469d6ffdfc0f: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:57:27.075067 containerd[1636]: time="2025-11-04T23:57:27.075017246Z" level=info msg="CreateContainer within sandbox \"ee71c1be839484c97c458654bb0c704f5d3b5046419b0a767365916e40b67edf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b0ae7dad3d58da3526057b4e96a44f9fe9a05406f9c107c509e3469d6ffdfc0f\""
Nov 4 23:57:27.075812 containerd[1636]: time="2025-11-04T23:57:27.075760831Z" level=info msg="StartContainer for \"b0ae7dad3d58da3526057b4e96a44f9fe9a05406f9c107c509e3469d6ffdfc0f\""
Nov 4 23:57:27.077547 containerd[1636]: time="2025-11-04T23:57:27.077518532Z" level=info msg="connecting to shim b0ae7dad3d58da3526057b4e96a44f9fe9a05406f9c107c509e3469d6ffdfc0f" address="unix:///run/containerd/s/3819276a7716797166f43539df7c0e731888c8de9003867db46fb33e689775ef" protocol=ttrpc version=3
Nov 4 23:57:27.103200 systemd[1]: Started cri-containerd-b0ae7dad3d58da3526057b4e96a44f9fe9a05406f9c107c509e3469d6ffdfc0f.scope - libcontainer container b0ae7dad3d58da3526057b4e96a44f9fe9a05406f9c107c509e3469d6ffdfc0f.
Nov 4 23:57:27.120745 kubelet[2839]: E1104 23:57:27.120691 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:27.120745 kubelet[2839]: W1104 23:57:27.120723 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:27.120745 kubelet[2839]: E1104 23:57:27.120747 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:27.121058 kubelet[2839]: E1104 23:57:27.121035 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:27.121058 kubelet[2839]: W1104 23:57:27.121051 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:27.121136 kubelet[2839]: E1104 23:57:27.121065 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:57:27.121311 kubelet[2839]: E1104 23:57:27.121283 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:57:27.121311 kubelet[2839]: W1104 23:57:27.121297 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:57:27.121311 kubelet[2839]: E1104 23:57:27.121306 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 4 23:57:27.121546 kubelet[2839]: E1104 23:57:27.121521 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.121546 kubelet[2839]: W1104 23:57:27.121539 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.121601 kubelet[2839]: E1104 23:57:27.121559 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:27.122635 kubelet[2839]: E1104 23:57:27.122575 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.122635 kubelet[2839]: W1104 23:57:27.122600 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.122761 kubelet[2839]: E1104 23:57:27.122627 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:27.122858 kubelet[2839]: E1104 23:57:27.122840 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.122858 kubelet[2839]: W1104 23:57:27.122852 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.123023 kubelet[2839]: E1104 23:57:27.122862 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:27.123110 kubelet[2839]: E1104 23:57:27.123061 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.123110 kubelet[2839]: W1104 23:57:27.123071 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.123110 kubelet[2839]: E1104 23:57:27.123080 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:27.123259 kubelet[2839]: E1104 23:57:27.123244 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.123259 kubelet[2839]: W1104 23:57:27.123251 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.123259 kubelet[2839]: E1104 23:57:27.123260 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:27.123516 kubelet[2839]: E1104 23:57:27.123496 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.123516 kubelet[2839]: W1104 23:57:27.123509 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.123516 kubelet[2839]: E1104 23:57:27.123518 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:27.123732 kubelet[2839]: E1104 23:57:27.123707 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.123732 kubelet[2839]: W1104 23:57:27.123720 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.123732 kubelet[2839]: E1104 23:57:27.123729 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:27.123922 kubelet[2839]: E1104 23:57:27.123905 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.123922 kubelet[2839]: W1104 23:57:27.123917 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.123922 kubelet[2839]: E1104 23:57:27.123926 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:27.124158 kubelet[2839]: E1104 23:57:27.124140 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.124158 kubelet[2839]: W1104 23:57:27.124152 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.124211 kubelet[2839]: E1104 23:57:27.124161 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:27.124374 kubelet[2839]: E1104 23:57:27.124346 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.124374 kubelet[2839]: W1104 23:57:27.124359 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.124374 kubelet[2839]: E1104 23:57:27.124368 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:27.124551 kubelet[2839]: E1104 23:57:27.124535 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.124551 kubelet[2839]: W1104 23:57:27.124546 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.124609 kubelet[2839]: E1104 23:57:27.124556 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:27.124759 kubelet[2839]: E1104 23:57:27.124742 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.124759 kubelet[2839]: W1104 23:57:27.124754 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.124809 kubelet[2839]: E1104 23:57:27.124766 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:27.135720 kubelet[2839]: E1104 23:57:27.135689 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.135720 kubelet[2839]: W1104 23:57:27.135708 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.135720 kubelet[2839]: E1104 23:57:27.135718 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:27.136090 kubelet[2839]: E1104 23:57:27.135889 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.136090 kubelet[2839]: W1104 23:57:27.135900 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.136090 kubelet[2839]: E1104 23:57:27.135909 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:27.136090 kubelet[2839]: E1104 23:57:27.136103 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.136198 kubelet[2839]: W1104 23:57:27.136111 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.136198 kubelet[2839]: E1104 23:57:27.136128 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:27.136454 kubelet[2839]: E1104 23:57:27.136325 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.136454 kubelet[2839]: W1104 23:57:27.136334 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.136454 kubelet[2839]: E1104 23:57:27.136346 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:27.136587 kubelet[2839]: E1104 23:57:27.136575 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.136587 kubelet[2839]: W1104 23:57:27.136585 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.136649 kubelet[2839]: E1104 23:57:27.136595 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:27.136965 kubelet[2839]: E1104 23:57:27.136881 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.136965 kubelet[2839]: W1104 23:57:27.136895 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.136965 kubelet[2839]: E1104 23:57:27.136904 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:27.137250 kubelet[2839]: E1104 23:57:27.137230 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.137250 kubelet[2839]: W1104 23:57:27.137243 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.137250 kubelet[2839]: E1104 23:57:27.137253 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:27.137571 kubelet[2839]: E1104 23:57:27.137553 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.137571 kubelet[2839]: W1104 23:57:27.137565 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.137571 kubelet[2839]: E1104 23:57:27.137573 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:27.137780 kubelet[2839]: E1104 23:57:27.137765 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.137780 kubelet[2839]: W1104 23:57:27.137776 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.137839 kubelet[2839]: E1104 23:57:27.137784 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:27.138111 kubelet[2839]: E1104 23:57:27.138088 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.138111 kubelet[2839]: W1104 23:57:27.138103 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.138111 kubelet[2839]: E1104 23:57:27.138112 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:27.138376 kubelet[2839]: E1104 23:57:27.138320 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.138376 kubelet[2839]: W1104 23:57:27.138334 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.138376 kubelet[2839]: E1104 23:57:27.138364 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:27.138661 kubelet[2839]: E1104 23:57:27.138597 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.138661 kubelet[2839]: W1104 23:57:27.138611 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.138661 kubelet[2839]: E1104 23:57:27.138620 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:27.138927 kubelet[2839]: E1104 23:57:27.138895 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.138927 kubelet[2839]: W1104 23:57:27.138908 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.139215 kubelet[2839]: E1104 23:57:27.138917 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:27.139253 kubelet[2839]: E1104 23:57:27.139232 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.139253 kubelet[2839]: W1104 23:57:27.139244 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.139253 kubelet[2839]: E1104 23:57:27.139253 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:27.139552 kubelet[2839]: E1104 23:57:27.139506 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.139552 kubelet[2839]: W1104 23:57:27.139544 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.139552 kubelet[2839]: E1104 23:57:27.139554 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:27.139782 kubelet[2839]: E1104 23:57:27.139750 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.139782 kubelet[2839]: W1104 23:57:27.139762 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.139782 kubelet[2839]: E1104 23:57:27.139771 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:57:27.140194 kubelet[2839]: E1104 23:57:27.140130 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.140194 kubelet[2839]: W1104 23:57:27.140144 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.140194 kubelet[2839]: E1104 23:57:27.140155 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:27.140975 kubelet[2839]: E1104 23:57:27.140785 2839 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:57:27.140975 kubelet[2839]: W1104 23:57:27.140799 2839 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:57:27.140975 kubelet[2839]: E1104 23:57:27.140828 2839 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:57:27.160320 containerd[1636]: time="2025-11-04T23:57:27.160267822Z" level=info msg="StartContainer for \"b0ae7dad3d58da3526057b4e96a44f9fe9a05406f9c107c509e3469d6ffdfc0f\" returns successfully" Nov 4 23:57:27.172285 systemd[1]: cri-containerd-b0ae7dad3d58da3526057b4e96a44f9fe9a05406f9c107c509e3469d6ffdfc0f.scope: Deactivated successfully. 
Nov 4 23:57:27.174225 containerd[1636]: time="2025-11-04T23:57:27.174176645Z" level=info msg="received exit event container_id:\"b0ae7dad3d58da3526057b4e96a44f9fe9a05406f9c107c509e3469d6ffdfc0f\" id:\"b0ae7dad3d58da3526057b4e96a44f9fe9a05406f9c107c509e3469d6ffdfc0f\" pid:3537 exited_at:{seconds:1762300647 nanos:173677509}" Nov 4 23:57:27.174348 containerd[1636]: time="2025-11-04T23:57:27.174207577Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0ae7dad3d58da3526057b4e96a44f9fe9a05406f9c107c509e3469d6ffdfc0f\" id:\"b0ae7dad3d58da3526057b4e96a44f9fe9a05406f9c107c509e3469d6ffdfc0f\" pid:3537 exited_at:{seconds:1762300647 nanos:173677509}" Nov 4 23:57:27.197653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0ae7dad3d58da3526057b4e96a44f9fe9a05406f9c107c509e3469d6ffdfc0f-rootfs.mount: Deactivated successfully. Nov 4 23:57:27.990257 kubelet[2839]: E1104 23:57:27.990186 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7k7w8" podUID="9096f1c3-7da9-48d9-beff-7b6f2057f511" Nov 4 23:57:28.060395 kubelet[2839]: E1104 23:57:28.060333 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:57:28.060855 kubelet[2839]: E1104 23:57:28.060606 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:57:29.065111 kubelet[2839]: E1104 23:57:29.065051 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:57:29.066218 containerd[1636]: 
time="2025-11-04T23:57:29.066185011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 4 23:57:29.992106 kubelet[2839]: E1104 23:57:29.992037 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7k7w8" podUID="9096f1c3-7da9-48d9-beff-7b6f2057f511" Nov 4 23:57:31.991590 kubelet[2839]: E1104 23:57:31.991520 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7k7w8" podUID="9096f1c3-7da9-48d9-beff-7b6f2057f511" Nov 4 23:57:32.749593 containerd[1636]: time="2025-11-04T23:57:32.749487857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:57:32.750992 containerd[1636]: time="2025-11-04T23:57:32.750930176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 4 23:57:32.752584 containerd[1636]: time="2025-11-04T23:57:32.752546569Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:57:32.755594 containerd[1636]: time="2025-11-04T23:57:32.755547606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:57:32.756523 containerd[1636]: time="2025-11-04T23:57:32.756480083Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id 
\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.690252028s" Nov 4 23:57:32.756523 containerd[1636]: time="2025-11-04T23:57:32.756521226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 4 23:57:32.762390 containerd[1636]: time="2025-11-04T23:57:32.762329736Z" level=info msg="CreateContainer within sandbox \"ee71c1be839484c97c458654bb0c704f5d3b5046419b0a767365916e40b67edf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 4 23:57:32.773586 containerd[1636]: time="2025-11-04T23:57:32.773479171Z" level=info msg="Container 66b4c6a69217882e1956e13c0e4617ee51f0eda151ba85d945faa477bcdf69c8: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:57:32.865916 containerd[1636]: time="2025-11-04T23:57:32.865840477Z" level=info msg="CreateContainer within sandbox \"ee71c1be839484c97c458654bb0c704f5d3b5046419b0a767365916e40b67edf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"66b4c6a69217882e1956e13c0e4617ee51f0eda151ba85d945faa477bcdf69c8\"" Nov 4 23:57:32.866848 containerd[1636]: time="2025-11-04T23:57:32.866785730Z" level=info msg="StartContainer for \"66b4c6a69217882e1956e13c0e4617ee51f0eda151ba85d945faa477bcdf69c8\"" Nov 4 23:57:32.868670 containerd[1636]: time="2025-11-04T23:57:32.868637490Z" level=info msg="connecting to shim 66b4c6a69217882e1956e13c0e4617ee51f0eda151ba85d945faa477bcdf69c8" address="unix:///run/containerd/s/3819276a7716797166f43539df7c0e731888c8de9003867db46fb33e689775ef" protocol=ttrpc version=3 Nov 4 23:57:32.901150 systemd[1]: Started cri-containerd-66b4c6a69217882e1956e13c0e4617ee51f0eda151ba85d945faa477bcdf69c8.scope - libcontainer container 
66b4c6a69217882e1956e13c0e4617ee51f0eda151ba85d945faa477bcdf69c8. Nov 4 23:57:33.085100 containerd[1636]: time="2025-11-04T23:57:33.085010546Z" level=info msg="StartContainer for \"66b4c6a69217882e1956e13c0e4617ee51f0eda151ba85d945faa477bcdf69c8\" returns successfully" Nov 4 23:57:33.991929 kubelet[2839]: E1104 23:57:33.991865 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7k7w8" podUID="9096f1c3-7da9-48d9-beff-7b6f2057f511" Nov 4 23:57:34.091603 kubelet[2839]: E1104 23:57:34.091534 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:57:35.093361 kubelet[2839]: E1104 23:57:35.093300 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:57:35.600433 systemd[1]: cri-containerd-66b4c6a69217882e1956e13c0e4617ee51f0eda151ba85d945faa477bcdf69c8.scope: Deactivated successfully. Nov 4 23:57:35.600917 systemd[1]: cri-containerd-66b4c6a69217882e1956e13c0e4617ee51f0eda151ba85d945faa477bcdf69c8.scope: Consumed 702ms CPU time, 177.7M memory peak, 3.5M read from disk, 171.3M written to disk. 
Nov 4 23:57:35.601739 containerd[1636]: time="2025-11-04T23:57:35.601690457Z" level=info msg="TaskExit event in podsandbox handler container_id:\"66b4c6a69217882e1956e13c0e4617ee51f0eda151ba85d945faa477bcdf69c8\" id:\"66b4c6a69217882e1956e13c0e4617ee51f0eda151ba85d945faa477bcdf69c8\" pid:3632 exited_at:{seconds:1762300655 nanos:601268995}" Nov 4 23:57:35.602190 containerd[1636]: time="2025-11-04T23:57:35.601794221Z" level=info msg="received exit event container_id:\"66b4c6a69217882e1956e13c0e4617ee51f0eda151ba85d945faa477bcdf69c8\" id:\"66b4c6a69217882e1956e13c0e4617ee51f0eda151ba85d945faa477bcdf69c8\" pid:3632 exited_at:{seconds:1762300655 nanos:601268995}" Nov 4 23:57:35.630331 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66b4c6a69217882e1956e13c0e4617ee51f0eda151ba85d945faa477bcdf69c8-rootfs.mount: Deactivated successfully. Nov 4 23:57:35.687245 kubelet[2839]: I1104 23:57:35.687199 2839 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 4 23:57:36.064309 systemd[1]: Created slice kubepods-burstable-pod17151500_682c_4d03_96a4_c629a2968c8a.slice - libcontainer container kubepods-burstable-pod17151500_682c_4d03_96a4_c629a2968c8a.slice. Nov 4 23:57:36.075116 systemd[1]: Created slice kubepods-besteffort-podb35658f8_29c0_438d_8549_d61428e8d39f.slice - libcontainer container kubepods-besteffort-podb35658f8_29c0_438d_8549_d61428e8d39f.slice. Nov 4 23:57:36.083860 systemd[1]: Created slice kubepods-besteffort-podd4967576_a017_4231_9dac_e0dcfb7a3e59.slice - libcontainer container kubepods-besteffort-podd4967576_a017_4231_9dac_e0dcfb7a3e59.slice. Nov 4 23:57:36.091376 systemd[1]: Created slice kubepods-besteffort-pod9096f1c3_7da9_48d9_beff_7b6f2057f511.slice - libcontainer container kubepods-besteffort-pod9096f1c3_7da9_48d9_beff_7b6f2057f511.slice. 
Nov 4 23:57:36.096284 kubelet[2839]: I1104 23:57:36.096244 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c5d1d235-24ef-43b3-abad-7fa9db4b88ef-goldmane-key-pair\") pod \"goldmane-666569f655-85l6w\" (UID: \"c5d1d235-24ef-43b3-abad-7fa9db4b88ef\") " pod="calico-system/goldmane-666569f655-85l6w"
Nov 4 23:57:36.096284 kubelet[2839]: I1104 23:57:36.096280 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsxpx\" (UniqueName: \"kubernetes.io/projected/c5d1d235-24ef-43b3-abad-7fa9db4b88ef-kube-api-access-jsxpx\") pod \"goldmane-666569f655-85l6w\" (UID: \"c5d1d235-24ef-43b3-abad-7fa9db4b88ef\") " pod="calico-system/goldmane-666569f655-85l6w"
Nov 4 23:57:36.096814 kubelet[2839]: I1104 23:57:36.096297 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjpx5\" (UniqueName: \"kubernetes.io/projected/b35658f8-29c0-438d-8549-d61428e8d39f-kube-api-access-tjpx5\") pod \"calico-apiserver-5d7b6c5897-d22xf\" (UID: \"b35658f8-29c0-438d-8549-d61428e8d39f\") " pod="calico-apiserver/calico-apiserver-5d7b6c5897-d22xf"
Nov 4 23:57:36.096814 kubelet[2839]: I1104 23:57:36.096316 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fa3182c-166f-4b9d-a6cd-5926136039a6-config-volume\") pod \"coredns-674b8bbfcf-x89z7\" (UID: \"6fa3182c-166f-4b9d-a6cd-5926136039a6\") " pod="kube-system/coredns-674b8bbfcf-x89z7"
Nov 4 23:57:36.096814 kubelet[2839]: I1104 23:57:36.096330 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwkkw\" (UniqueName: \"kubernetes.io/projected/6fa3182c-166f-4b9d-a6cd-5926136039a6-kube-api-access-fwkkw\") pod \"coredns-674b8bbfcf-x89z7\" (UID: \"6fa3182c-166f-4b9d-a6cd-5926136039a6\") " pod="kube-system/coredns-674b8bbfcf-x89z7"
Nov 4 23:57:36.096814 kubelet[2839]: I1104 23:57:36.096343 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4967576-a017-4231-9dac-e0dcfb7a3e59-whisker-ca-bundle\") pod \"whisker-7bcd588cdf-fqzbh\" (UID: \"d4967576-a017-4231-9dac-e0dcfb7a3e59\") " pod="calico-system/whisker-7bcd588cdf-fqzbh"
Nov 4 23:57:36.096814 kubelet[2839]: I1104 23:57:36.096368 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rnt2\" (UniqueName: \"kubernetes.io/projected/d4967576-a017-4231-9dac-e0dcfb7a3e59-kube-api-access-5rnt2\") pod \"whisker-7bcd588cdf-fqzbh\" (UID: \"d4967576-a017-4231-9dac-e0dcfb7a3e59\") " pod="calico-system/whisker-7bcd588cdf-fqzbh"
Nov 4 23:57:36.098080 containerd[1636]: time="2025-11-04T23:57:36.096782341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7k7w8,Uid:9096f1c3-7da9-48d9-beff-7b6f2057f511,Namespace:calico-system,Attempt:0,}"
Nov 4 23:57:36.098155 kubelet[2839]: I1104 23:57:36.096393 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxbk4\" (UniqueName: \"kubernetes.io/projected/0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a-kube-api-access-nxbk4\") pod \"calico-apiserver-5d7b6c5897-jq7ws\" (UID: \"0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a\") " pod="calico-apiserver/calico-apiserver-5d7b6c5897-jq7ws"
Nov 4 23:57:36.098155 kubelet[2839]: I1104 23:57:36.096412 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d4967576-a017-4231-9dac-e0dcfb7a3e59-whisker-backend-key-pair\") pod \"whisker-7bcd588cdf-fqzbh\" (UID: \"d4967576-a017-4231-9dac-e0dcfb7a3e59\") " pod="calico-system/whisker-7bcd588cdf-fqzbh"
Nov 4 23:57:36.098155 kubelet[2839]: I1104 23:57:36.096426 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5d1d235-24ef-43b3-abad-7fa9db4b88ef-config\") pod \"goldmane-666569f655-85l6w\" (UID: \"c5d1d235-24ef-43b3-abad-7fa9db4b88ef\") " pod="calico-system/goldmane-666569f655-85l6w"
Nov 4 23:57:36.098155 kubelet[2839]: I1104 23:57:36.096441 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adf22170-0a60-4bf0-be14-045d1e27faa2-tigera-ca-bundle\") pod \"calico-kube-controllers-78cc59d946-ppm4m\" (UID: \"adf22170-0a60-4bf0-be14-045d1e27faa2\") " pod="calico-system/calico-kube-controllers-78cc59d946-ppm4m"
Nov 4 23:57:36.098155 kubelet[2839]: I1104 23:57:36.096457 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a-calico-apiserver-certs\") pod \"calico-apiserver-5d7b6c5897-jq7ws\" (UID: \"0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a\") " pod="calico-apiserver/calico-apiserver-5d7b6c5897-jq7ws"
Nov 4 23:57:36.098332 kubelet[2839]: I1104 23:57:36.096474 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5d1d235-24ef-43b3-abad-7fa9db4b88ef-goldmane-ca-bundle\") pod \"goldmane-666569f655-85l6w\" (UID: \"c5d1d235-24ef-43b3-abad-7fa9db4b88ef\") " pod="calico-system/goldmane-666569f655-85l6w"
Nov 4 23:57:36.098332 kubelet[2839]: I1104 23:57:36.096490 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shwkj\" (UniqueName: \"kubernetes.io/projected/17151500-682c-4d03-96a4-c629a2968c8a-kube-api-access-shwkj\") pod \"coredns-674b8bbfcf-xvn9q\" (UID: \"17151500-682c-4d03-96a4-c629a2968c8a\") " pod="kube-system/coredns-674b8bbfcf-xvn9q"
Nov 4 23:57:36.098332 kubelet[2839]: I1104 23:57:36.096505 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b35658f8-29c0-438d-8549-d61428e8d39f-calico-apiserver-certs\") pod \"calico-apiserver-5d7b6c5897-d22xf\" (UID: \"b35658f8-29c0-438d-8549-d61428e8d39f\") " pod="calico-apiserver/calico-apiserver-5d7b6c5897-d22xf"
Nov 4 23:57:36.098332 kubelet[2839]: I1104 23:57:36.096525 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17151500-682c-4d03-96a4-c629a2968c8a-config-volume\") pod \"coredns-674b8bbfcf-xvn9q\" (UID: \"17151500-682c-4d03-96a4-c629a2968c8a\") " pod="kube-system/coredns-674b8bbfcf-xvn9q"
Nov 4 23:57:36.098332 kubelet[2839]: I1104 23:57:36.096558 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxmdq\" (UniqueName: \"kubernetes.io/projected/adf22170-0a60-4bf0-be14-045d1e27faa2-kube-api-access-lxmdq\") pod \"calico-kube-controllers-78cc59d946-ppm4m\" (UID: \"adf22170-0a60-4bf0-be14-045d1e27faa2\") " pod="calico-system/calico-kube-controllers-78cc59d946-ppm4m"
Nov 4 23:57:36.100579 systemd[1]: Created slice kubepods-besteffort-pod0bbb99b0_26ba_46ec_81d6_2d0aac8c5b8a.slice - libcontainer container kubepods-besteffort-pod0bbb99b0_26ba_46ec_81d6_2d0aac8c5b8a.slice.
Nov 4 23:57:36.105968 kubelet[2839]: E1104 23:57:36.105500 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:57:36.107814 containerd[1636]: time="2025-11-04T23:57:36.107758217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Nov 4 23:57:36.120389 systemd[1]: Created slice kubepods-burstable-pod6fa3182c_166f_4b9d_a6cd_5926136039a6.slice - libcontainer container kubepods-burstable-pod6fa3182c_166f_4b9d_a6cd_5926136039a6.slice.
Nov 4 23:57:36.131803 systemd[1]: Created slice kubepods-besteffort-podc5d1d235_24ef_43b3_abad_7fa9db4b88ef.slice - libcontainer container kubepods-besteffort-podc5d1d235_24ef_43b3_abad_7fa9db4b88ef.slice.
Nov 4 23:57:36.142809 systemd[1]: Created slice kubepods-besteffort-podadf22170_0a60_4bf0_be14_045d1e27faa2.slice - libcontainer container kubepods-besteffort-podadf22170_0a60_4bf0_be14_045d1e27faa2.slice.
Nov 4 23:57:36.289229 containerd[1636]: time="2025-11-04T23:57:36.289146632Z" level=error msg="Failed to destroy network for sandbox \"ec431d3e0cc3b0b3bcfe0eb9d342e5281177fcfc96c712389ec120ec2ace5052\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:57:36.290591 containerd[1636]: time="2025-11-04T23:57:36.290549026Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7k7w8,Uid:9096f1c3-7da9-48d9-beff-7b6f2057f511,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec431d3e0cc3b0b3bcfe0eb9d342e5281177fcfc96c712389ec120ec2ace5052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:57:36.290851 kubelet[2839]: E1104 23:57:36.290785 2839 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec431d3e0cc3b0b3bcfe0eb9d342e5281177fcfc96c712389ec120ec2ace5052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:57:36.290933 kubelet[2839]: E1104 23:57:36.290883 2839 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec431d3e0cc3b0b3bcfe0eb9d342e5281177fcfc96c712389ec120ec2ace5052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7k7w8"
Nov 4 23:57:36.290933 kubelet[2839]: E1104 23:57:36.290914 2839 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec431d3e0cc3b0b3bcfe0eb9d342e5281177fcfc96c712389ec120ec2ace5052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7k7w8"
Nov 4 23:57:36.291047 kubelet[2839]: E1104 23:57:36.291016 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7k7w8_calico-system(9096f1c3-7da9-48d9-beff-7b6f2057f511)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7k7w8_calico-system(9096f1c3-7da9-48d9-beff-7b6f2057f511)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec431d3e0cc3b0b3bcfe0eb9d342e5281177fcfc96c712389ec120ec2ace5052\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7k7w8" podUID="9096f1c3-7da9-48d9-beff-7b6f2057f511"
Nov 4 23:57:36.371853 kubelet[2839]: E1104 23:57:36.370985 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:57:36.372025 containerd[1636]: time="2025-11-04T23:57:36.371900866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xvn9q,Uid:17151500-682c-4d03-96a4-c629a2968c8a,Namespace:kube-system,Attempt:0,}"
Nov 4 23:57:36.381125 containerd[1636]: time="2025-11-04T23:57:36.380001836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7b6c5897-d22xf,Uid:b35658f8-29c0-438d-8549-d61428e8d39f,Namespace:calico-apiserver,Attempt:0,}"
Nov 4 23:57:36.389449 containerd[1636]: time="2025-11-04T23:57:36.389398470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bcd588cdf-fqzbh,Uid:d4967576-a017-4231-9dac-e0dcfb7a3e59,Namespace:calico-system,Attempt:0,}"
Nov 4 23:57:36.416129 containerd[1636]: time="2025-11-04T23:57:36.415265201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7b6c5897-jq7ws,Uid:0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a,Namespace:calico-apiserver,Attempt:0,}"
Nov 4 23:57:36.427244 kubelet[2839]: E1104 23:57:36.427193 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:57:36.427960 containerd[1636]: time="2025-11-04T23:57:36.427899056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x89z7,Uid:6fa3182c-166f-4b9d-a6cd-5926136039a6,Namespace:kube-system,Attempt:0,}"
Nov 4 23:57:36.440973 containerd[1636]: time="2025-11-04T23:57:36.440883904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-85l6w,Uid:c5d1d235-24ef-43b3-abad-7fa9db4b88ef,Namespace:calico-system,Attempt:0,}"
Nov 4 23:57:36.454613 containerd[1636]: time="2025-11-04T23:57:36.454315652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78cc59d946-ppm4m,Uid:adf22170-0a60-4bf0-be14-045d1e27faa2,Namespace:calico-system,Attempt:0,}"
Nov 4 23:57:36.487924 containerd[1636]: time="2025-11-04T23:57:36.487863385Z" level=error msg="Failed to destroy network for sandbox \"f10f109186503c457fc34a484cce9150cdff9b23bed2bc1e4e0f1b3bd79754b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:57:36.490673 containerd[1636]: time="2025-11-04T23:57:36.490578156Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xvn9q,Uid:17151500-682c-4d03-96a4-c629a2968c8a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f10f109186503c457fc34a484cce9150cdff9b23bed2bc1e4e0f1b3bd79754b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:57:36.492264 kubelet[2839]: E1104 23:57:36.492212 2839 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f10f109186503c457fc34a484cce9150cdff9b23bed2bc1e4e0f1b3bd79754b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:57:36.492376 kubelet[2839]: E1104 23:57:36.492290 2839 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f10f109186503c457fc34a484cce9150cdff9b23bed2bc1e4e0f1b3bd79754b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xvn9q"
Nov 4 23:57:36.492376 kubelet[2839]: E1104 23:57:36.492315 2839 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f10f109186503c457fc34a484cce9150cdff9b23bed2bc1e4e0f1b3bd79754b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xvn9q"
Nov 4 23:57:36.492469 kubelet[2839]: E1104 23:57:36.492377 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xvn9q_kube-system(17151500-682c-4d03-96a4-c629a2968c8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xvn9q_kube-system(17151500-682c-4d03-96a4-c629a2968c8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f10f109186503c457fc34a484cce9150cdff9b23bed2bc1e4e0f1b3bd79754b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xvn9q" podUID="17151500-682c-4d03-96a4-c629a2968c8a"
Nov 4 23:57:36.494286 containerd[1636]: time="2025-11-04T23:57:36.494138495Z" level=error msg="Failed to destroy network for sandbox \"bb33b1f30a51f080ed40473cce2714f6ec238b3bc1d357b634e62607ef930c91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:57:36.497401 containerd[1636]: time="2025-11-04T23:57:36.496190551Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7b6c5897-d22xf,Uid:b35658f8-29c0-438d-8549-d61428e8d39f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb33b1f30a51f080ed40473cce2714f6ec238b3bc1d357b634e62607ef930c91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:57:36.498002 kubelet[2839]: E1104 23:57:36.497861 2839 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb33b1f30a51f080ed40473cce2714f6ec238b3bc1d357b634e62607ef930c91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:57:36.498002 kubelet[2839]: E1104 23:57:36.497985 2839 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb33b1f30a51f080ed40473cce2714f6ec238b3bc1d357b634e62607ef930c91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7b6c5897-d22xf"
Nov 4 23:57:36.498126 kubelet[2839]: E1104 23:57:36.498018 2839 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb33b1f30a51f080ed40473cce2714f6ec238b3bc1d357b634e62607ef930c91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7b6c5897-d22xf"
Nov 4 23:57:36.498126 kubelet[2839]: E1104 23:57:36.498079 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d7b6c5897-d22xf_calico-apiserver(b35658f8-29c0-438d-8549-d61428e8d39f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d7b6c5897-d22xf_calico-apiserver(b35658f8-29c0-438d-8549-d61428e8d39f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb33b1f30a51f080ed40473cce2714f6ec238b3bc1d357b634e62607ef930c91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7b6c5897-d22xf" podUID="b35658f8-29c0-438d-8549-d61428e8d39f"
Nov 4 23:57:36.508736 containerd[1636]: time="2025-11-04T23:57:36.508667937Z" level=error msg="Failed to destroy network for sandbox \"adbd73ef8d1a572657aff5e1710c93e960cec3a621c4d98d0ffdc9bdef66055b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:57:36.510773 containerd[1636]: time="2025-11-04T23:57:36.510695333Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bcd588cdf-fqzbh,Uid:d4967576-a017-4231-9dac-e0dcfb7a3e59,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"adbd73ef8d1a572657aff5e1710c93e960cec3a621c4d98d0ffdc9bdef66055b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:57:36.512406 kubelet[2839]: E1104 23:57:36.511395 2839 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adbd73ef8d1a572657aff5e1710c93e960cec3a621c4d98d0ffdc9bdef66055b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:57:36.512406 kubelet[2839]: E1104 23:57:36.511478 2839 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adbd73ef8d1a572657aff5e1710c93e960cec3a621c4d98d0ffdc9bdef66055b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bcd588cdf-fqzbh"
Nov 4 23:57:36.512406 kubelet[2839]: E1104 23:57:36.511501 2839 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adbd73ef8d1a572657aff5e1710c93e960cec3a621c4d98d0ffdc9bdef66055b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bcd588cdf-fqzbh"
Nov 4 23:57:36.512531 kubelet[2839]: E1104 23:57:36.511555 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7bcd588cdf-fqzbh_calico-system(d4967576-a017-4231-9dac-e0dcfb7a3e59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7bcd588cdf-fqzbh_calico-system(d4967576-a017-4231-9dac-e0dcfb7a3e59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adbd73ef8d1a572657aff5e1710c93e960cec3a621c4d98d0ffdc9bdef66055b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bcd588cdf-fqzbh" podUID="d4967576-a017-4231-9dac-e0dcfb7a3e59"
Nov 4 23:57:36.539818 containerd[1636]: time="2025-11-04T23:57:36.539682537Z" level=error msg="Failed to destroy network for sandbox \"e6ffd4e9f4307f9c9e76148a2faf94512b196bd273cf693ae1a99c0d297605e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:57:36.542153 containerd[1636]: time="2025-11-04T23:57:36.542121635Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7b6c5897-jq7ws,Uid:0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6ffd4e9f4307f9c9e76148a2faf94512b196bd273cf693ae1a99c0d297605e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:57:36.542603 kubelet[2839]: E1104 23:57:36.542560 2839 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6ffd4e9f4307f9c9e76148a2faf94512b196bd273cf693ae1a99c0d297605e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:57:36.542745 kubelet[2839]: E1104 23:57:36.542726 2839 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6ffd4e9f4307f9c9e76148a2faf94512b196bd273cf693ae1a99c0d297605e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7b6c5897-jq7ws"
Nov 4 23:57:36.542930 kubelet[2839]: E1104 23:57:36.542813 2839 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6ffd4e9f4307f9c9e76148a2faf94512b196bd273cf693ae1a99c0d297605e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7b6c5897-jq7ws"
Nov 4 23:57:36.542930 kubelet[2839]: E1104 23:57:36.542892 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d7b6c5897-jq7ws_calico-apiserver(0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d7b6c5897-jq7ws_calico-apiserver(0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6ffd4e9f4307f9c9e76148a2faf94512b196bd273cf693ae1a99c0d297605e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7b6c5897-jq7ws" podUID="0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a"
Nov 4 23:57:36.543066 containerd[1636]: time="2025-11-04T23:57:36.543004435Z" level=error msg="Failed to destroy network for sandbox \"6b67ab04000895fa4fb764252ea016fbf5465b56dcb9923fdc658b915a1967e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:57:36.544309 containerd[1636]: time="2025-11-04T23:57:36.544237005Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-85l6w,Uid:c5d1d235-24ef-43b3-abad-7fa9db4b88ef,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b67ab04000895fa4fb764252ea016fbf5465b56dcb9923fdc658b915a1967e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:57:36.544959 kubelet[2839]: E1104 23:57:36.544728 2839 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b67ab04000895fa4fb764252ea016fbf5465b56dcb9923fdc658b915a1967e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:57:36.544959 kubelet[2839]: E1104 23:57:36.544813 2839 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b67ab04000895fa4fb764252ea016fbf5465b56dcb9923fdc658b915a1967e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-85l6w"
Nov 4 23:57:36.544959 kubelet[2839]: E1104 23:57:36.544837 2839 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b67ab04000895fa4fb764252ea016fbf5465b56dcb9923fdc658b915a1967e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-85l6w"
Nov 4 23:57:36.545178 kubelet[2839]: E1104 23:57:36.544888 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-85l6w_calico-system(c5d1d235-24ef-43b3-abad-7fa9db4b88ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-85l6w_calico-system(c5d1d235-24ef-43b3-abad-7fa9db4b88ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b67ab04000895fa4fb764252ea016fbf5465b56dcb9923fdc658b915a1967e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-85l6w" podUID="c5d1d235-24ef-43b3-abad-7fa9db4b88ef"
Nov 4 23:57:36.551470 containerd[1636]: time="2025-11-04T23:57:36.551419083Z" level=error msg="Failed to destroy network for sandbox \"12f367c67f9b217a0cc15a03dbed8e6373ce2ef6c7322e1b36b96e61cd4e4815\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:57:36.552124 containerd[1636]: time="2025-11-04T23:57:36.552092641Z" level=error msg="Failed to destroy network for sandbox \"b87e3572152752d085e9c0f04631e67fa765de4e82f4b8a494ee39f0958fb9c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:57:36.552845 containerd[1636]: time="2025-11-04T23:57:36.552798192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x89z7,Uid:6fa3182c-166f-4b9d-a6cd-5926136039a6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"12f367c67f9b217a0cc15a03dbed8e6373ce2ef6c7322e1b36b96e61cd4e4815\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:57:36.553303 kubelet[2839]: E1104 23:57:36.553254 2839 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12f367c67f9b217a0cc15a03dbed8e6373ce2ef6c7322e1b36b96e61cd4e4815\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:57:36.553374 kubelet[2839]: E1104 23:57:36.553327 2839 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12f367c67f9b217a0cc15a03dbed8e6373ce2ef6c7322e1b36b96e61cd4e4815\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-x89z7"
Nov 4 23:57:36.553374 kubelet[2839]: E1104 23:57:36.553349 2839 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12f367c67f9b217a0cc15a03dbed8e6373ce2ef6c7322e1b36b96e61cd4e4815\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-x89z7"
Nov 4 23:57:36.553446 kubelet[2839]: E1104 23:57:36.553398 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-x89z7_kube-system(6fa3182c-166f-4b9d-a6cd-5926136039a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-x89z7_kube-system(6fa3182c-166f-4b9d-a6cd-5926136039a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12f367c67f9b217a0cc15a03dbed8e6373ce2ef6c7322e1b36b96e61cd4e4815\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-x89z7" podUID="6fa3182c-166f-4b9d-a6cd-5926136039a6"
Nov 4 23:57:36.553999 containerd[1636]: time="2025-11-04T23:57:36.553959281Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78cc59d946-ppm4m,Uid:adf22170-0a60-4bf0-be14-045d1e27faa2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b87e3572152752d085e9c0f04631e67fa765de4e82f4b8a494ee39f0958fb9c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:57:36.554156 kubelet[2839]: E1104 23:57:36.554114 2839 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b87e3572152752d085e9c0f04631e67fa765de4e82f4b8a494ee39f0958fb9c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:57:36.554221 kubelet[2839]: E1104 23:57:36.554183 2839 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b87e3572152752d085e9c0f04631e67fa765de4e82f4b8a494ee39f0958fb9c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78cc59d946-ppm4m"
Nov 4 23:57:36.554253 kubelet[2839]: E1104 23:57:36.554220 2839 kuberuntime_manager.go:1252] "CreatePodSandbox
for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b87e3572152752d085e9c0f04631e67fa765de4e82f4b8a494ee39f0958fb9c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78cc59d946-ppm4m" Nov 4 23:57:36.554295 kubelet[2839]: E1104 23:57:36.554271 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78cc59d946-ppm4m_calico-system(adf22170-0a60-4bf0-be14-045d1e27faa2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78cc59d946-ppm4m_calico-system(adf22170-0a60-4bf0-be14-045d1e27faa2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b87e3572152752d085e9c0f04631e67fa765de4e82f4b8a494ee39f0958fb9c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78cc59d946-ppm4m" podUID="adf22170-0a60-4bf0-be14-045d1e27faa2" Nov 4 23:57:36.638363 systemd[1]: run-netns-cni\x2d17d34dcd\x2da988\x2d6838\x2da52f\x2d3ce2b0aa7cc3.mount: Deactivated successfully. Nov 4 23:57:45.013977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount781748127.mount: Deactivated successfully. 
Nov 4 23:57:46.990421 containerd[1636]: time="2025-11-04T23:57:46.990353549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7k7w8,Uid:9096f1c3-7da9-48d9-beff-7b6f2057f511,Namespace:calico-system,Attempt:0,}" Nov 4 23:57:46.990929 containerd[1636]: time="2025-11-04T23:57:46.990353629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-85l6w,Uid:c5d1d235-24ef-43b3-abad-7fa9db4b88ef,Namespace:calico-system,Attempt:0,}" Nov 4 23:57:47.990421 containerd[1636]: time="2025-11-04T23:57:47.990356672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7b6c5897-jq7ws,Uid:0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:57:47.990970 containerd[1636]: time="2025-11-04T23:57:47.990483730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78cc59d946-ppm4m,Uid:adf22170-0a60-4bf0-be14-045d1e27faa2,Namespace:calico-system,Attempt:0,}" Nov 4 23:57:48.953977 containerd[1636]: time="2025-11-04T23:57:48.953130972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 4 23:57:48.953977 containerd[1636]: time="2025-11-04T23:57:48.953268159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:57:48.975483 containerd[1636]: time="2025-11-04T23:57:48.975401521Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:57:48.987368 containerd[1636]: time="2025-11-04T23:57:48.987283183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:57:48.993382 containerd[1636]: 
time="2025-11-04T23:57:48.993303635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7b6c5897-d22xf,Uid:b35658f8-29c0-438d-8549-d61428e8d39f,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:57:49.004583 containerd[1636]: time="2025-11-04T23:57:49.004444790Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 12.896422483s" Nov 4 23:57:49.004583 containerd[1636]: time="2025-11-04T23:57:49.004523634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 4 23:57:49.015806 containerd[1636]: time="2025-11-04T23:57:49.015739903Z" level=error msg="Failed to destroy network for sandbox \"343fbdb02e70309883271800c809c78e28f523446a867eb1addf05787bd01ec5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:57:49.024608 systemd[1]: run-netns-cni\x2df24bfe87\x2dcad2\x2d529d\x2da208\x2d37837531d148.mount: Deactivated successfully. 
Nov 4 23:57:49.026069 containerd[1636]: time="2025-11-04T23:57:49.025805373Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7k7w8,Uid:9096f1c3-7da9-48d9-beff-7b6f2057f511,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"343fbdb02e70309883271800c809c78e28f523446a867eb1addf05787bd01ec5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:57:49.031276 containerd[1636]: time="2025-11-04T23:57:49.031218794Z" level=info msg="CreateContainer within sandbox \"ee71c1be839484c97c458654bb0c704f5d3b5046419b0a767365916e40b67edf\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 4 23:57:49.038897 kubelet[2839]: E1104 23:57:49.038818 2839 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"343fbdb02e70309883271800c809c78e28f523446a867eb1addf05787bd01ec5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:57:49.039498 kubelet[2839]: E1104 23:57:49.038938 2839 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"343fbdb02e70309883271800c809c78e28f523446a867eb1addf05787bd01ec5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7k7w8" Nov 4 23:57:49.039498 kubelet[2839]: E1104 23:57:49.038978 2839 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"343fbdb02e70309883271800c809c78e28f523446a867eb1addf05787bd01ec5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7k7w8" Nov 4 23:57:49.039498 kubelet[2839]: E1104 23:57:49.039062 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7k7w8_calico-system(9096f1c3-7da9-48d9-beff-7b6f2057f511)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7k7w8_calico-system(9096f1c3-7da9-48d9-beff-7b6f2057f511)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"343fbdb02e70309883271800c809c78e28f523446a867eb1addf05787bd01ec5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7k7w8" podUID="9096f1c3-7da9-48d9-beff-7b6f2057f511" Nov 4 23:57:49.085314 containerd[1636]: time="2025-11-04T23:57:49.085258406Z" level=info msg="Container b3079bafe35432eeeaae65dac5726aea4068dee8181911c29c13f5b36ab95455: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:57:49.122494 containerd[1636]: time="2025-11-04T23:57:49.121919873Z" level=error msg="Failed to destroy network for sandbox \"5b3706e340fb51e24c0b9831b62ff09dc7a8a0fbc63d6a8c7be7e1c6a4bb585f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:57:49.131273 containerd[1636]: time="2025-11-04T23:57:49.131161421Z" level=error msg="Failed to destroy network for sandbox \"050152704df02e29aa64bdbb13a218a3a140d74b15015af7d55523891d650fa2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:57:49.136813 containerd[1636]: time="2025-11-04T23:57:49.136746756Z" level=error msg="Failed to destroy network for sandbox \"266d86aa4389456252e6b084dc7db5fba910a71f8408fdf7e0e095b899de449f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:57:49.146970 containerd[1636]: time="2025-11-04T23:57:49.146859859Z" level=error msg="Failed to destroy network for sandbox \"dec097895a16646208d632c79fc7077ab9120ce03d60f549178053a957910e54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:57:49.346059 containerd[1636]: time="2025-11-04T23:57:49.345976078Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78cc59d946-ppm4m,Uid:adf22170-0a60-4bf0-be14-045d1e27faa2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"050152704df02e29aa64bdbb13a218a3a140d74b15015af7d55523891d650fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:57:49.346376 kubelet[2839]: E1104 23:57:49.346316 2839 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"050152704df02e29aa64bdbb13a218a3a140d74b15015af7d55523891d650fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:57:49.346468 kubelet[2839]: E1104 23:57:49.346406 2839 kuberuntime_sandbox.go:70] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"050152704df02e29aa64bdbb13a218a3a140d74b15015af7d55523891d650fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78cc59d946-ppm4m" Nov 4 23:57:49.346468 kubelet[2839]: E1104 23:57:49.346438 2839 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"050152704df02e29aa64bdbb13a218a3a140d74b15015af7d55523891d650fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78cc59d946-ppm4m" Nov 4 23:57:49.346540 kubelet[2839]: E1104 23:57:49.346510 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78cc59d946-ppm4m_calico-system(adf22170-0a60-4bf0-be14-045d1e27faa2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78cc59d946-ppm4m_calico-system(adf22170-0a60-4bf0-be14-045d1e27faa2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"050152704df02e29aa64bdbb13a218a3a140d74b15015af7d55523891d650fa2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78cc59d946-ppm4m" podUID="adf22170-0a60-4bf0-be14-045d1e27faa2" Nov 4 23:57:49.348458 containerd[1636]: time="2025-11-04T23:57:49.348384563Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5d7b6c5897-d22xf,Uid:b35658f8-29c0-438d-8549-d61428e8d39f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"266d86aa4389456252e6b084dc7db5fba910a71f8408fdf7e0e095b899de449f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:57:49.348812 kubelet[2839]: E1104 23:57:49.348759 2839 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"266d86aa4389456252e6b084dc7db5fba910a71f8408fdf7e0e095b899de449f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:57:49.348898 kubelet[2839]: E1104 23:57:49.348842 2839 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"266d86aa4389456252e6b084dc7db5fba910a71f8408fdf7e0e095b899de449f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7b6c5897-d22xf" Nov 4 23:57:49.348898 kubelet[2839]: E1104 23:57:49.348870 2839 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"266d86aa4389456252e6b084dc7db5fba910a71f8408fdf7e0e095b899de449f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7b6c5897-d22xf" Nov 4 23:57:49.349015 kubelet[2839]: E1104 23:57:49.348932 2839 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d7b6c5897-d22xf_calico-apiserver(b35658f8-29c0-438d-8549-d61428e8d39f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d7b6c5897-d22xf_calico-apiserver(b35658f8-29c0-438d-8549-d61428e8d39f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"266d86aa4389456252e6b084dc7db5fba910a71f8408fdf7e0e095b899de449f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7b6c5897-d22xf" podUID="b35658f8-29c0-438d-8549-d61428e8d39f" Nov 4 23:57:49.350463 containerd[1636]: time="2025-11-04T23:57:49.350316661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7b6c5897-jq7ws,Uid:0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dec097895a16646208d632c79fc7077ab9120ce03d60f549178053a957910e54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:57:49.350710 kubelet[2839]: E1104 23:57:49.350556 2839 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dec097895a16646208d632c79fc7077ab9120ce03d60f549178053a957910e54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:57:49.350710 kubelet[2839]: E1104 23:57:49.350602 2839 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"dec097895a16646208d632c79fc7077ab9120ce03d60f549178053a957910e54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7b6c5897-jq7ws" Nov 4 23:57:49.350710 kubelet[2839]: E1104 23:57:49.350621 2839 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dec097895a16646208d632c79fc7077ab9120ce03d60f549178053a957910e54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7b6c5897-jq7ws" Nov 4 23:57:49.350962 kubelet[2839]: E1104 23:57:49.350664 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d7b6c5897-jq7ws_calico-apiserver(0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d7b6c5897-jq7ws_calico-apiserver(0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dec097895a16646208d632c79fc7077ab9120ce03d60f549178053a957910e54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7b6c5897-jq7ws" podUID="0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a" Nov 4 23:57:49.362849 containerd[1636]: time="2025-11-04T23:57:49.362766670Z" level=info msg="CreateContainer within sandbox \"ee71c1be839484c97c458654bb0c704f5d3b5046419b0a767365916e40b67edf\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b3079bafe35432eeeaae65dac5726aea4068dee8181911c29c13f5b36ab95455\"" Nov 4 
23:57:49.363729 containerd[1636]: time="2025-11-04T23:57:49.363692020Z" level=info msg="StartContainer for \"b3079bafe35432eeeaae65dac5726aea4068dee8181911c29c13f5b36ab95455\"" Nov 4 23:57:49.365494 containerd[1636]: time="2025-11-04T23:57:49.365412176Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-85l6w,Uid:c5d1d235-24ef-43b3-abad-7fa9db4b88ef,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b3706e340fb51e24c0b9831b62ff09dc7a8a0fbc63d6a8c7be7e1c6a4bb585f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:57:49.365738 kubelet[2839]: E1104 23:57:49.365692 2839 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b3706e340fb51e24c0b9831b62ff09dc7a8a0fbc63d6a8c7be7e1c6a4bb585f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:57:49.365832 kubelet[2839]: E1104 23:57:49.365763 2839 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b3706e340fb51e24c0b9831b62ff09dc7a8a0fbc63d6a8c7be7e1c6a4bb585f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-85l6w" Nov 4 23:57:49.365832 kubelet[2839]: E1104 23:57:49.365788 2839 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b3706e340fb51e24c0b9831b62ff09dc7a8a0fbc63d6a8c7be7e1c6a4bb585f\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-85l6w" Nov 4 23:57:49.365902 kubelet[2839]: E1104 23:57:49.365873 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-85l6w_calico-system(c5d1d235-24ef-43b3-abad-7fa9db4b88ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-85l6w_calico-system(c5d1d235-24ef-43b3-abad-7fa9db4b88ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b3706e340fb51e24c0b9831b62ff09dc7a8a0fbc63d6a8c7be7e1c6a4bb585f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-85l6w" podUID="c5d1d235-24ef-43b3-abad-7fa9db4b88ef" Nov 4 23:57:49.366978 containerd[1636]: time="2025-11-04T23:57:49.366875863Z" level=info msg="connecting to shim b3079bafe35432eeeaae65dac5726aea4068dee8181911c29c13f5b36ab95455" address="unix:///run/containerd/s/3819276a7716797166f43539df7c0e731888c8de9003867db46fb33e689775ef" protocol=ttrpc version=3 Nov 4 23:57:49.396103 systemd[1]: Started cri-containerd-b3079bafe35432eeeaae65dac5726aea4068dee8181911c29c13f5b36ab95455.scope - libcontainer container b3079bafe35432eeeaae65dac5726aea4068dee8181911c29c13f5b36ab95455. Nov 4 23:57:49.579816 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 4 23:57:49.581695 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 4 23:57:49.743123 containerd[1636]: time="2025-11-04T23:57:49.742986057Z" level=info msg="StartContainer for \"b3079bafe35432eeeaae65dac5726aea4068dee8181911c29c13f5b36ab95455\" returns successfully" Nov 4 23:57:49.898567 systemd[1]: run-netns-cni\x2def65b2ed\x2d85e3\x2dbcbf\x2d586b\x2dce659f49efd7.mount: Deactivated successfully. Nov 4 23:57:49.898707 systemd[1]: run-netns-cni\x2d98319345\x2dea4f\x2d6bbe\x2dd6c1\x2d0a6883f9353b.mount: Deactivated successfully. Nov 4 23:57:49.898788 systemd[1]: run-netns-cni\x2d33a831f9\x2db791\x2d16db\x2d66d2\x2debdef38b2d44.mount: Deactivated successfully. Nov 4 23:57:49.898864 systemd[1]: run-netns-cni\x2da2704da0\x2de532\x2d9ea1\x2d23c5\x2da826cbce4047.mount: Deactivated successfully. Nov 4 23:57:49.991153 kubelet[2839]: E1104 23:57:49.990824 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:57:49.991362 containerd[1636]: time="2025-11-04T23:57:49.990916935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bcd588cdf-fqzbh,Uid:d4967576-a017-4231-9dac-e0dcfb7a3e59,Namespace:calico-system,Attempt:0,}" Nov 4 23:57:49.991433 containerd[1636]: time="2025-11-04T23:57:49.991345388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xvn9q,Uid:17151500-682c-4d03-96a4-c629a2968c8a,Namespace:kube-system,Attempt:0,}" Nov 4 23:57:50.139636 kubelet[2839]: E1104 23:57:50.139292 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:57:50.274518 containerd[1636]: time="2025-11-04T23:57:50.274466945Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3079bafe35432eeeaae65dac5726aea4068dee8181911c29c13f5b36ab95455\" id:\"222b2b88e5bc24443ee597aa74036338c66cbb5cea149b5c0bf88d0c0139aba4\" pid:4158 exit_status:1 
exited_at:{seconds:1762300670 nanos:274100353}" Nov 4 23:57:50.348196 containerd[1636]: time="2025-11-04T23:57:50.348136207Z" level=error msg="Failed to destroy network for sandbox \"6f748dcf2303f95c3de26334aea513149faca083f2ee3c1bddadcf0ec7e212d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:57:50.351080 systemd[1]: run-netns-cni\x2dd425b684\x2def09\x2d03b5\x2d15bc\x2dce419ea2a147.mount: Deactivated successfully. Nov 4 23:57:50.747807 containerd[1636]: time="2025-11-04T23:57:50.747743722Z" level=error msg="Failed to destroy network for sandbox \"64daf9db7c7b14cb442d4029821c1df532ef8408bf6f7455cb788bc5ad98449c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:57:50.750875 systemd[1]: run-netns-cni\x2db2f211fa\x2dd629\x2d1962\x2d454b\x2d26f88a59087c.mount: Deactivated successfully. 
Nov 4 23:57:50.853710 kubelet[2839]: I1104 23:57:50.853615 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9m5mc" podStartSLOduration=2.5978550990000002 podStartE2EDuration="28.853589525s" podCreationTimestamp="2025-11-04 23:57:22 +0000 UTC" firstStartedPulling="2025-11-04 23:57:22.754406285 +0000 UTC m=+18.898895690" lastFinishedPulling="2025-11-04 23:57:49.010140711 +0000 UTC m=+45.154630116" observedRunningTime="2025-11-04 23:57:50.849822282 +0000 UTC m=+46.994311687" watchObservedRunningTime="2025-11-04 23:57:50.853589525 +0000 UTC m=+46.998078920" Nov 4 23:57:50.989788 kubelet[2839]: E1104 23:57:50.989722 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:57:50.990199 containerd[1636]: time="2025-11-04T23:57:50.990165948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x89z7,Uid:6fa3182c-166f-4b9d-a6cd-5926136039a6,Namespace:kube-system,Attempt:0,}" Nov 4 23:57:51.140750 kubelet[2839]: E1104 23:57:51.140707 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:57:51.224241 containerd[1636]: time="2025-11-04T23:57:51.224193653Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3079bafe35432eeeaae65dac5726aea4068dee8181911c29c13f5b36ab95455\" id:\"8499f575ee3b81ae2e9c4c2d0436398aef8b888ec8b783d2352592d55101bf5a\" pid:4246 exit_status:1 exited_at:{seconds:1762300671 nanos:223891146}" Nov 4 23:57:51.403690 containerd[1636]: time="2025-11-04T23:57:51.403483972Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bcd588cdf-fqzbh,Uid:d4967576-a017-4231-9dac-e0dcfb7a3e59,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"6f748dcf2303f95c3de26334aea513149faca083f2ee3c1bddadcf0ec7e212d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:57:51.404338 kubelet[2839]: E1104 23:57:51.404037 2839 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f748dcf2303f95c3de26334aea513149faca083f2ee3c1bddadcf0ec7e212d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:57:51.404338 kubelet[2839]: E1104 23:57:51.404163 2839 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f748dcf2303f95c3de26334aea513149faca083f2ee3c1bddadcf0ec7e212d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bcd588cdf-fqzbh" Nov 4 23:57:51.404338 kubelet[2839]: E1104 23:57:51.404194 2839 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f748dcf2303f95c3de26334aea513149faca083f2ee3c1bddadcf0ec7e212d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bcd588cdf-fqzbh" Nov 4 23:57:51.404509 kubelet[2839]: E1104 23:57:51.404276 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7bcd588cdf-fqzbh_calico-system(d4967576-a017-4231-9dac-e0dcfb7a3e59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-7bcd588cdf-fqzbh_calico-system(d4967576-a017-4231-9dac-e0dcfb7a3e59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f748dcf2303f95c3de26334aea513149faca083f2ee3c1bddadcf0ec7e212d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bcd588cdf-fqzbh" podUID="d4967576-a017-4231-9dac-e0dcfb7a3e59" Nov 4 23:57:51.515111 containerd[1636]: time="2025-11-04T23:57:51.515011836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xvn9q,Uid:17151500-682c-4d03-96a4-c629a2968c8a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"64daf9db7c7b14cb442d4029821c1df532ef8408bf6f7455cb788bc5ad98449c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:57:51.515366 kubelet[2839]: E1104 23:57:51.515318 2839 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64daf9db7c7b14cb442d4029821c1df532ef8408bf6f7455cb788bc5ad98449c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:57:51.515415 kubelet[2839]: E1104 23:57:51.515394 2839 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64daf9db7c7b14cb442d4029821c1df532ef8408bf6f7455cb788bc5ad98449c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-xvn9q" Nov 4 23:57:51.515451 kubelet[2839]: E1104 23:57:51.515431 2839 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64daf9db7c7b14cb442d4029821c1df532ef8408bf6f7455cb788bc5ad98449c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xvn9q" Nov 4 23:57:51.515540 kubelet[2839]: E1104 23:57:51.515506 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xvn9q_kube-system(17151500-682c-4d03-96a4-c629a2968c8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xvn9q_kube-system(17151500-682c-4d03-96a4-c629a2968c8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64daf9db7c7b14cb442d4029821c1df532ef8408bf6f7455cb788bc5ad98449c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xvn9q" podUID="17151500-682c-4d03-96a4-c629a2968c8a" Nov 4 23:57:51.750792 containerd[1636]: time="2025-11-04T23:57:51.750631777Z" level=error msg="Failed to destroy network for sandbox \"6aee9db77e92e35602764c54e562194b996ab2dbe842b2b475686acc98070e61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:57:51.753329 systemd[1]: run-netns-cni\x2da7182ad4\x2d66e0\x2dd18b\x2d012b\x2db3151f9f45e8.mount: Deactivated successfully. 
Nov 4 23:57:51.796400 containerd[1636]: time="2025-11-04T23:57:51.796331984Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x89z7,Uid:6fa3182c-166f-4b9d-a6cd-5926136039a6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6aee9db77e92e35602764c54e562194b996ab2dbe842b2b475686acc98070e61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:57:51.797564 kubelet[2839]: E1104 23:57:51.797485 2839 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6aee9db77e92e35602764c54e562194b996ab2dbe842b2b475686acc98070e61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:57:51.797715 kubelet[2839]: E1104 23:57:51.797598 2839 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6aee9db77e92e35602764c54e562194b996ab2dbe842b2b475686acc98070e61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-x89z7" Nov 4 23:57:51.797715 kubelet[2839]: E1104 23:57:51.797626 2839 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6aee9db77e92e35602764c54e562194b996ab2dbe842b2b475686acc98070e61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-x89z7" 
Nov 4 23:57:51.798098 kubelet[2839]: E1104 23:57:51.797695 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-x89z7_kube-system(6fa3182c-166f-4b9d-a6cd-5926136039a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-x89z7_kube-system(6fa3182c-166f-4b9d-a6cd-5926136039a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6aee9db77e92e35602764c54e562194b996ab2dbe842b2b475686acc98070e61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-x89z7" podUID="6fa3182c-166f-4b9d-a6cd-5926136039a6" Nov 4 23:57:52.211517 kubelet[2839]: I1104 23:57:52.211409 2839 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rnt2\" (UniqueName: \"kubernetes.io/projected/d4967576-a017-4231-9dac-e0dcfb7a3e59-kube-api-access-5rnt2\") pod \"d4967576-a017-4231-9dac-e0dcfb7a3e59\" (UID: \"d4967576-a017-4231-9dac-e0dcfb7a3e59\") " Nov 4 23:57:52.211517 kubelet[2839]: I1104 23:57:52.211490 2839 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d4967576-a017-4231-9dac-e0dcfb7a3e59-whisker-backend-key-pair\") pod \"d4967576-a017-4231-9dac-e0dcfb7a3e59\" (UID: \"d4967576-a017-4231-9dac-e0dcfb7a3e59\") " Nov 4 23:57:52.211517 kubelet[2839]: I1104 23:57:52.211513 2839 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4967576-a017-4231-9dac-e0dcfb7a3e59-whisker-ca-bundle\") pod \"d4967576-a017-4231-9dac-e0dcfb7a3e59\" (UID: \"d4967576-a017-4231-9dac-e0dcfb7a3e59\") " Nov 4 23:57:52.212174 kubelet[2839]: I1104 23:57:52.212144 2839 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/d4967576-a017-4231-9dac-e0dcfb7a3e59-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d4967576-a017-4231-9dac-e0dcfb7a3e59" (UID: "d4967576-a017-4231-9dac-e0dcfb7a3e59"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 23:57:52.216368 kubelet[2839]: I1104 23:57:52.216306 2839 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4967576-a017-4231-9dac-e0dcfb7a3e59-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d4967576-a017-4231-9dac-e0dcfb7a3e59" (UID: "d4967576-a017-4231-9dac-e0dcfb7a3e59"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 4 23:57:52.216435 kubelet[2839]: I1104 23:57:52.216318 2839 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4967576-a017-4231-9dac-e0dcfb7a3e59-kube-api-access-5rnt2" (OuterVolumeSpecName: "kube-api-access-5rnt2") pod "d4967576-a017-4231-9dac-e0dcfb7a3e59" (UID: "d4967576-a017-4231-9dac-e0dcfb7a3e59"). InnerVolumeSpecName "kube-api-access-5rnt2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 23:57:52.217407 systemd[1]: var-lib-kubelet-pods-d4967576\x2da017\x2d4231\x2d9dac\x2de0dcfb7a3e59-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5rnt2.mount: Deactivated successfully. Nov 4 23:57:52.217544 systemd[1]: var-lib-kubelet-pods-d4967576\x2da017\x2d4231\x2d9dac\x2de0dcfb7a3e59-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 4 23:57:52.312878 kubelet[2839]: I1104 23:57:52.312795 2839 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d4967576-a017-4231-9dac-e0dcfb7a3e59-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 4 23:57:52.312878 kubelet[2839]: I1104 23:57:52.312844 2839 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4967576-a017-4231-9dac-e0dcfb7a3e59-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 4 23:57:52.312878 kubelet[2839]: I1104 23:57:52.312853 2839 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5rnt2\" (UniqueName: \"kubernetes.io/projected/d4967576-a017-4231-9dac-e0dcfb7a3e59-kube-api-access-5rnt2\") on node \"localhost\" DevicePath \"\"" Nov 4 23:57:52.420130 systemd[1]: Started sshd@7-10.0.0.112:22-10.0.0.1:38686.service - OpenSSH per-connection server daemon (10.0.0.1:38686). Nov 4 23:57:52.517366 sshd[4300]: Accepted publickey for core from 10.0.0.1 port 38686 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:57:52.520418 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:57:52.527646 systemd-logind[1614]: New session 8 of user core. Nov 4 23:57:52.533166 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 4 23:57:52.689245 sshd[4314]: Connection closed by 10.0.0.1 port 38686 Nov 4 23:57:52.689574 sshd-session[4300]: pam_unix(sshd:session): session closed for user core Nov 4 23:57:52.693723 systemd[1]: sshd@7-10.0.0.112:22-10.0.0.1:38686.service: Deactivated successfully. Nov 4 23:57:52.695858 systemd[1]: session-8.scope: Deactivated successfully. Nov 4 23:57:52.697397 systemd-logind[1614]: Session 8 logged out. Waiting for processes to exit. Nov 4 23:57:52.698836 systemd-logind[1614]: Removed session 8. 
Nov 4 23:57:53.152074 systemd[1]: Removed slice kubepods-besteffort-podd4967576_a017_4231_9dac_e0dcfb7a3e59.slice - libcontainer container kubepods-besteffort-podd4967576_a017_4231_9dac_e0dcfb7a3e59.slice. Nov 4 23:57:53.332862 systemd[1]: Created slice kubepods-besteffort-poda0b91814_1cb6_4264_9193_77ae0565f373.slice - libcontainer container kubepods-besteffort-poda0b91814_1cb6_4264_9193_77ae0565f373.slice. Nov 4 23:57:53.421309 kubelet[2839]: I1104 23:57:53.421136 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6kc8\" (UniqueName: \"kubernetes.io/projected/a0b91814-1cb6-4264-9193-77ae0565f373-kube-api-access-r6kc8\") pod \"whisker-59cbc8bc7c-h2424\" (UID: \"a0b91814-1cb6-4264-9193-77ae0565f373\") " pod="calico-system/whisker-59cbc8bc7c-h2424" Nov 4 23:57:53.421309 kubelet[2839]: I1104 23:57:53.421191 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a0b91814-1cb6-4264-9193-77ae0565f373-whisker-backend-key-pair\") pod \"whisker-59cbc8bc7c-h2424\" (UID: \"a0b91814-1cb6-4264-9193-77ae0565f373\") " pod="calico-system/whisker-59cbc8bc7c-h2424" Nov 4 23:57:53.421309 kubelet[2839]: I1104 23:57:53.421211 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0b91814-1cb6-4264-9193-77ae0565f373-whisker-ca-bundle\") pod \"whisker-59cbc8bc7c-h2424\" (UID: \"a0b91814-1cb6-4264-9193-77ae0565f373\") " pod="calico-system/whisker-59cbc8bc7c-h2424" Nov 4 23:57:53.636979 containerd[1636]: time="2025-11-04T23:57:53.636720519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59cbc8bc7c-h2424,Uid:a0b91814-1cb6-4264-9193-77ae0565f373,Namespace:calico-system,Attempt:0,}" Nov 4 23:57:53.992701 kubelet[2839]: I1104 23:57:53.992640 2839 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="d4967576-a017-4231-9dac-e0dcfb7a3e59" path="/var/lib/kubelet/pods/d4967576-a017-4231-9dac-e0dcfb7a3e59/volumes" Nov 4 23:57:54.148934 systemd-networkd[1533]: vxlan.calico: Link UP Nov 4 23:57:54.149410 systemd-networkd[1533]: vxlan.calico: Gained carrier Nov 4 23:57:54.194184 systemd-networkd[1533]: caliec5c5b37d03: Link UP Nov 4 23:57:54.195592 systemd-networkd[1533]: caliec5c5b37d03: Gained carrier Nov 4 23:57:54.400158 containerd[1636]: 2025-11-04 23:57:53.811 [INFO][4454] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--59cbc8bc7c--h2424-eth0 whisker-59cbc8bc7c- calico-system a0b91814-1cb6-4264-9193-77ae0565f373 990 0 2025-11-04 23:57:53 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:59cbc8bc7c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-59cbc8bc7c-h2424 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliec5c5b37d03 [] [] }} ContainerID="71779db8468e0b0725c1442b7daaa53d78756ec96c64a9defeab8f372e4f6d61" Namespace="calico-system" Pod="whisker-59cbc8bc7c-h2424" WorkloadEndpoint="localhost-k8s-whisker--59cbc8bc7c--h2424-" Nov 4 23:57:54.400158 containerd[1636]: 2025-11-04 23:57:53.812 [INFO][4454] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="71779db8468e0b0725c1442b7daaa53d78756ec96c64a9defeab8f372e4f6d61" Namespace="calico-system" Pod="whisker-59cbc8bc7c-h2424" WorkloadEndpoint="localhost-k8s-whisker--59cbc8bc7c--h2424-eth0" Nov 4 23:57:54.400158 containerd[1636]: 2025-11-04 23:57:53.896 [INFO][4482] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="71779db8468e0b0725c1442b7daaa53d78756ec96c64a9defeab8f372e4f6d61" HandleID="k8s-pod-network.71779db8468e0b0725c1442b7daaa53d78756ec96c64a9defeab8f372e4f6d61" 
Workload="localhost-k8s-whisker--59cbc8bc7c--h2424-eth0" Nov 4 23:57:54.400481 containerd[1636]: 2025-11-04 23:57:53.897 [INFO][4482] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="71779db8468e0b0725c1442b7daaa53d78756ec96c64a9defeab8f372e4f6d61" HandleID="k8s-pod-network.71779db8468e0b0725c1442b7daaa53d78756ec96c64a9defeab8f372e4f6d61" Workload="localhost-k8s-whisker--59cbc8bc7c--h2424-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f2f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-59cbc8bc7c-h2424", "timestamp":"2025-11-04 23:57:53.896454839 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:57:54.400481 containerd[1636]: 2025-11-04 23:57:53.897 [INFO][4482] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:57:54.400481 containerd[1636]: 2025-11-04 23:57:53.897 [INFO][4482] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:57:54.400481 containerd[1636]: 2025-11-04 23:57:53.898 [INFO][4482] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:57:54.400481 containerd[1636]: 2025-11-04 23:57:53.907 [INFO][4482] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.71779db8468e0b0725c1442b7daaa53d78756ec96c64a9defeab8f372e4f6d61" host="localhost" Nov 4 23:57:54.400481 containerd[1636]: 2025-11-04 23:57:53.913 [INFO][4482] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:57:54.400481 containerd[1636]: 2025-11-04 23:57:53.917 [INFO][4482] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:57:54.400481 containerd[1636]: 2025-11-04 23:57:53.919 [INFO][4482] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:57:54.400481 containerd[1636]: 2025-11-04 23:57:53.977 [INFO][4482] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:57:54.400481 containerd[1636]: 2025-11-04 23:57:53.977 [INFO][4482] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.71779db8468e0b0725c1442b7daaa53d78756ec96c64a9defeab8f372e4f6d61" host="localhost" Nov 4 23:57:54.400855 containerd[1636]: 2025-11-04 23:57:53.982 [INFO][4482] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.71779db8468e0b0725c1442b7daaa53d78756ec96c64a9defeab8f372e4f6d61 Nov 4 23:57:54.400855 containerd[1636]: 2025-11-04 23:57:54.035 [INFO][4482] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.71779db8468e0b0725c1442b7daaa53d78756ec96c64a9defeab8f372e4f6d61" host="localhost" Nov 4 23:57:54.400855 containerd[1636]: 2025-11-04 23:57:54.176 [INFO][4482] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.71779db8468e0b0725c1442b7daaa53d78756ec96c64a9defeab8f372e4f6d61" host="localhost" Nov 4 23:57:54.400855 containerd[1636]: 2025-11-04 23:57:54.176 [INFO][4482] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.71779db8468e0b0725c1442b7daaa53d78756ec96c64a9defeab8f372e4f6d61" host="localhost" Nov 4 23:57:54.400855 containerd[1636]: 2025-11-04 23:57:54.176 [INFO][4482] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:57:54.400855 containerd[1636]: 2025-11-04 23:57:54.176 [INFO][4482] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="71779db8468e0b0725c1442b7daaa53d78756ec96c64a9defeab8f372e4f6d61" HandleID="k8s-pod-network.71779db8468e0b0725c1442b7daaa53d78756ec96c64a9defeab8f372e4f6d61" Workload="localhost-k8s-whisker--59cbc8bc7c--h2424-eth0" Nov 4 23:57:54.401067 containerd[1636]: 2025-11-04 23:57:54.182 [INFO][4454] cni-plugin/k8s.go 418: Populated endpoint ContainerID="71779db8468e0b0725c1442b7daaa53d78756ec96c64a9defeab8f372e4f6d61" Namespace="calico-system" Pod="whisker-59cbc8bc7c-h2424" WorkloadEndpoint="localhost-k8s-whisker--59cbc8bc7c--h2424-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--59cbc8bc7c--h2424-eth0", GenerateName:"whisker-59cbc8bc7c-", Namespace:"calico-system", SelfLink:"", UID:"a0b91814-1cb6-4264-9193-77ae0565f373", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 57, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59cbc8bc7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-59cbc8bc7c-h2424", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliec5c5b37d03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:57:54.401067 containerd[1636]: 2025-11-04 23:57:54.183 [INFO][4454] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="71779db8468e0b0725c1442b7daaa53d78756ec96c64a9defeab8f372e4f6d61" Namespace="calico-system" Pod="whisker-59cbc8bc7c-h2424" WorkloadEndpoint="localhost-k8s-whisker--59cbc8bc7c--h2424-eth0" Nov 4 23:57:54.401175 containerd[1636]: 2025-11-04 23:57:54.183 [INFO][4454] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliec5c5b37d03 ContainerID="71779db8468e0b0725c1442b7daaa53d78756ec96c64a9defeab8f372e4f6d61" Namespace="calico-system" Pod="whisker-59cbc8bc7c-h2424" WorkloadEndpoint="localhost-k8s-whisker--59cbc8bc7c--h2424-eth0" Nov 4 23:57:54.401175 containerd[1636]: 2025-11-04 23:57:54.196 [INFO][4454] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="71779db8468e0b0725c1442b7daaa53d78756ec96c64a9defeab8f372e4f6d61" Namespace="calico-system" Pod="whisker-59cbc8bc7c-h2424" WorkloadEndpoint="localhost-k8s-whisker--59cbc8bc7c--h2424-eth0" Nov 4 23:57:54.401229 containerd[1636]: 2025-11-04 23:57:54.198 [INFO][4454] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="71779db8468e0b0725c1442b7daaa53d78756ec96c64a9defeab8f372e4f6d61" Namespace="calico-system" Pod="whisker-59cbc8bc7c-h2424" 
WorkloadEndpoint="localhost-k8s-whisker--59cbc8bc7c--h2424-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--59cbc8bc7c--h2424-eth0", GenerateName:"whisker-59cbc8bc7c-", Namespace:"calico-system", SelfLink:"", UID:"a0b91814-1cb6-4264-9193-77ae0565f373", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 57, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59cbc8bc7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"71779db8468e0b0725c1442b7daaa53d78756ec96c64a9defeab8f372e4f6d61", Pod:"whisker-59cbc8bc7c-h2424", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliec5c5b37d03", MAC:"26:fc:e1:ea:fe:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:57:54.401299 containerd[1636]: 2025-11-04 23:57:54.392 [INFO][4454] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="71779db8468e0b0725c1442b7daaa53d78756ec96c64a9defeab8f372e4f6d61" Namespace="calico-system" Pod="whisker-59cbc8bc7c-h2424" WorkloadEndpoint="localhost-k8s-whisker--59cbc8bc7c--h2424-eth0" Nov 4 23:57:54.647007 containerd[1636]: time="2025-11-04T23:57:54.646906781Z" level=info msg="connecting to shim 
71779db8468e0b0725c1442b7daaa53d78756ec96c64a9defeab8f372e4f6d61" address="unix:///run/containerd/s/b2f525c40dbaff91609b379c61ab457ab2d78748ba9aabab43cfd082f09968ee" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:57:54.678138 systemd[1]: Started cri-containerd-71779db8468e0b0725c1442b7daaa53d78756ec96c64a9defeab8f372e4f6d61.scope - libcontainer container 71779db8468e0b0725c1442b7daaa53d78756ec96c64a9defeab8f372e4f6d61. Nov 4 23:57:54.692870 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:57:54.728468 containerd[1636]: time="2025-11-04T23:57:54.728393797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59cbc8bc7c-h2424,Uid:a0b91814-1cb6-4264-9193-77ae0565f373,Namespace:calico-system,Attempt:0,} returns sandbox id \"71779db8468e0b0725c1442b7daaa53d78756ec96c64a9defeab8f372e4f6d61\"" Nov 4 23:57:54.734298 containerd[1636]: time="2025-11-04T23:57:54.734250746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 23:57:55.124967 containerd[1636]: time="2025-11-04T23:57:55.124900835Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:57:55.126184 containerd[1636]: time="2025-11-04T23:57:55.126152820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 23:57:55.130480 containerd[1636]: time="2025-11-04T23:57:55.130438078Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 23:57:55.130830 kubelet[2839]: E1104 23:57:55.130762 2839 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:57:55.131235 kubelet[2839]: E1104 23:57:55.130853 2839 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:57:55.135782 kubelet[2839]: E1104 23:57:55.135727 2839 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c8dc219001274e2abcef068b56e38a59,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r6kc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:fals
e,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cbc8bc7c-h2424_calico-system(a0b91814-1cb6-4264-9193-77ae0565f373): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 23:57:55.137825 containerd[1636]: time="2025-11-04T23:57:55.137740639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 23:57:55.301192 systemd-networkd[1533]: vxlan.calico: Gained IPv6LL Nov 4 23:57:55.365240 systemd-networkd[1533]: caliec5c5b37d03: Gained IPv6LL Nov 4 23:57:55.437755 containerd[1636]: time="2025-11-04T23:57:55.437579883Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:57:55.451406 containerd[1636]: time="2025-11-04T23:57:55.451307387Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 23:57:55.451585 containerd[1636]: time="2025-11-04T23:57:55.451452849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 23:57:55.451691 kubelet[2839]: E1104 23:57:55.451636 2839 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:57:55.451739 kubelet[2839]: E1104 23:57:55.451699 2839 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:57:55.451925 kubelet[2839]: E1104 23:57:55.451867 2839 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r6kc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privilege
d:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cbc8bc7c-h2424_calico-system(a0b91814-1cb6-4264-9193-77ae0565f373): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 23:57:55.453425 kubelet[2839]: E1104 23:57:55.453342 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59cbc8bc7c-h2424" podUID="a0b91814-1cb6-4264-9193-77ae0565f373" Nov 4 23:57:56.154603 kubelet[2839]: E1104 23:57:56.154515 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59cbc8bc7c-h2424" podUID="a0b91814-1cb6-4264-9193-77ae0565f373" Nov 4 23:57:57.717562 systemd[1]: Started sshd@8-10.0.0.112:22-10.0.0.1:50700.service - OpenSSH per-connection server daemon (10.0.0.1:50700). Nov 4 23:57:57.791517 sshd[4624]: Accepted publickey for core from 10.0.0.1 port 50700 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:57:57.794300 sshd-session[4624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:57:57.799287 systemd-logind[1614]: New session 9 of user core. Nov 4 23:57:57.809145 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 4 23:57:57.948734 sshd[4627]: Connection closed by 10.0.0.1 port 50700 Nov 4 23:57:57.949161 sshd-session[4624]: pam_unix(sshd:session): session closed for user core Nov 4 23:57:57.954332 systemd[1]: sshd@8-10.0.0.112:22-10.0.0.1:50700.service: Deactivated successfully. Nov 4 23:57:57.956977 systemd[1]: session-9.scope: Deactivated successfully. Nov 4 23:57:57.958003 systemd-logind[1614]: Session 9 logged out. Waiting for processes to exit. Nov 4 23:57:57.959912 systemd-logind[1614]: Removed session 9. 
Nov 4 23:58:00.990796 containerd[1636]: time="2025-11-04T23:58:00.990699370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7k7w8,Uid:9096f1c3-7da9-48d9-beff-7b6f2057f511,Namespace:calico-system,Attempt:0,}" Nov 4 23:58:01.300223 systemd-networkd[1533]: cali0441f7484f4: Link UP Nov 4 23:58:01.301219 systemd-networkd[1533]: cali0441f7484f4: Gained carrier Nov 4 23:58:01.318847 containerd[1636]: 2025-11-04 23:58:01.243 [INFO][4650] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7k7w8-eth0 csi-node-driver- calico-system 9096f1c3-7da9-48d9-beff-7b6f2057f511 721 0 2025-11-04 23:57:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-7k7w8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0441f7484f4 [] [] }} ContainerID="9a2277b944e8cd7981a8ecdcd4040d844f5cc8773e64dfd4a028692c41d6e2fb" Namespace="calico-system" Pod="csi-node-driver-7k7w8" WorkloadEndpoint="localhost-k8s-csi--node--driver--7k7w8-" Nov 4 23:58:01.318847 containerd[1636]: 2025-11-04 23:58:01.243 [INFO][4650] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a2277b944e8cd7981a8ecdcd4040d844f5cc8773e64dfd4a028692c41d6e2fb" Namespace="calico-system" Pod="csi-node-driver-7k7w8" WorkloadEndpoint="localhost-k8s-csi--node--driver--7k7w8-eth0" Nov 4 23:58:01.318847 containerd[1636]: 2025-11-04 23:58:01.268 [INFO][4665] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a2277b944e8cd7981a8ecdcd4040d844f5cc8773e64dfd4a028692c41d6e2fb" HandleID="k8s-pod-network.9a2277b944e8cd7981a8ecdcd4040d844f5cc8773e64dfd4a028692c41d6e2fb" 
Workload="localhost-k8s-csi--node--driver--7k7w8-eth0" Nov 4 23:58:01.319118 containerd[1636]: 2025-11-04 23:58:01.268 [INFO][4665] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9a2277b944e8cd7981a8ecdcd4040d844f5cc8773e64dfd4a028692c41d6e2fb" HandleID="k8s-pod-network.9a2277b944e8cd7981a8ecdcd4040d844f5cc8773e64dfd4a028692c41d6e2fb" Workload="localhost-k8s-csi--node--driver--7k7w8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e560), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7k7w8", "timestamp":"2025-11-04 23:58:01.268004767 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:58:01.319118 containerd[1636]: 2025-11-04 23:58:01.268 [INFO][4665] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:58:01.319118 containerd[1636]: 2025-11-04 23:58:01.268 [INFO][4665] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:58:01.319118 containerd[1636]: 2025-11-04 23:58:01.268 [INFO][4665] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:58:01.319118 containerd[1636]: 2025-11-04 23:58:01.273 [INFO][4665] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9a2277b944e8cd7981a8ecdcd4040d844f5cc8773e64dfd4a028692c41d6e2fb" host="localhost" Nov 4 23:58:01.319118 containerd[1636]: 2025-11-04 23:58:01.277 [INFO][4665] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:58:01.319118 containerd[1636]: 2025-11-04 23:58:01.280 [INFO][4665] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:58:01.319118 containerd[1636]: 2025-11-04 23:58:01.281 [INFO][4665] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:58:01.319118 containerd[1636]: 2025-11-04 23:58:01.283 [INFO][4665] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:58:01.319118 containerd[1636]: 2025-11-04 23:58:01.283 [INFO][4665] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9a2277b944e8cd7981a8ecdcd4040d844f5cc8773e64dfd4a028692c41d6e2fb" host="localhost" Nov 4 23:58:01.319423 containerd[1636]: 2025-11-04 23:58:01.285 [INFO][4665] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9a2277b944e8cd7981a8ecdcd4040d844f5cc8773e64dfd4a028692c41d6e2fb Nov 4 23:58:01.319423 containerd[1636]: 2025-11-04 23:58:01.289 [INFO][4665] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9a2277b944e8cd7981a8ecdcd4040d844f5cc8773e64dfd4a028692c41d6e2fb" host="localhost" Nov 4 23:58:01.319423 containerd[1636]: 2025-11-04 23:58:01.293 [INFO][4665] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.9a2277b944e8cd7981a8ecdcd4040d844f5cc8773e64dfd4a028692c41d6e2fb" host="localhost" Nov 4 23:58:01.319423 containerd[1636]: 2025-11-04 23:58:01.293 [INFO][4665] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9a2277b944e8cd7981a8ecdcd4040d844f5cc8773e64dfd4a028692c41d6e2fb" host="localhost" Nov 4 23:58:01.319423 containerd[1636]: 2025-11-04 23:58:01.293 [INFO][4665] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:58:01.319423 containerd[1636]: 2025-11-04 23:58:01.293 [INFO][4665] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9a2277b944e8cd7981a8ecdcd4040d844f5cc8773e64dfd4a028692c41d6e2fb" HandleID="k8s-pod-network.9a2277b944e8cd7981a8ecdcd4040d844f5cc8773e64dfd4a028692c41d6e2fb" Workload="localhost-k8s-csi--node--driver--7k7w8-eth0" Nov 4 23:58:01.319628 containerd[1636]: 2025-11-04 23:58:01.296 [INFO][4650] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a2277b944e8cd7981a8ecdcd4040d844f5cc8773e64dfd4a028692c41d6e2fb" Namespace="calico-system" Pod="csi-node-driver-7k7w8" WorkloadEndpoint="localhost-k8s-csi--node--driver--7k7w8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7k7w8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9096f1c3-7da9-48d9-beff-7b6f2057f511", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 57, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7k7w8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0441f7484f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:58:01.319707 containerd[1636]: 2025-11-04 23:58:01.296 [INFO][4650] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9a2277b944e8cd7981a8ecdcd4040d844f5cc8773e64dfd4a028692c41d6e2fb" Namespace="calico-system" Pod="csi-node-driver-7k7w8" WorkloadEndpoint="localhost-k8s-csi--node--driver--7k7w8-eth0" Nov 4 23:58:01.319707 containerd[1636]: 2025-11-04 23:58:01.296 [INFO][4650] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0441f7484f4 ContainerID="9a2277b944e8cd7981a8ecdcd4040d844f5cc8773e64dfd4a028692c41d6e2fb" Namespace="calico-system" Pod="csi-node-driver-7k7w8" WorkloadEndpoint="localhost-k8s-csi--node--driver--7k7w8-eth0" Nov 4 23:58:01.319707 containerd[1636]: 2025-11-04 23:58:01.301 [INFO][4650] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a2277b944e8cd7981a8ecdcd4040d844f5cc8773e64dfd4a028692c41d6e2fb" Namespace="calico-system" Pod="csi-node-driver-7k7w8" WorkloadEndpoint="localhost-k8s-csi--node--driver--7k7w8-eth0" Nov 4 23:58:01.320078 containerd[1636]: 2025-11-04 23:58:01.302 [INFO][4650] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a2277b944e8cd7981a8ecdcd4040d844f5cc8773e64dfd4a028692c41d6e2fb" 
Namespace="calico-system" Pod="csi-node-driver-7k7w8" WorkloadEndpoint="localhost-k8s-csi--node--driver--7k7w8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7k7w8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9096f1c3-7da9-48d9-beff-7b6f2057f511", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 57, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9a2277b944e8cd7981a8ecdcd4040d844f5cc8773e64dfd4a028692c41d6e2fb", Pod:"csi-node-driver-7k7w8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0441f7484f4", MAC:"fa:9a:bb:7d:ca:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:58:01.320282 containerd[1636]: 2025-11-04 23:58:01.313 [INFO][4650] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a2277b944e8cd7981a8ecdcd4040d844f5cc8773e64dfd4a028692c41d6e2fb" Namespace="calico-system" Pod="csi-node-driver-7k7w8" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--7k7w8-eth0" Nov 4 23:58:01.340845 containerd[1636]: time="2025-11-04T23:58:01.340751046Z" level=info msg="connecting to shim 9a2277b944e8cd7981a8ecdcd4040d844f5cc8773e64dfd4a028692c41d6e2fb" address="unix:///run/containerd/s/15de03b0710eadb7b73e3f017efa042cfc0492dccce4ada4b5464ba5a335edc8" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:58:01.369107 systemd[1]: Started cri-containerd-9a2277b944e8cd7981a8ecdcd4040d844f5cc8773e64dfd4a028692c41d6e2fb.scope - libcontainer container 9a2277b944e8cd7981a8ecdcd4040d844f5cc8773e64dfd4a028692c41d6e2fb. Nov 4 23:58:01.381709 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:58:01.398782 containerd[1636]: time="2025-11-04T23:58:01.398730727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7k7w8,Uid:9096f1c3-7da9-48d9-beff-7b6f2057f511,Namespace:calico-system,Attempt:0,} returns sandbox id \"9a2277b944e8cd7981a8ecdcd4040d844f5cc8773e64dfd4a028692c41d6e2fb\"" Nov 4 23:58:01.400214 containerd[1636]: time="2025-11-04T23:58:01.400180133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 23:58:01.730382 containerd[1636]: time="2025-11-04T23:58:01.730218657Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:58:01.731631 containerd[1636]: time="2025-11-04T23:58:01.731476583Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 23:58:01.731631 containerd[1636]: time="2025-11-04T23:58:01.731571396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 4 23:58:01.731980 kubelet[2839]: E1104 23:58:01.731745 2839 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:58:01.731980 kubelet[2839]: E1104 23:58:01.731810 2839 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:58:01.732388 kubelet[2839]: E1104 23:58:01.731983 2839 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt84z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lif
ecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7k7w8_calico-system(9096f1c3-7da9-48d9-beff-7b6f2057f511): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 23:58:01.734508 containerd[1636]: time="2025-11-04T23:58:01.734456983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 23:58:02.087074 containerd[1636]: time="2025-11-04T23:58:02.086986621Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:58:02.088479 containerd[1636]: time="2025-11-04T23:58:02.088425947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 23:58:02.088531 containerd[1636]: time="2025-11-04T23:58:02.088509979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 4 23:58:02.088752 kubelet[2839]: E1104 
23:58:02.088686 2839 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:58:02.088816 kubelet[2839]: E1104 23:58:02.088773 2839 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:58:02.089008 kubelet[2839]: E1104 23:58:02.088960 2839 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt84z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7k7w8_calico-system(9096f1c3-7da9-48d9-beff-7b6f2057f511): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 23:58:02.090266 kubelet[2839]: E1104 23:58:02.090195 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7k7w8" podUID="9096f1c3-7da9-48d9-beff-7b6f2057f511" Nov 4 23:58:02.166454 kubelet[2839]: E1104 23:58:02.166393 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-7k7w8" podUID="9096f1c3-7da9-48d9-beff-7b6f2057f511" Nov 4 23:58:02.725144 systemd-networkd[1533]: cali0441f7484f4: Gained IPv6LL Nov 4 23:58:02.968406 systemd[1]: Started sshd@9-10.0.0.112:22-10.0.0.1:50704.service - OpenSSH per-connection server daemon (10.0.0.1:50704). Nov 4 23:58:02.990847 containerd[1636]: time="2025-11-04T23:58:02.990694268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78cc59d946-ppm4m,Uid:adf22170-0a60-4bf0-be14-045d1e27faa2,Namespace:calico-system,Attempt:0,}" Nov 4 23:58:03.036121 sshd[4729]: Accepted publickey for core from 10.0.0.1 port 50704 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:58:03.037979 sshd-session[4729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:58:03.042672 systemd-logind[1614]: New session 10 of user core. Nov 4 23:58:03.052185 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 4 23:58:03.172979 kubelet[2839]: E1104 23:58:03.171196 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-7k7w8" podUID="9096f1c3-7da9-48d9-beff-7b6f2057f511" Nov 4 23:58:03.313714 sshd[4732]: Connection closed by 10.0.0.1 port 50704 Nov 4 23:58:03.314366 sshd-session[4729]: pam_unix(sshd:session): session closed for user core Nov 4 23:58:03.319472 systemd[1]: sshd@9-10.0.0.112:22-10.0.0.1:50704.service: Deactivated successfully. Nov 4 23:58:03.322705 systemd[1]: session-10.scope: Deactivated successfully. Nov 4 23:58:03.327258 systemd-logind[1614]: Session 10 logged out. Waiting for processes to exit. Nov 4 23:58:03.328200 systemd-logind[1614]: Removed session 10. Nov 4 23:58:03.339589 systemd-networkd[1533]: cali9b7c6b7ac45: Link UP Nov 4 23:58:03.340536 systemd-networkd[1533]: cali9b7c6b7ac45: Gained carrier Nov 4 23:58:03.356151 containerd[1636]: 2025-11-04 23:58:03.146 [INFO][4741] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--78cc59d946--ppm4m-eth0 calico-kube-controllers-78cc59d946- calico-system adf22170-0a60-4bf0-be14-045d1e27faa2 856 0 2025-11-04 23:57:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78cc59d946 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-78cc59d946-ppm4m eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9b7c6b7ac45 [] [] }} ContainerID="b2659833773cad749f51709b1f60787cd124689a65bf022fb292c1f5292c4fe7" Namespace="calico-system" Pod="calico-kube-controllers-78cc59d946-ppm4m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78cc59d946--ppm4m-" Nov 4 23:58:03.356151 containerd[1636]: 2025-11-04 23:58:03.146 [INFO][4741] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="b2659833773cad749f51709b1f60787cd124689a65bf022fb292c1f5292c4fe7" Namespace="calico-system" Pod="calico-kube-controllers-78cc59d946-ppm4m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78cc59d946--ppm4m-eth0" Nov 4 23:58:03.356151 containerd[1636]: 2025-11-04 23:58:03.187 [INFO][4756] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2659833773cad749f51709b1f60787cd124689a65bf022fb292c1f5292c4fe7" HandleID="k8s-pod-network.b2659833773cad749f51709b1f60787cd124689a65bf022fb292c1f5292c4fe7" Workload="localhost-k8s-calico--kube--controllers--78cc59d946--ppm4m-eth0" Nov 4 23:58:03.356676 containerd[1636]: 2025-11-04 23:58:03.187 [INFO][4756] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b2659833773cad749f51709b1f60787cd124689a65bf022fb292c1f5292c4fe7" HandleID="k8s-pod-network.b2659833773cad749f51709b1f60787cd124689a65bf022fb292c1f5292c4fe7" Workload="localhost-k8s-calico--kube--controllers--78cc59d946--ppm4m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f5a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-78cc59d946-ppm4m", "timestamp":"2025-11-04 23:58:03.187376517 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:58:03.356676 containerd[1636]: 2025-11-04 23:58:03.187 [INFO][4756] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:58:03.356676 containerd[1636]: 2025-11-04 23:58:03.188 [INFO][4756] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:58:03.356676 containerd[1636]: 2025-11-04 23:58:03.188 [INFO][4756] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:58:03.356676 containerd[1636]: 2025-11-04 23:58:03.305 [INFO][4756] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b2659833773cad749f51709b1f60787cd124689a65bf022fb292c1f5292c4fe7" host="localhost" Nov 4 23:58:03.356676 containerd[1636]: 2025-11-04 23:58:03.313 [INFO][4756] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:58:03.356676 containerd[1636]: 2025-11-04 23:58:03.318 [INFO][4756] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:58:03.356676 containerd[1636]: 2025-11-04 23:58:03.320 [INFO][4756] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:58:03.356676 containerd[1636]: 2025-11-04 23:58:03.322 [INFO][4756] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:58:03.356676 containerd[1636]: 2025-11-04 23:58:03.322 [INFO][4756] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b2659833773cad749f51709b1f60787cd124689a65bf022fb292c1f5292c4fe7" host="localhost" Nov 4 23:58:03.356911 containerd[1636]: 2025-11-04 23:58:03.323 [INFO][4756] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b2659833773cad749f51709b1f60787cd124689a65bf022fb292c1f5292c4fe7 Nov 4 23:58:03.356911 containerd[1636]: 2025-11-04 23:58:03.326 [INFO][4756] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b2659833773cad749f51709b1f60787cd124689a65bf022fb292c1f5292c4fe7" host="localhost" Nov 4 23:58:03.356911 containerd[1636]: 2025-11-04 23:58:03.333 [INFO][4756] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.b2659833773cad749f51709b1f60787cd124689a65bf022fb292c1f5292c4fe7" host="localhost" Nov 4 23:58:03.356911 containerd[1636]: 2025-11-04 23:58:03.333 [INFO][4756] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b2659833773cad749f51709b1f60787cd124689a65bf022fb292c1f5292c4fe7" host="localhost" Nov 4 23:58:03.356911 containerd[1636]: 2025-11-04 23:58:03.333 [INFO][4756] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:58:03.356911 containerd[1636]: 2025-11-04 23:58:03.333 [INFO][4756] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b2659833773cad749f51709b1f60787cd124689a65bf022fb292c1f5292c4fe7" HandleID="k8s-pod-network.b2659833773cad749f51709b1f60787cd124689a65bf022fb292c1f5292c4fe7" Workload="localhost-k8s-calico--kube--controllers--78cc59d946--ppm4m-eth0" Nov 4 23:58:03.357137 containerd[1636]: 2025-11-04 23:58:03.336 [INFO][4741] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b2659833773cad749f51709b1f60787cd124689a65bf022fb292c1f5292c4fe7" Namespace="calico-system" Pod="calico-kube-controllers-78cc59d946-ppm4m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78cc59d946--ppm4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78cc59d946--ppm4m-eth0", GenerateName:"calico-kube-controllers-78cc59d946-", Namespace:"calico-system", SelfLink:"", UID:"adf22170-0a60-4bf0-be14-045d1e27faa2", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 57, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78cc59d946", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-78cc59d946-ppm4m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9b7c6b7ac45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:58:03.357199 containerd[1636]: 2025-11-04 23:58:03.336 [INFO][4741] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b2659833773cad749f51709b1f60787cd124689a65bf022fb292c1f5292c4fe7" Namespace="calico-system" Pod="calico-kube-controllers-78cc59d946-ppm4m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78cc59d946--ppm4m-eth0" Nov 4 23:58:03.357199 containerd[1636]: 2025-11-04 23:58:03.336 [INFO][4741] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b7c6b7ac45 ContainerID="b2659833773cad749f51709b1f60787cd124689a65bf022fb292c1f5292c4fe7" Namespace="calico-system" Pod="calico-kube-controllers-78cc59d946-ppm4m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78cc59d946--ppm4m-eth0" Nov 4 23:58:03.357199 containerd[1636]: 2025-11-04 23:58:03.340 [INFO][4741] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2659833773cad749f51709b1f60787cd124689a65bf022fb292c1f5292c4fe7" Namespace="calico-system" Pod="calico-kube-controllers-78cc59d946-ppm4m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78cc59d946--ppm4m-eth0" Nov 4 23:58:03.357265 containerd[1636]: 2025-11-04 
23:58:03.341 [INFO][4741] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b2659833773cad749f51709b1f60787cd124689a65bf022fb292c1f5292c4fe7" Namespace="calico-system" Pod="calico-kube-controllers-78cc59d946-ppm4m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78cc59d946--ppm4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78cc59d946--ppm4m-eth0", GenerateName:"calico-kube-controllers-78cc59d946-", Namespace:"calico-system", SelfLink:"", UID:"adf22170-0a60-4bf0-be14-045d1e27faa2", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 57, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78cc59d946", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2659833773cad749f51709b1f60787cd124689a65bf022fb292c1f5292c4fe7", Pod:"calico-kube-controllers-78cc59d946-ppm4m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9b7c6b7ac45", MAC:"6a:55:70:bc:c8:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:58:03.357320 containerd[1636]: 2025-11-04 
23:58:03.351 [INFO][4741] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b2659833773cad749f51709b1f60787cd124689a65bf022fb292c1f5292c4fe7" Namespace="calico-system" Pod="calico-kube-controllers-78cc59d946-ppm4m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78cc59d946--ppm4m-eth0" Nov 4 23:58:03.410760 containerd[1636]: time="2025-11-04T23:58:03.410697318Z" level=info msg="connecting to shim b2659833773cad749f51709b1f60787cd124689a65bf022fb292c1f5292c4fe7" address="unix:///run/containerd/s/574ddb0bd6685c4f47b583fb2e309f59708032a00d9849bdbae7d2aa35e52f67" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:58:03.444310 systemd[1]: Started cri-containerd-b2659833773cad749f51709b1f60787cd124689a65bf022fb292c1f5292c4fe7.scope - libcontainer container b2659833773cad749f51709b1f60787cd124689a65bf022fb292c1f5292c4fe7. Nov 4 23:58:03.460003 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:58:03.492133 containerd[1636]: time="2025-11-04T23:58:03.492084913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78cc59d946-ppm4m,Uid:adf22170-0a60-4bf0-be14-045d1e27faa2,Namespace:calico-system,Attempt:0,} returns sandbox id \"b2659833773cad749f51709b1f60787cd124689a65bf022fb292c1f5292c4fe7\"" Nov 4 23:58:03.494178 containerd[1636]: time="2025-11-04T23:58:03.494138901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 23:58:03.849511 containerd[1636]: time="2025-11-04T23:58:03.849425772Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:58:03.850814 containerd[1636]: time="2025-11-04T23:58:03.850759532Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 23:58:03.850968 containerd[1636]: time="2025-11-04T23:58:03.850891466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 4 23:58:03.851155 kubelet[2839]: E1104 23:58:03.851103 2839 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:58:03.851241 kubelet[2839]: E1104 23:58:03.851166 2839 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:58:03.851395 kubelet[2839]: E1104 23:58:03.851336 2839 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lxmdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78cc59d946-ppm4m_calico-system(adf22170-0a60-4bf0-be14-045d1e27faa2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 23:58:03.852584 kubelet[2839]: E1104 23:58:03.852528 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78cc59d946-ppm4m" podUID="adf22170-0a60-4bf0-be14-045d1e27faa2" Nov 4 23:58:03.990576 containerd[1636]: time="2025-11-04T23:58:03.990499262Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5d7b6c5897-d22xf,Uid:b35658f8-29c0-438d-8549-d61428e8d39f,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:58:03.991112 containerd[1636]: time="2025-11-04T23:58:03.990998885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-85l6w,Uid:c5d1d235-24ef-43b3-abad-7fa9db4b88ef,Namespace:calico-system,Attempt:0,}" Nov 4 23:58:03.991270 containerd[1636]: time="2025-11-04T23:58:03.991149164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7b6c5897-jq7ws,Uid:0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:58:04.171554 kubelet[2839]: E1104 23:58:04.171408 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78cc59d946-ppm4m" podUID="adf22170-0a60-4bf0-be14-045d1e27faa2" Nov 4 23:58:04.389199 systemd-networkd[1533]: cali9b7c6b7ac45: Gained IPv6LL Nov 4 23:58:04.473576 systemd-networkd[1533]: calicfbd0749cc8: Link UP Nov 4 23:58:04.474660 systemd-networkd[1533]: calicfbd0749cc8: Gained carrier Nov 4 23:58:04.709842 containerd[1636]: 2025-11-04 23:58:04.060 [INFO][4823] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d7b6c5897--d22xf-eth0 calico-apiserver-5d7b6c5897- calico-apiserver b35658f8-29c0-438d-8549-d61428e8d39f 850 0 2025-11-04 23:57:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:5d7b6c5897 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d7b6c5897-d22xf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicfbd0749cc8 [] [] }} ContainerID="b7a44b10d6e5d88e8682bb3b8284544a425f71e2b4b4acb1a089e7bdb359df69" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b6c5897-d22xf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7b6c5897--d22xf-" Nov 4 23:58:04.709842 containerd[1636]: 2025-11-04 23:58:04.060 [INFO][4823] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b7a44b10d6e5d88e8682bb3b8284544a425f71e2b4b4acb1a089e7bdb359df69" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b6c5897-d22xf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7b6c5897--d22xf-eth0" Nov 4 23:58:04.709842 containerd[1636]: 2025-11-04 23:58:04.102 [INFO][4875] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7a44b10d6e5d88e8682bb3b8284544a425f71e2b4b4acb1a089e7bdb359df69" HandleID="k8s-pod-network.b7a44b10d6e5d88e8682bb3b8284544a425f71e2b4b4acb1a089e7bdb359df69" Workload="localhost-k8s-calico--apiserver--5d7b6c5897--d22xf-eth0" Nov 4 23:58:04.711014 containerd[1636]: 2025-11-04 23:58:04.102 [INFO][4875] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b7a44b10d6e5d88e8682bb3b8284544a425f71e2b4b4acb1a089e7bdb359df69" HandleID="k8s-pod-network.b7a44b10d6e5d88e8682bb3b8284544a425f71e2b4b4acb1a089e7bdb359df69" Workload="localhost-k8s-calico--apiserver--5d7b6c5897--d22xf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ee20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5d7b6c5897-d22xf", "timestamp":"2025-11-04 23:58:04.102148949 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:58:04.711014 containerd[1636]: 2025-11-04 23:58:04.102 [INFO][4875] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:58:04.711014 containerd[1636]: 2025-11-04 23:58:04.102 [INFO][4875] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 23:58:04.711014 containerd[1636]: 2025-11-04 23:58:04.102 [INFO][4875] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:58:04.711014 containerd[1636]: 2025-11-04 23:58:04.109 [INFO][4875] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b7a44b10d6e5d88e8682bb3b8284544a425f71e2b4b4acb1a089e7bdb359df69" host="localhost" Nov 4 23:58:04.711014 containerd[1636]: 2025-11-04 23:58:04.113 [INFO][4875] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:58:04.711014 containerd[1636]: 2025-11-04 23:58:04.118 [INFO][4875] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:58:04.711014 containerd[1636]: 2025-11-04 23:58:04.122 [INFO][4875] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:58:04.711014 containerd[1636]: 2025-11-04 23:58:04.124 [INFO][4875] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:58:04.711014 containerd[1636]: 2025-11-04 23:58:04.125 [INFO][4875] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b7a44b10d6e5d88e8682bb3b8284544a425f71e2b4b4acb1a089e7bdb359df69" host="localhost" Nov 4 23:58:04.711302 containerd[1636]: 2025-11-04 23:58:04.126 [INFO][4875] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b7a44b10d6e5d88e8682bb3b8284544a425f71e2b4b4acb1a089e7bdb359df69 Nov 4 23:58:04.711302 containerd[1636]: 2025-11-04 23:58:04.209 
[INFO][4875] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b7a44b10d6e5d88e8682bb3b8284544a425f71e2b4b4acb1a089e7bdb359df69" host="localhost" Nov 4 23:58:04.711302 containerd[1636]: 2025-11-04 23:58:04.465 [INFO][4875] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.b7a44b10d6e5d88e8682bb3b8284544a425f71e2b4b4acb1a089e7bdb359df69" host="localhost" Nov 4 23:58:04.711302 containerd[1636]: 2025-11-04 23:58:04.465 [INFO][4875] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b7a44b10d6e5d88e8682bb3b8284544a425f71e2b4b4acb1a089e7bdb359df69" host="localhost" Nov 4 23:58:04.711302 containerd[1636]: 2025-11-04 23:58:04.465 [INFO][4875] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:58:04.711302 containerd[1636]: 2025-11-04 23:58:04.465 [INFO][4875] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b7a44b10d6e5d88e8682bb3b8284544a425f71e2b4b4acb1a089e7bdb359df69" HandleID="k8s-pod-network.b7a44b10d6e5d88e8682bb3b8284544a425f71e2b4b4acb1a089e7bdb359df69" Workload="localhost-k8s-calico--apiserver--5d7b6c5897--d22xf-eth0" Nov 4 23:58:04.711468 containerd[1636]: 2025-11-04 23:58:04.469 [INFO][4823] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b7a44b10d6e5d88e8682bb3b8284544a425f71e2b4b4acb1a089e7bdb359df69" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b6c5897-d22xf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7b6c5897--d22xf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d7b6c5897--d22xf-eth0", GenerateName:"calico-apiserver-5d7b6c5897-", Namespace:"calico-apiserver", SelfLink:"", UID:"b35658f8-29c0-438d-8549-d61428e8d39f", ResourceVersion:"850", Generation:0, 
CreationTimestamp:time.Date(2025, time.November, 4, 23, 57, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7b6c5897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d7b6c5897-d22xf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicfbd0749cc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:58:04.711543 containerd[1636]: 2025-11-04 23:58:04.469 [INFO][4823] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b7a44b10d6e5d88e8682bb3b8284544a425f71e2b4b4acb1a089e7bdb359df69" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b6c5897-d22xf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7b6c5897--d22xf-eth0" Nov 4 23:58:04.711543 containerd[1636]: 2025-11-04 23:58:04.469 [INFO][4823] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicfbd0749cc8 ContainerID="b7a44b10d6e5d88e8682bb3b8284544a425f71e2b4b4acb1a089e7bdb359df69" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b6c5897-d22xf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7b6c5897--d22xf-eth0" Nov 4 23:58:04.711543 containerd[1636]: 2025-11-04 23:58:04.474 [INFO][4823] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="b7a44b10d6e5d88e8682bb3b8284544a425f71e2b4b4acb1a089e7bdb359df69" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b6c5897-d22xf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7b6c5897--d22xf-eth0" Nov 4 23:58:04.711642 containerd[1636]: 2025-11-04 23:58:04.474 [INFO][4823] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b7a44b10d6e5d88e8682bb3b8284544a425f71e2b4b4acb1a089e7bdb359df69" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b6c5897-d22xf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7b6c5897--d22xf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d7b6c5897--d22xf-eth0", GenerateName:"calico-apiserver-5d7b6c5897-", Namespace:"calico-apiserver", SelfLink:"", UID:"b35658f8-29c0-438d-8549-d61428e8d39f", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 57, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7b6c5897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7a44b10d6e5d88e8682bb3b8284544a425f71e2b4b4acb1a089e7bdb359df69", Pod:"calico-apiserver-5d7b6c5897-d22xf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicfbd0749cc8", MAC:"72:0d:91:59:9e:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:58:04.711723 containerd[1636]: 2025-11-04 23:58:04.706 [INFO][4823] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b7a44b10d6e5d88e8682bb3b8284544a425f71e2b4b4acb1a089e7bdb359df69" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b6c5897-d22xf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7b6c5897--d22xf-eth0" Nov 4 23:58:04.747160 containerd[1636]: time="2025-11-04T23:58:04.746388095Z" level=info msg="connecting to shim b7a44b10d6e5d88e8682bb3b8284544a425f71e2b4b4acb1a089e7bdb359df69" address="unix:///run/containerd/s/8f9b95815b5efc44b81998ebff7f8327bb1da86cf91d6461f1d9367539a2544d" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:58:04.750874 systemd-networkd[1533]: cali02972c8e508: Link UP Nov 4 23:58:04.752035 systemd-networkd[1533]: cali02972c8e508: Gained carrier Nov 4 23:58:04.769982 containerd[1636]: 2025-11-04 23:58:04.059 [INFO][4833] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d7b6c5897--jq7ws-eth0 calico-apiserver-5d7b6c5897- calico-apiserver 0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a 855 0 2025-11-04 23:57:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d7b6c5897 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d7b6c5897-jq7ws eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali02972c8e508 [] [] }} ContainerID="589d3da82bbcecc2b135069d1a3e0ee5b0f3bbc0fe663c1bcc5ee58c06f10a64" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b6c5897-jq7ws" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7b6c5897--jq7ws-" Nov 4 23:58:04.769982 containerd[1636]: 2025-11-04 23:58:04.059 [INFO][4833] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="589d3da82bbcecc2b135069d1a3e0ee5b0f3bbc0fe663c1bcc5ee58c06f10a64" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b6c5897-jq7ws" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7b6c5897--jq7ws-eth0" Nov 4 23:58:04.769982 containerd[1636]: 2025-11-04 23:58:04.106 [INFO][4879] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="589d3da82bbcecc2b135069d1a3e0ee5b0f3bbc0fe663c1bcc5ee58c06f10a64" HandleID="k8s-pod-network.589d3da82bbcecc2b135069d1a3e0ee5b0f3bbc0fe663c1bcc5ee58c06f10a64" Workload="localhost-k8s-calico--apiserver--5d7b6c5897--jq7ws-eth0" Nov 4 23:58:04.770319 containerd[1636]: 2025-11-04 23:58:04.106 [INFO][4879] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="589d3da82bbcecc2b135069d1a3e0ee5b0f3bbc0fe663c1bcc5ee58c06f10a64" HandleID="k8s-pod-network.589d3da82bbcecc2b135069d1a3e0ee5b0f3bbc0fe663c1bcc5ee58c06f10a64" Workload="localhost-k8s-calico--apiserver--5d7b6c5897--jq7ws-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df000), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5d7b6c5897-jq7ws", "timestamp":"2025-11-04 23:58:04.106488907 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:58:04.770319 containerd[1636]: 2025-11-04 23:58:04.106 [INFO][4879] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:58:04.770319 containerd[1636]: 2025-11-04 23:58:04.465 [INFO][4879] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:58:04.770319 containerd[1636]: 2025-11-04 23:58:04.465 [INFO][4879] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:58:04.770319 containerd[1636]: 2025-11-04 23:58:04.706 [INFO][4879] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.589d3da82bbcecc2b135069d1a3e0ee5b0f3bbc0fe663c1bcc5ee58c06f10a64" host="localhost" Nov 4 23:58:04.770319 containerd[1636]: 2025-11-04 23:58:04.713 [INFO][4879] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:58:04.770319 containerd[1636]: 2025-11-04 23:58:04.718 [INFO][4879] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:58:04.770319 containerd[1636]: 2025-11-04 23:58:04.720 [INFO][4879] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:58:04.770319 containerd[1636]: 2025-11-04 23:58:04.723 [INFO][4879] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:58:04.770319 containerd[1636]: 2025-11-04 23:58:04.723 [INFO][4879] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.589d3da82bbcecc2b135069d1a3e0ee5b0f3bbc0fe663c1bcc5ee58c06f10a64" host="localhost" Nov 4 23:58:04.770568 containerd[1636]: 2025-11-04 23:58:04.724 [INFO][4879] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.589d3da82bbcecc2b135069d1a3e0ee5b0f3bbc0fe663c1bcc5ee58c06f10a64 Nov 4 23:58:04.770568 containerd[1636]: 2025-11-04 23:58:04.728 [INFO][4879] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.589d3da82bbcecc2b135069d1a3e0ee5b0f3bbc0fe663c1bcc5ee58c06f10a64" host="localhost" Nov 4 23:58:04.770568 containerd[1636]: 2025-11-04 23:58:04.738 [INFO][4879] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.589d3da82bbcecc2b135069d1a3e0ee5b0f3bbc0fe663c1bcc5ee58c06f10a64" host="localhost" Nov 4 23:58:04.770568 containerd[1636]: 2025-11-04 23:58:04.738 [INFO][4879] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.589d3da82bbcecc2b135069d1a3e0ee5b0f3bbc0fe663c1bcc5ee58c06f10a64" host="localhost" Nov 4 23:58:04.770568 containerd[1636]: 2025-11-04 23:58:04.738 [INFO][4879] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:58:04.770568 containerd[1636]: 2025-11-04 23:58:04.738 [INFO][4879] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="589d3da82bbcecc2b135069d1a3e0ee5b0f3bbc0fe663c1bcc5ee58c06f10a64" HandleID="k8s-pod-network.589d3da82bbcecc2b135069d1a3e0ee5b0f3bbc0fe663c1bcc5ee58c06f10a64" Workload="localhost-k8s-calico--apiserver--5d7b6c5897--jq7ws-eth0" Nov 4 23:58:04.770687 containerd[1636]: 2025-11-04 23:58:04.743 [INFO][4833] cni-plugin/k8s.go 418: Populated endpoint ContainerID="589d3da82bbcecc2b135069d1a3e0ee5b0f3bbc0fe663c1bcc5ee58c06f10a64" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b6c5897-jq7ws" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7b6c5897--jq7ws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d7b6c5897--jq7ws-eth0", GenerateName:"calico-apiserver-5d7b6c5897-", Namespace:"calico-apiserver", SelfLink:"", UID:"0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 57, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7b6c5897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d7b6c5897-jq7ws", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali02972c8e508", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:58:04.770745 containerd[1636]: 2025-11-04 23:58:04.744 [INFO][4833] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="589d3da82bbcecc2b135069d1a3e0ee5b0f3bbc0fe663c1bcc5ee58c06f10a64" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b6c5897-jq7ws" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7b6c5897--jq7ws-eth0" Nov 4 23:58:04.770745 containerd[1636]: 2025-11-04 23:58:04.744 [INFO][4833] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02972c8e508 ContainerID="589d3da82bbcecc2b135069d1a3e0ee5b0f3bbc0fe663c1bcc5ee58c06f10a64" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b6c5897-jq7ws" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7b6c5897--jq7ws-eth0" Nov 4 23:58:04.770745 containerd[1636]: 2025-11-04 23:58:04.753 [INFO][4833] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="589d3da82bbcecc2b135069d1a3e0ee5b0f3bbc0fe663c1bcc5ee58c06f10a64" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b6c5897-jq7ws" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7b6c5897--jq7ws-eth0" Nov 4 23:58:04.770823 containerd[1636]: 2025-11-04 23:58:04.754 [INFO][4833] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="589d3da82bbcecc2b135069d1a3e0ee5b0f3bbc0fe663c1bcc5ee58c06f10a64" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b6c5897-jq7ws" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7b6c5897--jq7ws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d7b6c5897--jq7ws-eth0", GenerateName:"calico-apiserver-5d7b6c5897-", Namespace:"calico-apiserver", SelfLink:"", UID:"0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 57, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7b6c5897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"589d3da82bbcecc2b135069d1a3e0ee5b0f3bbc0fe663c1bcc5ee58c06f10a64", Pod:"calico-apiserver-5d7b6c5897-jq7ws", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali02972c8e508", MAC:"a2:a7:b6:8a:51:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:58:04.770887 containerd[1636]: 2025-11-04 23:58:04.765 [INFO][4833] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="589d3da82bbcecc2b135069d1a3e0ee5b0f3bbc0fe663c1bcc5ee58c06f10a64" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b6c5897-jq7ws" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7b6c5897--jq7ws-eth0" Nov 4 23:58:04.797230 systemd[1]: Started cri-containerd-b7a44b10d6e5d88e8682bb3b8284544a425f71e2b4b4acb1a089e7bdb359df69.scope - libcontainer container b7a44b10d6e5d88e8682bb3b8284544a425f71e2b4b4acb1a089e7bdb359df69. Nov 4 23:58:04.809981 containerd[1636]: time="2025-11-04T23:58:04.809838430Z" level=info msg="connecting to shim 589d3da82bbcecc2b135069d1a3e0ee5b0f3bbc0fe663c1bcc5ee58c06f10a64" address="unix:///run/containerd/s/b6bb328d511a4dc94089970bfb675a274a7d4681b92144b3d69b1822b5f4a58a" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:58:04.822717 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:58:04.850434 systemd[1]: Started cri-containerd-589d3da82bbcecc2b135069d1a3e0ee5b0f3bbc0fe663c1bcc5ee58c06f10a64.scope - libcontainer container 589d3da82bbcecc2b135069d1a3e0ee5b0f3bbc0fe663c1bcc5ee58c06f10a64. 
Nov 4 23:58:04.862069 systemd-networkd[1533]: cali1d006cc98b6: Link UP Nov 4 23:58:04.863673 systemd-networkd[1533]: cali1d006cc98b6: Gained carrier Nov 4 23:58:04.877109 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:58:04.877355 containerd[1636]: time="2025-11-04T23:58:04.877220748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7b6c5897-d22xf,Uid:b35658f8-29c0-438d-8549-d61428e8d39f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b7a44b10d6e5d88e8682bb3b8284544a425f71e2b4b4acb1a089e7bdb359df69\"" Nov 4 23:58:04.883345 containerd[1636]: time="2025-11-04T23:58:04.883261031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:58:04.884992 containerd[1636]: 2025-11-04 23:58:04.062 [INFO][4852] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--85l6w-eth0 goldmane-666569f655- calico-system c5d1d235-24ef-43b3-abad-7fa9db4b88ef 857 0 2025-11-04 23:57:20 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-85l6w eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1d006cc98b6 [] [] }} ContainerID="94bc310780b6e22f05527069040d1f1227a96c2cb4629b7bfdcf5201682b13b7" Namespace="calico-system" Pod="goldmane-666569f655-85l6w" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--85l6w-" Nov 4 23:58:04.884992 containerd[1636]: 2025-11-04 23:58:04.063 [INFO][4852] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="94bc310780b6e22f05527069040d1f1227a96c2cb4629b7bfdcf5201682b13b7" Namespace="calico-system" Pod="goldmane-666569f655-85l6w" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--85l6w-eth0" Nov 4 23:58:04.884992 containerd[1636]: 2025-11-04 23:58:04.108 [INFO][4878] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94bc310780b6e22f05527069040d1f1227a96c2cb4629b7bfdcf5201682b13b7" HandleID="k8s-pod-network.94bc310780b6e22f05527069040d1f1227a96c2cb4629b7bfdcf5201682b13b7" Workload="localhost-k8s-goldmane--666569f655--85l6w-eth0" Nov 4 23:58:04.885170 containerd[1636]: 2025-11-04 23:58:04.108 [INFO][4878] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="94bc310780b6e22f05527069040d1f1227a96c2cb4629b7bfdcf5201682b13b7" HandleID="k8s-pod-network.94bc310780b6e22f05527069040d1f1227a96c2cb4629b7bfdcf5201682b13b7" Workload="localhost-k8s-goldmane--666569f655--85l6w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ab240), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-85l6w", "timestamp":"2025-11-04 23:58:04.108456627 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:58:04.885170 containerd[1636]: 2025-11-04 23:58:04.109 [INFO][4878] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:58:04.885170 containerd[1636]: 2025-11-04 23:58:04.739 [INFO][4878] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:58:04.885170 containerd[1636]: 2025-11-04 23:58:04.739 [INFO][4878] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:58:04.885170 containerd[1636]: 2025-11-04 23:58:04.809 [INFO][4878] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.94bc310780b6e22f05527069040d1f1227a96c2cb4629b7bfdcf5201682b13b7" host="localhost" Nov 4 23:58:04.885170 containerd[1636]: 2025-11-04 23:58:04.818 [INFO][4878] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:58:04.885170 containerd[1636]: 2025-11-04 23:58:04.826 [INFO][4878] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:58:04.885170 containerd[1636]: 2025-11-04 23:58:04.829 [INFO][4878] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:58:04.885170 containerd[1636]: 2025-11-04 23:58:04.832 [INFO][4878] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:58:04.885170 containerd[1636]: 2025-11-04 23:58:04.832 [INFO][4878] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.94bc310780b6e22f05527069040d1f1227a96c2cb4629b7bfdcf5201682b13b7" host="localhost" Nov 4 23:58:04.885434 containerd[1636]: 2025-11-04 23:58:04.834 [INFO][4878] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.94bc310780b6e22f05527069040d1f1227a96c2cb4629b7bfdcf5201682b13b7 Nov 4 23:58:04.885434 containerd[1636]: 2025-11-04 23:58:04.840 [INFO][4878] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.94bc310780b6e22f05527069040d1f1227a96c2cb4629b7bfdcf5201682b13b7" host="localhost" Nov 4 23:58:04.885434 containerd[1636]: 2025-11-04 23:58:04.849 [INFO][4878] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.94bc310780b6e22f05527069040d1f1227a96c2cb4629b7bfdcf5201682b13b7" host="localhost" Nov 4 23:58:04.885434 containerd[1636]: 2025-11-04 23:58:04.849 [INFO][4878] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.94bc310780b6e22f05527069040d1f1227a96c2cb4629b7bfdcf5201682b13b7" host="localhost" Nov 4 23:58:04.885434 containerd[1636]: 2025-11-04 23:58:04.849 [INFO][4878] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:58:04.885434 containerd[1636]: 2025-11-04 23:58:04.849 [INFO][4878] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="94bc310780b6e22f05527069040d1f1227a96c2cb4629b7bfdcf5201682b13b7" HandleID="k8s-pod-network.94bc310780b6e22f05527069040d1f1227a96c2cb4629b7bfdcf5201682b13b7" Workload="localhost-k8s-goldmane--666569f655--85l6w-eth0" Nov 4 23:58:04.885570 containerd[1636]: 2025-11-04 23:58:04.854 [INFO][4852] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94bc310780b6e22f05527069040d1f1227a96c2cb4629b7bfdcf5201682b13b7" Namespace="calico-system" Pod="goldmane-666569f655-85l6w" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--85l6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--85l6w-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c5d1d235-24ef-43b3-abad-7fa9db4b88ef", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 57, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-85l6w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1d006cc98b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:58:04.885570 containerd[1636]: 2025-11-04 23:58:04.855 [INFO][4852] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="94bc310780b6e22f05527069040d1f1227a96c2cb4629b7bfdcf5201682b13b7" Namespace="calico-system" Pod="goldmane-666569f655-85l6w" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--85l6w-eth0" Nov 4 23:58:04.885643 containerd[1636]: 2025-11-04 23:58:04.855 [INFO][4852] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d006cc98b6 ContainerID="94bc310780b6e22f05527069040d1f1227a96c2cb4629b7bfdcf5201682b13b7" Namespace="calico-system" Pod="goldmane-666569f655-85l6w" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--85l6w-eth0" Nov 4 23:58:04.885643 containerd[1636]: 2025-11-04 23:58:04.865 [INFO][4852] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94bc310780b6e22f05527069040d1f1227a96c2cb4629b7bfdcf5201682b13b7" Namespace="calico-system" Pod="goldmane-666569f655-85l6w" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--85l6w-eth0" Nov 4 23:58:04.885681 containerd[1636]: 2025-11-04 23:58:04.867 [INFO][4852] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="94bc310780b6e22f05527069040d1f1227a96c2cb4629b7bfdcf5201682b13b7" Namespace="calico-system" Pod="goldmane-666569f655-85l6w" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--85l6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--85l6w-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c5d1d235-24ef-43b3-abad-7fa9db4b88ef", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 57, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"94bc310780b6e22f05527069040d1f1227a96c2cb4629b7bfdcf5201682b13b7", Pod:"goldmane-666569f655-85l6w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1d006cc98b6", MAC:"b6:55:5f:28:81:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:58:04.885728 containerd[1636]: 2025-11-04 23:58:04.878 [INFO][4852] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="94bc310780b6e22f05527069040d1f1227a96c2cb4629b7bfdcf5201682b13b7" Namespace="calico-system" Pod="goldmane-666569f655-85l6w" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--85l6w-eth0" Nov 4 23:58:04.917825 containerd[1636]: time="2025-11-04T23:58:04.917767530Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5d7b6c5897-jq7ws,Uid:0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"589d3da82bbcecc2b135069d1a3e0ee5b0f3bbc0fe663c1bcc5ee58c06f10a64\"" Nov 4 23:58:04.930840 containerd[1636]: time="2025-11-04T23:58:04.930714486Z" level=info msg="connecting to shim 94bc310780b6e22f05527069040d1f1227a96c2cb4629b7bfdcf5201682b13b7" address="unix:///run/containerd/s/f4a4aa9f8a4af19e8d3921f0df56415b91610ae897eae801c8aee57ec47dc039" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:58:04.967134 systemd[1]: Started cri-containerd-94bc310780b6e22f05527069040d1f1227a96c2cb4629b7bfdcf5201682b13b7.scope - libcontainer container 94bc310780b6e22f05527069040d1f1227a96c2cb4629b7bfdcf5201682b13b7. Nov 4 23:58:04.979747 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:58:04.989882 kubelet[2839]: E1104 23:58:04.989679 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:58:04.990241 containerd[1636]: time="2025-11-04T23:58:04.990214562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xvn9q,Uid:17151500-682c-4d03-96a4-c629a2968c8a,Namespace:kube-system,Attempt:0,}" Nov 4 23:58:05.019185 containerd[1636]: time="2025-11-04T23:58:05.019042447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-85l6w,Uid:c5d1d235-24ef-43b3-abad-7fa9db4b88ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"94bc310780b6e22f05527069040d1f1227a96c2cb4629b7bfdcf5201682b13b7\"" Nov 4 23:58:05.098162 systemd-networkd[1533]: cali2ac9734362b: Link UP Nov 4 23:58:05.099289 systemd-networkd[1533]: cali2ac9734362b: Gained carrier Nov 4 23:58:05.113560 containerd[1636]: 2025-11-04 23:58:05.031 [INFO][5062] cni-plugin/plugin.go 340: Calico CNI found existing 
endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--xvn9q-eth0 coredns-674b8bbfcf- kube-system 17151500-682c-4d03-96a4-c629a2968c8a 848 0 2025-11-04 23:57:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-xvn9q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2ac9734362b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1" Namespace="kube-system" Pod="coredns-674b8bbfcf-xvn9q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xvn9q-" Nov 4 23:58:05.113560 containerd[1636]: 2025-11-04 23:58:05.031 [INFO][5062] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1" Namespace="kube-system" Pod="coredns-674b8bbfcf-xvn9q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xvn9q-eth0" Nov 4 23:58:05.113560 containerd[1636]: 2025-11-04 23:58:05.058 [INFO][5082] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1" HandleID="k8s-pod-network.2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1" Workload="localhost-k8s-coredns--674b8bbfcf--xvn9q-eth0" Nov 4 23:58:05.113785 containerd[1636]: 2025-11-04 23:58:05.058 [INFO][5082] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1" HandleID="k8s-pod-network.2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1" Workload="localhost-k8s-coredns--674b8bbfcf--xvn9q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e440), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", 
"pod":"coredns-674b8bbfcf-xvn9q", "timestamp":"2025-11-04 23:58:05.058060035 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:58:05.113785 containerd[1636]: 2025-11-04 23:58:05.058 [INFO][5082] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:58:05.113785 containerd[1636]: 2025-11-04 23:58:05.058 [INFO][5082] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 23:58:05.113785 containerd[1636]: 2025-11-04 23:58:05.058 [INFO][5082] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:58:05.113785 containerd[1636]: 2025-11-04 23:58:05.065 [INFO][5082] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1" host="localhost" Nov 4 23:58:05.113785 containerd[1636]: 2025-11-04 23:58:05.070 [INFO][5082] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:58:05.113785 containerd[1636]: 2025-11-04 23:58:05.074 [INFO][5082] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:58:05.113785 containerd[1636]: 2025-11-04 23:58:05.076 [INFO][5082] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:58:05.113785 containerd[1636]: 2025-11-04 23:58:05.078 [INFO][5082] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:58:05.113785 containerd[1636]: 2025-11-04 23:58:05.078 [INFO][5082] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1" host="localhost" Nov 4 23:58:05.114071 containerd[1636]: 2025-11-04 23:58:05.080 [INFO][5082] ipam/ipam.go 1780: Creating new 
handle: k8s-pod-network.2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1 Nov 4 23:58:05.114071 containerd[1636]: 2025-11-04 23:58:05.083 [INFO][5082] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1" host="localhost" Nov 4 23:58:05.114071 containerd[1636]: 2025-11-04 23:58:05.089 [INFO][5082] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1" host="localhost" Nov 4 23:58:05.114071 containerd[1636]: 2025-11-04 23:58:05.089 [INFO][5082] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1" host="localhost" Nov 4 23:58:05.114071 containerd[1636]: 2025-11-04 23:58:05.089 [INFO][5082] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 23:58:05.114071 containerd[1636]: 2025-11-04 23:58:05.089 [INFO][5082] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1" HandleID="k8s-pod-network.2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1" Workload="localhost-k8s-coredns--674b8bbfcf--xvn9q-eth0" Nov 4 23:58:05.114281 containerd[1636]: 2025-11-04 23:58:05.094 [INFO][5062] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1" Namespace="kube-system" Pod="coredns-674b8bbfcf-xvn9q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xvn9q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xvn9q-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"17151500-682c-4d03-96a4-c629a2968c8a", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 57, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-xvn9q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ac9734362b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:58:05.114352 containerd[1636]: 2025-11-04 23:58:05.094 [INFO][5062] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1" Namespace="kube-system" Pod="coredns-674b8bbfcf-xvn9q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xvn9q-eth0" Nov 4 23:58:05.114352 containerd[1636]: 2025-11-04 23:58:05.094 [INFO][5062] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ac9734362b ContainerID="2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1" Namespace="kube-system" Pod="coredns-674b8bbfcf-xvn9q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xvn9q-eth0" Nov 4 23:58:05.114352 containerd[1636]: 2025-11-04 23:58:05.100 [INFO][5062] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1" Namespace="kube-system" Pod="coredns-674b8bbfcf-xvn9q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xvn9q-eth0" Nov 4 23:58:05.114482 containerd[1636]: 2025-11-04 23:58:05.100 [INFO][5062] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1" Namespace="kube-system" Pod="coredns-674b8bbfcf-xvn9q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xvn9q-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xvn9q-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"17151500-682c-4d03-96a4-c629a2968c8a", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 57, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1", Pod:"coredns-674b8bbfcf-xvn9q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ac9734362b", MAC:"9a:5b:b9:bb:8d:47", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:58:05.114482 containerd[1636]: 2025-11-04 23:58:05.110 [INFO][5062] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1" Namespace="kube-system" Pod="coredns-674b8bbfcf-xvn9q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xvn9q-eth0" Nov 4 23:58:05.135648 containerd[1636]: time="2025-11-04T23:58:05.135578749Z" level=info msg="connecting to shim 2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1" address="unix:///run/containerd/s/1f4d253d54d987cfd155adc73a10459edcd184fc944a303be391936e30ebecfc" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:58:05.163139 systemd[1]: Started cri-containerd-2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1.scope - libcontainer container 2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1. Nov 4 23:58:05.177576 kubelet[2839]: E1104 23:58:05.177523 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78cc59d946-ppm4m" podUID="adf22170-0a60-4bf0-be14-045d1e27faa2" Nov 4 23:58:05.183036 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:58:05.217911 containerd[1636]: time="2025-11-04T23:58:05.217857185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xvn9q,Uid:17151500-682c-4d03-96a4-c629a2968c8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1\"" Nov 4 23:58:05.219034 kubelet[2839]: E1104 23:58:05.218891 2839 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:58:05.228108 containerd[1636]: time="2025-11-04T23:58:05.228041780Z" level=info msg="CreateContainer within sandbox \"2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 23:58:05.243932 containerd[1636]: time="2025-11-04T23:58:05.243432718Z" level=info msg="Container c22e656b148f6b8695ecea083858bb7ec16b75e81909ab66b78cd6b51397962f: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:58:05.248574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1472457990.mount: Deactivated successfully. Nov 4 23:58:05.248767 containerd[1636]: time="2025-11-04T23:58:05.248657285Z" level=info msg="CreateContainer within sandbox \"2da91d19157284d376526d8a1555061342a16ee855a654709dff4c224e76f5d1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c22e656b148f6b8695ecea083858bb7ec16b75e81909ab66b78cd6b51397962f\"" Nov 4 23:58:05.249489 containerd[1636]: time="2025-11-04T23:58:05.249441735Z" level=info msg="StartContainer for \"c22e656b148f6b8695ecea083858bb7ec16b75e81909ab66b78cd6b51397962f\"" Nov 4 23:58:05.250759 containerd[1636]: time="2025-11-04T23:58:05.250725246Z" level=info msg="connecting to shim c22e656b148f6b8695ecea083858bb7ec16b75e81909ab66b78cd6b51397962f" address="unix:///run/containerd/s/1f4d253d54d987cfd155adc73a10459edcd184fc944a303be391936e30ebecfc" protocol=ttrpc version=3 Nov 4 23:58:05.278092 containerd[1636]: time="2025-11-04T23:58:05.277895930Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:58:05.279321 systemd[1]: Started cri-containerd-c22e656b148f6b8695ecea083858bb7ec16b75e81909ab66b78cd6b51397962f.scope - libcontainer container c22e656b148f6b8695ecea083858bb7ec16b75e81909ab66b78cd6b51397962f. 
Nov 4 23:58:05.282361 containerd[1636]: time="2025-11-04T23:58:05.281665657Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:58:05.282480 containerd[1636]: time="2025-11-04T23:58:05.281765459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:58:05.282595 kubelet[2839]: E1104 23:58:05.282557 2839 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:58:05.282640 kubelet[2839]: E1104 23:58:05.282608 2839 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:58:05.282882 kubelet[2839]: E1104 23:58:05.282819 2839 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tjpx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d7b6c5897-d22xf_calico-apiserver(b35658f8-29c0-438d-8549-d61428e8d39f): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:58:05.284118 containerd[1636]: time="2025-11-04T23:58:05.284090925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:58:05.284405 kubelet[2839]: E1104 23:58:05.284367 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d7b6c5897-d22xf" podUID="b35658f8-29c0-438d-8549-d61428e8d39f" Nov 4 23:58:05.322665 containerd[1636]: time="2025-11-04T23:58:05.322609654Z" level=info msg="StartContainer for \"c22e656b148f6b8695ecea083858bb7ec16b75e81909ab66b78cd6b51397962f\" returns successfully" Nov 4 23:58:05.639231 containerd[1636]: time="2025-11-04T23:58:05.639172427Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:58:05.683199 containerd[1636]: time="2025-11-04T23:58:05.683119325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:58:05.683199 containerd[1636]: time="2025-11-04T23:58:05.683166386Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:58:05.683473 kubelet[2839]: E1104 23:58:05.683436 2839 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:58:05.683541 kubelet[2839]: E1104 23:58:05.683489 2839 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:58:05.683829 kubelet[2839]: E1104 23:58:05.683748 2839 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nxbk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d7b6c5897-jq7ws_calico-apiserver(0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:58:05.683938 containerd[1636]: time="2025-11-04T23:58:05.683804895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 23:58:05.685034 kubelet[2839]: E1104 23:58:05.684991 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d7b6c5897-jq7ws" podUID="0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a" Nov 4 23:58:05.733138 systemd-networkd[1533]: calicfbd0749cc8: 
Gained IPv6LL Nov 4 23:58:05.797111 systemd-networkd[1533]: cali02972c8e508: Gained IPv6LL Nov 4 23:58:05.989938 kubelet[2839]: E1104 23:58:05.989789 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:58:05.990882 containerd[1636]: time="2025-11-04T23:58:05.990386975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x89z7,Uid:6fa3182c-166f-4b9d-a6cd-5926136039a6,Namespace:kube-system,Attempt:0,}" Nov 4 23:58:06.034072 containerd[1636]: time="2025-11-04T23:58:06.034020598Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:58:06.179509 kubelet[2839]: E1104 23:58:06.179451 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d7b6c5897-jq7ws" podUID="0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a" Nov 4 23:58:06.179711 kubelet[2839]: E1104 23:58:06.179543 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d7b6c5897-d22xf" podUID="b35658f8-29c0-438d-8549-d61428e8d39f" Nov 4 
23:58:06.179937 kubelet[2839]: E1104 23:58:06.179911 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:58:06.190243 containerd[1636]: time="2025-11-04T23:58:06.190177311Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 23:58:06.190534 containerd[1636]: time="2025-11-04T23:58:06.190226104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 4 23:58:06.192884 kubelet[2839]: E1104 23:58:06.190804 2839 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:58:06.192884 kubelet[2839]: E1104 23:58:06.190867 2839 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:58:06.193405 kubelet[2839]: E1104 23:58:06.193320 2839 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsxpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-85l6w_calico-system(c5d1d235-24ef-43b3-abad-7fa9db4b88ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 23:58:06.196603 kubelet[2839]: E1104 23:58:06.196530 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-85l6w" podUID="c5d1d235-24ef-43b3-abad-7fa9db4b88ef" Nov 4 23:58:06.311319 systemd-networkd[1533]: cali2ac9734362b: Gained IPv6LL Nov 4 23:58:06.566220 systemd-networkd[1533]: cali1d006cc98b6: Gained IPv6LL Nov 4 23:58:06.896145 kubelet[2839]: I1104 23:58:06.895414 2839 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/coredns-674b8bbfcf-xvn9q" podStartSLOduration=56.895358368 podStartE2EDuration="56.895358368s" podCreationTimestamp="2025-11-04 23:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:58:06.894794203 +0000 UTC m=+63.039283619" watchObservedRunningTime="2025-11-04 23:58:06.895358368 +0000 UTC m=+63.039847773" Nov 4 23:58:06.952142 systemd-networkd[1533]: calibeec0d3cab6: Link UP Nov 4 23:58:06.952704 systemd-networkd[1533]: calibeec0d3cab6: Gained carrier Nov 4 23:58:06.971863 containerd[1636]: 2025-11-04 23:58:06.703 [INFO][5183] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--x89z7-eth0 coredns-674b8bbfcf- kube-system 6fa3182c-166f-4b9d-a6cd-5926136039a6 852 0 2025-11-04 23:57:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-x89z7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibeec0d3cab6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217" Namespace="kube-system" Pod="coredns-674b8bbfcf-x89z7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--x89z7-" Nov 4 23:58:06.971863 containerd[1636]: 2025-11-04 23:58:06.703 [INFO][5183] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217" Namespace="kube-system" Pod="coredns-674b8bbfcf-x89z7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--x89z7-eth0" Nov 4 23:58:06.971863 containerd[1636]: 2025-11-04 23:58:06.909 [INFO][5197] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217" HandleID="k8s-pod-network.ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217" Workload="localhost-k8s-coredns--674b8bbfcf--x89z7-eth0" Nov 4 23:58:06.971863 containerd[1636]: 2025-11-04 23:58:06.910 [INFO][5197] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217" HandleID="k8s-pod-network.ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217" Workload="localhost-k8s-coredns--674b8bbfcf--x89z7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000596ac0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-x89z7", "timestamp":"2025-11-04 23:58:06.909961906 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:58:06.971863 containerd[1636]: 2025-11-04 23:58:06.911 [INFO][5197] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:58:06.971863 containerd[1636]: 2025-11-04 23:58:06.911 [INFO][5197] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:58:06.971863 containerd[1636]: 2025-11-04 23:58:06.911 [INFO][5197] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 23:58:06.971863 containerd[1636]: 2025-11-04 23:58:06.919 [INFO][5197] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217" host="localhost" Nov 4 23:58:06.971863 containerd[1636]: 2025-11-04 23:58:06.924 [INFO][5197] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 23:58:06.971863 containerd[1636]: 2025-11-04 23:58:06.929 [INFO][5197] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 23:58:06.971863 containerd[1636]: 2025-11-04 23:58:06.931 [INFO][5197] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 23:58:06.971863 containerd[1636]: 2025-11-04 23:58:06.933 [INFO][5197] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 23:58:06.971863 containerd[1636]: 2025-11-04 23:58:06.933 [INFO][5197] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217" host="localhost" Nov 4 23:58:06.971863 containerd[1636]: 2025-11-04 23:58:06.935 [INFO][5197] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217 Nov 4 23:58:06.971863 containerd[1636]: 2025-11-04 23:58:06.938 [INFO][5197] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217" host="localhost" Nov 4 23:58:06.971863 containerd[1636]: 2025-11-04 23:58:06.945 [INFO][5197] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217" host="localhost" Nov 4 23:58:06.971863 containerd[1636]: 2025-11-04 23:58:06.945 [INFO][5197] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217" host="localhost" Nov 4 23:58:06.971863 containerd[1636]: 2025-11-04 23:58:06.945 [INFO][5197] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:58:06.971863 containerd[1636]: 2025-11-04 23:58:06.945 [INFO][5197] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217" HandleID="k8s-pod-network.ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217" Workload="localhost-k8s-coredns--674b8bbfcf--x89z7-eth0" Nov 4 23:58:06.972643 containerd[1636]: 2025-11-04 23:58:06.948 [INFO][5183] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217" Namespace="kube-system" Pod="coredns-674b8bbfcf-x89z7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--x89z7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--x89z7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6fa3182c-166f-4b9d-a6cd-5926136039a6", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 57, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-x89z7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibeec0d3cab6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:58:06.972643 containerd[1636]: 2025-11-04 23:58:06.949 [INFO][5183] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217" Namespace="kube-system" Pod="coredns-674b8bbfcf-x89z7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--x89z7-eth0" Nov 4 23:58:06.972643 containerd[1636]: 2025-11-04 23:58:06.949 [INFO][5183] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibeec0d3cab6 ContainerID="ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217" Namespace="kube-system" Pod="coredns-674b8bbfcf-x89z7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--x89z7-eth0" Nov 4 23:58:06.972643 containerd[1636]: 2025-11-04 23:58:06.953 [INFO][5183] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217" Namespace="kube-system" Pod="coredns-674b8bbfcf-x89z7" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--x89z7-eth0" Nov 4 23:58:06.972643 containerd[1636]: 2025-11-04 23:58:06.953 [INFO][5183] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217" Namespace="kube-system" Pod="coredns-674b8bbfcf-x89z7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--x89z7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--x89z7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6fa3182c-166f-4b9d-a6cd-5926136039a6", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 57, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217", Pod:"coredns-674b8bbfcf-x89z7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibeec0d3cab6", MAC:"a6:90:aa:64:41:7e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:58:06.972643 containerd[1636]: 2025-11-04 23:58:06.968 [INFO][5183] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217" Namespace="kube-system" Pod="coredns-674b8bbfcf-x89z7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--x89z7-eth0" Nov 4 23:58:07.008243 containerd[1636]: time="2025-11-04T23:58:07.008184213Z" level=info msg="connecting to shim ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217" address="unix:///run/containerd/s/529f0b80f73f86ba4f5ba0356ba237316bbc030699683960a2cb8bf5db10ffa9" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:58:07.066147 systemd[1]: Started cri-containerd-ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217.scope - libcontainer container ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217. 
Nov 4 23:58:07.081439 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 23:58:07.116146 containerd[1636]: time="2025-11-04T23:58:07.116100919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x89z7,Uid:6fa3182c-166f-4b9d-a6cd-5926136039a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217\"" Nov 4 23:58:07.116915 kubelet[2839]: E1104 23:58:07.116881 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:58:07.121165 containerd[1636]: time="2025-11-04T23:58:07.121125723Z" level=info msg="CreateContainer within sandbox \"ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 23:58:07.134664 containerd[1636]: time="2025-11-04T23:58:07.134542525Z" level=info msg="Container f8d9c040dde1201bc1c9d792f6aecfa01716fd144d14d88ce17e9d16cd87fb50: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:58:07.142012 containerd[1636]: time="2025-11-04T23:58:07.141888983Z" level=info msg="CreateContainer within sandbox \"ca8bdf6e7c9d4412cc0162aedff215307b4cf875a6e6e40406814bef5e6b3217\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f8d9c040dde1201bc1c9d792f6aecfa01716fd144d14d88ce17e9d16cd87fb50\"" Nov 4 23:58:07.144063 containerd[1636]: time="2025-11-04T23:58:07.142779104Z" level=info msg="StartContainer for \"f8d9c040dde1201bc1c9d792f6aecfa01716fd144d14d88ce17e9d16cd87fb50\"" Nov 4 23:58:07.145014 containerd[1636]: time="2025-11-04T23:58:07.144986019Z" level=info msg="connecting to shim f8d9c040dde1201bc1c9d792f6aecfa01716fd144d14d88ce17e9d16cd87fb50" address="unix:///run/containerd/s/529f0b80f73f86ba4f5ba0356ba237316bbc030699683960a2cb8bf5db10ffa9" protocol=ttrpc version=3 Nov 4 
23:58:07.177183 systemd[1]: Started cri-containerd-f8d9c040dde1201bc1c9d792f6aecfa01716fd144d14d88ce17e9d16cd87fb50.scope - libcontainer container f8d9c040dde1201bc1c9d792f6aecfa01716fd144d14d88ce17e9d16cd87fb50. Nov 4 23:58:07.183678 kubelet[2839]: E1104 23:58:07.183636 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:58:07.185741 kubelet[2839]: E1104 23:58:07.185538 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-85l6w" podUID="c5d1d235-24ef-43b3-abad-7fa9db4b88ef" Nov 4 23:58:07.348609 containerd[1636]: time="2025-11-04T23:58:07.348567359Z" level=info msg="StartContainer for \"f8d9c040dde1201bc1c9d792f6aecfa01716fd144d14d88ce17e9d16cd87fb50\" returns successfully" Nov 4 23:58:08.187699 kubelet[2839]: E1104 23:58:08.187652 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:58:08.188189 kubelet[2839]: E1104 23:58:08.187845 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:58:08.309076 kubelet[2839]: I1104 23:58:08.308940 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-x89z7" podStartSLOduration=58.308894901 podStartE2EDuration="58.308894901s" 
podCreationTimestamp="2025-11-04 23:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:58:08.307578681 +0000 UTC m=+64.452068086" watchObservedRunningTime="2025-11-04 23:58:08.308894901 +0000 UTC m=+64.453384306" Nov 4 23:58:08.331086 systemd[1]: Started sshd@10-10.0.0.112:22-10.0.0.1:35772.service - OpenSSH per-connection server daemon (10.0.0.1:35772). Nov 4 23:58:08.399044 sshd[5303]: Accepted publickey for core from 10.0.0.1 port 35772 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:58:08.400773 sshd-session[5303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:58:08.405530 systemd-logind[1614]: New session 11 of user core. Nov 4 23:58:08.417141 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 4 23:58:08.567768 sshd[5306]: Connection closed by 10.0.0.1 port 35772 Nov 4 23:58:08.568157 sshd-session[5303]: pam_unix(sshd:session): session closed for user core Nov 4 23:58:08.582348 systemd[1]: sshd@10-10.0.0.112:22-10.0.0.1:35772.service: Deactivated successfully. Nov 4 23:58:08.585462 systemd[1]: session-11.scope: Deactivated successfully. Nov 4 23:58:08.586560 systemd-logind[1614]: Session 11 logged out. Waiting for processes to exit. Nov 4 23:58:08.592691 systemd[1]: Started sshd@11-10.0.0.112:22-10.0.0.1:35780.service - OpenSSH per-connection server daemon (10.0.0.1:35780). Nov 4 23:58:08.593425 systemd-logind[1614]: Removed session 11. Nov 4 23:58:08.639911 sshd[5320]: Accepted publickey for core from 10.0.0.1 port 35780 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:58:08.642287 sshd-session[5320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:58:08.647867 systemd-logind[1614]: New session 12 of user core. Nov 4 23:58:08.657105 systemd[1]: Started session-12.scope - Session 12 of User core. 
Nov 4 23:58:08.829090 sshd[5323]: Connection closed by 10.0.0.1 port 35780 Nov 4 23:58:08.830163 sshd-session[5320]: pam_unix(sshd:session): session closed for user core Nov 4 23:58:08.842038 systemd[1]: sshd@11-10.0.0.112:22-10.0.0.1:35780.service: Deactivated successfully. Nov 4 23:58:08.845050 systemd[1]: session-12.scope: Deactivated successfully. Nov 4 23:58:08.848736 systemd-logind[1614]: Session 12 logged out. Waiting for processes to exit. Nov 4 23:58:08.855262 systemd[1]: Started sshd@12-10.0.0.112:22-10.0.0.1:35786.service - OpenSSH per-connection server daemon (10.0.0.1:35786). Nov 4 23:58:08.856044 systemd-logind[1614]: Removed session 12. Nov 4 23:58:08.914749 sshd[5336]: Accepted publickey for core from 10.0.0.1 port 35786 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:58:08.916672 sshd-session[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:58:08.922041 systemd-logind[1614]: New session 13 of user core. Nov 4 23:58:08.936210 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 4 23:58:08.997744 systemd-networkd[1533]: calibeec0d3cab6: Gained IPv6LL Nov 4 23:58:09.076669 sshd[5339]: Connection closed by 10.0.0.1 port 35786 Nov 4 23:58:09.077147 sshd-session[5336]: pam_unix(sshd:session): session closed for user core Nov 4 23:58:09.082934 systemd[1]: sshd@12-10.0.0.112:22-10.0.0.1:35786.service: Deactivated successfully. Nov 4 23:58:09.085797 systemd[1]: session-13.scope: Deactivated successfully. Nov 4 23:58:09.086998 systemd-logind[1614]: Session 13 logged out. Waiting for processes to exit. Nov 4 23:58:09.088754 systemd-logind[1614]: Removed session 13. 
Nov 4 23:58:09.189545 kubelet[2839]: E1104 23:58:09.189501 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:58:09.190080 kubelet[2839]: E1104 23:58:09.189610 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:58:10.191102 kubelet[2839]: E1104 23:58:10.191059 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:58:10.990684 containerd[1636]: time="2025-11-04T23:58:10.990415424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 23:58:11.193983 kubelet[2839]: E1104 23:58:11.193906 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:58:11.444678 containerd[1636]: time="2025-11-04T23:58:11.444577651Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:58:11.445886 containerd[1636]: time="2025-11-04T23:58:11.445845093Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 23:58:11.445997 containerd[1636]: time="2025-11-04T23:58:11.445921530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 23:58:11.446181 kubelet[2839]: E1104 23:58:11.446134 2839 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:58:11.446249 kubelet[2839]: E1104 23:58:11.446190 2839 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:58:11.446392 kubelet[2839]: E1104 23:58:11.446337 2839 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c8dc219001274e2abcef068b56e38a59,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r6kc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil
,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cbc8bc7c-h2424_calico-system(a0b91814-1cb6-4264-9193-77ae0565f373): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 23:58:11.449509 containerd[1636]: time="2025-11-04T23:58:11.449455023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 23:58:11.821257 containerd[1636]: time="2025-11-04T23:58:11.821122126Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:58:11.992359 containerd[1636]: time="2025-11-04T23:58:11.992114656Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 23:58:11.992359 containerd[1636]: time="2025-11-04T23:58:11.992289963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 23:58:11.994164 kubelet[2839]: E1104 23:58:11.994109 2839 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:58:11.994253 kubelet[2839]: E1104 23:58:11.994177 2839 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:58:11.994514 kubelet[2839]: E1104 23:58:11.994413 2839 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r6kc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,Secc
ompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cbc8bc7c-h2424_calico-system(a0b91814-1cb6-4264-9193-77ae0565f373): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 23:58:11.996290 kubelet[2839]: E1104 23:58:11.996159 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59cbc8bc7c-h2424" podUID="a0b91814-1cb6-4264-9193-77ae0565f373" Nov 4 23:58:14.093209 systemd[1]: Started sshd@13-10.0.0.112:22-10.0.0.1:57184.service - OpenSSH per-connection server daemon (10.0.0.1:57184). 
Nov 4 23:58:14.147761 sshd[5359]: Accepted publickey for core from 10.0.0.1 port 57184 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:58:14.149823 sshd-session[5359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:58:14.155586 systemd-logind[1614]: New session 14 of user core. Nov 4 23:58:14.165301 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 4 23:58:14.304112 sshd[5362]: Connection closed by 10.0.0.1 port 57184 Nov 4 23:58:14.304486 sshd-session[5359]: pam_unix(sshd:session): session closed for user core Nov 4 23:58:14.310371 systemd[1]: sshd@13-10.0.0.112:22-10.0.0.1:57184.service: Deactivated successfully. Nov 4 23:58:14.313235 systemd[1]: session-14.scope: Deactivated successfully. Nov 4 23:58:14.315237 systemd-logind[1614]: Session 14 logged out. Waiting for processes to exit. Nov 4 23:58:14.316802 systemd-logind[1614]: Removed session 14. Nov 4 23:58:14.990761 containerd[1636]: time="2025-11-04T23:58:14.990710974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 23:58:15.780098 containerd[1636]: time="2025-11-04T23:58:15.780011784Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:58:16.057204 containerd[1636]: time="2025-11-04T23:58:16.057118815Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 23:58:16.057726 containerd[1636]: time="2025-11-04T23:58:16.057145706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 4 23:58:16.057803 kubelet[2839]: E1104 23:58:16.057426 2839 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:58:16.057803 kubelet[2839]: E1104 23:58:16.057488 2839 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:58:16.057803 kubelet[2839]: E1104 23:58:16.057653 2839 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt84z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Cap
abilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7k7w8_calico-system(9096f1c3-7da9-48d9-beff-7b6f2057f511): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 23:58:16.059616 containerd[1636]: time="2025-11-04T23:58:16.059581713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 23:58:16.506237 containerd[1636]: time="2025-11-04T23:58:16.506085479Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:58:16.599239 containerd[1636]: time="2025-11-04T23:58:16.599159344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 4 23:58:16.599394 containerd[1636]: time="2025-11-04T23:58:16.599175695Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 23:58:16.599586 kubelet[2839]: E1104 23:58:16.599523 2839 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:58:16.599672 kubelet[2839]: E1104 23:58:16.599598 2839 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:58:16.599837 kubelet[2839]: E1104 23:58:16.599777 2839 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt84z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7k7w8_calico-system(9096f1c3-7da9-48d9-beff-7b6f2057f511): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 23:58:16.601639 kubelet[2839]: E1104 23:58:16.601578 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7k7w8" podUID="9096f1c3-7da9-48d9-beff-7b6f2057f511" Nov 4 23:58:16.990808 containerd[1636]: time="2025-11-04T23:58:16.990749949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:58:18.268470 containerd[1636]: time="2025-11-04T23:58:18.268374439Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:58:18.290189 containerd[1636]: time="2025-11-04T23:58:18.290114184Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:58:18.290342 containerd[1636]: time="2025-11-04T23:58:18.290178246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:58:18.290479 kubelet[2839]: E1104 23:58:18.290423 2839 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:58:18.290479 kubelet[2839]: E1104 23:58:18.290484 2839 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:58:18.290888 kubelet[2839]: E1104 23:58:18.290630 2839 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tjpx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d7b6c5897-d22xf_calico-apiserver(b35658f8-29c0-438d-8549-d61428e8d39f): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:58:18.291895 kubelet[2839]: E1104 23:58:18.291792 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d7b6c5897-d22xf" podUID="b35658f8-29c0-438d-8549-d61428e8d39f" Nov 4 23:58:19.322208 systemd[1]: Started sshd@14-10.0.0.112:22-10.0.0.1:57192.service - OpenSSH per-connection server daemon (10.0.0.1:57192). Nov 4 23:58:19.379696 sshd[5389]: Accepted publickey for core from 10.0.0.1 port 57192 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:58:19.381748 sshd-session[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:58:19.387352 systemd-logind[1614]: New session 15 of user core. Nov 4 23:58:19.395129 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 4 23:58:19.527183 sshd[5392]: Connection closed by 10.0.0.1 port 57192 Nov 4 23:58:19.527594 sshd-session[5389]: pam_unix(sshd:session): session closed for user core Nov 4 23:58:19.531819 systemd[1]: sshd@14-10.0.0.112:22-10.0.0.1:57192.service: Deactivated successfully. Nov 4 23:58:19.534969 systemd[1]: session-15.scope: Deactivated successfully. Nov 4 23:58:19.537373 systemd-logind[1614]: Session 15 logged out. Waiting for processes to exit. Nov 4 23:58:19.538420 systemd-logind[1614]: Removed session 15. 
Nov 4 23:58:20.991837 containerd[1636]: time="2025-11-04T23:58:20.991775535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 23:58:21.960301 containerd[1636]: time="2025-11-04T23:58:21.959986016Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3079bafe35432eeeaae65dac5726aea4068dee8181911c29c13f5b36ab95455\" id:\"c03456adc342ecb520678e799fd10f16f1372c88964cae1e5a232537b2a322f0\" pid:5416 exit_status:1 exited_at:{seconds:1762300701 nanos:959456293}" Nov 4 23:58:21.990055 kubelet[2839]: E1104 23:58:21.989918 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:58:22.338321 containerd[1636]: time="2025-11-04T23:58:22.338224472Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:58:22.401515 containerd[1636]: time="2025-11-04T23:58:22.401424514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 4 23:58:22.401689 containerd[1636]: time="2025-11-04T23:58:22.401508735Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 23:58:22.402208 kubelet[2839]: E1104 23:58:22.401859 2839 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:58:22.402208 kubelet[2839]: 
E1104 23:58:22.401961 2839 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:58:22.402977 containerd[1636]: time="2025-11-04T23:58:22.402375663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:58:22.403338 kubelet[2839]: E1104 23:58:22.403252 2839 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lxmdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&Exe
cAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78cc59d946-ppm4m_calico-system(adf22170-0a60-4bf0-be14-045d1e27faa2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 23:58:22.404546 kubelet[2839]: E1104 23:58:22.404492 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found\"" pod="calico-system/calico-kube-controllers-78cc59d946-ppm4m" podUID="adf22170-0a60-4bf0-be14-045d1e27faa2" Nov 4 23:58:22.767605 containerd[1636]: time="2025-11-04T23:58:22.767431839Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:58:22.796165 containerd[1636]: time="2025-11-04T23:58:22.796082315Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:58:22.796350 containerd[1636]: time="2025-11-04T23:58:22.796124215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:58:22.796496 kubelet[2839]: E1104 23:58:22.796445 2839 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:58:22.796570 kubelet[2839]: E1104 23:58:22.796518 2839 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:58:22.796882 kubelet[2839]: E1104 23:58:22.796816 2839 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nxbk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d7b6c5897-jq7ws_calico-apiserver(0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:58:22.797070 containerd[1636]: time="2025-11-04T23:58:22.796879990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 23:58:22.798297 kubelet[2839]: E1104 23:58:22.798256 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d7b6c5897-jq7ws" podUID="0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a" Nov 4 23:58:22.991687 kubelet[2839]: E1104 23:58:22.991615 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59cbc8bc7c-h2424" podUID="a0b91814-1cb6-4264-9193-77ae0565f373" Nov 4 23:58:23.174380 
containerd[1636]: time="2025-11-04T23:58:23.174295694Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:58:23.254190 containerd[1636]: time="2025-11-04T23:58:23.254111394Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 23:58:23.254405 containerd[1636]: time="2025-11-04T23:58:23.254133736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 4 23:58:23.254446 kubelet[2839]: E1104 23:58:23.254403 2839 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:58:23.254487 kubelet[2839]: E1104 23:58:23.254466 2839 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:58:23.254668 kubelet[2839]: E1104 23:58:23.254626 2839 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsxpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-85l6w_calico-system(c5d1d235-24ef-43b3-abad-7fa9db4b88ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 23:58:23.255923 kubelet[2839]: E1104 23:58:23.255857 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-85l6w" podUID="c5d1d235-24ef-43b3-abad-7fa9db4b88ef" Nov 4 23:58:24.542290 systemd[1]: Started sshd@15-10.0.0.112:22-10.0.0.1:38470.service - OpenSSH per-connection server daemon (10.0.0.1:38470). 
Nov 4 23:58:24.607197 sshd[5430]: Accepted publickey for core from 10.0.0.1 port 38470 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:58:24.609557 sshd-session[5430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:58:24.624636 systemd-logind[1614]: New session 16 of user core. Nov 4 23:58:24.630203 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 4 23:58:24.775137 sshd[5433]: Connection closed by 10.0.0.1 port 38470 Nov 4 23:58:24.775566 sshd-session[5430]: pam_unix(sshd:session): session closed for user core Nov 4 23:58:24.780726 systemd[1]: sshd@15-10.0.0.112:22-10.0.0.1:38470.service: Deactivated successfully. Nov 4 23:58:24.783578 systemd[1]: session-16.scope: Deactivated successfully. Nov 4 23:58:24.784600 systemd-logind[1614]: Session 16 logged out. Waiting for processes to exit. Nov 4 23:58:24.786605 systemd-logind[1614]: Removed session 16. Nov 4 23:58:26.995847 kubelet[2839]: E1104 23:58:26.995765 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:58:27.989924 kubelet[2839]: E1104 23:58:27.989851 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 23:58:28.997971 kubelet[2839]: E1104 23:58:28.997872 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-5d7b6c5897-d22xf" podUID="b35658f8-29c0-438d-8549-d61428e8d39f" Nov 4 23:58:29.802690 systemd[1]: Started sshd@16-10.0.0.112:22-10.0.0.1:38484.service - OpenSSH per-connection server daemon (10.0.0.1:38484). Nov 4 23:58:29.879090 sshd[5449]: Accepted publickey for core from 10.0.0.1 port 38484 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:58:29.881099 sshd-session[5449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:58:29.885793 systemd-logind[1614]: New session 17 of user core. Nov 4 23:58:29.894177 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 4 23:58:29.992583 kubelet[2839]: E1104 23:58:29.992364 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7k7w8" podUID="9096f1c3-7da9-48d9-beff-7b6f2057f511" Nov 4 23:58:30.425215 sshd[5452]: Connection closed by 10.0.0.1 port 38484 Nov 4 23:58:30.425547 sshd-session[5449]: pam_unix(sshd:session): session closed for user core Nov 4 23:58:30.430323 systemd[1]: sshd@16-10.0.0.112:22-10.0.0.1:38484.service: 
Deactivated successfully. Nov 4 23:58:30.432697 systemd[1]: session-17.scope: Deactivated successfully. Nov 4 23:58:30.433681 systemd-logind[1614]: Session 17 logged out. Waiting for processes to exit. Nov 4 23:58:30.435141 systemd-logind[1614]: Removed session 17. Nov 4 23:58:32.990777 kubelet[2839]: E1104 23:58:32.990714 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78cc59d946-ppm4m" podUID="adf22170-0a60-4bf0-be14-045d1e27faa2" Nov 4 23:58:33.991651 kubelet[2839]: E1104 23:58:33.991473 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d7b6c5897-jq7ws" podUID="0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a" Nov 4 23:58:33.992617 containerd[1636]: time="2025-11-04T23:58:33.991638541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 23:58:34.328068 containerd[1636]: time="2025-11-04T23:58:34.327987714Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:58:34.336060 containerd[1636]: time="2025-11-04T23:58:34.336015399Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 23:58:34.336127 containerd[1636]: time="2025-11-04T23:58:34.336107334Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 23:58:34.336309 kubelet[2839]: E1104 23:58:34.336259 2839 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:58:34.336376 kubelet[2839]: E1104 23:58:34.336321 2839 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:58:34.336544 kubelet[2839]: E1104 23:58:34.336489 2839 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c8dc219001274e2abcef068b56e38a59,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r6kc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cbc8bc7c-h2424_calico-system(a0b91814-1cb6-4264-9193-77ae0565f373): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 23:58:34.338363 containerd[1636]: time="2025-11-04T23:58:34.338336765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 
23:58:34.694719 containerd[1636]: time="2025-11-04T23:58:34.694540437Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:58:34.696387 containerd[1636]: time="2025-11-04T23:58:34.696343574Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 23:58:34.696485 containerd[1636]: time="2025-11-04T23:58:34.696431102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 23:58:34.696685 kubelet[2839]: E1104 23:58:34.696593 2839 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:58:34.696685 kubelet[2839]: E1104 23:58:34.696651 2839 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:58:34.696860 kubelet[2839]: E1104 23:58:34.696793 2839 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r6kc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cbc8bc7c-h2424_calico-system(a0b91814-1cb6-4264-9193-77ae0565f373): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 23:58:34.698162 kubelet[2839]: E1104 23:58:34.698095 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59cbc8bc7c-h2424" podUID="a0b91814-1cb6-4264-9193-77ae0565f373" Nov 4 23:58:35.443056 systemd[1]: Started sshd@17-10.0.0.112:22-10.0.0.1:40648.service - OpenSSH per-connection server daemon (10.0.0.1:40648). Nov 4 23:58:35.503871 sshd[5475]: Accepted publickey for core from 10.0.0.1 port 40648 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:58:35.505384 sshd-session[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:58:35.509892 systemd-logind[1614]: New session 18 of user core. Nov 4 23:58:35.519119 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 4 23:58:35.643978 sshd[5478]: Connection closed by 10.0.0.1 port 40648 Nov 4 23:58:35.644433 sshd-session[5475]: pam_unix(sshd:session): session closed for user core Nov 4 23:58:35.660007 systemd[1]: sshd@17-10.0.0.112:22-10.0.0.1:40648.service: Deactivated successfully. 
Nov 4 23:58:35.662186 systemd[1]: session-18.scope: Deactivated successfully. Nov 4 23:58:35.663165 systemd-logind[1614]: Session 18 logged out. Waiting for processes to exit. Nov 4 23:58:35.666380 systemd[1]: Started sshd@18-10.0.0.112:22-10.0.0.1:40656.service - OpenSSH per-connection server daemon (10.0.0.1:40656). Nov 4 23:58:35.667185 systemd-logind[1614]: Removed session 18. Nov 4 23:58:35.733283 sshd[5492]: Accepted publickey for core from 10.0.0.1 port 40656 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:58:35.735252 sshd-session[5492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:58:35.740221 systemd-logind[1614]: New session 19 of user core. Nov 4 23:58:35.757344 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 4 23:58:36.089113 sshd[5495]: Connection closed by 10.0.0.1 port 40656 Nov 4 23:58:36.089649 sshd-session[5492]: pam_unix(sshd:session): session closed for user core Nov 4 23:58:36.101211 systemd[1]: sshd@18-10.0.0.112:22-10.0.0.1:40656.service: Deactivated successfully. Nov 4 23:58:36.104230 systemd[1]: session-19.scope: Deactivated successfully. Nov 4 23:58:36.105426 systemd-logind[1614]: Session 19 logged out. Waiting for processes to exit. Nov 4 23:58:36.110103 systemd[1]: Started sshd@19-10.0.0.112:22-10.0.0.1:40666.service - OpenSSH per-connection server daemon (10.0.0.1:40666). Nov 4 23:58:36.111077 systemd-logind[1614]: Removed session 19. Nov 4 23:58:36.179153 sshd[5507]: Accepted publickey for core from 10.0.0.1 port 40666 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:58:36.181330 sshd-session[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:58:36.186963 systemd-logind[1614]: New session 20 of user core. Nov 4 23:58:36.199345 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 4 23:58:36.740548 sshd[5510]: Connection closed by 10.0.0.1 port 40666 Nov 4 23:58:36.740843 sshd-session[5507]: pam_unix(sshd:session): session closed for user core Nov 4 23:58:36.753326 systemd[1]: sshd@19-10.0.0.112:22-10.0.0.1:40666.service: Deactivated successfully. Nov 4 23:58:36.756842 systemd[1]: session-20.scope: Deactivated successfully. Nov 4 23:58:36.758431 systemd-logind[1614]: Session 20 logged out. Waiting for processes to exit. Nov 4 23:58:36.762963 systemd[1]: Started sshd@20-10.0.0.112:22-10.0.0.1:40670.service - OpenSSH per-connection server daemon (10.0.0.1:40670). Nov 4 23:58:36.763616 systemd-logind[1614]: Removed session 20. Nov 4 23:58:36.822765 sshd[5530]: Accepted publickey for core from 10.0.0.1 port 40670 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:58:36.824898 sshd-session[5530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:58:36.831484 systemd-logind[1614]: New session 21 of user core. Nov 4 23:58:36.846289 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 4 23:58:36.996513 kubelet[2839]: E1104 23:58:36.996312 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-85l6w" podUID="c5d1d235-24ef-43b3-abad-7fa9db4b88ef" Nov 4 23:58:37.095250 sshd[5533]: Connection closed by 10.0.0.1 port 40670 Nov 4 23:58:37.096021 sshd-session[5530]: pam_unix(sshd:session): session closed for user core Nov 4 23:58:37.106672 systemd[1]: sshd@20-10.0.0.112:22-10.0.0.1:40670.service: Deactivated successfully. 
Nov 4 23:58:37.109105 systemd[1]: session-21.scope: Deactivated successfully. Nov 4 23:58:37.110049 systemd-logind[1614]: Session 21 logged out. Waiting for processes to exit. Nov 4 23:58:37.113427 systemd[1]: Started sshd@21-10.0.0.112:22-10.0.0.1:40676.service - OpenSSH per-connection server daemon (10.0.0.1:40676). Nov 4 23:58:37.114474 systemd-logind[1614]: Removed session 21. Nov 4 23:58:37.170493 sshd[5545]: Accepted publickey for core from 10.0.0.1 port 40676 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI Nov 4 23:58:37.172604 sshd-session[5545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:58:37.177997 systemd-logind[1614]: New session 22 of user core. Nov 4 23:58:37.195587 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 4 23:58:37.612551 sshd[5548]: Connection closed by 10.0.0.1 port 40676 Nov 4 23:58:37.613003 sshd-session[5545]: pam_unix(sshd:session): session closed for user core Nov 4 23:58:37.618343 systemd[1]: sshd@21-10.0.0.112:22-10.0.0.1:40676.service: Deactivated successfully. Nov 4 23:58:37.620783 systemd[1]: session-22.scope: Deactivated successfully. Nov 4 23:58:37.622118 systemd-logind[1614]: Session 22 logged out. Waiting for processes to exit. Nov 4 23:58:37.623424 systemd-logind[1614]: Removed session 22. 
Nov 4 23:58:40.991649 containerd[1636]: time="2025-11-04T23:58:40.991572804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 4 23:58:41.382834 containerd[1636]: time="2025-11-04T23:58:41.382774212Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 23:58:41.800136 containerd[1636]: time="2025-11-04T23:58:41.800055176Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 4 23:58:41.800284 containerd[1636]: time="2025-11-04T23:58:41.800067539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 4 23:58:41.800489 kubelet[2839]: E1104 23:58:41.800418 2839 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 4 23:58:41.800986 kubelet[2839]: E1104 23:58:41.800488 2839 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 4 23:58:41.800986 kubelet[2839]: E1104 23:58:41.800665 2839 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt84z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7k7w8_calico-system(9096f1c3-7da9-48d9-beff-7b6f2057f511): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 4 23:58:41.802759 containerd[1636]: time="2025-11-04T23:58:41.802713998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 4 23:58:41.990539 kubelet[2839]: E1104 23:58:41.990479 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:58:42.195819 containerd[1636]: time="2025-11-04T23:58:42.195634076Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 23:58:42.260862 containerd[1636]: time="2025-11-04T23:58:42.260756788Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 4 23:58:42.260862 containerd[1636]: time="2025-11-04T23:58:42.260826601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 4 23:58:42.261195 kubelet[2839]: E1104 23:58:42.261071 2839 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 4 23:58:42.261195 kubelet[2839]: E1104 23:58:42.261128 2839 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 4 23:58:42.261798 containerd[1636]: time="2025-11-04T23:58:42.261734760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 4 23:58:42.261987 kubelet[2839]: E1104 23:58:42.261891 2839 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt84z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7k7w8_calico-system(9096f1c3-7da9-48d9-beff-7b6f2057f511): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 4 23:58:42.263167 kubelet[2839]: E1104 23:58:42.263116 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7k7w8" podUID="9096f1c3-7da9-48d9-beff-7b6f2057f511"
Nov 4 23:58:42.641620 systemd[1]: Started sshd@22-10.0.0.112:22-10.0.0.1:40690.service - OpenSSH per-connection server daemon (10.0.0.1:40690).
Nov 4 23:58:42.671392 containerd[1636]: time="2025-11-04T23:58:42.671152145Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 23:58:42.713748 sshd[5563]: Accepted publickey for core from 10.0.0.1 port 40690 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI
Nov 4 23:58:42.716104 sshd-session[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:58:42.722288 systemd-logind[1614]: New session 23 of user core.
Nov 4 23:58:42.727349 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 4 23:58:42.788217 containerd[1636]: time="2025-11-04T23:58:42.788127065Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 4 23:58:42.788438 containerd[1636]: time="2025-11-04T23:58:42.788237354Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 4 23:58:42.788747 kubelet[2839]: E1104 23:58:42.788693 2839 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 4 23:58:42.788824 kubelet[2839]: E1104 23:58:42.788770 2839 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 4 23:58:42.789455 kubelet[2839]: E1104 23:58:42.789050 2839 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tjpx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d7b6c5897-d22xf_calico-apiserver(b35658f8-29c0-438d-8549-d61428e8d39f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 4 23:58:42.790491 kubelet[2839]: E1104 23:58:42.790449 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d7b6c5897-d22xf" podUID="b35658f8-29c0-438d-8549-d61428e8d39f"
Nov 4 23:58:42.888624 sshd[5566]: Connection closed by 10.0.0.1 port 40690
Nov 4 23:58:42.889137 sshd-session[5563]: pam_unix(sshd:session): session closed for user core
Nov 4 23:58:42.895782 systemd[1]: sshd@22-10.0.0.112:22-10.0.0.1:40690.service: Deactivated successfully.
Nov 4 23:58:42.898711 systemd[1]: session-23.scope: Deactivated successfully.
Nov 4 23:58:42.899667 systemd-logind[1614]: Session 23 logged out. Waiting for processes to exit.
Nov 4 23:58:42.901662 systemd-logind[1614]: Removed session 23.
Nov 4 23:58:44.989628 kubelet[2839]: E1104 23:58:44.989557 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:58:44.990924 containerd[1636]: time="2025-11-04T23:58:44.990751223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 4 23:58:45.349481 containerd[1636]: time="2025-11-04T23:58:45.349418784Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 23:58:45.438480 containerd[1636]: time="2025-11-04T23:58:45.438384301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 4 23:58:45.438629 containerd[1636]: time="2025-11-04T23:58:45.438445318Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 4 23:58:45.438817 kubelet[2839]: E1104 23:58:45.438762 2839 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 4 23:58:45.438870 kubelet[2839]: E1104 23:58:45.438825 2839 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 4 23:58:45.439070 kubelet[2839]: E1104 23:58:45.439003 2839 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nxbk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d7b6c5897-jq7ws_calico-apiserver(0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 4 23:58:45.440253 kubelet[2839]: E1104 23:58:45.440205 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d7b6c5897-jq7ws" podUID="0bbb99b0-26ba-46ec-81d6-2d0aac8c5b8a"
Nov 4 23:58:45.991334 kubelet[2839]: E1104 23:58:45.991121 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-59cbc8bc7c-h2424" podUID="a0b91814-1cb6-4264-9193-77ae0565f373"
Nov 4 23:58:46.991615 containerd[1636]: time="2025-11-04T23:58:46.991544400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 4 23:58:47.322198 containerd[1636]: time="2025-11-04T23:58:47.322106162Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 23:58:47.436885 containerd[1636]: time="2025-11-04T23:58:47.436779980Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 4 23:58:47.436885 containerd[1636]: time="2025-11-04T23:58:47.436853170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 4 23:58:47.437193 kubelet[2839]: E1104 23:58:47.437133 2839 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 4 23:58:47.437605 kubelet[2839]: E1104 23:58:47.437213 2839 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 4 23:58:47.437605 kubelet[2839]: E1104 23:58:47.437409 2839 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lxmdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78cc59d946-ppm4m_calico-system(adf22170-0a60-4bf0-be14-045d1e27faa2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 4 23:58:47.438874 kubelet[2839]: E1104 23:58:47.438802 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78cc59d946-ppm4m" podUID="adf22170-0a60-4bf0-be14-045d1e27faa2"
Nov 4 23:58:47.909815 systemd[1]: Started sshd@23-10.0.0.112:22-10.0.0.1:45362.service - OpenSSH per-connection server daemon (10.0.0.1:45362).
Nov 4 23:58:47.982287 sshd[5584]: Accepted publickey for core from 10.0.0.1 port 45362 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI
Nov 4 23:58:47.985880 sshd-session[5584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:58:47.999014 containerd[1636]: time="2025-11-04T23:58:47.994765937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 4 23:58:48.000244 systemd-logind[1614]: New session 24 of user core.
Nov 4 23:58:48.013431 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 4 23:58:48.415933 sshd[5587]: Connection closed by 10.0.0.1 port 45362
Nov 4 23:58:48.416357 sshd-session[5584]: pam_unix(sshd:session): session closed for user core
Nov 4 23:58:48.421728 systemd[1]: sshd@23-10.0.0.112:22-10.0.0.1:45362.service: Deactivated successfully.
Nov 4 23:58:48.424244 systemd[1]: session-24.scope: Deactivated successfully.
Nov 4 23:58:48.425197 systemd-logind[1614]: Session 24 logged out. Waiting for processes to exit.
Nov 4 23:58:48.427117 systemd-logind[1614]: Removed session 24.
Nov 4 23:58:48.434252 containerd[1636]: time="2025-11-04T23:58:48.434155515Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 23:58:48.520125 containerd[1636]: time="2025-11-04T23:58:48.519922388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 4 23:58:48.520125 containerd[1636]: time="2025-11-04T23:58:48.520061122Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 4 23:58:48.520543 kubelet[2839]: E1104 23:58:48.520468 2839 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 4 23:58:48.521042 kubelet[2839]: E1104 23:58:48.520550 2839 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 4 23:58:48.521042 kubelet[2839]: E1104 23:58:48.520765 2839 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsxpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-85l6w_calico-system(c5d1d235-24ef-43b3-abad-7fa9db4b88ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 4 23:58:48.522230 kubelet[2839]: E1104 23:58:48.522164 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-85l6w" podUID="c5d1d235-24ef-43b3-abad-7fa9db4b88ef"
Nov 4 23:58:51.255088 containerd[1636]: time="2025-11-04T23:58:51.255019752Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3079bafe35432eeeaae65dac5726aea4068dee8181911c29c13f5b36ab95455\" id:\"321b2954eee931c3d6bf53c787b4265b3e305975f67b35b6fff69b52f7622778\" pid:5612 exited_at:{seconds:1762300731 nanos:254576008}"
Nov 4 23:58:51.258171 kubelet[2839]: E1104 23:58:51.258126 2839 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 23:58:53.431961 systemd[1]: Started sshd@24-10.0.0.112:22-10.0.0.1:36962.service - OpenSSH per-connection server daemon (10.0.0.1:36962).
Nov 4 23:58:53.491833 sshd[5626]: Accepted publickey for core from 10.0.0.1 port 36962 ssh2: RSA SHA256:KxGEO/ncOo8TFEamSIYHMOFsidWuAAXuP+ZJ3EW0aAI
Nov 4 23:58:53.493899 sshd-session[5626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:58:53.500541 systemd-logind[1614]: New session 25 of user core.
Nov 4 23:58:53.507213 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 4 23:58:53.631225 sshd[5629]: Connection closed by 10.0.0.1 port 36962
Nov 4 23:58:53.631570 sshd-session[5626]: pam_unix(sshd:session): session closed for user core
Nov 4 23:58:53.636864 systemd[1]: sshd@24-10.0.0.112:22-10.0.0.1:36962.service: Deactivated successfully.
Nov 4 23:58:53.640038 systemd[1]: session-25.scope: Deactivated successfully.
Nov 4 23:58:53.641216 systemd-logind[1614]: Session 25 logged out. Waiting for processes to exit.
Nov 4 23:58:53.644094 systemd-logind[1614]: Removed session 25.