Oct 24 12:57:52.442215 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 24 10:54:26 -00 2025 Oct 24 12:57:52.442251 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6d4ede80e622a50cb26c7ffae9f6398889acdee25a78881d7b1631dd9370bf95 Oct 24 12:57:52.442263 kernel: BIOS-provided physical RAM map: Oct 24 12:57:52.442270 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 24 12:57:52.442277 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 24 12:57:52.442284 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 24 12:57:52.442292 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Oct 24 12:57:52.442299 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Oct 24 12:57:52.442309 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 24 12:57:52.442318 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Oct 24 12:57:52.442325 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 24 12:57:52.442332 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 24 12:57:52.442339 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 24 12:57:52.442346 kernel: NX (Execute Disable) protection: active Oct 24 12:57:52.442356 kernel: APIC: Static calls initialized Oct 24 12:57:52.442364 kernel: SMBIOS 2.8 present. 
Oct 24 12:57:52.442374 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Oct 24 12:57:52.442382 kernel: DMI: Memory slots populated: 1/1 Oct 24 12:57:52.442389 kernel: Hypervisor detected: KVM Oct 24 12:57:52.442397 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Oct 24 12:57:52.442404 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 24 12:57:52.442412 kernel: kvm-clock: using sched offset of 4958936646 cycles Oct 24 12:57:52.442420 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 24 12:57:52.442428 kernel: tsc: Detected 2794.750 MHz processor Oct 24 12:57:52.442438 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 24 12:57:52.442447 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 24 12:57:52.442455 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Oct 24 12:57:52.442463 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 24 12:57:52.442472 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 24 12:57:52.442481 kernel: Using GB pages for direct mapping Oct 24 12:57:52.442491 kernel: ACPI: Early table checksum verification disabled Oct 24 12:57:52.442502 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Oct 24 12:57:52.442510 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 24 12:57:52.442518 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 24 12:57:52.442526 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 24 12:57:52.442534 kernel: ACPI: FACS 0x000000009CFE0000 000040 Oct 24 12:57:52.442542 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 24 12:57:52.442549 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 24 12:57:52.442559 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 24 12:57:52.442567 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 24 12:57:52.442578 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Oct 24 12:57:52.442586 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Oct 24 12:57:52.442594 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Oct 24 12:57:52.442604 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Oct 24 12:57:52.442612 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Oct 24 12:57:52.442620 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Oct 24 12:57:52.442628 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Oct 24 12:57:52.442636 kernel: No NUMA configuration found Oct 24 12:57:52.442644 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Oct 24 12:57:52.442654 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Oct 24 12:57:52.442662 kernel: Zone ranges: Oct 24 12:57:52.442670 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 24 12:57:52.442678 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Oct 24 12:57:52.442686 kernel: Normal empty Oct 24 12:57:52.442694 kernel: Device empty Oct 24 12:57:52.442702 kernel: Movable zone start for each node Oct 24 12:57:52.442710 kernel: Early memory node ranges Oct 24 12:57:52.442720 kernel: node 0: [mem 
0x0000000000001000-0x000000000009efff] Oct 24 12:57:52.442728 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Oct 24 12:57:52.442736 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Oct 24 12:57:52.442744 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 24 12:57:52.442752 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 24 12:57:52.442760 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Oct 24 12:57:52.442771 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 24 12:57:52.442780 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 24 12:57:52.442804 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 24 12:57:52.442812 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 24 12:57:52.442833 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 24 12:57:52.442841 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 24 12:57:52.442850 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 24 12:57:52.442858 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 24 12:57:52.442866 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 24 12:57:52.442877 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 24 12:57:52.442885 kernel: TSC deadline timer available Oct 24 12:57:52.442893 kernel: CPU topo: Max. logical packages: 1 Oct 24 12:57:52.442901 kernel: CPU topo: Max. logical dies: 1 Oct 24 12:57:52.442908 kernel: CPU topo: Max. dies per package: 1 Oct 24 12:57:52.442916 kernel: CPU topo: Max. threads per core: 1 Oct 24 12:57:52.442924 kernel: CPU topo: Num. cores per package: 4 Oct 24 12:57:52.442934 kernel: CPU topo: Num. threads per package: 4 Oct 24 12:57:52.442942 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Oct 24 12:57:52.442950 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 24 12:57:52.442958 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 24 12:57:52.442966 kernel: kvm-guest: setup PV sched yield Oct 24 12:57:52.442974 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Oct 24 12:57:52.442982 kernel: Booting paravirtualized kernel on KVM Oct 24 12:57:52.442991 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 24 12:57:52.443001 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 24 12:57:52.443009 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Oct 24 12:57:52.443017 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Oct 24 12:57:52.443025 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 24 12:57:52.443033 kernel: kvm-guest: PV spinlocks enabled Oct 24 12:57:52.443041 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 24 12:57:52.443050 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6d4ede80e622a50cb26c7ffae9f6398889acdee25a78881d7b1631dd9370bf95 Oct 24 12:57:52.443060 kernel: random: crng init done Oct 24 12:57:52.443068 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 24 12:57:52.443077 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 24 
12:57:52.443085 kernel: Fallback order for Node 0: 0 Oct 24 12:57:52.443093 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 Oct 24 12:57:52.443101 kernel: Policy zone: DMA32 Oct 24 12:57:52.443109 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 24 12:57:52.443119 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 24 12:57:52.443127 kernel: ftrace: allocating 40092 entries in 157 pages Oct 24 12:57:52.443135 kernel: ftrace: allocated 157 pages with 5 groups Oct 24 12:57:52.443143 kernel: Dynamic Preempt: voluntary Oct 24 12:57:52.443151 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 24 12:57:52.443160 kernel: rcu: RCU event tracing is enabled. Oct 24 12:57:52.443168 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 24 12:57:52.443178 kernel: Trampoline variant of Tasks RCU enabled. Oct 24 12:57:52.443189 kernel: Rude variant of Tasks RCU enabled. Oct 24 12:57:52.443198 kernel: Tracing variant of Tasks RCU enabled. Oct 24 12:57:52.443206 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 24 12:57:52.443214 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 24 12:57:52.443222 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 24 12:57:52.443230 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 24 12:57:52.443248 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 24 12:57:52.443256 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 24 12:57:52.443265 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 24 12:57:52.443279 kernel: Console: colour VGA+ 80x25 Oct 24 12:57:52.443290 kernel: printk: legacy console [ttyS0] enabled Oct 24 12:57:52.443298 kernel: ACPI: Core revision 20240827 Oct 24 12:57:52.443307 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 24 12:57:52.443315 kernel: APIC: Switch to symmetric I/O mode setup Oct 24 12:57:52.443324 kernel: x2apic enabled Oct 24 12:57:52.443332 kernel: APIC: Switched APIC routing to: physical x2apic Oct 24 12:57:52.443345 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 24 12:57:52.443354 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 24 12:57:52.443362 kernel: kvm-guest: setup PV IPIs Oct 24 12:57:52.443373 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 24 12:57:52.443381 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Oct 24 12:57:52.443390 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Oct 24 12:57:52.443398 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 24 12:57:52.443407 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 24 12:57:52.443415 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 24 12:57:52.443423 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 24 12:57:52.443434 kernel: Spectre V2 : Mitigation: Retpolines Oct 24 12:57:52.443442 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 24 12:57:52.443451 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 24 12:57:52.443459 kernel: active return thunk: retbleed_return_thunk Oct 24 12:57:52.443468 kernel: RETBleed: Mitigation: untrained return thunk Oct 24 12:57:52.443476 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 24 12:57:52.443485 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 24 12:57:52.443495 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 24 12:57:52.443504 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 24 12:57:52.443513 kernel: active return thunk: srso_return_thunk Oct 24 12:57:52.443521 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 24 12:57:52.443529 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 24 12:57:52.443538 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 24 12:57:52.443546 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 24 12:57:52.443556 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 24 12:57:52.443565 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 24 12:57:52.443573 kernel: Freeing SMP alternatives memory: 32K Oct 24 12:57:52.443581 kernel: pid_max: default: 32768 minimum: 301 Oct 24 12:57:52.443590 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 24 12:57:52.443598 kernel: landlock: Up and running. Oct 24 12:57:52.443606 kernel: SELinux: Initializing. Oct 24 12:57:52.443620 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 24 12:57:52.443628 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 24 12:57:52.443637 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 24 12:57:52.443645 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 24 12:57:52.443654 kernel: ... version: 0 Oct 24 12:57:52.443662 kernel: ... bit width: 48 Oct 24 12:57:52.443670 kernel: ... generic registers: 6 Oct 24 12:57:52.443681 kernel: ... value mask: 0000ffffffffffff Oct 24 12:57:52.443689 kernel: ... max period: 00007fffffffffff Oct 24 12:57:52.443697 kernel: ... fixed-purpose events: 0 Oct 24 12:57:52.443705 kernel: ... event mask: 000000000000003f Oct 24 12:57:52.443714 kernel: signal: max sigframe size: 1776 Oct 24 12:57:52.443722 kernel: rcu: Hierarchical SRCU implementation. Oct 24 12:57:52.443731 kernel: rcu: Max phase no-delay instances is 400. Oct 24 12:57:52.443739 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 24 12:57:52.443749 kernel: smp: Bringing up secondary CPUs ... 
Oct 24 12:57:52.443758 kernel: smpboot: x86: Booting SMP configuration: Oct 24 12:57:52.443766 kernel: .... node #0, CPUs: #1 #2 #3 Oct 24 12:57:52.443774 kernel: smp: Brought up 1 node, 4 CPUs Oct 24 12:57:52.443798 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Oct 24 12:57:52.443807 kernel: Memory: 2451436K/2571752K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 114376K reserved, 0K cma-reserved) Oct 24 12:57:52.443815 kernel: devtmpfs: initialized Oct 24 12:57:52.443826 kernel: x86/mm: Memory block size: 128MB Oct 24 12:57:52.443835 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 24 12:57:52.443843 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 24 12:57:52.443851 kernel: pinctrl core: initialized pinctrl subsystem Oct 24 12:57:52.443860 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 24 12:57:52.443868 kernel: audit: initializing netlink subsys (disabled) Oct 24 12:57:52.443877 kernel: audit: type=2000 audit(1761310669.263:1): state=initialized audit_enabled=0 res=1 Oct 24 12:57:52.443887 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 24 12:57:52.443895 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 24 12:57:52.443903 kernel: cpuidle: using governor menu Oct 24 12:57:52.443912 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 24 12:57:52.443920 kernel: dca service started, version 1.12.1 Oct 24 12:57:52.443928 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Oct 24 12:57:52.443937 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Oct 24 12:57:52.443947 kernel: PCI: Using configuration type 1 for base access Oct 24 12:57:52.443956 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 24 12:57:52.443964 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 24 12:57:52.443972 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 24 12:57:52.443981 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 24 12:57:52.443989 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 24 12:57:52.443997 kernel: ACPI: Added _OSI(Module Device) Oct 24 12:57:52.444008 kernel: ACPI: Added _OSI(Processor Device) Oct 24 12:57:52.444016 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 24 12:57:52.444024 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 24 12:57:52.444033 kernel: ACPI: Interpreter enabled Oct 24 12:57:52.444041 kernel: ACPI: PM: (supports S0 S3 S5) Oct 24 12:57:52.444049 kernel: ACPI: Using IOAPIC for interrupt routing Oct 24 12:57:52.444058 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 24 12:57:52.444069 kernel: PCI: Using E820 reservations for host bridge windows Oct 24 12:57:52.444077 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 24 12:57:52.444085 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 24 12:57:52.444337 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 24 12:57:52.444524 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 24 12:57:52.444701 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 24 12:57:52.444716 kernel: PCI host bridge to bus 0000:00 Oct 24 12:57:52.444907 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 24 12:57:52.445070 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 24 12:57:52.445229 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 24 12:57:52.445400 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Oct 24 12:57:52.445559 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 24 12:57:52.445725 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Oct 24 12:57:52.445904 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 24 12:57:52.446096 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Oct 24 12:57:52.446291 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Oct 24 12:57:52.446464 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Oct 24 12:57:52.446659 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Oct 24 12:57:52.446847 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Oct 24 12:57:52.447022 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 24 12:57:52.447207 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Oct 24 12:57:52.447392 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Oct 24 12:57:52.447573 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Oct 24 12:57:52.447751 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Oct 24 12:57:52.447957 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Oct 24 12:57:52.448132 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Oct 24 12:57:52.448316 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Oct 24 12:57:52.448491 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Oct 24 12:57:52.448673 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Oct 24 12:57:52.448870 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Oct 24 12:57:52.449044 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Oct 24 12:57:52.449215 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Oct 24 12:57:52.449401 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Oct 24 12:57:52.449583 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Oct 24 12:57:52.449761 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 24 12:57:52.449959 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Oct 24 12:57:52.450160 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Oct 24 12:57:52.450358 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Oct 24 12:57:52.450827 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Oct 24 12:57:52.451135 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Oct 24 12:57:52.451174 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 24 12:57:52.451195 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 24 12:57:52.451206 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 24 12:57:52.451215 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 24 12:57:52.451223 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 24 12:57:52.451234 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 24 12:57:52.451263 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 24 12:57:52.451273 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 24 12:57:52.451281 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 24 12:57:52.451290 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 24 12:57:52.451299 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 24 12:57:52.451318 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 24 12:57:52.451343 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 24 12:57:52.451364 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 24 12:57:52.451389 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 24 12:57:52.451407 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 24 12:57:52.451418 kernel: iommu: Default domain type: Translated Oct 24 12:57:52.451435 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 24 12:57:52.451444 kernel: PCI: Using ACPI for IRQ routing Oct 24 12:57:52.451453 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 24 12:57:52.451462 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 24 12:57:52.451489 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Oct 24 12:57:52.451708 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 24 12:57:52.452032 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 24 12:57:52.452269 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 24 12:57:52.452282 kernel: vgaarb: loaded Oct 24 12:57:52.452308 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 24 12:57:52.452342 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 24 12:57:52.452360 kernel: clocksource: Switched to clocksource kvm-clock Oct 24 12:57:52.452377 kernel: VFS: Disk quotas dquot_6.6.0 Oct 24 
12:57:52.452396 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 24 12:57:52.452424 kernel: pnp: PnP ACPI init Oct 24 12:57:52.452860 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 24 12:57:52.452892 kernel: pnp: PnP ACPI: found 6 devices Oct 24 12:57:52.452908 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 24 12:57:52.452932 kernel: NET: Registered PF_INET protocol family Oct 24 12:57:52.452942 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 24 12:57:52.452950 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 24 12:57:52.452959 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 24 12:57:52.452978 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 24 12:57:52.453008 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 24 12:57:52.453041 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 24 12:57:52.453074 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 24 12:57:52.453087 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 24 12:57:52.453095 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 24 12:57:52.453104 kernel: NET: Registered PF_XDP protocol family Oct 24 12:57:52.453411 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 24 12:57:52.453747 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 24 12:57:52.453999 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 24 12:57:52.454302 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Oct 24 12:57:52.454506 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 24 12:57:52.454812 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Oct 24 12:57:52.454826 kernel: PCI: CLS 0 bytes, default 64 Oct 24 12:57:52.454835 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Oct 24 12:57:52.454847 kernel: Initialise system trusted keyrings Oct 24 12:57:52.454872 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 24 12:57:52.454915 kernel: Key type asymmetric registered Oct 24 12:57:52.454925 kernel: Asymmetric key parser 'x509' registered Oct 24 12:57:52.454978 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 24 12:57:52.454990 kernel: io scheduler mq-deadline registered Oct 24 12:57:52.454999 kernel: io scheduler kyber registered Oct 24 12:57:52.455015 kernel: io scheduler bfq registered Oct 24 12:57:52.455042 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 24 12:57:52.455058 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 24 12:57:52.455075 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 24 12:57:52.455084 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 24 12:57:52.455099 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 24 12:57:52.455117 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 24 12:57:52.455129 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 24 12:57:52.455141 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 24 12:57:52.455162 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 24 12:57:52.455179 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input0 Oct 24 12:57:52.455412 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 24 12:57:52.455668 kernel: rtc_cmos 00:04: registered as rtc0 Oct 24 12:57:52.455925 kernel: rtc_cmos 00:04: setting system clock to 2025-10-24T12:57:50 UTC (1761310670) Oct 24 12:57:52.456109 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 24 12:57:52.456138 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 24 12:57:52.456159 kernel: NET: Registered PF_INET6 protocol family Oct 24 12:57:52.456181 kernel: Segment Routing with IPv6 Oct 24 12:57:52.456203 kernel: In-situ OAM (IOAM) with IPv6 Oct 24 12:57:52.456223 kernel: NET: Registered PF_PACKET protocol family Oct 24 12:57:52.456251 kernel: Key type dns_resolver registered Oct 24 12:57:52.456277 kernel: IPI shorthand broadcast: enabled Oct 24 12:57:52.456828 kernel: sched_clock: Marking stable (1282002144, 229428559)->(1788906900, -277476197) Oct 24 12:57:52.456838 kernel: registered taskstats version 1 Oct 24 12:57:52.456859 kernel: Loading compiled-in X.509 certificates Oct 24 12:57:52.456868 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: aff9c7598d1416d3f3fe3d525df2bd31fdcc757d' Oct 24 12:57:52.456876 kernel: Demotion targets for Node 0: null Oct 24 12:57:52.456885 kernel: Key type .fscrypt registered Oct 24 12:57:52.456908 kernel: Key type fscrypt-provisioning registered Oct 24 12:57:52.456930 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 24 12:57:52.456947 kernel: ima: Allocated hash algorithm: sha1 Oct 24 12:57:52.456966 kernel: ima: No architecture policies found Oct 24 12:57:52.456976 kernel: clk: Disabling unused clocks Oct 24 12:57:52.456995 kernel: Freeing unused kernel image (initmem) memory: 15964K Oct 24 12:57:52.457005 kernel: Write protecting the kernel read-only data: 40960k Oct 24 12:57:52.457019 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Oct 24 12:57:52.457036 kernel: Run /init as init process Oct 24 12:57:52.457055 kernel: with arguments: Oct 24 12:57:52.457065 kernel: /init Oct 24 12:57:52.457073 kernel: with environment: Oct 24 12:57:52.457081 kernel: HOME=/ Oct 24 12:57:52.457090 kernel: TERM=linux Oct 24 12:57:52.457098 kernel: SCSI subsystem initialized Oct 24 12:57:52.457110 kernel: libata version 3.00 loaded. 
Oct 24 12:57:52.457470 kernel: ahci 0000:00:1f.2: version 3.0 Oct 24 12:57:52.457519 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 24 12:57:52.457765 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 24 12:57:52.458074 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 24 12:57:52.458423 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 24 12:57:52.458630 kernel: scsi host0: ahci Oct 24 12:57:52.458843 kernel: scsi host1: ahci Oct 24 12:57:52.459031 kernel: scsi host2: ahci Oct 24 12:57:52.459276 kernel: scsi host3: ahci Oct 24 12:57:52.459546 kernel: scsi host4: ahci Oct 24 12:57:52.459746 kernel: scsi host5: ahci Oct 24 12:57:52.459764 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1 Oct 24 12:57:52.459773 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1 Oct 24 12:57:52.459796 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1 Oct 24 12:57:52.459805 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1 Oct 24 12:57:52.459814 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1 Oct 24 12:57:52.459826 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1 Oct 24 12:57:52.459835 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 24 12:57:52.459844 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 24 12:57:52.459853 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 24 12:57:52.459862 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 24 12:57:52.459871 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 24 12:57:52.459880 kernel: ata3.00: LPM support broken, forcing max_power Oct 24 12:57:52.459900 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 24 12:57:52.459910 kernel: ata3.00: applying bridge limits Oct 24 12:57:52.459921 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 24 12:57:52.459937 kernel: ata3.00: LPM support broken, forcing max_power Oct 24 12:57:52.459946 kernel: ata3.00: configured for UDMA/100 Oct 24 12:57:52.460281 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 24 12:57:52.460603 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 24 12:57:52.460903 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 24 12:57:52.460937 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 24 12:57:52.460949 kernel: GPT:16515071 != 27000831 Oct 24 12:57:52.460958 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 24 12:57:52.460975 kernel: GPT:16515071 != 27000831 Oct 24 12:57:52.460983 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 24 12:57:52.460996 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 24 12:57:52.461008 kernel: Invalid ELF header magic: != \u007fELF Oct 24 12:57:52.461217 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 24 12:57:52.461231 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 24 12:57:52.461497 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 24 12:57:52.461528 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 24 12:57:52.461551 kernel: device-mapper: uevent: version 1.0.3 Oct 24 12:57:52.461618 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 24 12:57:52.461732 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 24 12:57:52.461748 kernel: Invalid ELF header magic: != \u007fELF Oct 24 12:57:52.461759 kernel: Invalid ELF header magic: != \u007fELF Oct 24 12:57:52.461775 kernel: raid6: avx2x4 gen() 23207 MB/s Oct 24 12:57:52.461805 kernel: raid6: avx2x2 gen() 20308 MB/s Oct 24 12:57:52.461814 kernel: raid6: avx2x1 gen() 16877 MB/s Oct 24 12:57:52.461825 kernel: raid6: using algorithm avx2x4 gen() 23207 MB/s Oct 24 12:57:52.461834 kernel: raid6: .... xor() 5261 MB/s, rmw enabled Oct 24 12:57:52.461843 kernel: raid6: using avx2x2 recovery algorithm Oct 24 12:57:52.461852 kernel: Invalid ELF header magic: != \u007fELF Oct 24 12:57:52.461861 kernel: Invalid ELF header magic: != \u007fELF Oct 24 12:57:52.461871 kernel: Invalid ELF header magic: != \u007fELF Oct 24 12:57:52.461880 kernel: xor: automatically using best checksumming function avx Oct 24 12:57:52.461889 kernel: Invalid ELF header magic: != \u007fELF Oct 24 12:57:52.461898 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 24 12:57:52.461907 kernel: BTRFS: device fsid 57233a56-c6f9-4544-8585-d96903c57cd2 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (176) Oct 24 12:57:52.461916 kernel: BTRFS info (device dm-0): first mount of filesystem 57233a56-c6f9-4544-8585-d96903c57cd2 Oct 24 12:57:52.461925 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 24 12:57:52.461934 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 24 12:57:52.461945 kernel: BTRFS info (device dm-0): enabling free space tree Oct 24 12:57:52.461955 kernel: Invalid ELF header magic: != \u007fELF Oct 24 12:57:52.461963 kernel: loop: module loaded Oct 24 12:57:52.461972 kernel: loop0: detected capacity change from 0 to 100120 Oct 24 12:57:52.461980 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 24 12:57:52.461991 systemd[1]: Successfully made /usr/ read-only. Oct 24 12:57:52.462005 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 24 12:57:52.462015 systemd[1]: Detected virtualization kvm. Oct 24 12:57:52.462024 systemd[1]: Detected architecture x86-64. Oct 24 12:57:52.462033 systemd[1]: Running in initrd. Oct 24 12:57:52.462043 systemd[1]: No hostname configured, using default hostname. Oct 24 12:57:52.462052 systemd[1]: Hostname set to . Oct 24 12:57:52.462064 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 24 12:57:52.462073 systemd[1]: Queued start job for default target initrd.target. Oct 24 12:57:52.462083 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 24 12:57:52.462092 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 24 12:57:52.462102 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 24 12:57:52.462111 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Oct 24 12:57:52.462121 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 24 12:57:52.462133 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 24 12:57:52.462143 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 24 12:57:52.462153 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 24 12:57:52.462162 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 24 12:57:52.462171 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 24 12:57:52.462183 systemd[1]: Reached target paths.target - Path Units. Oct 24 12:57:52.462192 systemd[1]: Reached target slices.target - Slice Units. Oct 24 12:57:52.462201 systemd[1]: Reached target swap.target - Swaps. Oct 24 12:57:52.462210 systemd[1]: Reached target timers.target - Timer Units. Oct 24 12:57:52.462220 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 24 12:57:52.462230 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 24 12:57:52.462248 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 24 12:57:52.462260 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 24 12:57:52.462269 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 24 12:57:52.462280 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 24 12:57:52.462289 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 24 12:57:52.462298 systemd[1]: Reached target sockets.target - Socket Units. Oct 24 12:57:52.462308 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 24 12:57:52.462317 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 24 12:57:52.462329 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 24 12:57:52.462338 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 24 12:57:52.462348 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 24 12:57:52.462357 systemd[1]: Starting systemd-fsck-usr.service... Oct 24 12:57:52.462367 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 24 12:57:52.462376 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 24 12:57:52.462389 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 24 12:57:52.462401 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 24 12:57:52.462411 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 24 12:57:52.462420 systemd[1]: Finished systemd-fsck-usr.service. Oct 24 12:57:52.462480 systemd-journald[311]: Collecting audit messages is disabled. Oct 24 12:57:52.462507 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 24 12:57:52.462529 systemd-journald[311]: Journal started Oct 24 12:57:52.462552 systemd-journald[311]: Runtime Journal (/run/log/journal/58f51b0cea2b493a9546c6fd21523ee1) is 6M, max 48.3M, 42.2M free. Oct 24 12:57:52.565260 systemd[1]: Started systemd-journald.service - Journal Service. 
Oct 24 12:57:52.569584 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 24 12:57:52.572334 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 24 12:57:52.580661 systemd-modules-load[313]: Inserted module 'br_netfilter' Oct 24 12:57:52.581549 kernel: Bridge firewalling registered Oct 24 12:57:52.582082 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 24 12:57:52.586914 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 24 12:57:52.591003 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 24 12:57:52.596424 systemd-tmpfiles[327]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 24 12:57:52.670063 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 24 12:57:52.673937 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 24 12:57:52.680870 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 24 12:57:52.684900 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 24 12:57:52.732995 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 24 12:57:52.739490 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 24 12:57:52.760065 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 24 12:57:52.761163 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 24 12:57:52.766695 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 24 12:57:52.797260 dracut-cmdline[354]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6d4ede80e622a50cb26c7ffae9f6398889acdee25a78881d7b1631dd9370bf95 Oct 24 12:57:52.819865 systemd-resolved[342]: Positive Trust Anchors: Oct 24 12:57:52.819878 systemd-resolved[342]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 24 12:57:52.819883 systemd-resolved[342]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 24 12:57:52.819914 systemd-resolved[342]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 24 12:57:52.847956 systemd-resolved[342]: Defaulting to hostname 'linux'. Oct 24 12:57:52.849704 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 24 12:57:52.851839 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Oct 24 12:57:52.939818 kernel: Loading iSCSI transport class v2.0-870. Oct 24 12:57:52.954821 kernel: iscsi: registered transport (tcp) Oct 24 12:57:53.051373 kernel: iscsi: registered transport (qla4xxx) Oct 24 12:57:53.051444 kernel: QLogic iSCSI HBA Driver Oct 24 12:57:53.083101 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 24 12:57:53.109903 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 24 12:57:53.111909 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 24 12:57:53.179473 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 24 12:57:53.200830 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 24 12:57:53.202238 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 24 12:57:53.250386 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 24 12:57:53.254548 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 24 12:57:53.284769 systemd-udevd[594]: Using default interface naming scheme 'v257'. Oct 24 12:57:53.348642 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 24 12:57:53.369353 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 24 12:57:53.374957 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 24 12:57:53.379341 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 24 12:57:53.404419 dracut-pre-trigger[693]: rd.md=0: removing MD RAID activation Oct 24 12:57:53.431300 systemd-networkd[695]: lo: Link UP Oct 24 12:57:53.431308 systemd-networkd[695]: lo: Gained carrier Oct 24 12:57:53.432099 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 24 12:57:53.433415 systemd[1]: Reached target network.target - Network. Oct 24 12:57:53.452490 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 24 12:57:53.455176 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 24 12:57:53.578971 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 24 12:57:53.582491 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 24 12:57:53.681474 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 24 12:57:53.696199 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 24 12:57:53.726321 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 24 12:57:53.775979 kernel: cryptd: max_cpu_qlen set to 1000 Oct 24 12:57:53.776021 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Oct 24 12:57:53.780084 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 24 12:57:53.786063 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 24 12:57:53.792505 systemd-networkd[695]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 24 12:57:53.793235 systemd-networkd[695]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 24 12:57:53.794204 systemd-networkd[695]: eth0: Link UP Oct 24 12:57:53.794428 systemd-networkd[695]: eth0: Gained carrier Oct 24 12:57:53.794439 systemd-networkd[695]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 24 12:57:53.805253 kernel: AES CTR mode by8 optimization enabled Oct 24 12:57:53.813477 systemd-networkd[695]: eth0: DHCPv4 address 10.0.0.145/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 24 12:57:53.816252 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 24 12:57:53.816384 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 24 12:57:53.821383 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 24 12:57:53.824632 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 24 12:57:53.924392 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 24 12:57:53.927234 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 24 12:57:53.934169 disk-uuid[816]: Primary Header is updated. Oct 24 12:57:53.934169 disk-uuid[816]: Secondary Entries is updated. Oct 24 12:57:53.934169 disk-uuid[816]: Secondary Header is updated. Oct 24 12:57:53.931468 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 24 12:57:53.934981 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 24 12:57:53.938961 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 24 12:57:53.948640 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 24 12:57:54.008851 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 24 12:57:54.152023 systemd-resolved[342]: Detected conflict on linux IN A 10.0.0.145 Oct 24 12:57:54.152044 systemd-resolved[342]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. Oct 24 12:57:54.984851 disk-uuid[837]: Warning: The kernel is still using the old partition table. Oct 24 12:57:54.984851 disk-uuid[837]: The new table will be used at the next reboot or after you Oct 24 12:57:54.984851 disk-uuid[837]: run partprobe(8) or kpartx(8) Oct 24 12:57:54.984851 disk-uuid[837]: The operation has completed successfully. Oct 24 12:57:54.996468 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 24 12:57:54.996630 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 24 12:57:55.001337 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 24 12:57:55.038823 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (858) Oct 24 12:57:55.038882 kernel: BTRFS info (device vda6): first mount of filesystem 74b228b6-9a0a-44a5-83d5-a4ca18adefe5 Oct 24 12:57:55.041464 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 24 12:57:55.045193 kernel: BTRFS info (device vda6): turning on async discard Oct 24 12:57:55.045225 kernel: BTRFS info (device vda6): enabling free space tree Oct 24 12:57:55.053815 kernel: BTRFS info (device vda6): last unmount of filesystem 74b228b6-9a0a-44a5-83d5-a4ca18adefe5 Oct 24 12:57:55.055088 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 24 12:57:55.057942 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Oct 24 12:57:55.413752 ignition[877]: Ignition 2.22.0 Oct 24 12:57:55.413771 ignition[877]: Stage: fetch-offline Oct 24 12:57:55.413854 ignition[877]: no configs at "/usr/lib/ignition/base.d" Oct 24 12:57:55.413872 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 24 12:57:55.414085 ignition[877]: parsed url from cmdline: "" Oct 24 12:57:55.414090 ignition[877]: no config URL provided Oct 24 12:57:55.414096 ignition[877]: reading system config file "/usr/lib/ignition/user.ign" Oct 24 12:57:55.414109 ignition[877]: no config at "/usr/lib/ignition/user.ign" Oct 24 12:57:55.414156 ignition[877]: op(1): [started] loading QEMU firmware config module Oct 24 12:57:55.414161 ignition[877]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 24 12:57:55.427587 ignition[877]: op(1): [finished] loading QEMU firmware config module Oct 24 12:57:55.509881 ignition[877]: parsing config with SHA512: 0bb8683b20f0641717d225bb5815e5e35ca95c66e59faec2bf07a8e369ea323a296d43416a2b1c7037bc74866a96fd26b0474fc805141bfb9c4d2d14f634fe8d Oct 24 12:57:55.552263 systemd-networkd[695]: eth0: Gained IPv6LL Oct 24 12:57:55.558963 unknown[877]: fetched base config from "system" Oct 24 12:57:55.558980 unknown[877]: fetched user config from "qemu" Oct 24 12:57:55.559598 ignition[877]: fetch-offline: fetch-offline passed Oct 24 12:57:55.559730 ignition[877]: Ignition finished successfully Oct 24 12:57:55.562767 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 24 12:57:55.565319 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 24 12:57:55.566305 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 24 12:57:55.614600 ignition[887]: Ignition 2.22.0 Oct 24 12:57:55.614614 ignition[887]: Stage: kargs Oct 24 12:57:55.614801 ignition[887]: no configs at "/usr/lib/ignition/base.d" Oct 24 12:57:55.614829 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 24 12:57:55.615665 ignition[887]: kargs: kargs passed Oct 24 12:57:55.621488 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 24 12:57:55.615713 ignition[887]: Ignition finished successfully Oct 24 12:57:55.624505 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 24 12:57:55.663206 ignition[895]: Ignition 2.22.0 Oct 24 12:57:55.663220 ignition[895]: Stage: disks Oct 24 12:57:55.663382 ignition[895]: no configs at "/usr/lib/ignition/base.d" Oct 24 12:57:55.663393 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 24 12:57:55.667175 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 24 12:57:55.664131 ignition[895]: disks: disks passed Oct 24 12:57:55.669743 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 24 12:57:55.664180 ignition[895]: Ignition finished successfully Oct 24 12:57:55.672655 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 24 12:57:55.675431 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 24 12:57:55.679379 systemd[1]: Reached target sysinit.target - System Initialization. Oct 24 12:57:55.682112 systemd[1]: Reached target basic.target - Basic System. Oct 24 12:57:55.689467 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Oct 24 12:57:55.727400 systemd-fsck[905]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 24 12:57:55.735650 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 24 12:57:55.740639 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 24 12:57:55.859812 kernel: EXT4-fs (vda9): mounted filesystem 0307972d-5049-4caa-aa9e-5afae6c45319 r/w with ordered data mode. Quota mode: none. Oct 24 12:57:55.859997 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 24 12:57:55.861226 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 24 12:57:55.864697 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 24 12:57:55.867584 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 24 12:57:55.871865 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 24 12:57:55.871908 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 24 12:57:55.871933 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 24 12:57:55.886598 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 24 12:57:55.889642 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 24 12:57:55.895065 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (914) Oct 24 12:57:55.898408 kernel: BTRFS info (device vda6): first mount of filesystem 74b228b6-9a0a-44a5-83d5-a4ca18adefe5 Oct 24 12:57:55.898444 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 24 12:57:55.902211 kernel: BTRFS info (device vda6): turning on async discard Oct 24 12:57:55.902236 kernel: BTRFS info (device vda6): enabling free space tree Oct 24 12:57:55.903581 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 24 12:57:55.959953 initrd-setup-root[938]: cut: /sysroot/etc/passwd: No such file or directory Oct 24 12:57:55.967521 initrd-setup-root[945]: cut: /sysroot/etc/group: No such file or directory Oct 24 12:57:55.974436 initrd-setup-root[952]: cut: /sysroot/etc/shadow: No such file or directory Oct 24 12:57:55.980717 initrd-setup-root[959]: cut: /sysroot/etc/gshadow: No such file or directory Oct 24 12:57:56.098717 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 24 12:57:56.102500 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 24 12:57:56.104646 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 24 12:57:56.132204 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 24 12:57:56.135112 kernel: BTRFS info (device vda6): last unmount of filesystem 74b228b6-9a0a-44a5-83d5-a4ca18adefe5 Oct 24 12:57:56.166056 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 24 12:57:56.183716 ignition[1028]: INFO : Ignition 2.22.0 Oct 24 12:57:56.183716 ignition[1028]: INFO : Stage: mount Oct 24 12:57:56.186920 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 24 12:57:56.186920 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 24 12:57:56.186920 ignition[1028]: INFO : mount: mount passed Oct 24 12:57:56.186920 ignition[1028]: INFO : Ignition finished successfully Oct 24 12:57:56.196838 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 24 12:57:56.200710 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Oct 24 12:57:56.228701 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 24 12:57:56.252870 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1040) Oct 24 12:57:56.256143 kernel: BTRFS info (device vda6): first mount of filesystem 74b228b6-9a0a-44a5-83d5-a4ca18adefe5 Oct 24 12:57:56.256170 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 24 12:57:56.260065 kernel: BTRFS info (device vda6): turning on async discard Oct 24 12:57:56.260119 kernel: BTRFS info (device vda6): enabling free space tree Oct 24 12:57:56.261974 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 24 12:57:56.300909 ignition[1057]: INFO : Ignition 2.22.0 Oct 24 12:57:56.300909 ignition[1057]: INFO : Stage: files Oct 24 12:57:56.303930 ignition[1057]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 24 12:57:56.303930 ignition[1057]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 24 12:57:56.303930 ignition[1057]: DEBUG : files: compiled without relabeling support, skipping Oct 24 12:57:56.303930 ignition[1057]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 24 12:57:56.303930 ignition[1057]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 24 12:57:56.314224 ignition[1057]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 24 12:57:56.316446 ignition[1057]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 24 12:57:56.319391 unknown[1057]: wrote ssh authorized keys file for user: core Oct 24 12:57:56.321415 ignition[1057]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 24 12:57:56.325337 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Oct 24 12:57:56.328526 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Oct 24 12:57:56.368993 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 24 12:57:56.434606 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Oct 24 12:57:56.434606 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 24 12:57:56.441271 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 24 12:57:56.441271 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 24 12:57:56.441271 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 24 12:57:56.441271 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 24 12:57:56.441271 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 24 12:57:56.441271 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 24 12:57:56.441271 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Oct 24 12:57:56.441271 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 24 12:57:56.441271 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 24 12:57:56.441271 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 24 12:57:56.470946 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 24 12:57:56.470946 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 24 12:57:56.470946 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Oct 24 12:57:56.724890 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 24 12:57:57.149918 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 24 12:57:57.149918 ignition[1057]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 24 12:57:57.155801 ignition[1057]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 24 12:57:57.155801 ignition[1057]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 24 12:57:57.155801 ignition[1057]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 24 12:57:57.155801 ignition[1057]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 24 12:57:57.155801 ignition[1057]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 24 12:57:57.155801 ignition[1057]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 24 12:57:57.155801 ignition[1057]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 24 12:57:57.155801 ignition[1057]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 24 12:57:57.184206 ignition[1057]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 24 12:57:57.192652 ignition[1057]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 24 12:57:57.195357 ignition[1057]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 24 12:57:57.195357 ignition[1057]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 24 12:57:57.195357 ignition[1057]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 24 12:57:57.195357 ignition[1057]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 24 12:57:57.195357 
ignition[1057]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 24 12:57:57.195357 ignition[1057]: INFO : files: files passed Oct 24 12:57:57.195357 ignition[1057]: INFO : Ignition finished successfully Oct 24 12:57:57.201426 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 24 12:57:57.204405 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 24 12:57:57.207906 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 24 12:57:57.232888 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 24 12:57:57.233020 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 24 12:57:57.237801 initrd-setup-root-after-ignition[1089]: grep: /sysroot/oem/oem-release: No such file or directory Oct 24 12:57:57.241720 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 24 12:57:57.241720 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 24 12:57:57.245089 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 24 12:57:57.244895 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 24 12:57:57.245543 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 24 12:57:57.254912 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 24 12:57:57.311733 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 24 12:57:57.311882 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 24 12:57:57.312998 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 24 12:57:57.317716 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 24 12:57:57.323483 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 24 12:57:57.326146 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 24 12:57:57.370702 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 24 12:57:57.373100 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 24 12:57:57.400957 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 24 12:57:57.401173 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 24 12:57:57.402530 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 24 12:57:57.403374 systemd[1]: Stopped target timers.target - Timer Units. Oct 24 12:57:57.413021 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 24 12:57:57.413256 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 24 12:57:57.418482 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 24 12:57:57.419331 systemd[1]: Stopped target basic.target - Basic System. Oct 24 12:57:57.422878 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 24 12:57:57.426306 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 24 12:57:57.429851 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
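The preset handling in the files stage above ("setting preset to disabled/enabled", "removing enablement symlink(s)") boils down to managing symlinks under a target's .wants directory inside /sysroot. The sketch below is a rough, hypothetical approximation of that effect; the multi-user.target choice and the paths are assumptions, not taken from the log.

    # Rough approximation of unit enablement as performed against /sysroot:
    # an enabled unit is a symlink in a target's .wants directory, a disabled one is not.
    # The multi-user.target wants directory is an assumed example.
    import os

    WANTS = "/sysroot/etc/systemd/system/multi-user.target.wants"

    def enable(unit):
        os.makedirs(WANTS, exist_ok=True)
        os.symlink(os.path.join("/etc/systemd/system", unit),
                   os.path.join(WANTS, unit))

    def disable(unit):
        link = os.path.join(WANTS, unit)
        if os.path.islink(link):
            os.remove(link)

    # e.g. enable("prepare-helm.service"); disable("coreos-metadata.service")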
Oct 24 12:57:57.433991 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 24 12:57:57.434546 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 24 12:57:57.435366 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 24 12:57:57.435943 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 24 12:57:57.436753 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 24 12:57:57.437570 systemd[1]: Stopped target swap.target - Swaps. Oct 24 12:57:57.438360 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 24 12:57:57.438566 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 24 12:57:57.457939 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 24 12:57:57.459264 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 24 12:57:57.466565 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 24 12:57:57.466726 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 24 12:57:57.467708 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 24 12:57:57.467919 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 24 12:57:57.474814 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 24 12:57:57.474941 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 24 12:57:57.478766 systemd[1]: Stopped target paths.target - Path Units. Oct 24 12:57:57.479583 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 24 12:57:57.486952 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 24 12:57:57.487868 systemd[1]: Stopped target slices.target - Slice Units. Oct 24 12:57:57.492705 systemd[1]: Stopped target sockets.target - Socket Units. Oct 24 12:57:57.493638 systemd[1]: iscsid.socket: Deactivated successfully. Oct 24 12:57:57.493775 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 24 12:57:57.499615 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 24 12:57:57.499728 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 24 12:57:57.500531 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 24 12:57:57.500695 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 24 12:57:57.507278 systemd[1]: ignition-files.service: Deactivated successfully. Oct 24 12:57:57.507420 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 24 12:57:57.515386 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 24 12:57:57.516361 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 24 12:57:57.516522 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 24 12:57:57.520396 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 24 12:57:57.522469 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 24 12:57:57.522636 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 24 12:57:57.523392 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 24 12:57:57.523523 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 24 12:57:57.528905 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Oct 24 12:57:57.529047 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 24 12:57:57.543528 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 24 12:57:57.545987 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 24 12:57:57.572655 ignition[1116]: INFO : Ignition 2.22.0 Oct 24 12:57:57.572655 ignition[1116]: INFO : Stage: umount Oct 24 12:57:57.576870 ignition[1116]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 24 12:57:57.576870 ignition[1116]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 24 12:57:57.576870 ignition[1116]: INFO : umount: umount passed Oct 24 12:57:57.576870 ignition[1116]: INFO : Ignition finished successfully Oct 24 12:57:57.576027 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 24 12:57:57.576812 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 24 12:57:57.576968 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 24 12:57:57.578535 systemd[1]: Stopped target network.target - Network. Oct 24 12:57:57.584556 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 24 12:57:57.584655 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 24 12:57:57.585509 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 24 12:57:57.585565 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 24 12:57:57.586330 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 24 12:57:57.586396 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 24 12:57:57.593281 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 24 12:57:57.593341 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 24 12:57:57.594319 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 24 12:57:57.600305 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 24 12:57:57.618340 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 24 12:57:57.618607 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 24 12:57:57.627351 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 24 12:57:57.627527 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 24 12:57:57.634334 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 24 12:57:57.637874 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 24 12:57:57.637984 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 24 12:57:57.641254 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 24 12:57:57.647170 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 24 12:57:57.647302 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 24 12:57:57.653011 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 24 12:57:57.653095 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 24 12:57:57.656237 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 24 12:57:57.656312 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 24 12:57:57.657430 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 24 12:57:57.660367 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 24 12:57:57.673085 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Oct 24 12:57:57.674530 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 24 12:57:57.674681 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 24 12:57:57.683480 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 24 12:57:57.683679 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 24 12:57:57.685350 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 24 12:57:57.685455 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 24 12:57:57.689913 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 24 12:57:57.690005 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 24 12:57:57.690452 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 24 12:57:57.690506 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 24 12:57:57.699262 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 24 12:57:57.699324 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 24 12:57:57.702865 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 24 12:57:57.702930 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 24 12:57:57.711472 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 24 12:57:57.712193 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 24 12:57:57.712250 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 24 12:57:57.715417 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 24 12:57:57.715489 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 24 12:57:57.716289 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 24 12:57:57.716375 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 24 12:57:57.747596 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 24 12:57:57.747750 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 24 12:57:57.750590 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 24 12:57:57.750735 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 24 12:57:57.755240 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 24 12:57:57.758708 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 24 12:57:57.778827 systemd[1]: Switching root. Oct 24 12:57:57.812299 systemd-journald[311]: Journal stopped Oct 24 12:57:59.764571 systemd-journald[311]: Received SIGTERM from PID 1 (systemd). 
Oct 24 12:57:59.764644 kernel: SELinux: policy capability network_peer_controls=1 Oct 24 12:57:59.764659 kernel: SELinux: policy capability open_perms=1 Oct 24 12:57:59.764675 kernel: SELinux: policy capability extended_socket_class=1 Oct 24 12:57:59.764696 kernel: SELinux: policy capability always_check_network=0 Oct 24 12:57:59.764708 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 24 12:57:59.764721 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 24 12:57:59.764738 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 24 12:57:59.764750 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 24 12:57:59.764762 kernel: SELinux: policy capability userspace_initial_context=0 Oct 24 12:57:59.764801 kernel: audit: type=1403 audit(1761310678.658:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 24 12:57:59.764821 systemd[1]: Successfully loaded SELinux policy in 69.505ms. Oct 24 12:57:59.764842 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.860ms. Oct 24 12:57:59.764856 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 24 12:57:59.764869 systemd[1]: Detected virtualization kvm. Oct 24 12:57:59.764882 systemd[1]: Detected architecture x86-64. Oct 24 12:57:59.764894 systemd[1]: Detected first boot. Oct 24 12:57:59.764915 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 24 12:57:59.764929 zram_generator::config[1161]: No configuration found. Oct 24 12:57:59.764948 kernel: Guest personality initialized and is inactive Oct 24 12:57:59.764961 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 24 12:57:59.764973 kernel: Initialized host personality Oct 24 12:57:59.764985 kernel: NET: Registered PF_VSOCK protocol family Oct 24 12:57:59.764997 systemd[1]: Populated /etc with preset unit settings. Oct 24 12:57:59.765018 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 24 12:57:59.765032 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 24 12:57:59.765045 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 24 12:57:59.765058 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 24 12:57:59.765071 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 24 12:57:59.765085 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 24 12:57:59.765098 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 24 12:57:59.765124 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 24 12:57:59.765138 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 24 12:57:59.765151 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 24 12:57:59.765164 systemd[1]: Created slice user.slice - User and Session Slice. Oct 24 12:57:59.765177 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 24 12:57:59.765190 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
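Because this is a first boot under KVM, systemd seeds the machine ID from the SMBIOS/DMI UUID, as the "Initializing machine ID from SMBIOS/DMI UUID" entry shows. As a rough illustration (not systemd's actual code path), the UUID it draws on can be read from sysfs:

    # Read the SMBIOS/DMI product UUID exposed by the hypervisor via sysfs.
    # Illustrative only; systemd's own machine-id seeding logic is more involved.
    def dmi_product_uuid(path="/sys/class/dmi/id/product_uuid"):
        with open(path) as f:  # usually requires root to read
            return f.read().strip().lower()

    if __name__ == "__main__":
        print(dmi_product_uuid())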
Oct 24 12:57:59.765202 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 24 12:57:59.765221 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 24 12:57:59.765234 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 24 12:57:59.765248 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 24 12:57:59.765261 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 24 12:57:59.765274 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 24 12:57:59.765287 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 24 12:57:59.765302 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 24 12:57:59.765318 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 24 12:57:59.765331 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 24 12:57:59.765344 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 24 12:57:59.765356 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 24 12:57:59.765369 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 24 12:57:59.765382 systemd[1]: Reached target slices.target - Slice Units. Oct 24 12:57:59.765400 systemd[1]: Reached target swap.target - Swaps. Oct 24 12:57:59.765413 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 24 12:57:59.765426 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 24 12:57:59.765438 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 24 12:57:59.765452 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 24 12:57:59.765464 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 24 12:57:59.765477 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 24 12:57:59.765493 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 24 12:57:59.765508 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 24 12:57:59.765521 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 24 12:57:59.765533 systemd[1]: Mounting media.mount - External Media Directory... Oct 24 12:57:59.765546 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 24 12:57:59.765559 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 24 12:57:59.765572 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 24 12:57:59.765590 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 24 12:57:59.765603 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 24 12:57:59.765616 systemd[1]: Reached target machines.target - Containers. Oct 24 12:57:59.765628 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 24 12:57:59.765641 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Oct 24 12:57:59.765654 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 24 12:57:59.765666 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 24 12:57:59.765683 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 24 12:57:59.765696 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 24 12:57:59.765710 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 24 12:57:59.765765 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 24 12:57:59.765779 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 24 12:57:59.765807 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 24 12:57:59.765820 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 24 12:57:59.765839 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 24 12:57:59.765851 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 24 12:57:59.765864 systemd[1]: Stopped systemd-fsck-usr.service. Oct 24 12:57:59.765878 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 24 12:57:59.765891 kernel: fuse: init (API version 7.41) Oct 24 12:57:59.765903 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 24 12:57:59.765915 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 24 12:57:59.765933 kernel: ACPI: bus type drm_connector registered Oct 24 12:57:59.765946 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 24 12:57:59.765959 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 24 12:57:59.765972 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 24 12:57:59.765985 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 24 12:57:59.766024 systemd-journald[1239]: Collecting audit messages is disabled. Oct 24 12:57:59.766048 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 24 12:57:59.766061 systemd-journald[1239]: Journal started Oct 24 12:57:59.766090 systemd-journald[1239]: Runtime Journal (/run/log/journal/58f51b0cea2b493a9546c6fd21523ee1) is 6M, max 48.3M, 42.2M free. Oct 24 12:57:59.363006 systemd[1]: Queued start job for default target multi-user.target. Oct 24 12:57:59.389213 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 24 12:57:59.389806 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 24 12:57:59.772050 systemd[1]: Started systemd-journald.service - Journal Service. Oct 24 12:57:59.774710 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 24 12:57:59.776559 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 24 12:57:59.778467 systemd[1]: Mounted media.mount - External Media Directory. Oct 24 12:57:59.780146 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Oct 24 12:57:59.782003 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 24 12:57:59.783911 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 24 12:57:59.785817 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 24 12:57:59.788016 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 24 12:57:59.790302 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 24 12:57:59.790527 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 24 12:57:59.792698 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 24 12:57:59.793138 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 24 12:57:59.795257 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 24 12:57:59.795473 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 24 12:57:59.797442 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 24 12:57:59.797656 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 24 12:57:59.799893 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 24 12:57:59.800103 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 24 12:57:59.802143 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 24 12:57:59.802359 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 24 12:57:59.804428 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 24 12:57:59.806642 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 24 12:57:59.809757 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 24 12:57:59.812305 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 24 12:57:59.831375 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 24 12:57:59.834193 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 24 12:57:59.837737 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 24 12:57:59.840810 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 24 12:57:59.842726 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 24 12:57:59.842849 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 24 12:57:59.844617 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 24 12:57:59.847125 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 24 12:57:59.854562 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 24 12:57:59.857724 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 24 12:57:59.859908 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 24 12:57:59.861322 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 24 12:57:59.863287 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Oct 24 12:57:59.864961 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 24 12:57:59.870022 systemd-journald[1239]: Time spent on flushing to /var/log/journal/58f51b0cea2b493a9546c6fd21523ee1 is 17.303ms for 972 entries. Oct 24 12:57:59.870022 systemd-journald[1239]: System Journal (/var/log/journal/58f51b0cea2b493a9546c6fd21523ee1) is 8M, max 163.5M, 155.5M free. Oct 24 12:57:59.912645 systemd-journald[1239]: Received client request to flush runtime journal. Oct 24 12:57:59.912703 kernel: loop1: detected capacity change from 0 to 224512 Oct 24 12:57:59.868924 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 24 12:57:59.873646 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 24 12:57:59.879019 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 24 12:57:59.881431 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 24 12:57:59.885765 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 24 12:57:59.888350 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 24 12:57:59.893728 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 24 12:57:59.897818 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 24 12:57:59.903387 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 24 12:57:59.915484 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 24 12:58:00.110886 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 24 12:58:00.113913 kernel: loop2: detected capacity change from 0 to 128048 Oct 24 12:58:00.116698 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 24 12:58:00.119885 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 24 12:58:00.122293 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 24 12:58:00.134884 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 24 12:58:00.152146 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Oct 24 12:58:00.152162 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Oct 24 12:58:00.153870 kernel: loop3: detected capacity change from 0 to 110984 Oct 24 12:58:00.158116 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 24 12:58:00.182808 kernel: loop4: detected capacity change from 0 to 224512 Oct 24 12:58:00.183984 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 24 12:58:00.194814 kernel: loop5: detected capacity change from 0 to 128048 Oct 24 12:58:00.206836 kernel: loop6: detected capacity change from 0 to 110984 Oct 24 12:58:00.213332 (sd-merge)[1303]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Oct 24 12:58:00.219386 (sd-merge)[1303]: Merged extensions into '/usr'. Oct 24 12:58:00.227647 systemd[1]: Reload requested from client PID 1280 ('systemd-sysext') (unit systemd-sysext.service)... Oct 24 12:58:00.227668 systemd[1]: Reloading... Oct 24 12:58:00.268439 systemd-resolved[1295]: Positive Trust Anchors: Oct 24 12:58:00.268456 systemd-resolved[1295]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 24 12:58:00.268461 systemd-resolved[1295]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 24 12:58:00.268492 systemd-resolved[1295]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 24 12:58:00.277993 systemd-resolved[1295]: Defaulting to hostname 'linux'. Oct 24 12:58:00.295824 zram_generator::config[1337]: No configuration found. Oct 24 12:58:00.536490 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 24 12:58:00.536619 systemd[1]: Reloading finished in 308 ms. Oct 24 12:58:00.569091 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 24 12:58:00.571550 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 24 12:58:00.576893 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 24 12:58:00.589860 systemd[1]: Starting ensure-sysext.service... Oct 24 12:58:00.592734 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 24 12:58:00.614767 systemd[1]: Reload requested from client PID 1373 ('systemctl') (unit ensure-sysext.service)... Oct 24 12:58:00.614813 systemd[1]: Reloading... Oct 24 12:58:00.623365 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 24 12:58:00.623418 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 24 12:58:00.623865 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 24 12:58:00.624273 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 24 12:58:00.626134 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 24 12:58:00.626554 systemd-tmpfiles[1374]: ACLs are not supported, ignoring. Oct 24 12:58:00.626745 systemd-tmpfiles[1374]: ACLs are not supported, ignoring. Oct 24 12:58:00.634114 systemd-tmpfiles[1374]: Detected autofs mount point /boot during canonicalization of boot. Oct 24 12:58:00.634132 systemd-tmpfiles[1374]: Skipping /boot Oct 24 12:58:00.648004 systemd-tmpfiles[1374]: Detected autofs mount point /boot during canonicalization of boot. Oct 24 12:58:00.648020 systemd-tmpfiles[1374]: Skipping /boot Oct 24 12:58:00.685833 zram_generator::config[1404]: No configuration found. Oct 24 12:58:00.916841 systemd[1]: Reloading finished in 301 ms. Oct 24 12:58:00.932316 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 24 12:58:00.960237 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 24 12:58:00.963144 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 24 12:58:00.982872 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 24 12:58:00.987202 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
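The sd-merge entries above show systemd-sysext overlaying three extension images (containerd-flatcar.raw, docker-flatcar.raw, kubernetes.raw) onto /usr, including the kubernetes.raw image that Ignition linked into /etc/extensions earlier. A small sketch of listing the images sysext would consider, assuming the usual search directories; the glob over *.raw is an illustrative simplification:

    # List candidate sysext images in the usual systemd-sysext search directories.
    # Directory list and the *.raw glob are simplifying assumptions for illustration.
    import glob

    SEARCH_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

    def extension_images():
        found = []
        for d in SEARCH_DIRS:
            found.extend(sorted(glob.glob(f"{d}/*.raw")))
        return found

    if __name__ == "__main__":
        for image in extension_images():
            print(image)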
Oct 24 12:58:00.990828 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 24 12:58:00.998153 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 24 12:58:00.998328 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 24 12:58:01.001134 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 24 12:58:01.009000 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 24 12:58:01.022409 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 24 12:58:01.024150 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 24 12:58:01.024269 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 24 12:58:01.024441 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 24 12:58:01.027676 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 24 12:58:01.027867 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 24 12:58:01.028042 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 24 12:58:01.028148 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 24 12:58:01.028244 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 24 12:58:01.031151 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 24 12:58:01.031366 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 24 12:58:01.034995 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 24 12:58:01.036964 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 24 12:58:01.037074 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 24 12:58:01.037214 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 24 12:58:01.043227 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 24 12:58:01.043465 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 24 12:58:01.045920 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 24 12:58:01.046196 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Oct 24 12:58:01.048515 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 24 12:58:01.048736 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 24 12:58:01.058672 systemd[1]: Finished ensure-sysext.service. Oct 24 12:58:01.062754 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 24 12:58:01.063111 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 24 12:58:01.069162 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 24 12:58:01.069476 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 24 12:58:01.074022 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 24 12:58:01.076994 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 24 12:58:01.080372 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 24 12:58:01.090510 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 24 12:58:01.094651 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 24 12:58:01.115841 augenrules[1480]: No rules Oct 24 12:58:01.145237 systemd[1]: audit-rules.service: Deactivated successfully. Oct 24 12:58:01.145714 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 24 12:58:01.168564 systemd-udevd[1476]: Using default interface naming scheme 'v257'. Oct 24 12:58:01.200282 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 24 12:58:01.206447 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 24 12:58:01.208999 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 24 12:58:01.211771 systemd[1]: Reached target time-set.target - System Time Set. Oct 24 12:58:01.221873 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 24 12:58:01.224694 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 24 12:58:01.337759 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 24 12:58:01.361018 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 24 12:58:01.367597 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 24 12:58:01.372872 kernel: mousedev: PS/2 mouse device common for all mice Oct 24 12:58:01.383338 systemd-networkd[1494]: lo: Link UP Oct 24 12:58:01.383349 systemd-networkd[1494]: lo: Gained carrier Oct 24 12:58:01.385273 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 24 12:58:01.387590 systemd-networkd[1494]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 24 12:58:01.387602 systemd-networkd[1494]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 24 12:58:01.388049 systemd[1]: Reached target network.target - Network. 
Oct 24 12:58:01.389727 systemd-networkd[1494]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 24 12:58:01.389885 systemd-networkd[1494]: eth0: Link UP Oct 24 12:58:01.390079 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 24 12:58:01.390279 systemd-networkd[1494]: eth0: Gained carrier Oct 24 12:58:01.390339 systemd-networkd[1494]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 24 12:58:01.392127 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 24 12:58:01.400814 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 24 12:58:01.405917 systemd-networkd[1494]: eth0: DHCPv4 address 10.0.0.145/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 24 12:58:01.408830 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 24 12:58:01.410069 systemd-timesyncd[1470]: Network configuration changed, trying to establish connection. Oct 24 12:58:01.817560 systemd-timesyncd[1470]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 24 12:58:01.817642 systemd-timesyncd[1470]: Initial clock synchronization to Fri 2025-10-24 12:58:01.817448 UTC. Oct 24 12:58:01.817806 systemd-resolved[1295]: Clock change detected. Flushing caches. Oct 24 12:58:01.822924 kernel: ACPI: button: Power Button [PWRF] Oct 24 12:58:01.835985 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 24 12:58:01.846021 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 24 12:58:01.846357 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 24 12:58:02.117988 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 24 12:58:02.168424 kernel: kvm_amd: TSC scaling supported Oct 24 12:58:02.168545 kernel: kvm_amd: Nested Virtualization enabled Oct 24 12:58:02.168564 kernel: kvm_amd: Nested Paging enabled Oct 24 12:58:02.168613 kernel: kvm_amd: LBR virtualization supported Oct 24 12:58:02.170629 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 24 12:58:02.170668 kernel: kvm_amd: Virtual GIF supported Oct 24 12:58:02.245428 kernel: EDAC MC: Ver: 3.0.0 Oct 24 12:58:02.305731 ldconfig[1444]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 24 12:58:02.313355 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 24 12:58:02.352407 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 24 12:58:02.358488 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 24 12:58:02.393854 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 24 12:58:02.396241 systemd[1]: Reached target sysinit.target - System Initialization. Oct 24 12:58:02.398109 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 24 12:58:02.400371 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 24 12:58:02.402525 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 24 12:58:02.404584 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
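systemd-networkd reports the DHCPv4 lease for eth0 in a single journal line above (address 10.0.0.145/16, gateway 10.0.0.1). When post-processing a log like this one, those fields can be pulled out with a short parser; the sketch below is matched against the exact message format seen here.

    # Extract interface, address, prefix and gateway from a systemd-networkd
    # "DHCPv4 address ..." journal message like the one logged for eth0 above.
    import re

    LEASE_RE = re.compile(
        r"(?P<ifname>\S+): DHCPv4 address (?P<addr>[\d.]+)/(?P<prefix>\d+), "
        r"gateway (?P<gw>[\d.]+)"
    )

    def parse_lease(line):
        m = LEASE_RE.search(line)
        return m.groupdict() if m else None

    if __name__ == "__main__":
        sample = "eth0: DHCPv4 address 10.0.0.145/16, gateway 10.0.0.1 acquired from 10.0.0.1"
        print(parse_lease(sample))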
Oct 24 12:58:02.406483 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 24 12:58:02.408536 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 24 12:58:02.410575 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 24 12:58:02.410820 systemd[1]: Reached target paths.target - Path Units. Oct 24 12:58:02.412317 systemd[1]: Reached target timers.target - Timer Units. Oct 24 12:58:02.414824 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 24 12:58:02.418539 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 24 12:58:02.423511 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 24 12:58:02.425784 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 24 12:58:02.427878 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 24 12:58:02.433291 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 24 12:58:02.435241 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 24 12:58:02.437902 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 24 12:58:02.440504 systemd[1]: Reached target sockets.target - Socket Units. Oct 24 12:58:02.442072 systemd[1]: Reached target basic.target - Basic System. Oct 24 12:58:02.443613 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 24 12:58:02.443648 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 24 12:58:02.444801 systemd[1]: Starting containerd.service - containerd container runtime... Oct 24 12:58:02.447694 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 24 12:58:02.450354 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 24 12:58:02.453362 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 24 12:58:02.456343 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 24 12:58:02.458078 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 24 12:58:02.462025 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 24 12:58:02.463869 jq[1559]: false Oct 24 12:58:02.465257 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 24 12:58:02.468360 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 24 12:58:02.473204 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 24 12:58:02.479527 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Refreshing passwd entry cache Oct 24 12:58:02.478754 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Oct 24 12:58:02.477199 oslogin_cache_refresh[1561]: Refreshing passwd entry cache Oct 24 12:58:02.483379 extend-filesystems[1560]: Found /dev/vda6 Oct 24 12:58:02.485583 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Failure getting users, quitting Oct 24 12:58:02.485583 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 24 12:58:02.485583 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Refreshing group entry cache Oct 24 12:58:02.485106 oslogin_cache_refresh[1561]: Failure getting users, quitting Oct 24 12:58:02.485128 oslogin_cache_refresh[1561]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 24 12:58:02.485188 oslogin_cache_refresh[1561]: Refreshing group entry cache Oct 24 12:58:02.489541 extend-filesystems[1560]: Found /dev/vda9 Oct 24 12:58:02.492766 extend-filesystems[1560]: Checking size of /dev/vda9 Oct 24 12:58:02.494713 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 24 12:58:02.496472 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 24 12:58:02.497090 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 24 12:58:02.497738 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Failure getting groups, quitting Oct 24 12:58:02.497733 oslogin_cache_refresh[1561]: Failure getting groups, quitting Oct 24 12:58:02.497814 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 24 12:58:02.497749 oslogin_cache_refresh[1561]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 24 12:58:02.498856 systemd[1]: Starting update-engine.service - Update Engine... Oct 24 12:58:02.505281 extend-filesystems[1560]: Resized partition /dev/vda9 Oct 24 12:58:02.504057 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 24 12:58:02.510417 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 24 12:58:02.512018 extend-filesystems[1585]: resize2fs 1.47.3 (8-Jul-2025) Oct 24 12:58:02.513486 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 24 12:58:02.517624 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 24 12:58:02.518019 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 24 12:58:02.518566 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 24 12:58:02.519957 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 24 12:58:02.521620 jq[1584]: true Oct 24 12:58:02.523520 systemd[1]: motdgen.service: Deactivated successfully. Oct 24 12:58:02.525413 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 24 12:58:02.533136 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 24 12:58:02.533463 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Oct 24 12:58:02.550456 update_engine[1580]: I20251024 12:58:02.550325 1580 main.cc:92] Flatcar Update Engine starting Oct 24 12:58:02.556107 (ntainerd)[1596]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 24 12:58:02.589749 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 24 12:58:02.589950 jq[1594]: true Oct 24 12:58:02.592929 extend-filesystems[1585]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 24 12:58:02.592929 extend-filesystems[1585]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 24 12:58:02.592929 extend-filesystems[1585]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 24 12:58:02.599273 extend-filesystems[1560]: Resized filesystem in /dev/vda9 Oct 24 12:58:02.596482 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 24 12:58:02.598628 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 24 12:58:02.620559 tar[1589]: linux-amd64/LICENSE Oct 24 12:58:02.621368 tar[1589]: linux-amd64/helm Oct 24 12:58:02.622431 systemd-logind[1573]: Watching system buttons on /dev/input/event2 (Power Button) Oct 24 12:58:02.622474 systemd-logind[1573]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 24 12:58:02.623130 systemd-logind[1573]: New seat seat0. Oct 24 12:58:02.626455 systemd[1]: Started systemd-logind.service - User Login Management. Oct 24 12:58:02.648773 dbus-daemon[1557]: [system] SELinux support is enabled Oct 24 12:58:02.649462 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 24 12:58:02.666868 update_engine[1580]: I20251024 12:58:02.666730 1580 update_check_scheduler.cc:74] Next update check in 3m52s Oct 24 12:58:02.787774 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 24 12:58:02.787831 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 24 12:58:02.790005 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 24 12:58:02.790033 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 24 12:58:02.793762 dbus-daemon[1557]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 24 12:58:02.794186 systemd[1]: Started update-engine.service - Update Engine. Oct 24 12:58:02.799560 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 24 12:58:02.932520 sshd_keygen[1587]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 24 12:58:02.944470 locksmithd[1627]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 24 12:58:02.972108 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 24 12:58:02.981994 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 24 12:58:03.025011 systemd[1]: issuegen.service: Deactivated successfully. Oct 24 12:58:03.025659 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 24 12:58:03.030977 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 24 12:58:03.101390 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 24 12:58:03.106629 systemd[1]: Started getty@tty1.service - Getty on tty1. 
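The extend-filesystems step above grows /dev/vda9 online from 456704 to 1784827 blocks; with the 4 KiB block size resize2fs reports, that is roughly 1.7 GiB expanding to about 6.8 GiB. A quick sketch of the conversion:

```python
# Sketch: convert the ext4 block counts from the resize log into sizes.
BLOCK_SIZE = 4096  # 4 KiB blocks, as reported for /dev/vda9

def blocks_to_gib(blocks: int, block_size: int = BLOCK_SIZE) -> float:
    return blocks * block_size / 2**30

old_blocks, new_blocks = 456_704, 1_784_827
print(f"before: {blocks_to_gib(old_blocks):.2f} GiB")  # ~1.74 GiB
print(f"after:  {blocks_to_gib(new_blocks):.2f} GiB")  # ~6.81 GiB
```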
Oct 24 12:58:03.112868 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 24 12:58:03.113776 systemd[1]: Reached target getty.target - Login Prompts. Oct 24 12:58:03.215837 systemd-networkd[1494]: eth0: Gained IPv6LL Oct 24 12:58:03.218621 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 24 12:58:03.222152 systemd[1]: Reached target network-online.target - Network is Online. Oct 24 12:58:03.226164 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 24 12:58:03.230654 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 24 12:58:03.234963 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 24 12:58:03.250689 bash[1626]: Updated "/home/core/.ssh/authorized_keys" Oct 24 12:58:03.253288 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 24 12:58:03.263960 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 24 12:58:03.296621 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 24 12:58:03.312455 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 24 12:58:03.314044 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 24 12:58:03.318122 containerd[1596]: time="2025-10-24T12:58:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 24 12:58:03.318122 containerd[1596]: time="2025-10-24T12:58:03.317382305Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 24 12:58:03.317963 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Oct 24 12:58:03.329240 containerd[1596]: time="2025-10-24T12:58:03.329175638Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.156µs" Oct 24 12:58:03.329376 containerd[1596]: time="2025-10-24T12:58:03.329357008Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 24 12:58:03.329464 containerd[1596]: time="2025-10-24T12:58:03.329450172Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 24 12:58:03.329723 containerd[1596]: time="2025-10-24T12:58:03.329703998Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 24 12:58:03.329784 containerd[1596]: time="2025-10-24T12:58:03.329771545Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 24 12:58:03.329890 containerd[1596]: time="2025-10-24T12:58:03.329874227Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 24 12:58:03.330018 containerd[1596]: time="2025-10-24T12:58:03.330000584Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 24 12:58:03.330070 containerd[1596]: time="2025-10-24T12:58:03.330058543Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 24 12:58:03.330391 containerd[1596]: time="2025-10-24T12:58:03.330371099Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 24 12:58:03.330446 containerd[1596]: time="2025-10-24T12:58:03.330433586Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 24 12:58:03.330500 containerd[1596]: time="2025-10-24T12:58:03.330487267Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 24 12:58:03.330560 containerd[1596]: time="2025-10-24T12:58:03.330546959Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 24 12:58:03.330740 containerd[1596]: time="2025-10-24T12:58:03.330722247Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 24 12:58:03.331059 containerd[1596]: time="2025-10-24T12:58:03.331040344Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 24 12:58:03.331143 containerd[1596]: time="2025-10-24T12:58:03.331128649Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 24 12:58:03.331201 containerd[1596]: time="2025-10-24T12:58:03.331188572Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 24 12:58:03.331279 containerd[1596]: time="2025-10-24T12:58:03.331265546Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 24 12:58:03.331773 containerd[1596]: 
time="2025-10-24T12:58:03.331731560Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 24 12:58:03.331872 containerd[1596]: time="2025-10-24T12:58:03.331846135Z" level=info msg="metadata content store policy set" policy=shared Oct 24 12:58:03.389444 tar[1589]: linux-amd64/README.md Oct 24 12:58:03.419421 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 24 12:58:03.655343 containerd[1596]: time="2025-10-24T12:58:03.655210383Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 24 12:58:03.655343 containerd[1596]: time="2025-10-24T12:58:03.655333463Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 24 12:58:03.655555 containerd[1596]: time="2025-10-24T12:58:03.655351467Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 24 12:58:03.655555 containerd[1596]: time="2025-10-24T12:58:03.655370823Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 24 12:58:03.655555 containerd[1596]: time="2025-10-24T12:58:03.655393005Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 24 12:58:03.655555 containerd[1596]: time="2025-10-24T12:58:03.655406090Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 24 12:58:03.655555 containerd[1596]: time="2025-10-24T12:58:03.655426618Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 24 12:58:03.655555 containerd[1596]: time="2025-10-24T12:58:03.655445003Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 24 12:58:03.655555 containerd[1596]: time="2025-10-24T12:58:03.655461243Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 24 12:58:03.655555 containerd[1596]: time="2025-10-24T12:58:03.655479477Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 24 12:58:03.655555 containerd[1596]: time="2025-10-24T12:58:03.655488514Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 24 12:58:03.655555 containerd[1596]: time="2025-10-24T12:58:03.655526235Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 24 12:58:03.655766 containerd[1596]: time="2025-10-24T12:58:03.655748542Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 24 12:58:03.655806 containerd[1596]: time="2025-10-24T12:58:03.655781383Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 24 12:58:03.655827 containerd[1596]: time="2025-10-24T12:58:03.655806350Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 24 12:58:03.655827 containerd[1596]: time="2025-10-24T12:58:03.655819234Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 24 12:58:03.655880 containerd[1596]: time="2025-10-24T12:58:03.655835144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 24 12:58:03.655880 containerd[1596]: time="2025-10-24T12:58:03.655845804Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 24 12:58:03.655880 containerd[1596]: time="2025-10-24T12:58:03.655856274Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 24 12:58:03.655880 containerd[1596]: time="2025-10-24T12:58:03.655877844Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 24 12:58:03.656000 containerd[1596]: time="2025-10-24T12:58:03.655891650Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 24 12:58:03.656000 containerd[1596]: time="2025-10-24T12:58:03.655913170Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 24 12:58:03.656000 containerd[1596]: time="2025-10-24T12:58:03.655925624Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 24 12:58:03.656134 containerd[1596]: time="2025-10-24T12:58:03.656094771Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 24 12:58:03.656134 containerd[1596]: time="2025-10-24T12:58:03.656116452Z" level=info msg="Start snapshots syncer" Oct 24 12:58:03.656189 containerd[1596]: time="2025-10-24T12:58:03.656177746Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 24 12:58:03.656718 containerd[1596]: time="2025-10-24T12:58:03.656550155Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 24 12:58:03.657036 containerd[1596]: time="2025-10-24T12:58:03.656729060Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 24 12:58:03.657036 containerd[1596]: 
time="2025-10-24T12:58:03.656870576Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 24 12:58:03.657036 containerd[1596]: time="2025-10-24T12:58:03.657026849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 24 12:58:03.657096 containerd[1596]: time="2025-10-24T12:58:03.657055071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 24 12:58:03.657096 containerd[1596]: time="2025-10-24T12:58:03.657067144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 24 12:58:03.657096 containerd[1596]: time="2025-10-24T12:58:03.657078105Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 24 12:58:03.657158 containerd[1596]: time="2025-10-24T12:58:03.657105346Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 24 12:58:03.657158 containerd[1596]: time="2025-10-24T12:58:03.657117188Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 24 12:58:03.657158 containerd[1596]: time="2025-10-24T12:58:03.657127337Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 24 12:58:03.657158 containerd[1596]: time="2025-10-24T12:58:03.657155019Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 24 12:58:03.657224 containerd[1596]: time="2025-10-24T12:58:03.657174145Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 24 12:58:03.657224 containerd[1596]: time="2025-10-24T12:58:03.657188682Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 24 12:58:03.657280 containerd[1596]: time="2025-10-24T12:58:03.657226433Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 24 12:58:03.657280 containerd[1596]: time="2025-10-24T12:58:03.657251179Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 24 12:58:03.657280 containerd[1596]: time="2025-10-24T12:58:03.657272569Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 24 12:58:03.657349 containerd[1596]: time="2025-10-24T12:58:03.657292627Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 24 12:58:03.657349 containerd[1596]: time="2025-10-24T12:58:03.657300241Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 24 12:58:03.657349 containerd[1596]: time="2025-10-24T12:58:03.657309238Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 24 12:58:03.657349 containerd[1596]: time="2025-10-24T12:58:03.657319247Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 24 12:58:03.657416 containerd[1596]: time="2025-10-24T12:58:03.657358801Z" level=info msg="runtime interface created" Oct 24 12:58:03.657416 containerd[1596]: time="2025-10-24T12:58:03.657364772Z" level=info msg="created NRI interface" Oct 24 
12:58:03.657416 containerd[1596]: time="2025-10-24T12:58:03.657384229Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 24 12:58:03.657416 containerd[1596]: time="2025-10-24T12:58:03.657403585Z" level=info msg="Connect containerd service" Oct 24 12:58:03.657485 containerd[1596]: time="2025-10-24T12:58:03.657430155Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 24 12:58:03.658423 containerd[1596]: time="2025-10-24T12:58:03.658401706Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 24 12:58:04.036400 containerd[1596]: time="2025-10-24T12:58:04.036183778Z" level=info msg="Start subscribing containerd event" Oct 24 12:58:04.036400 containerd[1596]: time="2025-10-24T12:58:04.036296339Z" level=info msg="Start recovering state" Oct 24 12:58:04.036760 containerd[1596]: time="2025-10-24T12:58:04.036520178Z" level=info msg="Start event monitor" Oct 24 12:58:04.036760 containerd[1596]: time="2025-10-24T12:58:04.036551437Z" level=info msg="Start cni network conf syncer for default" Oct 24 12:58:04.036760 containerd[1596]: time="2025-10-24T12:58:04.036568269Z" level=info msg="Start streaming server" Oct 24 12:58:04.036760 containerd[1596]: time="2025-10-24T12:58:04.036643760Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 24 12:58:04.036760 containerd[1596]: time="2025-10-24T12:58:04.036667685Z" level=info msg="runtime interface starting up..." Oct 24 12:58:04.036760 containerd[1596]: time="2025-10-24T12:58:04.036683234Z" level=info msg="starting plugins..." Oct 24 12:58:04.036760 containerd[1596]: time="2025-10-24T12:58:04.036717318Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 24 12:58:04.036999 containerd[1596]: time="2025-10-24T12:58:04.036741313Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 24 12:58:04.036999 containerd[1596]: time="2025-10-24T12:58:04.036886746Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 24 12:58:04.037228 containerd[1596]: time="2025-10-24T12:58:04.037081301Z" level=info msg="containerd successfully booted in 0.723341s" Oct 24 12:58:04.037400 systemd[1]: Started containerd.service - containerd container runtime. Oct 24 12:58:04.372772 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 24 12:58:04.376473 systemd[1]: Started sshd@0-10.0.0.145:22-10.0.0.1:38866.service - OpenSSH per-connection server daemon (10.0.0.1:38866). Oct 24 12:58:04.546680 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 38866 ssh2: RSA SHA256:5QnmBGiMs+NgtP00jEHzaQJgePAYLYRQ869UILbwcj0 Oct 24 12:58:04.549372 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 24 12:58:04.557966 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 24 12:58:04.563291 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 24 12:58:04.572293 systemd-logind[1573]: New session 1 of user core. Oct 24 12:58:04.608230 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 24 12:58:04.615884 systemd[1]: Starting user@500.service - User Manager for UID 500... 
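The containerd error a moment earlier, "no network config found in /etc/cni/net.d", is expected on a node that has not yet received a CNI add-on; the cri plugin simply retries once a network config appears in the confDir shown in its dumped configuration. A minimal sketch of such a file, written from Python, with a hypothetical name and illustrative subnet (a real cluster normally gets this from its CNI add-on, e.g. flannel or calico):

```python
# Sketch: write a minimal bridge CNI conflist so containerd's cri plugin can
# initialise pod networking. File name, network name and subnet are illustrative.
import json
import pathlib

CNI_DIR = pathlib.Path("/etc/cni/net.d")   # confDir from the cri config above
conf = {
    "cniVersion": "1.0.0",
    "name": "demo-net",                    # hypothetical network name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",  # illustrative subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

CNI_DIR.mkdir(parents=True, exist_ok=True)
(CNI_DIR / "10-demo.conflist").write_text(json.dumps(conf, indent=2))
```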
Oct 24 12:58:04.700027 (systemd)[1698]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 24 12:58:04.703356 systemd-logind[1573]: New session c1 of user core. Oct 24 12:58:04.886792 systemd[1698]: Queued start job for default target default.target. Oct 24 12:58:04.901034 systemd[1698]: Created slice app.slice - User Application Slice. Oct 24 12:58:04.901064 systemd[1698]: Reached target paths.target - Paths. Oct 24 12:58:04.901111 systemd[1698]: Reached target timers.target - Timers. Oct 24 12:58:04.903027 systemd[1698]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 24 12:58:04.920747 systemd[1698]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 24 12:58:04.920910 systemd[1698]: Reached target sockets.target - Sockets. Oct 24 12:58:04.920952 systemd[1698]: Reached target basic.target - Basic System. Oct 24 12:58:04.920993 systemd[1698]: Reached target default.target - Main User Target. Oct 24 12:58:04.921417 systemd[1698]: Startup finished in 208ms. Oct 24 12:58:04.923159 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 24 12:58:04.969817 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 24 12:58:05.041015 systemd[1]: Started sshd@1-10.0.0.145:22-10.0.0.1:38872.service - OpenSSH per-connection server daemon (10.0.0.1:38872). Oct 24 12:58:05.119427 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 38872 ssh2: RSA SHA256:5QnmBGiMs+NgtP00jEHzaQJgePAYLYRQ869UILbwcj0 Oct 24 12:58:05.121408 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 24 12:58:05.127025 systemd-logind[1573]: New session 2 of user core. Oct 24 12:58:05.137720 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 24 12:58:05.231141 sshd[1712]: Connection closed by 10.0.0.1 port 38872 Oct 24 12:58:05.232071 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Oct 24 12:58:05.241285 systemd[1]: sshd@1-10.0.0.145:22-10.0.0.1:38872.service: Deactivated successfully. Oct 24 12:58:05.243567 systemd[1]: session-2.scope: Deactivated successfully. Oct 24 12:58:05.244490 systemd-logind[1573]: Session 2 logged out. Waiting for processes to exit. Oct 24 12:58:05.248384 systemd[1]: Started sshd@2-10.0.0.145:22-10.0.0.1:38874.service - OpenSSH per-connection server daemon (10.0.0.1:38874). Oct 24 12:58:05.251314 systemd-logind[1573]: Removed session 2. Oct 24 12:58:05.257138 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 24 12:58:05.259910 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 24 12:58:05.262984 systemd[1]: Startup finished in 2.769s (kernel) + 6.589s (initrd) + 6.265s (userspace) = 15.623s. Oct 24 12:58:05.274084 (kubelet)[1724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 24 12:58:05.299932 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 38874 ssh2: RSA SHA256:5QnmBGiMs+NgtP00jEHzaQJgePAYLYRQ869UILbwcj0 Oct 24 12:58:05.301264 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 24 12:58:05.306908 systemd-logind[1573]: New session 3 of user core. Oct 24 12:58:05.317747 systemd[1]: Started session-3.scope - Session 3 of User core. 
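The boot-time summary above is simply the sum of the three phases systemd tracks (kernel, initrd, userspace); for reference:

```python
# Sketch: the reported total is the sum of the three boot phases.
kernel, initrd, userspace = 2.769, 6.589, 6.265
print(f"{kernel + initrd + userspace:.3f}s")  # 15.623s
```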
Oct 24 12:58:05.429984 sshd[1728]: Connection closed by 10.0.0.1 port 38874 Oct 24 12:58:05.433180 sshd-session[1722]: pam_unix(sshd:session): session closed for user core Oct 24 12:58:05.437459 systemd[1]: sshd@2-10.0.0.145:22-10.0.0.1:38874.service: Deactivated successfully. Oct 24 12:58:05.439571 systemd[1]: session-3.scope: Deactivated successfully. Oct 24 12:58:05.441026 systemd-logind[1573]: Session 3 logged out. Waiting for processes to exit. Oct 24 12:58:05.443575 systemd-logind[1573]: Removed session 3. Oct 24 12:58:06.079803 kubelet[1724]: E1024 12:58:06.079712 1724 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 24 12:58:06.083653 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 24 12:58:06.083877 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 24 12:58:06.084304 systemd[1]: kubelet.service: Consumed 2.424s CPU time, 265.9M memory peak. Oct 24 12:58:15.443087 systemd[1]: Started sshd@3-10.0.0.145:22-10.0.0.1:45980.service - OpenSSH per-connection server daemon (10.0.0.1:45980). Oct 24 12:58:15.504517 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 45980 ssh2: RSA SHA256:5QnmBGiMs+NgtP00jEHzaQJgePAYLYRQ869UILbwcj0 Oct 24 12:58:15.505687 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 24 12:58:15.509731 systemd-logind[1573]: New session 4 of user core. Oct 24 12:58:15.524740 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 24 12:58:15.577733 sshd[1748]: Connection closed by 10.0.0.1 port 45980 Oct 24 12:58:15.578062 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Oct 24 12:58:15.585876 systemd[1]: sshd@3-10.0.0.145:22-10.0.0.1:45980.service: Deactivated successfully. Oct 24 12:58:15.587449 systemd[1]: session-4.scope: Deactivated successfully. Oct 24 12:58:15.588223 systemd-logind[1573]: Session 4 logged out. Waiting for processes to exit. Oct 24 12:58:15.590646 systemd[1]: Started sshd@4-10.0.0.145:22-10.0.0.1:45984.service - OpenSSH per-connection server daemon (10.0.0.1:45984). Oct 24 12:58:15.591237 systemd-logind[1573]: Removed session 4. Oct 24 12:58:15.646560 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 45984 ssh2: RSA SHA256:5QnmBGiMs+NgtP00jEHzaQJgePAYLYRQ869UILbwcj0 Oct 24 12:58:15.648359 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 24 12:58:15.653260 systemd-logind[1573]: New session 5 of user core. Oct 24 12:58:15.662749 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 24 12:58:15.714061 sshd[1757]: Connection closed by 10.0.0.1 port 45984 Oct 24 12:58:15.714367 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Oct 24 12:58:15.735613 systemd[1]: sshd@4-10.0.0.145:22-10.0.0.1:45984.service: Deactivated successfully. Oct 24 12:58:15.737786 systemd[1]: session-5.scope: Deactivated successfully. Oct 24 12:58:15.738674 systemd-logind[1573]: Session 5 logged out. Waiting for processes to exit. Oct 24 12:58:15.741528 systemd[1]: Started sshd@5-10.0.0.145:22-10.0.0.1:45992.service - OpenSSH per-connection server daemon (10.0.0.1:45992). Oct 24 12:58:15.742177 systemd-logind[1573]: Removed session 5. 
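The kubelet exit above, and the restarts that follow, are the normal state of a node before kubeadm init or kubeadm join has run: /var/lib/kubelet/config.yaml is only written during bootstrap. A small sketch of the precondition, with the minimal KubeletConfiguration header included purely for reference:

```python
# Sketch: the restart loop ends once /var/lib/kubelet/config.yaml exists,
# which kubeadm normally writes during `kubeadm init` / `kubeadm join`.
# This only illustrates the precondition; it does not generate a real config.
import pathlib
import sys

KUBELET_CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")

# Smallest well-formed KubeletConfiguration header, shown for reference only:
MINIMAL_HEADER = (
    "apiVersion: kubelet.config.k8s.io/v1beta1\n"
    "kind: KubeletConfiguration\n"
)

if not KUBELET_CONFIG.exists():
    sys.exit(f"{KUBELET_CONFIG} missing - kubelet will keep exiting with status 1")
print("kubelet config present:", KUBELET_CONFIG)
```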
Oct 24 12:58:15.803508 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 45992 ssh2: RSA SHA256:5QnmBGiMs+NgtP00jEHzaQJgePAYLYRQ869UILbwcj0 Oct 24 12:58:15.805211 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 24 12:58:15.810175 systemd-logind[1573]: New session 6 of user core. Oct 24 12:58:15.819757 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 24 12:58:15.875738 sshd[1766]: Connection closed by 10.0.0.1 port 45992 Oct 24 12:58:15.876155 sshd-session[1763]: pam_unix(sshd:session): session closed for user core Oct 24 12:58:15.888501 systemd[1]: sshd@5-10.0.0.145:22-10.0.0.1:45992.service: Deactivated successfully. Oct 24 12:58:15.890396 systemd[1]: session-6.scope: Deactivated successfully. Oct 24 12:58:15.891186 systemd-logind[1573]: Session 6 logged out. Waiting for processes to exit. Oct 24 12:58:15.893906 systemd[1]: Started sshd@6-10.0.0.145:22-10.0.0.1:46004.service - OpenSSH per-connection server daemon (10.0.0.1:46004). Oct 24 12:58:15.894741 systemd-logind[1573]: Removed session 6. Oct 24 12:58:15.958442 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 46004 ssh2: RSA SHA256:5QnmBGiMs+NgtP00jEHzaQJgePAYLYRQ869UILbwcj0 Oct 24 12:58:15.960423 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 24 12:58:15.965409 systemd-logind[1573]: New session 7 of user core. Oct 24 12:58:15.972753 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 24 12:58:16.036471 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 24 12:58:16.036837 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 24 12:58:16.052120 sudo[1776]: pam_unix(sudo:session): session closed for user root Oct 24 12:58:16.054116 sshd[1775]: Connection closed by 10.0.0.1 port 46004 Oct 24 12:58:16.054500 sshd-session[1772]: pam_unix(sshd:session): session closed for user core Oct 24 12:58:16.065851 systemd[1]: sshd@6-10.0.0.145:22-10.0.0.1:46004.service: Deactivated successfully. Oct 24 12:58:16.067863 systemd[1]: session-7.scope: Deactivated successfully. Oct 24 12:58:16.068765 systemd-logind[1573]: Session 7 logged out. Waiting for processes to exit. Oct 24 12:58:16.071823 systemd[1]: Started sshd@7-10.0.0.145:22-10.0.0.1:46006.service - OpenSSH per-connection server daemon (10.0.0.1:46006). Oct 24 12:58:16.072617 systemd-logind[1573]: Removed session 7. Oct 24 12:58:16.084729 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 24 12:58:16.086635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 24 12:58:16.128618 sshd[1782]: Accepted publickey for core from 10.0.0.1 port 46006 ssh2: RSA SHA256:5QnmBGiMs+NgtP00jEHzaQJgePAYLYRQ869UILbwcj0 Oct 24 12:58:16.130503 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 24 12:58:16.135887 systemd-logind[1573]: New session 8 of user core. Oct 24 12:58:16.147879 systemd[1]: Started session-8.scope - Session 8 of User core. 
Oct 24 12:58:16.205191 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 24 12:58:16.205567 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 24 12:58:16.404038 sudo[1790]: pam_unix(sudo:session): session closed for user root Oct 24 12:58:16.412306 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 24 12:58:16.412660 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 24 12:58:16.424233 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 24 12:58:16.462292 augenrules[1816]: No rules Oct 24 12:58:16.464034 systemd[1]: audit-rules.service: Deactivated successfully. Oct 24 12:58:16.464314 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 24 12:58:16.465268 sudo[1789]: pam_unix(sudo:session): session closed for user root Oct 24 12:58:16.465421 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 24 12:58:16.467313 sshd[1788]: Connection closed by 10.0.0.1 port 46006 Oct 24 12:58:16.469371 sshd-session[1782]: pam_unix(sshd:session): session closed for user core Oct 24 12:58:16.479084 (kubelet)[1821]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 24 12:58:16.482159 systemd[1]: sshd@7-10.0.0.145:22-10.0.0.1:46006.service: Deactivated successfully. Oct 24 12:58:16.483882 systemd[1]: session-8.scope: Deactivated successfully. Oct 24 12:58:16.484971 systemd-logind[1573]: Session 8 logged out. Waiting for processes to exit. Oct 24 12:58:16.488159 systemd[1]: Started sshd@8-10.0.0.145:22-10.0.0.1:46012.service - OpenSSH per-connection server daemon (10.0.0.1:46012). Oct 24 12:58:16.488876 systemd-logind[1573]: Removed session 8. Oct 24 12:58:16.524434 kubelet[1821]: E1024 12:58:16.524382 1821 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 24 12:58:16.530709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 24 12:58:16.530908 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 24 12:58:16.531281 systemd[1]: kubelet.service: Consumed 296ms CPU time, 110.8M memory peak. Oct 24 12:58:16.552959 sshd[1833]: Accepted publickey for core from 10.0.0.1 port 46012 ssh2: RSA SHA256:5QnmBGiMs+NgtP00jEHzaQJgePAYLYRQ869UILbwcj0 Oct 24 12:58:16.554396 sshd-session[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 24 12:58:16.559095 systemd-logind[1573]: New session 9 of user core. Oct 24 12:58:16.573835 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 24 12:58:16.630926 sudo[1838]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 24 12:58:16.631246 sudo[1838]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 24 12:58:17.013414 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Oct 24 12:58:17.036082 (dockerd)[1858]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 24 12:58:17.299704 dockerd[1858]: time="2025-10-24T12:58:17.299510061Z" level=info msg="Starting up" Oct 24 12:58:17.300409 dockerd[1858]: time="2025-10-24T12:58:17.300373300Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 24 12:58:17.314014 dockerd[1858]: time="2025-10-24T12:58:17.313961167Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 24 12:58:17.513380 dockerd[1858]: time="2025-10-24T12:58:17.513313302Z" level=info msg="Loading containers: start." Oct 24 12:58:17.525629 kernel: Initializing XFRM netlink socket Oct 24 12:58:17.800927 systemd-networkd[1494]: docker0: Link UP Oct 24 12:58:17.805536 dockerd[1858]: time="2025-10-24T12:58:17.805478271Z" level=info msg="Loading containers: done." Oct 24 12:58:17.819295 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2051830988-merged.mount: Deactivated successfully. Oct 24 12:58:17.820095 dockerd[1858]: time="2025-10-24T12:58:17.820037900Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 24 12:58:17.820164 dockerd[1858]: time="2025-10-24T12:58:17.820136065Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 24 12:58:17.820263 dockerd[1858]: time="2025-10-24T12:58:17.820240510Z" level=info msg="Initializing buildkit" Oct 24 12:58:17.851402 dockerd[1858]: time="2025-10-24T12:58:17.851345973Z" level=info msg="Completed buildkit initialization" Oct 24 12:58:17.855592 dockerd[1858]: time="2025-10-24T12:58:17.855524989Z" level=info msg="Daemon has completed initialization" Oct 24 12:58:17.855691 dockerd[1858]: time="2025-10-24T12:58:17.855647278Z" level=info msg="API listen on /run/docker.sock" Oct 24 12:58:17.855903 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 24 12:58:18.599112 containerd[1596]: time="2025-10-24T12:58:18.599046922Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Oct 24 12:58:19.578259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2756871419.mount: Deactivated successfully. 
Oct 24 12:58:20.786242 containerd[1596]: time="2025-10-24T12:58:20.786182351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:58:20.787020 containerd[1596]: time="2025-10-24T12:58:20.786975658Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Oct 24 12:58:20.788359 containerd[1596]: time="2025-10-24T12:58:20.788301995Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:58:20.790700 containerd[1596]: time="2025-10-24T12:58:20.790662341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:58:20.791627 containerd[1596]: time="2025-10-24T12:58:20.791574701Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.192472586s" Oct 24 12:58:20.791673 containerd[1596]: time="2025-10-24T12:58:20.791634002Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Oct 24 12:58:20.792398 containerd[1596]: time="2025-10-24T12:58:20.792199473Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 24 12:58:21.816634 containerd[1596]: time="2025-10-24T12:58:21.816563124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:58:21.817385 containerd[1596]: time="2025-10-24T12:58:21.817335272Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Oct 24 12:58:21.818327 containerd[1596]: time="2025-10-24T12:58:21.818294471Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:58:21.820893 containerd[1596]: time="2025-10-24T12:58:21.820865281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:58:21.821693 containerd[1596]: time="2025-10-24T12:58:21.821658708Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.0294291s" Oct 24 12:58:21.821693 containerd[1596]: time="2025-10-24T12:58:21.821693203Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Oct 24 12:58:21.822202 
containerd[1596]: time="2025-10-24T12:58:21.822145030Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 24 12:58:23.145962 containerd[1596]: time="2025-10-24T12:58:23.145876436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:58:23.147340 containerd[1596]: time="2025-10-24T12:58:23.147298924Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Oct 24 12:58:23.148843 containerd[1596]: time="2025-10-24T12:58:23.148778828Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:58:23.151735 containerd[1596]: time="2025-10-24T12:58:23.151680008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:58:23.152614 containerd[1596]: time="2025-10-24T12:58:23.152552815Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.33038412s" Oct 24 12:58:23.152614 containerd[1596]: time="2025-10-24T12:58:23.152583272Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Oct 24 12:58:23.153117 containerd[1596]: time="2025-10-24T12:58:23.153067630Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 24 12:58:24.246848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2048183032.mount: Deactivated successfully. 
Oct 24 12:58:24.950021 containerd[1596]: time="2025-10-24T12:58:24.949916691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:58:24.950650 containerd[1596]: time="2025-10-24T12:58:24.950586788Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Oct 24 12:58:24.951793 containerd[1596]: time="2025-10-24T12:58:24.951760969Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:58:24.953639 containerd[1596]: time="2025-10-24T12:58:24.953585921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:58:24.954120 containerd[1596]: time="2025-10-24T12:58:24.954085187Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.800976942s" Oct 24 12:58:24.954120 containerd[1596]: time="2025-10-24T12:58:24.954118600Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Oct 24 12:58:24.954644 containerd[1596]: time="2025-10-24T12:58:24.954609490Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 24 12:58:25.576219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3712997134.mount: Deactivated successfully. Oct 24 12:58:26.584884 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 24 12:58:26.588066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 24 12:58:26.860508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 24 12:58:26.869936 (kubelet)[2212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 24 12:58:27.222185 containerd[1596]: time="2025-10-24T12:58:27.222025575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:58:27.224565 containerd[1596]: time="2025-10-24T12:58:27.224538026Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Oct 24 12:58:27.225299 containerd[1596]: time="2025-10-24T12:58:27.225267173Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:58:27.227079 kubelet[2212]: E1024 12:58:27.227023 2212 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 24 12:58:27.229994 containerd[1596]: time="2025-10-24T12:58:27.229941738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:58:27.231538 containerd[1596]: time="2025-10-24T12:58:27.231105109Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.276470091s" Oct 24 12:58:27.231538 containerd[1596]: time="2025-10-24T12:58:27.231154131Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Oct 24 12:58:27.230945 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 24 12:58:27.231135 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 24 12:58:27.231944 systemd[1]: kubelet.service: Consumed 294ms CPU time, 110.7M memory peak. Oct 24 12:58:27.232061 containerd[1596]: time="2025-10-24T12:58:27.232008192Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 24 12:58:27.857241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1182748753.mount: Deactivated successfully. 
Oct 24 12:58:27.863350 containerd[1596]: time="2025-10-24T12:58:27.863306927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 24 12:58:27.864040 containerd[1596]: time="2025-10-24T12:58:27.864019093Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 24 12:58:27.865313 containerd[1596]: time="2025-10-24T12:58:27.865249389Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 24 12:58:27.867582 containerd[1596]: time="2025-10-24T12:58:27.867541056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 24 12:58:27.868313 containerd[1596]: time="2025-10-24T12:58:27.868280102Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 636.246912ms" Oct 24 12:58:27.868313 containerd[1596]: time="2025-10-24T12:58:27.868309347Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 24 12:58:27.868776 containerd[1596]: time="2025-10-24T12:58:27.868750073Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 24 12:58:28.381400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount574745035.mount: Deactivated successfully. 
Oct 24 12:58:31.213537 containerd[1596]: time="2025-10-24T12:58:31.213463109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:58:31.214349 containerd[1596]: time="2025-10-24T12:58:31.214321288Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Oct 24 12:58:31.215679 containerd[1596]: time="2025-10-24T12:58:31.215635683Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:58:31.218605 containerd[1596]: time="2025-10-24T12:58:31.218552021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:58:31.219938 containerd[1596]: time="2025-10-24T12:58:31.219878819Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.351098639s" Oct 24 12:58:31.219938 containerd[1596]: time="2025-10-24T12:58:31.219923232Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Oct 24 12:58:32.969964 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 24 12:58:32.970130 systemd[1]: kubelet.service: Consumed 294ms CPU time, 110.7M memory peak. Oct 24 12:58:32.972544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 24 12:58:33.000647 systemd[1]: Reload requested from client PID 2310 ('systemctl') (unit session-9.scope)... Oct 24 12:58:33.000665 systemd[1]: Reloading... Oct 24 12:58:33.100632 zram_generator::config[2356]: No configuration found. Oct 24 12:58:33.447038 systemd[1]: Reloading finished in 445 ms. Oct 24 12:58:33.528446 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 24 12:58:33.528580 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 24 12:58:33.528996 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 24 12:58:33.529054 systemd[1]: kubelet.service: Consumed 174ms CPU time, 98.4M memory peak. Oct 24 12:58:33.531107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 24 12:58:33.726696 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 24 12:58:33.734989 (kubelet)[2401]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 24 12:58:33.776810 kubelet[2401]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 24 12:58:33.776810 kubelet[2401]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 24 12:58:33.776810 kubelet[2401]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 24 12:58:33.777233 kubelet[2401]: I1024 12:58:33.776888 2401 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 24 12:58:33.936297 kubelet[2401]: I1024 12:58:33.935360 2401 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 24 12:58:33.936297 kubelet[2401]: I1024 12:58:33.935387 2401 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 24 12:58:33.936297 kubelet[2401]: I1024 12:58:33.935759 2401 server.go:954] "Client rotation is on, will bootstrap in background" Oct 24 12:58:33.964044 kubelet[2401]: E1024 12:58:33.963980 2401 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.145:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Oct 24 12:58:33.964527 kubelet[2401]: I1024 12:58:33.964493 2401 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 24 12:58:33.971687 kubelet[2401]: I1024 12:58:33.971656 2401 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 24 12:58:33.978210 kubelet[2401]: I1024 12:58:33.978115 2401 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 24 12:58:33.978463 kubelet[2401]: I1024 12:58:33.978413 2401 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 24 12:58:33.978819 kubelet[2401]: I1024 12:58:33.978452 2401 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 24 12:58:33.979281 kubelet[2401]: I1024 12:58:33.979259 
2401 topology_manager.go:138] "Creating topology manager with none policy" Oct 24 12:58:33.979281 kubelet[2401]: I1024 12:58:33.979276 2401 container_manager_linux.go:304] "Creating device plugin manager" Oct 24 12:58:33.979451 kubelet[2401]: I1024 12:58:33.979432 2401 state_mem.go:36] "Initialized new in-memory state store" Oct 24 12:58:33.982244 kubelet[2401]: I1024 12:58:33.982204 2401 kubelet.go:446] "Attempting to sync node with API server" Oct 24 12:58:33.982314 kubelet[2401]: I1024 12:58:33.982251 2401 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 24 12:58:33.982314 kubelet[2401]: I1024 12:58:33.982287 2401 kubelet.go:352] "Adding apiserver pod source" Oct 24 12:58:33.982314 kubelet[2401]: I1024 12:58:33.982307 2401 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 24 12:58:33.990070 kubelet[2401]: W1024 12:58:33.989983 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Oct 24 12:58:33.990070 kubelet[2401]: E1024 12:58:33.990065 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Oct 24 12:58:33.990238 kubelet[2401]: I1024 12:58:33.990177 2401 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 24 12:58:33.990888 kubelet[2401]: W1024 12:58:33.990834 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Oct 24 12:58:33.990888 kubelet[2401]: E1024 12:58:33.990881 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Oct 24 12:58:33.991257 kubelet[2401]: I1024 12:58:33.991225 2401 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 24 12:58:33.991347 kubelet[2401]: W1024 12:58:33.991328 2401 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 24 12:58:33.994586 kubelet[2401]: I1024 12:58:33.994556 2401 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 24 12:58:33.994654 kubelet[2401]: I1024 12:58:33.994608 2401 server.go:1287] "Started kubelet" Oct 24 12:58:33.994754 kubelet[2401]: I1024 12:58:33.994728 2401 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 24 12:58:33.995745 kubelet[2401]: I1024 12:58:33.995728 2401 server.go:479] "Adding debug handlers to kubelet server" Oct 24 12:58:33.998497 kubelet[2401]: I1024 12:58:33.997991 2401 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 24 12:58:33.998497 kubelet[2401]: I1024 12:58:33.998093 2401 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 24 12:58:33.998497 kubelet[2401]: I1024 12:58:33.998170 2401 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 24 12:58:33.998497 kubelet[2401]: I1024 12:58:33.998403 2401 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 24 12:58:34.000609 kubelet[2401]: E1024 12:58:33.999964 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 24 12:58:34.000609 kubelet[2401]: I1024 12:58:34.000068 2401 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 24 12:58:34.000609 kubelet[2401]: E1024 12:58:34.000442 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="200ms" Oct 24 12:58:34.000709 kubelet[2401]: I1024 12:58:34.000646 2401 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 24 12:58:34.000709 kubelet[2401]: I1024 12:58:34.000702 2401 reconciler.go:26] "Reconciler: start to sync state" Oct 24 12:58:34.002948 kubelet[2401]: W1024 12:58:34.002902 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Oct 24 12:58:34.003018 kubelet[2401]: E1024 12:58:34.002962 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Oct 24 12:58:34.003042 kubelet[2401]: I1024 12:58:34.003021 2401 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 24 12:58:34.004703 kubelet[2401]: E1024 12:58:34.004675 2401 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 24 12:58:34.004779 kubelet[2401]: E1024 12:58:34.002676 2401 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.145:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.145:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18716ec336bc6f25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-24 12:58:33.994571557 +0000 UTC m=+0.255596515,LastTimestamp:2025-10-24 12:58:33.994571557 +0000 UTC m=+0.255596515,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 24 12:58:34.004863 kubelet[2401]: I1024 12:58:34.004849 2401 factory.go:221] Registration of the containerd container factory successfully Oct 24 12:58:34.004863 kubelet[2401]: I1024 12:58:34.004862 2401 factory.go:221] Registration of the systemd container factory successfully Oct 24 12:58:34.020240 kubelet[2401]: I1024 12:58:34.020200 2401 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 24 12:58:34.020352 kubelet[2401]: I1024 12:58:34.020300 2401 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 24 12:58:34.020352 kubelet[2401]: I1024 12:58:34.020349 2401 state_mem.go:36] "Initialized new in-memory state store" Oct 24 12:58:34.020399 kubelet[2401]: I1024 12:58:34.020276 2401 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 24 12:58:34.022097 kubelet[2401]: I1024 12:58:34.022057 2401 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 24 12:58:34.022097 kubelet[2401]: I1024 12:58:34.022089 2401 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 24 12:58:34.022298 kubelet[2401]: I1024 12:58:34.022111 2401 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 24 12:58:34.022298 kubelet[2401]: I1024 12:58:34.022121 2401 kubelet.go:2382] "Starting kubelet main sync loop" Oct 24 12:58:34.022298 kubelet[2401]: E1024 12:58:34.022168 2401 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 24 12:58:34.025716 kubelet[2401]: W1024 12:58:34.025671 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Oct 24 12:58:34.025826 kubelet[2401]: E1024 12:58:34.025723 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Oct 24 12:58:34.101115 kubelet[2401]: E1024 12:58:34.101059 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 24 12:58:34.123341 kubelet[2401]: E1024 12:58:34.123273 2401 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 24 12:58:34.200949 kubelet[2401]: E1024 12:58:34.200916 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="400ms" Oct 24 12:58:34.201917 kubelet[2401]: E1024 12:58:34.201889 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 24 12:58:34.302267 kubelet[2401]: E1024 12:58:34.302189 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 24 12:58:34.323418 kubelet[2401]: E1024 12:58:34.323362 2401 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 24 12:58:34.402940 kubelet[2401]: E1024 12:58:34.402886 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 24 12:58:34.503896 kubelet[2401]: E1024 12:58:34.503831 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 24 12:58:34.524843 kubelet[2401]: I1024 12:58:34.524783 2401 policy_none.go:49] "None policy: Start" Oct 24 12:58:34.524843 kubelet[2401]: I1024 12:58:34.524830 2401 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 24 12:58:34.524843 kubelet[2401]: I1024 12:58:34.524856 2401 state_mem.go:35] "Initializing new in-memory state store" Oct 24 12:58:34.532090 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 24 12:58:34.543344 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 24 12:58:34.548836 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 24 12:58:34.562631 kubelet[2401]: I1024 12:58:34.562509 2401 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 24 12:58:34.562794 kubelet[2401]: I1024 12:58:34.562774 2401 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 24 12:58:34.562842 kubelet[2401]: I1024 12:58:34.562793 2401 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 24 12:58:34.563078 kubelet[2401]: I1024 12:58:34.563058 2401 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 24 12:58:34.563933 kubelet[2401]: E1024 12:58:34.563899 2401 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 24 12:58:34.563976 kubelet[2401]: E1024 12:58:34.563963 2401 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 24 12:58:34.601983 kubelet[2401]: E1024 12:58:34.601915 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="800ms" Oct 24 12:58:34.664401 kubelet[2401]: I1024 12:58:34.664352 2401 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 24 12:58:34.664836 kubelet[2401]: E1024 12:58:34.664796 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" Oct 24 12:58:34.732637 systemd[1]: Created slice kubepods-burstable-podaab785e49d065a2cf2af03f8b10aebb3.slice - libcontainer container kubepods-burstable-podaab785e49d065a2cf2af03f8b10aebb3.slice. Oct 24 12:58:34.756506 kubelet[2401]: E1024 12:58:34.756468 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 24 12:58:34.759386 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. Oct 24 12:58:34.761576 kubelet[2401]: E1024 12:58:34.761554 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 24 12:58:34.764274 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. 
Oct 24 12:58:34.766269 kubelet[2401]: E1024 12:58:34.766247 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 24 12:58:34.804771 kubelet[2401]: I1024 12:58:34.804721 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aab785e49d065a2cf2af03f8b10aebb3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aab785e49d065a2cf2af03f8b10aebb3\") " pod="kube-system/kube-apiserver-localhost" Oct 24 12:58:34.804771 kubelet[2401]: I1024 12:58:34.804756 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aab785e49d065a2cf2af03f8b10aebb3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aab785e49d065a2cf2af03f8b10aebb3\") " pod="kube-system/kube-apiserver-localhost" Oct 24 12:58:34.805165 kubelet[2401]: I1024 12:58:34.804782 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 24 12:58:34.805165 kubelet[2401]: I1024 12:58:34.804859 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 24 12:58:34.805165 kubelet[2401]: I1024 12:58:34.804890 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aab785e49d065a2cf2af03f8b10aebb3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aab785e49d065a2cf2af03f8b10aebb3\") " pod="kube-system/kube-apiserver-localhost" Oct 24 12:58:34.805165 kubelet[2401]: I1024 12:58:34.804910 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 24 12:58:34.805165 kubelet[2401]: I1024 12:58:34.804932 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 24 12:58:34.805282 kubelet[2401]: I1024 12:58:34.804954 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 24 12:58:34.805282 kubelet[2401]: I1024 12:58:34.804977 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 24 12:58:34.867282 kubelet[2401]: I1024 12:58:34.867227 2401 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 24 12:58:34.867765 kubelet[2401]: E1024 12:58:34.867723 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" Oct 24 12:58:35.057669 kubelet[2401]: E1024 12:58:35.057621 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:35.058444 containerd[1596]: time="2025-10-24T12:58:35.058389271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aab785e49d065a2cf2af03f8b10aebb3,Namespace:kube-system,Attempt:0,}" Oct 24 12:58:35.062551 kubelet[2401]: E1024 12:58:35.062503 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:35.062951 containerd[1596]: time="2025-10-24T12:58:35.062900654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Oct 24 12:58:35.067167 kubelet[2401]: E1024 12:58:35.067138 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:35.067486 containerd[1596]: time="2025-10-24T12:58:35.067449328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Oct 24 12:58:35.096037 containerd[1596]: time="2025-10-24T12:58:35.095957501Z" level=info msg="connecting to shim 13305cb949b595429aca44088ad454b4ff0b679c687add0649c7e63a692a7959" address="unix:///run/containerd/s/964394688ffff0e55bc3bb0546eb29d8da3999c86d51d92e50757b1f5b460811" namespace=k8s.io protocol=ttrpc version=3 Oct 24 12:58:35.119374 containerd[1596]: time="2025-10-24T12:58:35.119191931Z" level=info msg="connecting to shim 1df5a2b14ccf908e6b2122cbe7581c5cda842379f8f474d5b89c26a59f81674c" address="unix:///run/containerd/s/299c312402853d5136d1c0570a9018aa0b9f30f3d407414ad4b7f5351c0fdd3b" namespace=k8s.io protocol=ttrpc version=3 Oct 24 12:58:35.147919 kubelet[2401]: W1024 12:58:35.147866 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Oct 24 12:58:35.148045 kubelet[2401]: E1024 12:58:35.147926 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Oct 24 12:58:35.150789 systemd[1]: Started 
cri-containerd-1df5a2b14ccf908e6b2122cbe7581c5cda842379f8f474d5b89c26a59f81674c.scope - libcontainer container 1df5a2b14ccf908e6b2122cbe7581c5cda842379f8f474d5b89c26a59f81674c. Oct 24 12:58:35.155127 systemd[1]: Started cri-containerd-13305cb949b595429aca44088ad454b4ff0b679c687add0649c7e63a692a7959.scope - libcontainer container 13305cb949b595429aca44088ad454b4ff0b679c687add0649c7e63a692a7959. Oct 24 12:58:35.168898 containerd[1596]: time="2025-10-24T12:58:35.168835943Z" level=info msg="connecting to shim 12c5734fc120cf6bc11247d2701c28ab72084ff8634a050d5e4eb0520249a37b" address="unix:///run/containerd/s/1ed9124faf9f2206adb2b571f070f509b21f6769abfc6d6f61c5c347abeee6d7" namespace=k8s.io protocol=ttrpc version=3 Oct 24 12:58:35.405779 kubelet[2401]: E1024 12:58:35.405237 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="1.6s" Oct 24 12:58:35.405779 kubelet[2401]: W1024 12:58:35.405537 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Oct 24 12:58:35.405779 kubelet[2401]: E1024 12:58:35.405650 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Oct 24 12:58:35.406972 kubelet[2401]: I1024 12:58:35.406937 2401 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 24 12:58:35.407368 kubelet[2401]: E1024 12:58:35.407279 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" Oct 24 12:58:35.431784 kubelet[2401]: W1024 12:58:35.431713 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Oct 24 12:58:35.431784 kubelet[2401]: E1024 12:58:35.431763 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Oct 24 12:58:35.479168 kubelet[2401]: W1024 12:58:35.478976 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Oct 24 12:58:35.479168 kubelet[2401]: E1024 12:58:35.479116 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" 
logger="UnhandledError" Oct 24 12:58:35.484009 systemd[1]: Started cri-containerd-12c5734fc120cf6bc11247d2701c28ab72084ff8634a050d5e4eb0520249a37b.scope - libcontainer container 12c5734fc120cf6bc11247d2701c28ab72084ff8634a050d5e4eb0520249a37b. Oct 24 12:58:35.487215 containerd[1596]: time="2025-10-24T12:58:35.487182954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aab785e49d065a2cf2af03f8b10aebb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"13305cb949b595429aca44088ad454b4ff0b679c687add0649c7e63a692a7959\"" Oct 24 12:58:35.489076 kubelet[2401]: E1024 12:58:35.489038 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:35.490984 containerd[1596]: time="2025-10-24T12:58:35.490960620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"1df5a2b14ccf908e6b2122cbe7581c5cda842379f8f474d5b89c26a59f81674c\"" Oct 24 12:58:35.492738 kubelet[2401]: E1024 12:58:35.492140 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:35.492785 containerd[1596]: time="2025-10-24T12:58:35.492233500Z" level=info msg="CreateContainer within sandbox \"13305cb949b595429aca44088ad454b4ff0b679c687add0649c7e63a692a7959\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 24 12:58:35.494214 containerd[1596]: time="2025-10-24T12:58:35.494179465Z" level=info msg="CreateContainer within sandbox \"1df5a2b14ccf908e6b2122cbe7581c5cda842379f8f474d5b89c26a59f81674c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 24 12:58:35.503694 containerd[1596]: time="2025-10-24T12:58:35.503649661Z" level=info msg="Container 854f7e1d1abca32346b02b0a29829cf605b4abf9e7fe54c770936c56b892802e: CDI devices from CRI Config.CDIDevices: []" Oct 24 12:58:35.511777 containerd[1596]: time="2025-10-24T12:58:35.511243194Z" level=info msg="Container f3b074a6eab4e2866e7f54c991e568e573f82e9b3123915c2c2cfbf117843756: CDI devices from CRI Config.CDIDevices: []" Oct 24 12:58:35.518442 containerd[1596]: time="2025-10-24T12:58:35.518386781Z" level=info msg="CreateContainer within sandbox \"13305cb949b595429aca44088ad454b4ff0b679c687add0649c7e63a692a7959\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"854f7e1d1abca32346b02b0a29829cf605b4abf9e7fe54c770936c56b892802e\"" Oct 24 12:58:35.520409 containerd[1596]: time="2025-10-24T12:58:35.518929049Z" level=info msg="StartContainer for \"854f7e1d1abca32346b02b0a29829cf605b4abf9e7fe54c770936c56b892802e\"" Oct 24 12:58:35.520409 containerd[1596]: time="2025-10-24T12:58:35.520038654Z" level=info msg="connecting to shim 854f7e1d1abca32346b02b0a29829cf605b4abf9e7fe54c770936c56b892802e" address="unix:///run/containerd/s/964394688ffff0e55bc3bb0546eb29d8da3999c86d51d92e50757b1f5b460811" protocol=ttrpc version=3 Oct 24 12:58:35.521589 containerd[1596]: time="2025-10-24T12:58:35.521567345Z" level=info msg="CreateContainer within sandbox \"1df5a2b14ccf908e6b2122cbe7581c5cda842379f8f474d5b89c26a59f81674c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f3b074a6eab4e2866e7f54c991e568e573f82e9b3123915c2c2cfbf117843756\"" Oct 24 12:58:35.522060 containerd[1596]: 
time="2025-10-24T12:58:35.522042407Z" level=info msg="StartContainer for \"f3b074a6eab4e2866e7f54c991e568e573f82e9b3123915c2c2cfbf117843756\"" Oct 24 12:58:35.523040 containerd[1596]: time="2025-10-24T12:58:35.523018351Z" level=info msg="connecting to shim f3b074a6eab4e2866e7f54c991e568e573f82e9b3123915c2c2cfbf117843756" address="unix:///run/containerd/s/299c312402853d5136d1c0570a9018aa0b9f30f3d407414ad4b7f5351c0fdd3b" protocol=ttrpc version=3 Oct 24 12:58:35.534052 containerd[1596]: time="2025-10-24T12:58:35.533992773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"12c5734fc120cf6bc11247d2701c28ab72084ff8634a050d5e4eb0520249a37b\"" Oct 24 12:58:35.536140 kubelet[2401]: E1024 12:58:35.536107 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:35.538034 containerd[1596]: time="2025-10-24T12:58:35.538003005Z" level=info msg="CreateContainer within sandbox \"12c5734fc120cf6bc11247d2701c28ab72084ff8634a050d5e4eb0520249a37b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 24 12:58:35.542779 systemd[1]: Started cri-containerd-854f7e1d1abca32346b02b0a29829cf605b4abf9e7fe54c770936c56b892802e.scope - libcontainer container 854f7e1d1abca32346b02b0a29829cf605b4abf9e7fe54c770936c56b892802e. Oct 24 12:58:35.546705 systemd[1]: Started cri-containerd-f3b074a6eab4e2866e7f54c991e568e573f82e9b3123915c2c2cfbf117843756.scope - libcontainer container f3b074a6eab4e2866e7f54c991e568e573f82e9b3123915c2c2cfbf117843756. Oct 24 12:58:35.549758 containerd[1596]: time="2025-10-24T12:58:35.549723227Z" level=info msg="Container 42b46d0e8255b23daac31d7b7939586635df8fd16e74cb0eb0c0d7e59261c66e: CDI devices from CRI Config.CDIDevices: []" Oct 24 12:58:35.557946 containerd[1596]: time="2025-10-24T12:58:35.557888674Z" level=info msg="CreateContainer within sandbox \"12c5734fc120cf6bc11247d2701c28ab72084ff8634a050d5e4eb0520249a37b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"42b46d0e8255b23daac31d7b7939586635df8fd16e74cb0eb0c0d7e59261c66e\"" Oct 24 12:58:35.558501 containerd[1596]: time="2025-10-24T12:58:35.558468362Z" level=info msg="StartContainer for \"42b46d0e8255b23daac31d7b7939586635df8fd16e74cb0eb0c0d7e59261c66e\"" Oct 24 12:58:35.560015 containerd[1596]: time="2025-10-24T12:58:35.559974050Z" level=info msg="connecting to shim 42b46d0e8255b23daac31d7b7939586635df8fd16e74cb0eb0c0d7e59261c66e" address="unix:///run/containerd/s/1ed9124faf9f2206adb2b571f070f509b21f6769abfc6d6f61c5c347abeee6d7" protocol=ttrpc version=3 Oct 24 12:58:35.589018 systemd[1]: Started cri-containerd-42b46d0e8255b23daac31d7b7939586635df8fd16e74cb0eb0c0d7e59261c66e.scope - libcontainer container 42b46d0e8255b23daac31d7b7939586635df8fd16e74cb0eb0c0d7e59261c66e. 
Oct 24 12:58:35.616121 containerd[1596]: time="2025-10-24T12:58:35.616071113Z" level=info msg="StartContainer for \"f3b074a6eab4e2866e7f54c991e568e573f82e9b3123915c2c2cfbf117843756\" returns successfully" Oct 24 12:58:35.630897 containerd[1596]: time="2025-10-24T12:58:35.630798975Z" level=info msg="StartContainer for \"854f7e1d1abca32346b02b0a29829cf605b4abf9e7fe54c770936c56b892802e\" returns successfully" Oct 24 12:58:35.726414 containerd[1596]: time="2025-10-24T12:58:35.726268066Z" level=info msg="StartContainer for \"42b46d0e8255b23daac31d7b7939586635df8fd16e74cb0eb0c0d7e59261c66e\" returns successfully" Oct 24 12:58:36.043104 kubelet[2401]: E1024 12:58:36.042951 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 24 12:58:36.044391 kubelet[2401]: E1024 12:58:36.044314 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:36.050550 kubelet[2401]: E1024 12:58:36.050512 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 24 12:58:36.050690 kubelet[2401]: E1024 12:58:36.050663 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:36.051012 kubelet[2401]: E1024 12:58:36.050966 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 24 12:58:36.051210 kubelet[2401]: E1024 12:58:36.051186 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:36.208793 kubelet[2401]: I1024 12:58:36.208734 2401 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 24 12:58:37.053203 kubelet[2401]: E1024 12:58:37.053144 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 24 12:58:37.053684 kubelet[2401]: E1024 12:58:37.053334 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:37.054405 kubelet[2401]: E1024 12:58:37.054376 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 24 12:58:37.054507 kubelet[2401]: E1024 12:58:37.054482 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:37.059010 kubelet[2401]: E1024 12:58:37.058982 2401 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 24 12:58:37.157706 kubelet[2401]: I1024 12:58:37.157563 2401 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 24 12:58:37.157706 kubelet[2401]: E1024 12:58:37.157623 2401 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not 
found" Oct 24 12:58:37.169181 kubelet[2401]: E1024 12:58:37.169136 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 24 12:58:37.269920 kubelet[2401]: E1024 12:58:37.269811 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 24 12:58:37.370550 kubelet[2401]: E1024 12:58:37.370503 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 24 12:58:37.471209 kubelet[2401]: E1024 12:58:37.471141 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 24 12:58:37.571415 kubelet[2401]: E1024 12:58:37.571352 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 24 12:58:37.672580 kubelet[2401]: E1024 12:58:37.672430 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 24 12:58:37.772925 kubelet[2401]: E1024 12:58:37.772857 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 24 12:58:37.873565 kubelet[2401]: E1024 12:58:37.873512 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 24 12:58:37.974101 kubelet[2401]: E1024 12:58:37.973939 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 24 12:58:38.074926 kubelet[2401]: E1024 12:58:38.074837 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 24 12:58:38.175469 kubelet[2401]: E1024 12:58:38.175404 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 24 12:58:38.276338 kubelet[2401]: E1024 12:58:38.276190 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 24 12:58:38.285890 kubelet[2401]: E1024 12:58:38.285845 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 24 12:58:38.286072 kubelet[2401]: E1024 12:58:38.285988 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:38.376531 kubelet[2401]: E1024 12:58:38.376477 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 24 12:58:38.477377 kubelet[2401]: E1024 12:58:38.477309 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 24 12:58:38.578667 kubelet[2401]: E1024 12:58:38.578457 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 24 12:58:38.600980 kubelet[2401]: I1024 12:58:38.600926 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 24 12:58:38.609004 kubelet[2401]: I1024 12:58:38.608962 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 24 12:58:38.613033 kubelet[2401]: I1024 12:58:38.612987 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" 
Oct 24 12:58:38.990789 kubelet[2401]: I1024 12:58:38.990752 2401 apiserver.go:52] "Watching apiserver" Oct 24 12:58:38.992945 kubelet[2401]: E1024 12:58:38.992880 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:38.993089 kubelet[2401]: E1024 12:58:38.993011 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:39.001555 kubelet[2401]: I1024 12:58:39.001502 2401 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 24 12:58:39.054037 kubelet[2401]: E1024 12:58:39.054002 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:39.188325 systemd[1]: Reload requested from client PID 2682 ('systemctl') (unit session-9.scope)... Oct 24 12:58:39.188343 systemd[1]: Reloading... Oct 24 12:58:39.272649 zram_generator::config[2726]: No configuration found. Oct 24 12:58:39.349990 kubelet[2401]: E1024 12:58:39.349957 2401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:39.655289 systemd[1]: Reloading finished in 466 ms. Oct 24 12:58:39.692469 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 24 12:58:39.713969 systemd[1]: kubelet.service: Deactivated successfully. Oct 24 12:58:39.714302 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 24 12:58:39.714361 systemd[1]: kubelet.service: Consumed 743ms CPU time, 129.5M memory peak. Oct 24 12:58:39.716432 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 24 12:58:39.956643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 24 12:58:39.967930 (kubelet)[2771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 24 12:58:40.023157 kubelet[2771]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 24 12:58:40.023157 kubelet[2771]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 24 12:58:40.023157 kubelet[2771]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 24 12:58:40.023157 kubelet[2771]: I1024 12:58:40.023115 2771 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 24 12:58:40.031400 kubelet[2771]: I1024 12:58:40.031352 2771 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 24 12:58:40.031400 kubelet[2771]: I1024 12:58:40.031381 2771 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 24 12:58:40.031802 kubelet[2771]: I1024 12:58:40.031772 2771 server.go:954] "Client rotation is on, will bootstrap in background" Oct 24 12:58:40.033145 kubelet[2771]: I1024 12:58:40.033096 2771 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 24 12:58:40.035423 kubelet[2771]: I1024 12:58:40.035386 2771 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 24 12:58:40.040023 kubelet[2771]: I1024 12:58:40.039969 2771 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 24 12:58:40.045455 kubelet[2771]: I1024 12:58:40.045421 2771 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 24 12:58:40.045725 kubelet[2771]: I1024 12:58:40.045686 2771 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 24 12:58:40.045897 kubelet[2771]: I1024 12:58:40.045720 2771 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 24 12:58:40.045976 kubelet[2771]: I1024 12:58:40.045908 2771 topology_manager.go:138] "Creating topology manager with none policy" Oct 24 12:58:40.045976 kubelet[2771]: I1024 12:58:40.045916 2771 container_manager_linux.go:304] "Creating device plugin manager" Oct 24 12:58:40.045976 kubelet[2771]: I1024 12:58:40.045973 2771 state_mem.go:36] "Initialized new in-memory state store" Oct 24 12:58:40.046150 kubelet[2771]: I1024 
12:58:40.046138 2771 kubelet.go:446] "Attempting to sync node with API server" Oct 24 12:58:40.046172 kubelet[2771]: I1024 12:58:40.046161 2771 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 24 12:58:40.046196 kubelet[2771]: I1024 12:58:40.046184 2771 kubelet.go:352] "Adding apiserver pod source" Oct 24 12:58:40.046196 kubelet[2771]: I1024 12:58:40.046194 2771 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 24 12:58:40.047219 kubelet[2771]: I1024 12:58:40.046885 2771 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 24 12:58:40.047484 kubelet[2771]: I1024 12:58:40.047463 2771 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 24 12:58:40.048783 kubelet[2771]: I1024 12:58:40.048764 2771 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 24 12:58:40.048839 kubelet[2771]: I1024 12:58:40.048800 2771 server.go:1287] "Started kubelet" Oct 24 12:58:40.050464 kubelet[2771]: I1024 12:58:40.050410 2771 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 24 12:58:40.051400 kubelet[2771]: I1024 12:58:40.051381 2771 server.go:479] "Adding debug handlers to kubelet server" Oct 24 12:58:40.053509 kubelet[2771]: I1024 12:58:40.053435 2771 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 24 12:58:40.055749 kubelet[2771]: I1024 12:58:40.055684 2771 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 24 12:58:40.056039 kubelet[2771]: I1024 12:58:40.056005 2771 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 24 12:58:40.056328 kubelet[2771]: I1024 12:58:40.056287 2771 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 24 12:58:40.057691 kubelet[2771]: E1024 12:58:40.057656 2771 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 24 12:58:40.057737 kubelet[2771]: I1024 12:58:40.057702 2771 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 24 12:58:40.057906 kubelet[2771]: I1024 12:58:40.057878 2771 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 24 12:58:40.058037 kubelet[2771]: I1024 12:58:40.058022 2771 reconciler.go:26] "Reconciler: start to sync state" Oct 24 12:58:40.059433 kubelet[2771]: I1024 12:58:40.059403 2771 factory.go:221] Registration of the systemd container factory successfully Oct 24 12:58:40.059525 kubelet[2771]: I1024 12:58:40.059497 2771 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 24 12:58:40.064292 kubelet[2771]: I1024 12:58:40.064240 2771 factory.go:221] Registration of the containerd container factory successfully Oct 24 12:58:40.068943 kubelet[2771]: E1024 12:58:40.068824 2771 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 24 12:58:40.074642 kubelet[2771]: I1024 12:58:40.074540 2771 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 24 12:58:40.076004 kubelet[2771]: I1024 12:58:40.075971 2771 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 24 12:58:40.076004 kubelet[2771]: I1024 12:58:40.076007 2771 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 24 12:58:40.076103 kubelet[2771]: I1024 12:58:40.076036 2771 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 24 12:58:40.076103 kubelet[2771]: I1024 12:58:40.076046 2771 kubelet.go:2382] "Starting kubelet main sync loop" Oct 24 12:58:40.076177 kubelet[2771]: E1024 12:58:40.076112 2771 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 24 12:58:40.105577 kubelet[2771]: I1024 12:58:40.105536 2771 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 24 12:58:40.105577 kubelet[2771]: I1024 12:58:40.105563 2771 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 24 12:58:40.105765 kubelet[2771]: I1024 12:58:40.105633 2771 state_mem.go:36] "Initialized new in-memory state store" Oct 24 12:58:40.105918 kubelet[2771]: I1024 12:58:40.105896 2771 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 24 12:58:40.105945 kubelet[2771]: I1024 12:58:40.105916 2771 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 24 12:58:40.105945 kubelet[2771]: I1024 12:58:40.105941 2771 policy_none.go:49] "None policy: Start" Oct 24 12:58:40.105993 kubelet[2771]: I1024 12:58:40.105954 2771 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 24 12:58:40.105993 kubelet[2771]: I1024 12:58:40.105969 2771 state_mem.go:35] "Initializing new in-memory state store" Oct 24 12:58:40.106186 kubelet[2771]: I1024 12:58:40.106161 2771 state_mem.go:75] "Updated machine memory state" Oct 24 12:58:40.111906 kubelet[2771]: I1024 12:58:40.111764 2771 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 24 12:58:40.111993 kubelet[2771]: I1024 12:58:40.111971 2771 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 24 12:58:40.112060 kubelet[2771]: I1024 12:58:40.111994 2771 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 24 12:58:40.112381 kubelet[2771]: I1024 12:58:40.112354 2771 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 24 12:58:40.113581 kubelet[2771]: E1024 12:58:40.113540 2771 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 24 12:58:40.177534 kubelet[2771]: I1024 12:58:40.177399 2771 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 24 12:58:40.177698 kubelet[2771]: I1024 12:58:40.177628 2771 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 24 12:58:40.177880 kubelet[2771]: I1024 12:58:40.177823 2771 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 24 12:58:40.184484 kubelet[2771]: E1024 12:58:40.184418 2771 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 24 12:58:40.184648 kubelet[2771]: E1024 12:58:40.184621 2771 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 24 12:58:40.184695 kubelet[2771]: E1024 12:58:40.184684 2771 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 24 12:58:40.217664 kubelet[2771]: I1024 12:58:40.217538 2771 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 24 12:58:40.293075 kubelet[2771]: I1024 12:58:40.293032 2771 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 24 12:58:40.293222 kubelet[2771]: I1024 12:58:40.293140 2771 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 24 12:58:40.359170 kubelet[2771]: I1024 12:58:40.359113 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aab785e49d065a2cf2af03f8b10aebb3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aab785e49d065a2cf2af03f8b10aebb3\") " pod="kube-system/kube-apiserver-localhost" Oct 24 12:58:40.359170 kubelet[2771]: I1024 12:58:40.359166 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 24 12:58:40.359381 kubelet[2771]: I1024 12:58:40.359187 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 24 12:58:40.359381 kubelet[2771]: I1024 12:58:40.359205 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 24 12:58:40.359381 kubelet[2771]: I1024 12:58:40.359231 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod 
\"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 24 12:58:40.359381 kubelet[2771]: I1024 12:58:40.359247 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 24 12:58:40.359381 kubelet[2771]: I1024 12:58:40.359263 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aab785e49d065a2cf2af03f8b10aebb3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aab785e49d065a2cf2af03f8b10aebb3\") " pod="kube-system/kube-apiserver-localhost" Oct 24 12:58:40.359491 kubelet[2771]: I1024 12:58:40.359278 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aab785e49d065a2cf2af03f8b10aebb3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aab785e49d065a2cf2af03f8b10aebb3\") " pod="kube-system/kube-apiserver-localhost" Oct 24 12:58:40.359491 kubelet[2771]: I1024 12:58:40.359348 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 24 12:58:40.485477 kubelet[2771]: E1024 12:58:40.485067 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:40.485477 kubelet[2771]: E1024 12:58:40.485150 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:40.485477 kubelet[2771]: E1024 12:58:40.485153 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:41.046899 kubelet[2771]: I1024 12:58:41.046832 2771 apiserver.go:52] "Watching apiserver" Oct 24 12:58:41.058626 kubelet[2771]: I1024 12:58:41.058560 2771 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 24 12:58:41.087621 kubelet[2771]: I1024 12:58:41.087536 2771 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 24 12:58:41.088920 kubelet[2771]: E1024 12:58:41.087979 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:41.088920 kubelet[2771]: I1024 12:58:41.088152 2771 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 24 12:58:41.124320 kubelet[2771]: E1024 12:58:41.124271 2771 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 24 12:58:41.124545 kubelet[2771]: E1024 12:58:41.124510 2771 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:41.124671 kubelet[2771]: E1024 12:58:41.124660 2771 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 24 12:58:41.124803 kubelet[2771]: E1024 12:58:41.124776 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:41.154751 kubelet[2771]: I1024 12:58:41.154341 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.154305767 podStartE2EDuration="3.154305767s" podCreationTimestamp="2025-10-24 12:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-24 12:58:41.144207901 +0000 UTC m=+1.171996089" watchObservedRunningTime="2025-10-24 12:58:41.154305767 +0000 UTC m=+1.182093955" Oct 24 12:58:41.156614 kubelet[2771]: I1024 12:58:41.155068 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.1550603329999998 podStartE2EDuration="3.155060333s" podCreationTimestamp="2025-10-24 12:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-24 12:58:41.154076867 +0000 UTC m=+1.181865055" watchObservedRunningTime="2025-10-24 12:58:41.155060333 +0000 UTC m=+1.182848521" Oct 24 12:58:41.337838 kubelet[2771]: I1024 12:58:41.337758 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.337735456 podStartE2EDuration="3.337735456s" podCreationTimestamp="2025-10-24 12:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-24 12:58:41.324484773 +0000 UTC m=+1.352272962" watchObservedRunningTime="2025-10-24 12:58:41.337735456 +0000 UTC m=+1.365523645" Oct 24 12:58:42.090630 kubelet[2771]: E1024 12:58:42.089206 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:42.090630 kubelet[2771]: E1024 12:58:42.089326 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:42.090630 kubelet[2771]: E1024 12:58:42.089326 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:43.089856 kubelet[2771]: E1024 12:58:43.089811 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:46.141848 kubelet[2771]: I1024 12:58:46.141804 2771 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 24 12:58:46.142519 kubelet[2771]: I1024 12:58:46.142262 2771 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" 
newPodCIDR="192.168.0.0/24" Oct 24 12:58:46.142562 containerd[1596]: time="2025-10-24T12:58:46.142092512Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 24 12:58:46.990455 systemd[1]: Created slice kubepods-besteffort-pod550e8720_a1b6_4de0_a4d0_4f553463936b.slice - libcontainer container kubepods-besteffort-pod550e8720_a1b6_4de0_a4d0_4f553463936b.slice. Oct 24 12:58:46.999527 kubelet[2771]: I1024 12:58:46.999488 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/550e8720-a1b6-4de0-a4d0-4f553463936b-var-lib-calico\") pod \"tigera-operator-7dcd859c48-sjn9l\" (UID: \"550e8720-a1b6-4de0-a4d0-4f553463936b\") " pod="tigera-operator/tigera-operator-7dcd859c48-sjn9l" Oct 24 12:58:46.999527 kubelet[2771]: I1024 12:58:46.999528 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvtrl\" (UniqueName: \"kubernetes.io/projected/550e8720-a1b6-4de0-a4d0-4f553463936b-kube-api-access-wvtrl\") pod \"tigera-operator-7dcd859c48-sjn9l\" (UID: \"550e8720-a1b6-4de0-a4d0-4f553463936b\") " pod="tigera-operator/tigera-operator-7dcd859c48-sjn9l" Oct 24 12:58:47.020533 systemd[1]: Created slice kubepods-besteffort-pode37732eb_104a_43a5_a160_149546234929.slice - libcontainer container kubepods-besteffort-pode37732eb_104a_43a5_a160_149546234929.slice. Oct 24 12:58:47.099944 kubelet[2771]: I1024 12:58:47.099891 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f2m4\" (UniqueName: \"kubernetes.io/projected/e37732eb-104a-43a5-a160-149546234929-kube-api-access-7f2m4\") pod \"kube-proxy-zjzqn\" (UID: \"e37732eb-104a-43a5-a160-149546234929\") " pod="kube-system/kube-proxy-zjzqn" Oct 24 12:58:47.099944 kubelet[2771]: I1024 12:58:47.099931 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e37732eb-104a-43a5-a160-149546234929-xtables-lock\") pod \"kube-proxy-zjzqn\" (UID: \"e37732eb-104a-43a5-a160-149546234929\") " pod="kube-system/kube-proxy-zjzqn" Oct 24 12:58:47.099944 kubelet[2771]: I1024 12:58:47.099948 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e37732eb-104a-43a5-a160-149546234929-kube-proxy\") pod \"kube-proxy-zjzqn\" (UID: \"e37732eb-104a-43a5-a160-149546234929\") " pod="kube-system/kube-proxy-zjzqn" Oct 24 12:58:47.099944 kubelet[2771]: I1024 12:58:47.099960 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e37732eb-104a-43a5-a160-149546234929-lib-modules\") pod \"kube-proxy-zjzqn\" (UID: \"e37732eb-104a-43a5-a160-149546234929\") " pod="kube-system/kube-proxy-zjzqn" Oct 24 12:58:47.301308 containerd[1596]: time="2025-10-24T12:58:47.301196940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-sjn9l,Uid:550e8720-a1b6-4de0-a4d0-4f553463936b,Namespace:tigera-operator,Attempt:0,}" Oct 24 12:58:47.323549 kubelet[2771]: E1024 12:58:47.323507 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:47.323923 containerd[1596]: 
time="2025-10-24T12:58:47.323887248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zjzqn,Uid:e37732eb-104a-43a5-a160-149546234929,Namespace:kube-system,Attempt:0,}" Oct 24 12:58:47.647261 update_engine[1580]: I20251024 12:58:47.647199 1580 update_attempter.cc:509] Updating boot flags... Oct 24 12:58:47.721719 containerd[1596]: time="2025-10-24T12:58:47.720912487Z" level=info msg="connecting to shim 91de15ef3096e0a3c9f3e2d1be8a26df6de080dc35724fdcae5df73c59cff802" address="unix:///run/containerd/s/03445b603df7d5a3d692b52bf46dec0bb00429b28c531c58d5333907b514d9d7" namespace=k8s.io protocol=ttrpc version=3 Oct 24 12:58:47.726846 containerd[1596]: time="2025-10-24T12:58:47.726799844Z" level=info msg="connecting to shim d916e1972381e200719f1ebeaf60cb4d796528821f44aee3c1f51210826df637" address="unix:///run/containerd/s/2dd11e94baef2202cb8af635a388bd4e051f4b18f8c94ddb48c05364a9b23e95" namespace=k8s.io protocol=ttrpc version=3 Oct 24 12:58:47.782810 systemd[1]: Started cri-containerd-91de15ef3096e0a3c9f3e2d1be8a26df6de080dc35724fdcae5df73c59cff802.scope - libcontainer container 91de15ef3096e0a3c9f3e2d1be8a26df6de080dc35724fdcae5df73c59cff802. Oct 24 12:58:47.787793 systemd[1]: Started cri-containerd-d916e1972381e200719f1ebeaf60cb4d796528821f44aee3c1f51210826df637.scope - libcontainer container d916e1972381e200719f1ebeaf60cb4d796528821f44aee3c1f51210826df637. Oct 24 12:58:47.816663 containerd[1596]: time="2025-10-24T12:58:47.816540731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zjzqn,Uid:e37732eb-104a-43a5-a160-149546234929,Namespace:kube-system,Attempt:0,} returns sandbox id \"d916e1972381e200719f1ebeaf60cb4d796528821f44aee3c1f51210826df637\"" Oct 24 12:58:47.818083 kubelet[2771]: E1024 12:58:47.818041 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:47.821821 containerd[1596]: time="2025-10-24T12:58:47.821773721Z" level=info msg="CreateContainer within sandbox \"d916e1972381e200719f1ebeaf60cb4d796528821f44aee3c1f51210826df637\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 24 12:58:47.835930 containerd[1596]: time="2025-10-24T12:58:47.835876013Z" level=info msg="Container 4dd0cc7c568f6f979cfbacd5c12928712d173890fc7bb7f861f57b8dcfbe5bcd: CDI devices from CRI Config.CDIDevices: []" Oct 24 12:58:47.836487 containerd[1596]: time="2025-10-24T12:58:47.836453467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-sjn9l,Uid:550e8720-a1b6-4de0-a4d0-4f553463936b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"91de15ef3096e0a3c9f3e2d1be8a26df6de080dc35724fdcae5df73c59cff802\"" Oct 24 12:58:47.838193 containerd[1596]: time="2025-10-24T12:58:47.838146223Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 24 12:58:47.846737 containerd[1596]: time="2025-10-24T12:58:47.846684786Z" level=info msg="CreateContainer within sandbox \"d916e1972381e200719f1ebeaf60cb4d796528821f44aee3c1f51210826df637\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4dd0cc7c568f6f979cfbacd5c12928712d173890fc7bb7f861f57b8dcfbe5bcd\"" Oct 24 12:58:47.847483 containerd[1596]: time="2025-10-24T12:58:47.847435254Z" level=info msg="StartContainer for \"4dd0cc7c568f6f979cfbacd5c12928712d173890fc7bb7f861f57b8dcfbe5bcd\"" Oct 24 12:58:47.848981 containerd[1596]: time="2025-10-24T12:58:47.848941360Z" level=info msg="connecting to shim 
4dd0cc7c568f6f979cfbacd5c12928712d173890fc7bb7f861f57b8dcfbe5bcd" address="unix:///run/containerd/s/2dd11e94baef2202cb8af635a388bd4e051f4b18f8c94ddb48c05364a9b23e95" protocol=ttrpc version=3 Oct 24 12:58:47.881757 systemd[1]: Started cri-containerd-4dd0cc7c568f6f979cfbacd5c12928712d173890fc7bb7f861f57b8dcfbe5bcd.scope - libcontainer container 4dd0cc7c568f6f979cfbacd5c12928712d173890fc7bb7f861f57b8dcfbe5bcd. Oct 24 12:58:47.929038 containerd[1596]: time="2025-10-24T12:58:47.928910891Z" level=info msg="StartContainer for \"4dd0cc7c568f6f979cfbacd5c12928712d173890fc7bb7f861f57b8dcfbe5bcd\" returns successfully" Oct 24 12:58:48.097834 kubelet[2771]: E1024 12:58:48.097798 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:48.105858 kubelet[2771]: I1024 12:58:48.105800 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zjzqn" podStartSLOduration=1.105782457 podStartE2EDuration="1.105782457s" podCreationTimestamp="2025-10-24 12:58:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-24 12:58:48.105700203 +0000 UTC m=+8.133488391" watchObservedRunningTime="2025-10-24 12:58:48.105782457 +0000 UTC m=+8.133570645" Oct 24 12:58:49.576910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3402393508.mount: Deactivated successfully. Oct 24 12:58:49.880991 kubelet[2771]: E1024 12:58:49.880718 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:49.942649 containerd[1596]: time="2025-10-24T12:58:49.942570848Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:58:49.943348 containerd[1596]: time="2025-10-24T12:58:49.943284117Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 24 12:58:49.944694 containerd[1596]: time="2025-10-24T12:58:49.944627477Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:58:49.946545 containerd[1596]: time="2025-10-24T12:58:49.946504759Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:58:49.947118 containerd[1596]: time="2025-10-24T12:58:49.947081351Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.108752515s" Oct 24 12:58:49.947118 containerd[1596]: time="2025-10-24T12:58:49.947113071Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 24 12:58:49.949139 containerd[1596]: time="2025-10-24T12:58:49.949111761Z" level=info msg="CreateContainer within sandbox 
\"91de15ef3096e0a3c9f3e2d1be8a26df6de080dc35724fdcae5df73c59cff802\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 24 12:58:49.957002 containerd[1596]: time="2025-10-24T12:58:49.956968703Z" level=info msg="Container 3389409622381487ff2bd6572c3f8504abea606314a962e1a0ca540abc6817ad: CDI devices from CRI Config.CDIDevices: []" Oct 24 12:58:49.962137 containerd[1596]: time="2025-10-24T12:58:49.962093248Z" level=info msg="CreateContainer within sandbox \"91de15ef3096e0a3c9f3e2d1be8a26df6de080dc35724fdcae5df73c59cff802\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3389409622381487ff2bd6572c3f8504abea606314a962e1a0ca540abc6817ad\"" Oct 24 12:58:49.963620 containerd[1596]: time="2025-10-24T12:58:49.962660142Z" level=info msg="StartContainer for \"3389409622381487ff2bd6572c3f8504abea606314a962e1a0ca540abc6817ad\"" Oct 24 12:58:49.963729 containerd[1596]: time="2025-10-24T12:58:49.963699723Z" level=info msg="connecting to shim 3389409622381487ff2bd6572c3f8504abea606314a962e1a0ca540abc6817ad" address="unix:///run/containerd/s/03445b603df7d5a3d692b52bf46dec0bb00429b28c531c58d5333907b514d9d7" protocol=ttrpc version=3 Oct 24 12:58:50.028760 systemd[1]: Started cri-containerd-3389409622381487ff2bd6572c3f8504abea606314a962e1a0ca540abc6817ad.scope - libcontainer container 3389409622381487ff2bd6572c3f8504abea606314a962e1a0ca540abc6817ad. Oct 24 12:58:50.060819 containerd[1596]: time="2025-10-24T12:58:50.060736167Z" level=info msg="StartContainer for \"3389409622381487ff2bd6572c3f8504abea606314a962e1a0ca540abc6817ad\" returns successfully" Oct 24 12:58:50.103020 kubelet[2771]: E1024 12:58:50.102982 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:50.114395 kubelet[2771]: I1024 12:58:50.114315 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-sjn9l" podStartSLOduration=2.003861355 podStartE2EDuration="4.114296818s" podCreationTimestamp="2025-10-24 12:58:46 +0000 UTC" firstStartedPulling="2025-10-24 12:58:47.837411084 +0000 UTC m=+7.865199272" lastFinishedPulling="2025-10-24 12:58:49.947846547 +0000 UTC m=+9.975634735" observedRunningTime="2025-10-24 12:58:50.114220836 +0000 UTC m=+10.142009024" watchObservedRunningTime="2025-10-24 12:58:50.114296818 +0000 UTC m=+10.142085006" Oct 24 12:58:51.037121 kubelet[2771]: E1024 12:58:51.037079 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:51.104304 kubelet[2771]: E1024 12:58:51.104267 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:51.901228 kubelet[2771]: E1024 12:58:51.901180 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:52.106622 kubelet[2771]: E1024 12:58:52.106302 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:55.225425 sudo[1838]: pam_unix(sudo:session): session closed for user root Oct 24 12:58:55.228050 sshd[1837]: Connection 
closed by 10.0.0.1 port 46012 Oct 24 12:58:55.229933 sshd-session[1833]: pam_unix(sshd:session): session closed for user core Oct 24 12:58:55.237036 systemd[1]: sshd@8-10.0.0.145:22-10.0.0.1:46012.service: Deactivated successfully. Oct 24 12:58:55.243534 systemd[1]: session-9.scope: Deactivated successfully. Oct 24 12:58:55.244356 systemd[1]: session-9.scope: Consumed 4.211s CPU time, 218.3M memory peak. Oct 24 12:58:55.247891 systemd-logind[1573]: Session 9 logged out. Waiting for processes to exit. Oct 24 12:58:55.257745 systemd-logind[1573]: Removed session 9. Oct 24 12:58:59.375979 kubelet[2771]: I1024 12:58:59.375922 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f34e422d-fd1a-49c9-b2cf-1248325747cc-typha-certs\") pod \"calico-typha-b546847-tg5m8\" (UID: \"f34e422d-fd1a-49c9-b2cf-1248325747cc\") " pod="calico-system/calico-typha-b546847-tg5m8" Oct 24 12:58:59.375979 kubelet[2771]: I1024 12:58:59.375976 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f34e422d-fd1a-49c9-b2cf-1248325747cc-tigera-ca-bundle\") pod \"calico-typha-b546847-tg5m8\" (UID: \"f34e422d-fd1a-49c9-b2cf-1248325747cc\") " pod="calico-system/calico-typha-b546847-tg5m8" Oct 24 12:58:59.376659 kubelet[2771]: I1024 12:58:59.376029 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvqnq\" (UniqueName: \"kubernetes.io/projected/f34e422d-fd1a-49c9-b2cf-1248325747cc-kube-api-access-hvqnq\") pod \"calico-typha-b546847-tg5m8\" (UID: \"f34e422d-fd1a-49c9-b2cf-1248325747cc\") " pod="calico-system/calico-typha-b546847-tg5m8" Oct 24 12:58:59.377663 systemd[1]: Created slice kubepods-besteffort-podf34e422d_fd1a_49c9_b2cf_1248325747cc.slice - libcontainer container kubepods-besteffort-podf34e422d_fd1a_49c9_b2cf_1248325747cc.slice. Oct 24 12:58:59.554823 systemd[1]: Created slice kubepods-besteffort-podd43f2849_408d_4d9a_9b69_8f891ef9e6ad.slice - libcontainer container kubepods-besteffort-podd43f2849_408d_4d9a_9b69_8f891ef9e6ad.slice. 
Oct 24 12:58:59.577748 kubelet[2771]: I1024 12:58:59.577690 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d43f2849-408d-4d9a-9b69-8f891ef9e6ad-cni-net-dir\") pod \"calico-node-4fs67\" (UID: \"d43f2849-408d-4d9a-9b69-8f891ef9e6ad\") " pod="calico-system/calico-node-4fs67" Oct 24 12:58:59.577748 kubelet[2771]: I1024 12:58:59.577748 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7h8p\" (UniqueName: \"kubernetes.io/projected/d43f2849-408d-4d9a-9b69-8f891ef9e6ad-kube-api-access-g7h8p\") pod \"calico-node-4fs67\" (UID: \"d43f2849-408d-4d9a-9b69-8f891ef9e6ad\") " pod="calico-system/calico-node-4fs67" Oct 24 12:58:59.577992 kubelet[2771]: I1024 12:58:59.577786 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d43f2849-408d-4d9a-9b69-8f891ef9e6ad-lib-modules\") pod \"calico-node-4fs67\" (UID: \"d43f2849-408d-4d9a-9b69-8f891ef9e6ad\") " pod="calico-system/calico-node-4fs67" Oct 24 12:58:59.577992 kubelet[2771]: I1024 12:58:59.577829 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d43f2849-408d-4d9a-9b69-8f891ef9e6ad-tigera-ca-bundle\") pod \"calico-node-4fs67\" (UID: \"d43f2849-408d-4d9a-9b69-8f891ef9e6ad\") " pod="calico-system/calico-node-4fs67" Oct 24 12:58:59.577992 kubelet[2771]: I1024 12:58:59.577853 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d43f2849-408d-4d9a-9b69-8f891ef9e6ad-cni-bin-dir\") pod \"calico-node-4fs67\" (UID: \"d43f2849-408d-4d9a-9b69-8f891ef9e6ad\") " pod="calico-system/calico-node-4fs67" Oct 24 12:58:59.577992 kubelet[2771]: I1024 12:58:59.577876 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d43f2849-408d-4d9a-9b69-8f891ef9e6ad-node-certs\") pod \"calico-node-4fs67\" (UID: \"d43f2849-408d-4d9a-9b69-8f891ef9e6ad\") " pod="calico-system/calico-node-4fs67" Oct 24 12:58:59.577992 kubelet[2771]: I1024 12:58:59.577911 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d43f2849-408d-4d9a-9b69-8f891ef9e6ad-policysync\") pod \"calico-node-4fs67\" (UID: \"d43f2849-408d-4d9a-9b69-8f891ef9e6ad\") " pod="calico-system/calico-node-4fs67" Oct 24 12:58:59.578131 kubelet[2771]: I1024 12:58:59.577936 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d43f2849-408d-4d9a-9b69-8f891ef9e6ad-var-lib-calico\") pod \"calico-node-4fs67\" (UID: \"d43f2849-408d-4d9a-9b69-8f891ef9e6ad\") " pod="calico-system/calico-node-4fs67" Oct 24 12:58:59.578131 kubelet[2771]: I1024 12:58:59.577971 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d43f2849-408d-4d9a-9b69-8f891ef9e6ad-var-run-calico\") pod \"calico-node-4fs67\" (UID: \"d43f2849-408d-4d9a-9b69-8f891ef9e6ad\") " pod="calico-system/calico-node-4fs67" Oct 24 12:58:59.578131 kubelet[2771]: I1024 12:58:59.578016 2771 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d43f2849-408d-4d9a-9b69-8f891ef9e6ad-xtables-lock\") pod \"calico-node-4fs67\" (UID: \"d43f2849-408d-4d9a-9b69-8f891ef9e6ad\") " pod="calico-system/calico-node-4fs67" Oct 24 12:58:59.578131 kubelet[2771]: I1024 12:58:59.578041 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d43f2849-408d-4d9a-9b69-8f891ef9e6ad-cni-log-dir\") pod \"calico-node-4fs67\" (UID: \"d43f2849-408d-4d9a-9b69-8f891ef9e6ad\") " pod="calico-system/calico-node-4fs67" Oct 24 12:58:59.578131 kubelet[2771]: I1024 12:58:59.578063 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d43f2849-408d-4d9a-9b69-8f891ef9e6ad-flexvol-driver-host\") pod \"calico-node-4fs67\" (UID: \"d43f2849-408d-4d9a-9b69-8f891ef9e6ad\") " pod="calico-system/calico-node-4fs67" Oct 24 12:58:59.681401 kubelet[2771]: E1024 12:58:59.680678 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.681401 kubelet[2771]: W1024 12:58:59.680704 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.681401 kubelet[2771]: E1024 12:58:59.680756 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.681788 kubelet[2771]: E1024 12:58:59.681772 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.681861 kubelet[2771]: W1024 12:58:59.681849 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.681957 kubelet[2771]: E1024 12:58:59.681911 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.686462 kubelet[2771]: E1024 12:58:59.686361 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.686462 kubelet[2771]: W1024 12:58:59.686384 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.686462 kubelet[2771]: E1024 12:58:59.686407 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:58:59.691289 kubelet[2771]: E1024 12:58:59.691242 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:59.692177 containerd[1596]: time="2025-10-24T12:58:59.692139029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b546847-tg5m8,Uid:f34e422d-fd1a-49c9-b2cf-1248325747cc,Namespace:calico-system,Attempt:0,}" Oct 24 12:58:59.695453 kubelet[2771]: E1024 12:58:59.695278 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.695453 kubelet[2771]: W1024 12:58:59.695446 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.695553 kubelet[2771]: E1024 12:58:59.695462 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.716490 containerd[1596]: time="2025-10-24T12:58:59.716430061Z" level=info msg="connecting to shim e4970d08147445563f83488b11ba42240677839e938e84026f6603b49a3f9c48" address="unix:///run/containerd/s/72ded71378989bfe59365ecf758a42edffc6bfe0d73d84cca55a4b27d7e2e0db" namespace=k8s.io protocol=ttrpc version=3 Oct 24 12:58:59.761656 kubelet[2771]: E1024 12:58:59.761581 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2tsg" podUID="a5ee21c4-521a-4300-831d-bec9b2d7f45e" Oct 24 12:58:59.761868 systemd[1]: Started cri-containerd-e4970d08147445563f83488b11ba42240677839e938e84026f6603b49a3f9c48.scope - libcontainer container e4970d08147445563f83488b11ba42240677839e938e84026f6603b49a3f9c48. Oct 24 12:58:59.769621 kubelet[2771]: E1024 12:58:59.768804 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.769621 kubelet[2771]: W1024 12:58:59.768827 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.769621 kubelet[2771]: E1024 12:58:59.768848 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.770363 kubelet[2771]: E1024 12:58:59.770340 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.770363 kubelet[2771]: W1024 12:58:59.770358 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.770363 kubelet[2771]: E1024 12:58:59.770370 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:58:59.770840 kubelet[2771]: E1024 12:58:59.770822 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.770884 kubelet[2771]: W1024 12:58:59.770836 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.770998 kubelet[2771]: E1024 12:58:59.770957 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.772582 kubelet[2771]: E1024 12:58:59.771741 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.772582 kubelet[2771]: W1024 12:58:59.771828 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.772754 kubelet[2771]: E1024 12:58:59.772688 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.773204 kubelet[2771]: E1024 12:58:59.773149 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.773447 kubelet[2771]: W1024 12:58:59.773376 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.773646 kubelet[2771]: E1024 12:58:59.773633 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.774721 kubelet[2771]: E1024 12:58:59.774657 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.774721 kubelet[2771]: W1024 12:58:59.774672 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.774721 kubelet[2771]: E1024 12:58:59.774682 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.775689 kubelet[2771]: E1024 12:58:59.775668 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.775763 kubelet[2771]: W1024 12:58:59.775740 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.775903 kubelet[2771]: E1024 12:58:59.775823 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:58:59.776939 kubelet[2771]: E1024 12:58:59.776893 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.776939 kubelet[2771]: W1024 12:58:59.776915 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.777027 kubelet[2771]: E1024 12:58:59.777014 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.778435 kubelet[2771]: E1024 12:58:59.778366 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.778435 kubelet[2771]: W1024 12:58:59.778381 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.778435 kubelet[2771]: E1024 12:58:59.778392 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.779730 kubelet[2771]: E1024 12:58:59.779717 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.779804 kubelet[2771]: W1024 12:58:59.779786 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.779866 kubelet[2771]: E1024 12:58:59.779855 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.781415 kubelet[2771]: E1024 12:58:59.781400 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.781541 kubelet[2771]: W1024 12:58:59.781482 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.781541 kubelet[2771]: E1024 12:58:59.781496 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.782163 kubelet[2771]: E1024 12:58:59.781854 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.782306 kubelet[2771]: W1024 12:58:59.782218 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.782306 kubelet[2771]: E1024 12:58:59.782239 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:58:59.783753 kubelet[2771]: E1024 12:58:59.782674 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.783753 kubelet[2771]: W1024 12:58:59.783645 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.783753 kubelet[2771]: E1024 12:58:59.783663 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.783932 kubelet[2771]: E1024 12:58:59.783918 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.783989 kubelet[2771]: W1024 12:58:59.783978 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.784044 kubelet[2771]: E1024 12:58:59.784033 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.784340 kubelet[2771]: E1024 12:58:59.784250 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.784340 kubelet[2771]: W1024 12:58:59.784261 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.784340 kubelet[2771]: E1024 12:58:59.784270 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.784482 kubelet[2771]: E1024 12:58:59.784469 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.784537 kubelet[2771]: W1024 12:58:59.784524 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.785640 kubelet[2771]: E1024 12:58:59.785622 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.786028 kubelet[2771]: E1024 12:58:59.785971 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.786028 kubelet[2771]: W1024 12:58:59.785983 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.786028 kubelet[2771]: E1024 12:58:59.785993 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:58:59.786343 kubelet[2771]: E1024 12:58:59.786287 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.786343 kubelet[2771]: W1024 12:58:59.786298 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.786343 kubelet[2771]: E1024 12:58:59.786308 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.786614 kubelet[2771]: E1024 12:58:59.786580 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.786669 kubelet[2771]: W1024 12:58:59.786657 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.786770 kubelet[2771]: E1024 12:58:59.786727 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.786996 kubelet[2771]: E1024 12:58:59.786984 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.788659 kubelet[2771]: W1024 12:58:59.788620 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.788659 kubelet[2771]: E1024 12:58:59.788636 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.789108 kubelet[2771]: E1024 12:58:59.789067 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.789108 kubelet[2771]: W1024 12:58:59.789080 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.789108 kubelet[2771]: E1024 12:58:59.789090 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:58:59.789281 kubelet[2771]: I1024 12:58:59.789227 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntf8h\" (UniqueName: \"kubernetes.io/projected/a5ee21c4-521a-4300-831d-bec9b2d7f45e-kube-api-access-ntf8h\") pod \"csi-node-driver-s2tsg\" (UID: \"a5ee21c4-521a-4300-831d-bec9b2d7f45e\") " pod="calico-system/csi-node-driver-s2tsg" Oct 24 12:58:59.789572 kubelet[2771]: E1024 12:58:59.789545 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.789572 kubelet[2771]: W1024 12:58:59.789558 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.789724 kubelet[2771]: E1024 12:58:59.789672 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.789724 kubelet[2771]: I1024 12:58:59.789697 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a5ee21c4-521a-4300-831d-bec9b2d7f45e-registration-dir\") pod \"csi-node-driver-s2tsg\" (UID: \"a5ee21c4-521a-4300-831d-bec9b2d7f45e\") " pod="calico-system/csi-node-driver-s2tsg" Oct 24 12:58:59.790140 kubelet[2771]: E1024 12:58:59.790107 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.790140 kubelet[2771]: W1024 12:58:59.790124 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.790289 kubelet[2771]: E1024 12:58:59.790243 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.790587 kubelet[2771]: E1024 12:58:59.790561 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.790587 kubelet[2771]: W1024 12:58:59.790573 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.790768 kubelet[2771]: E1024 12:58:59.790716 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.791047 kubelet[2771]: E1024 12:58:59.791034 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.791121 kubelet[2771]: W1024 12:58:59.791094 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.791301 kubelet[2771]: E1024 12:58:59.791277 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:58:59.791651 kubelet[2771]: E1024 12:58:59.791636 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.791791 kubelet[2771]: W1024 12:58:59.791708 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.791791 kubelet[2771]: E1024 12:58:59.791737 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.791791 kubelet[2771]: I1024 12:58:59.791764 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a5ee21c4-521a-4300-831d-bec9b2d7f45e-varrun\") pod \"csi-node-driver-s2tsg\" (UID: \"a5ee21c4-521a-4300-831d-bec9b2d7f45e\") " pod="calico-system/csi-node-driver-s2tsg" Oct 24 12:58:59.792866 kubelet[2771]: E1024 12:58:59.792829 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.792866 kubelet[2771]: W1024 12:58:59.792842 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.792866 kubelet[2771]: E1024 12:58:59.792853 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.793308 kubelet[2771]: E1024 12:58:59.793283 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.793308 kubelet[2771]: W1024 12:58:59.793295 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.793404 kubelet[2771]: E1024 12:58:59.793391 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.793702 kubelet[2771]: E1024 12:58:59.793689 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.793769 kubelet[2771]: W1024 12:58:59.793757 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.793830 kubelet[2771]: E1024 12:58:59.793819 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:58:59.794056 kubelet[2771]: I1024 12:58:59.794043 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a5ee21c4-521a-4300-831d-bec9b2d7f45e-kubelet-dir\") pod \"csi-node-driver-s2tsg\" (UID: \"a5ee21c4-521a-4300-831d-bec9b2d7f45e\") " pod="calico-system/csi-node-driver-s2tsg" Oct 24 12:58:59.794395 kubelet[2771]: E1024 12:58:59.794353 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.794432 kubelet[2771]: W1024 12:58:59.794394 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.794490 kubelet[2771]: E1024 12:58:59.794458 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.795679 kubelet[2771]: E1024 12:58:59.795634 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.795679 kubelet[2771]: W1024 12:58:59.795650 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.795806 kubelet[2771]: E1024 12:58:59.795762 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.796098 kubelet[2771]: E1024 12:58:59.796073 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.796098 kubelet[2771]: W1024 12:58:59.796084 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.796296 kubelet[2771]: E1024 12:58:59.796218 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.796421 kubelet[2771]: E1024 12:58:59.796409 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.796480 kubelet[2771]: W1024 12:58:59.796468 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.796531 kubelet[2771]: E1024 12:58:59.796518 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:58:59.797672 kubelet[2771]: I1024 12:58:59.797620 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a5ee21c4-521a-4300-831d-bec9b2d7f45e-socket-dir\") pod \"csi-node-driver-s2tsg\" (UID: \"a5ee21c4-521a-4300-831d-bec9b2d7f45e\") " pod="calico-system/csi-node-driver-s2tsg" Oct 24 12:58:59.798007 kubelet[2771]: E1024 12:58:59.797945 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.798007 kubelet[2771]: W1024 12:58:59.797959 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.798007 kubelet[2771]: E1024 12:58:59.797969 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.798276 kubelet[2771]: E1024 12:58:59.798264 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.798338 kubelet[2771]: W1024 12:58:59.798327 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.798393 kubelet[2771]: E1024 12:58:59.798382 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.860788 kubelet[2771]: E1024 12:58:59.860712 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:59.863093 containerd[1596]: time="2025-10-24T12:58:59.863056917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4fs67,Uid:d43f2849-408d-4d9a-9b69-8f891ef9e6ad,Namespace:calico-system,Attempt:0,}" Oct 24 12:58:59.891561 containerd[1596]: time="2025-10-24T12:58:59.891443460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b546847-tg5m8,Uid:f34e422d-fd1a-49c9-b2cf-1248325747cc,Namespace:calico-system,Attempt:0,} returns sandbox id \"e4970d08147445563f83488b11ba42240677839e938e84026f6603b49a3f9c48\"" Oct 24 12:58:59.892349 containerd[1596]: time="2025-10-24T12:58:59.892244984Z" level=info msg="connecting to shim 3254790de9ceed68e48c5e82b8c69d02a79d8f5424bfb67aca9573b391f47b4e" address="unix:///run/containerd/s/a58ea5c8a1520a48b87acd3f907f2f2b2fc8a7924855c543fdd1167c2c535c00" namespace=k8s.io protocol=ttrpc version=3 Oct 24 12:58:59.892481 kubelet[2771]: E1024 12:58:59.892446 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:58:59.894561 containerd[1596]: time="2025-10-24T12:58:59.894523828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 24 12:58:59.899159 kubelet[2771]: E1024 12:58:59.899138 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.899366 kubelet[2771]: W1024 12:58:59.899292 2771 driver-call.go:149] 
FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.899366 kubelet[2771]: E1024 12:58:59.899323 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.899767 kubelet[2771]: E1024 12:58:59.899755 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.899863 kubelet[2771]: W1024 12:58:59.899841 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.900103 kubelet[2771]: E1024 12:58:59.900083 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.900376 kubelet[2771]: E1024 12:58:59.900334 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.900376 kubelet[2771]: W1024 12:58:59.900362 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.900545 kubelet[2771]: E1024 12:58:59.900438 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.900912 kubelet[2771]: E1024 12:58:59.900876 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.900912 kubelet[2771]: W1024 12:58:59.900889 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.901112 kubelet[2771]: E1024 12:58:59.901045 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.901874 kubelet[2771]: E1024 12:58:59.901850 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.902255 kubelet[2771]: W1024 12:58:59.902101 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.902255 kubelet[2771]: E1024 12:58:59.902142 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:58:59.902406 kubelet[2771]: E1024 12:58:59.902392 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.902660 kubelet[2771]: W1024 12:58:59.902638 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.902795 kubelet[2771]: E1024 12:58:59.902704 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.903173 kubelet[2771]: E1024 12:58:59.903160 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.903979 kubelet[2771]: W1024 12:58:59.903961 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.904107 kubelet[2771]: E1024 12:58:59.904083 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.904391 kubelet[2771]: E1024 12:58:59.904355 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.904391 kubelet[2771]: W1024 12:58:59.904366 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.905736 kubelet[2771]: E1024 12:58:59.905692 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.906055 kubelet[2771]: E1024 12:58:59.906019 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.906055 kubelet[2771]: W1024 12:58:59.906031 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.906231 kubelet[2771]: E1024 12:58:59.906181 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.907664 kubelet[2771]: E1024 12:58:59.907649 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.907727 kubelet[2771]: W1024 12:58:59.907715 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.909548 kubelet[2771]: E1024 12:58:59.907925 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:58:59.909662 kubelet[2771]: E1024 12:58:59.909649 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.909853 kubelet[2771]: W1024 12:58:59.909841 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.910022 kubelet[2771]: E1024 12:58:59.909980 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.910206 kubelet[2771]: E1024 12:58:59.910162 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.910206 kubelet[2771]: W1024 12:58:59.910176 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.910411 kubelet[2771]: E1024 12:58:59.910396 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.910714 kubelet[2771]: E1024 12:58:59.910674 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.911335 kubelet[2771]: W1024 12:58:59.911207 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.911527 kubelet[2771]: E1024 12:58:59.911507 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.912324 kubelet[2771]: E1024 12:58:59.912297 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.912324 kubelet[2771]: W1024 12:58:59.912310 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.912758 kubelet[2771]: E1024 12:58:59.912720 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.912866 kubelet[2771]: E1024 12:58:59.912855 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.913004 kubelet[2771]: W1024 12:58:59.912927 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.913088 kubelet[2771]: E1024 12:58:59.913073 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:58:59.913815 kubelet[2771]: E1024 12:58:59.913782 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.913815 kubelet[2771]: W1024 12:58:59.913809 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.913933 kubelet[2771]: E1024 12:58:59.913848 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.914169 kubelet[2771]: E1024 12:58:59.914155 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.914308 kubelet[2771]: W1024 12:58:59.914228 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.914308 kubelet[2771]: E1024 12:58:59.914248 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.914735 kubelet[2771]: E1024 12:58:59.914722 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.914794 kubelet[2771]: W1024 12:58:59.914782 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.914914 kubelet[2771]: E1024 12:58:59.914857 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.915145 kubelet[2771]: E1024 12:58:59.915133 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.915211 kubelet[2771]: W1024 12:58:59.915198 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.915323 kubelet[2771]: E1024 12:58:59.915299 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.915542 kubelet[2771]: E1024 12:58:59.915517 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.915542 kubelet[2771]: W1024 12:58:59.915529 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.915725 kubelet[2771]: E1024 12:58:59.915707 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:58:59.916100 kubelet[2771]: E1024 12:58:59.916034 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.916100 kubelet[2771]: W1024 12:58:59.916057 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.916382 kubelet[2771]: E1024 12:58:59.916249 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.916624 kubelet[2771]: E1024 12:58:59.916496 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.916624 kubelet[2771]: W1024 12:58:59.916529 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.916624 kubelet[2771]: E1024 12:58:59.916544 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.916989 kubelet[2771]: E1024 12:58:59.916962 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.916989 kubelet[2771]: W1024 12:58:59.916974 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.917133 kubelet[2771]: E1024 12:58:59.917110 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.917360 kubelet[2771]: E1024 12:58:59.917341 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.917420 kubelet[2771]: W1024 12:58:59.917408 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.917477 kubelet[2771]: E1024 12:58:59.917467 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.917739 kubelet[2771]: E1024 12:58:59.917727 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.917948 kubelet[2771]: W1024 12:58:59.917803 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.917948 kubelet[2771]: E1024 12:58:59.917817 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:58:59.922906 systemd[1]: Started cri-containerd-3254790de9ceed68e48c5e82b8c69d02a79d8f5424bfb67aca9573b391f47b4e.scope - libcontainer container 3254790de9ceed68e48c5e82b8c69d02a79d8f5424bfb67aca9573b391f47b4e. Oct 24 12:58:59.928559 kubelet[2771]: E1024 12:58:59.928536 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:58:59.928732 kubelet[2771]: W1024 12:58:59.928715 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:58:59.928811 kubelet[2771]: E1024 12:58:59.928799 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:58:59.978363 containerd[1596]: time="2025-10-24T12:58:59.978232018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4fs67,Uid:d43f2849-408d-4d9a-9b69-8f891ef9e6ad,Namespace:calico-system,Attempt:0,} returns sandbox id \"3254790de9ceed68e48c5e82b8c69d02a79d8f5424bfb67aca9573b391f47b4e\"" Oct 24 12:58:59.979337 kubelet[2771]: E1024 12:58:59.979307 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:59:01.076478 kubelet[2771]: E1024 12:59:01.076418 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2tsg" podUID="a5ee21c4-521a-4300-831d-bec9b2d7f45e" Oct 24 12:59:01.504229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3355283896.mount: Deactivated successfully. 
Oct 24 12:59:01.834615 containerd[1596]: time="2025-10-24T12:59:01.834553213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:59:01.835415 containerd[1596]: time="2025-10-24T12:59:01.835325722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 24 12:59:01.836422 containerd[1596]: time="2025-10-24T12:59:01.836387383Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:59:01.838568 containerd[1596]: time="2025-10-24T12:59:01.838534540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:59:01.838993 containerd[1596]: time="2025-10-24T12:59:01.838945872Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.944128513s" Oct 24 12:59:01.839028 containerd[1596]: time="2025-10-24T12:59:01.838991076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 24 12:59:01.839992 containerd[1596]: time="2025-10-24T12:59:01.839935017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 24 12:59:01.850422 containerd[1596]: time="2025-10-24T12:59:01.850363697Z" level=info msg="CreateContainer within sandbox \"e4970d08147445563f83488b11ba42240677839e938e84026f6603b49a3f9c48\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 24 12:59:01.858110 containerd[1596]: time="2025-10-24T12:59:01.858062137Z" level=info msg="Container 37cfa3a45331dcdb18f7a2790399403efb0b7a74669f9cfb2e0388634787ad55: CDI devices from CRI Config.CDIDevices: []" Oct 24 12:59:01.864362 containerd[1596]: time="2025-10-24T12:59:01.864320485Z" level=info msg="CreateContainer within sandbox \"e4970d08147445563f83488b11ba42240677839e938e84026f6603b49a3f9c48\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"37cfa3a45331dcdb18f7a2790399403efb0b7a74669f9cfb2e0388634787ad55\"" Oct 24 12:59:01.864795 containerd[1596]: time="2025-10-24T12:59:01.864756142Z" level=info msg="StartContainer for \"37cfa3a45331dcdb18f7a2790399403efb0b7a74669f9cfb2e0388634787ad55\"" Oct 24 12:59:01.865815 containerd[1596]: time="2025-10-24T12:59:01.865779031Z" level=info msg="connecting to shim 37cfa3a45331dcdb18f7a2790399403efb0b7a74669f9cfb2e0388634787ad55" address="unix:///run/containerd/s/72ded71378989bfe59365ecf758a42edffc6bfe0d73d84cca55a4b27d7e2e0db" protocol=ttrpc version=3 Oct 24 12:59:01.885742 systemd[1]: Started cri-containerd-37cfa3a45331dcdb18f7a2790399403efb0b7a74669f9cfb2e0388634787ad55.scope - libcontainer container 37cfa3a45331dcdb18f7a2790399403efb0b7a74669f9cfb2e0388634787ad55. 
Oct 24 12:59:01.935026 containerd[1596]: time="2025-10-24T12:59:01.934980514Z" level=info msg="StartContainer for \"37cfa3a45331dcdb18f7a2790399403efb0b7a74669f9cfb2e0388634787ad55\" returns successfully" Oct 24 12:59:02.133322 kubelet[2771]: E1024 12:59:02.132410 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:59:02.145704 kubelet[2771]: I1024 12:59:02.145630 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b546847-tg5m8" podStartSLOduration=1.199897381 podStartE2EDuration="3.145614033s" podCreationTimestamp="2025-10-24 12:58:59 +0000 UTC" firstStartedPulling="2025-10-24 12:58:59.894009774 +0000 UTC m=+19.921797962" lastFinishedPulling="2025-10-24 12:59:01.839726426 +0000 UTC m=+21.867514614" observedRunningTime="2025-10-24 12:59:02.144916826 +0000 UTC m=+22.172705004" watchObservedRunningTime="2025-10-24 12:59:02.145614033 +0000 UTC m=+22.173402211" Oct 24 12:59:02.206242 kubelet[2771]: E1024 12:59:02.206178 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.206242 kubelet[2771]: W1024 12:59:02.206211 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.206242 kubelet[2771]: E1024 12:59:02.206236 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.206541 kubelet[2771]: E1024 12:59:02.206514 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.206541 kubelet[2771]: W1024 12:59:02.206526 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.206541 kubelet[2771]: E1024 12:59:02.206535 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.206772 kubelet[2771]: E1024 12:59:02.206749 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.206772 kubelet[2771]: W1024 12:59:02.206761 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.206772 kubelet[2771]: E1024 12:59:02.206770 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:59:02.207040 kubelet[2771]: E1024 12:59:02.207023 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.207040 kubelet[2771]: W1024 12:59:02.207035 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.207093 kubelet[2771]: E1024 12:59:02.207047 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.207254 kubelet[2771]: E1024 12:59:02.207231 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.207254 kubelet[2771]: W1024 12:59:02.207243 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.207254 kubelet[2771]: E1024 12:59:02.207252 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.207439 kubelet[2771]: E1024 12:59:02.207423 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.207439 kubelet[2771]: W1024 12:59:02.207434 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.207505 kubelet[2771]: E1024 12:59:02.207443 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.207645 kubelet[2771]: E1024 12:59:02.207630 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.207645 kubelet[2771]: W1024 12:59:02.207641 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.207697 kubelet[2771]: E1024 12:59:02.207649 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.207843 kubelet[2771]: E1024 12:59:02.207828 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.207843 kubelet[2771]: W1024 12:59:02.207839 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.207917 kubelet[2771]: E1024 12:59:02.207849 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:59:02.208045 kubelet[2771]: E1024 12:59:02.208029 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.208045 kubelet[2771]: W1024 12:59:02.208039 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.208102 kubelet[2771]: E1024 12:59:02.208047 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.208235 kubelet[2771]: E1024 12:59:02.208220 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.208235 kubelet[2771]: W1024 12:59:02.208231 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.208280 kubelet[2771]: E1024 12:59:02.208239 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.208412 kubelet[2771]: E1024 12:59:02.208396 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.208412 kubelet[2771]: W1024 12:59:02.208407 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.208474 kubelet[2771]: E1024 12:59:02.208415 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.208621 kubelet[2771]: E1024 12:59:02.208576 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.208621 kubelet[2771]: W1024 12:59:02.208587 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.208621 kubelet[2771]: E1024 12:59:02.208616 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.208802 kubelet[2771]: E1024 12:59:02.208785 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.208802 kubelet[2771]: W1024 12:59:02.208796 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.208802 kubelet[2771]: E1024 12:59:02.208804 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:59:02.208984 kubelet[2771]: E1024 12:59:02.208969 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.208984 kubelet[2771]: W1024 12:59:02.208980 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.209034 kubelet[2771]: E1024 12:59:02.208988 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.209165 kubelet[2771]: E1024 12:59:02.209151 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.209165 kubelet[2771]: W1024 12:59:02.209161 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.209231 kubelet[2771]: E1024 12:59:02.209168 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.230783 kubelet[2771]: E1024 12:59:02.230734 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.230783 kubelet[2771]: W1024 12:59:02.230758 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.230783 kubelet[2771]: E1024 12:59:02.230789 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.231285 kubelet[2771]: E1024 12:59:02.231245 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.231285 kubelet[2771]: W1024 12:59:02.231260 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.231285 kubelet[2771]: E1024 12:59:02.231278 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.231512 kubelet[2771]: E1024 12:59:02.231494 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.231512 kubelet[2771]: W1024 12:59:02.231505 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.231659 kubelet[2771]: E1024 12:59:02.231518 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:59:02.231910 kubelet[2771]: E1024 12:59:02.231890 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.231910 kubelet[2771]: W1024 12:59:02.231909 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.231989 kubelet[2771]: E1024 12:59:02.231928 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.232108 kubelet[2771]: E1024 12:59:02.232092 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.232108 kubelet[2771]: W1024 12:59:02.232104 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.232174 kubelet[2771]: E1024 12:59:02.232116 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.232326 kubelet[2771]: E1024 12:59:02.232307 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.232326 kubelet[2771]: W1024 12:59:02.232318 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.232377 kubelet[2771]: E1024 12:59:02.232334 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.232751 kubelet[2771]: E1024 12:59:02.232727 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.232751 kubelet[2771]: W1024 12:59:02.232741 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.232884 kubelet[2771]: E1024 12:59:02.232797 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.232933 kubelet[2771]: E1024 12:59:02.232925 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.232965 kubelet[2771]: W1024 12:59:02.232937 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.233272 kubelet[2771]: E1024 12:59:02.233241 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:59:02.233272 kubelet[2771]: E1024 12:59:02.233263 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.233272 kubelet[2771]: W1024 12:59:02.233274 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.233369 kubelet[2771]: E1024 12:59:02.233294 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.233627 kubelet[2771]: E1024 12:59:02.233570 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.233627 kubelet[2771]: W1024 12:59:02.233618 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.233728 kubelet[2771]: E1024 12:59:02.233653 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.233920 kubelet[2771]: E1024 12:59:02.233898 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.233920 kubelet[2771]: W1024 12:59:02.233911 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.233996 kubelet[2771]: E1024 12:59:02.233928 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.234168 kubelet[2771]: E1024 12:59:02.234148 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.234168 kubelet[2771]: W1024 12:59:02.234159 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.234248 kubelet[2771]: E1024 12:59:02.234173 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.234501 kubelet[2771]: E1024 12:59:02.234470 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.234501 kubelet[2771]: W1024 12:59:02.234488 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.234501 kubelet[2771]: E1024 12:59:02.234506 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:59:02.234798 kubelet[2771]: E1024 12:59:02.234764 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.234830 kubelet[2771]: W1024 12:59:02.234798 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.234863 kubelet[2771]: E1024 12:59:02.234837 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.235211 kubelet[2771]: E1024 12:59:02.235182 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.235211 kubelet[2771]: W1024 12:59:02.235200 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.235265 kubelet[2771]: E1024 12:59:02.235219 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.235442 kubelet[2771]: E1024 12:59:02.235424 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.235442 kubelet[2771]: W1024 12:59:02.235438 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.235493 kubelet[2771]: E1024 12:59:02.235453 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.235705 kubelet[2771]: E1024 12:59:02.235685 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.235705 kubelet[2771]: W1024 12:59:02.235702 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.235761 kubelet[2771]: E1024 12:59:02.235737 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:02.235947 kubelet[2771]: E1024 12:59:02.235928 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:02.235947 kubelet[2771]: W1024 12:59:02.235942 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:02.235996 kubelet[2771]: E1024 12:59:02.235954 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:59:03.076511 kubelet[2771]: E1024 12:59:03.076442 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2tsg" podUID="a5ee21c4-521a-4300-831d-bec9b2d7f45e" Oct 24 12:59:03.134464 kubelet[2771]: I1024 12:59:03.134429 2771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 24 12:59:03.134946 kubelet[2771]: E1024 12:59:03.134873 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:59:03.219647 kubelet[2771]: E1024 12:59:03.219518 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.219647 kubelet[2771]: W1024 12:59:03.219556 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.219647 kubelet[2771]: E1024 12:59:03.219580 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.221115 kubelet[2771]: E1024 12:59:03.221084 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.221115 kubelet[2771]: W1024 12:59:03.221108 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.221193 kubelet[2771]: E1024 12:59:03.221119 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.221530 kubelet[2771]: E1024 12:59:03.221290 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.221530 kubelet[2771]: W1024 12:59:03.221323 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.221530 kubelet[2771]: E1024 12:59:03.221333 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.221705 kubelet[2771]: E1024 12:59:03.221566 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.221705 kubelet[2771]: W1024 12:59:03.221574 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.221705 kubelet[2771]: E1024 12:59:03.221583 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:59:03.222382 kubelet[2771]: E1024 12:59:03.222358 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.222382 kubelet[2771]: W1024 12:59:03.222376 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.222465 kubelet[2771]: E1024 12:59:03.222390 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.222621 kubelet[2771]: E1024 12:59:03.222554 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.222621 kubelet[2771]: W1024 12:59:03.222570 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.222621 kubelet[2771]: E1024 12:59:03.222578 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.222939 kubelet[2771]: E1024 12:59:03.222776 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.222939 kubelet[2771]: W1024 12:59:03.222790 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.222939 kubelet[2771]: E1024 12:59:03.222798 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.223175 kubelet[2771]: E1024 12:59:03.222960 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.223175 kubelet[2771]: W1024 12:59:03.222968 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.223175 kubelet[2771]: E1024 12:59:03.222979 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.223175 kubelet[2771]: E1024 12:59:03.223153 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.223175 kubelet[2771]: W1024 12:59:03.223161 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.223175 kubelet[2771]: E1024 12:59:03.223172 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:59:03.226354 kubelet[2771]: E1024 12:59:03.226204 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.226354 kubelet[2771]: W1024 12:59:03.226238 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.226354 kubelet[2771]: E1024 12:59:03.226272 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.227135 kubelet[2771]: E1024 12:59:03.227121 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.227514 kubelet[2771]: W1024 12:59:03.227195 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.227514 kubelet[2771]: E1024 12:59:03.227210 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.228379 kubelet[2771]: E1024 12:59:03.228251 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.228379 kubelet[2771]: W1024 12:59:03.228265 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.228379 kubelet[2771]: E1024 12:59:03.228275 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.229779 kubelet[2771]: E1024 12:59:03.229745 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.229900 kubelet[2771]: W1024 12:59:03.229879 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.229937 kubelet[2771]: E1024 12:59:03.229906 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.230811 kubelet[2771]: E1024 12:59:03.230755 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.230811 kubelet[2771]: W1024 12:59:03.230807 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.230899 kubelet[2771]: E1024 12:59:03.230818 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:59:03.231407 kubelet[2771]: E1024 12:59:03.231383 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.231407 kubelet[2771]: W1024 12:59:03.231396 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.231407 kubelet[2771]: E1024 12:59:03.231406 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.239323 kubelet[2771]: E1024 12:59:03.239298 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.239323 kubelet[2771]: W1024 12:59:03.239314 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.239323 kubelet[2771]: E1024 12:59:03.239325 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.239737 kubelet[2771]: E1024 12:59:03.239721 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.239737 kubelet[2771]: W1024 12:59:03.239733 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.239801 kubelet[2771]: E1024 12:59:03.239746 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.239997 kubelet[2771]: E1024 12:59:03.239980 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.239997 kubelet[2771]: W1024 12:59:03.239991 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.240054 kubelet[2771]: E1024 12:59:03.240005 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.240417 kubelet[2771]: E1024 12:59:03.240367 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.240417 kubelet[2771]: W1024 12:59:03.240404 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.240510 kubelet[2771]: E1024 12:59:03.240435 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:59:03.240775 kubelet[2771]: E1024 12:59:03.240621 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.240775 kubelet[2771]: W1024 12:59:03.240631 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.240775 kubelet[2771]: E1024 12:59:03.240649 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.240989 kubelet[2771]: E1024 12:59:03.240876 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.240989 kubelet[2771]: W1024 12:59:03.240887 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.240989 kubelet[2771]: E1024 12:59:03.240902 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.241102 kubelet[2771]: E1024 12:59:03.241085 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.241102 kubelet[2771]: W1024 12:59:03.241096 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.241165 kubelet[2771]: E1024 12:59:03.241108 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.241298 kubelet[2771]: E1024 12:59:03.241282 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.241298 kubelet[2771]: W1024 12:59:03.241292 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.241463 kubelet[2771]: E1024 12:59:03.241371 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.241463 kubelet[2771]: E1024 12:59:03.241481 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.241543 kubelet[2771]: W1024 12:59:03.241490 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.241543 kubelet[2771]: E1024 12:59:03.241528 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:59:03.241694 kubelet[2771]: E1024 12:59:03.241678 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.241694 kubelet[2771]: W1024 12:59:03.241690 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.241760 kubelet[2771]: E1024 12:59:03.241703 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.241918 kubelet[2771]: E1024 12:59:03.241901 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.241918 kubelet[2771]: W1024 12:59:03.241912 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.241988 kubelet[2771]: E1024 12:59:03.241927 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.242300 kubelet[2771]: E1024 12:59:03.242259 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.242342 kubelet[2771]: W1024 12:59:03.242299 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.242387 kubelet[2771]: E1024 12:59:03.242359 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.242664 kubelet[2771]: E1024 12:59:03.242647 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.242664 kubelet[2771]: W1024 12:59:03.242659 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.242732 kubelet[2771]: E1024 12:59:03.242676 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.242907 kubelet[2771]: E1024 12:59:03.242890 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.242907 kubelet[2771]: W1024 12:59:03.242901 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.242964 kubelet[2771]: E1024 12:59:03.242915 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:59:03.243120 kubelet[2771]: E1024 12:59:03.243103 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.243120 kubelet[2771]: W1024 12:59:03.243113 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.243207 kubelet[2771]: E1024 12:59:03.243126 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.243525 kubelet[2771]: E1024 12:59:03.243504 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.243525 kubelet[2771]: W1024 12:59:03.243519 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.243648 kubelet[2771]: E1024 12:59:03.243536 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.243822 kubelet[2771]: E1024 12:59:03.243803 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.243822 kubelet[2771]: W1024 12:59:03.243815 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.243898 kubelet[2771]: E1024 12:59:03.243831 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 24 12:59:03.244059 kubelet[2771]: E1024 12:59:03.244043 2771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 24 12:59:03.244059 kubelet[2771]: W1024 12:59:03.244054 2771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 24 12:59:03.244115 kubelet[2771]: E1024 12:59:03.244062 2771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 24 12:59:03.245781 containerd[1596]: time="2025-10-24T12:59:03.245730159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:59:03.248616 containerd[1596]: time="2025-10-24T12:59:03.246625188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 24 12:59:03.248616 containerd[1596]: time="2025-10-24T12:59:03.248192308Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:59:03.250853 containerd[1596]: time="2025-10-24T12:59:03.250818934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:59:03.251560 containerd[1596]: time="2025-10-24T12:59:03.251492487Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.41152551s" Oct 24 12:59:03.251560 containerd[1596]: time="2025-10-24T12:59:03.251539856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 24 12:59:03.253818 containerd[1596]: time="2025-10-24T12:59:03.253788824Z" level=info msg="CreateContainer within sandbox \"3254790de9ceed68e48c5e82b8c69d02a79d8f5424bfb67aca9573b391f47b4e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 24 12:59:03.261853 containerd[1596]: time="2025-10-24T12:59:03.261815689Z" level=info msg="Container 9fc570f523541d61fe1e44eb1c44318fdaf3b48dedf22c8d2f5d0cbdc916cfc8: CDI devices from CRI Config.CDIDevices: []" Oct 24 12:59:03.268827 containerd[1596]: time="2025-10-24T12:59:03.268781624Z" level=info msg="CreateContainer within sandbox \"3254790de9ceed68e48c5e82b8c69d02a79d8f5424bfb67aca9573b391f47b4e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9fc570f523541d61fe1e44eb1c44318fdaf3b48dedf22c8d2f5d0cbdc916cfc8\"" Oct 24 12:59:03.269483 containerd[1596]: time="2025-10-24T12:59:03.269394884Z" level=info msg="StartContainer for \"9fc570f523541d61fe1e44eb1c44318fdaf3b48dedf22c8d2f5d0cbdc916cfc8\"" Oct 24 12:59:03.270846 containerd[1596]: time="2025-10-24T12:59:03.270811451Z" level=info msg="connecting to shim 9fc570f523541d61fe1e44eb1c44318fdaf3b48dedf22c8d2f5d0cbdc916cfc8" address="unix:///run/containerd/s/a58ea5c8a1520a48b87acd3f907f2f2b2fc8a7924855c543fdd1167c2c535c00" protocol=ttrpc version=3 Oct 24 12:59:03.293769 systemd[1]: Started cri-containerd-9fc570f523541d61fe1e44eb1c44318fdaf3b48dedf22c8d2f5d0cbdc916cfc8.scope - libcontainer container 9fc570f523541d61fe1e44eb1c44318fdaf3b48dedf22c8d2f5d0cbdc916cfc8. 
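The driver-call.go failures above come from the kubelet probing every directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, invoking the driver binary with "init", and parsing its stdout as JSON. The nodeagent~uds/uds binary is not installed yet (the pod2daemon-flexvol image pulled above is what ships it), so each call produces empty output and the JSON unmarshal fails. As a hedged illustration only, and not the real uds driver, a minimal responder following the standard FlexVolume calling convention looks like this:

```python
#!/usr/bin/env python3
"""Minimal FlexVolume driver stub (illustrative only, not the real uds driver).

kubelet runs the executable as `<driver> <operation> [args...]` and parses its
stdout as JSON; an empty stdout is exactly what produces the
"unexpected end of JSON input" errors in the log above.
"""
import json
import sys


def main() -> None:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # Report success and declare that no separate attach/detach step is needed.
        reply = {"status": "Success", "capabilities": {"attach": False}}
    else:
        # Unsupported operations must still answer with well-formed JSON.
        reply = {"status": "Not supported", "message": "operation %r not implemented" % op}
    print(json.dumps(reply))


if __name__ == "__main__":
    main()
```

Once the flexvol-driver container below drops the actual uds binary into the nodeagent~uds directory, the probe errors stop, which matches their disappearance later in this log.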
Oct 24 12:59:03.341712 containerd[1596]: time="2025-10-24T12:59:03.341569479Z" level=info msg="StartContainer for \"9fc570f523541d61fe1e44eb1c44318fdaf3b48dedf22c8d2f5d0cbdc916cfc8\" returns successfully" Oct 24 12:59:03.354120 systemd[1]: cri-containerd-9fc570f523541d61fe1e44eb1c44318fdaf3b48dedf22c8d2f5d0cbdc916cfc8.scope: Deactivated successfully. Oct 24 12:59:03.357058 containerd[1596]: time="2025-10-24T12:59:03.357013895Z" level=info msg="received exit event container_id:\"9fc570f523541d61fe1e44eb1c44318fdaf3b48dedf22c8d2f5d0cbdc916cfc8\" id:\"9fc570f523541d61fe1e44eb1c44318fdaf3b48dedf22c8d2f5d0cbdc916cfc8\" pid:3512 exited_at:{seconds:1761310743 nanos:356438847}" Oct 24 12:59:03.357202 containerd[1596]: time="2025-10-24T12:59:03.357173795Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fc570f523541d61fe1e44eb1c44318fdaf3b48dedf22c8d2f5d0cbdc916cfc8\" id:\"9fc570f523541d61fe1e44eb1c44318fdaf3b48dedf22c8d2f5d0cbdc916cfc8\" pid:3512 exited_at:{seconds:1761310743 nanos:356438847}" Oct 24 12:59:03.381467 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fc570f523541d61fe1e44eb1c44318fdaf3b48dedf22c8d2f5d0cbdc916cfc8-rootfs.mount: Deactivated successfully. Oct 24 12:59:04.139120 kubelet[2771]: E1024 12:59:04.139079 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:59:04.139947 containerd[1596]: time="2025-10-24T12:59:04.139890466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 24 12:59:05.077021 kubelet[2771]: E1024 12:59:05.076958 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2tsg" podUID="a5ee21c4-521a-4300-831d-bec9b2d7f45e" Oct 24 12:59:07.077117 kubelet[2771]: E1024 12:59:07.077056 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2tsg" podUID="a5ee21c4-521a-4300-831d-bec9b2d7f45e" Oct 24 12:59:07.626097 containerd[1596]: time="2025-10-24T12:59:07.626041684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:59:07.752479 containerd[1596]: time="2025-10-24T12:59:07.752403133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 24 12:59:07.800477 containerd[1596]: time="2025-10-24T12:59:07.800411949Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:59:07.803337 containerd[1596]: time="2025-10-24T12:59:07.803276642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:59:07.804419 containerd[1596]: time="2025-10-24T12:59:07.804370733Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.664441675s" Oct 24 12:59:07.804419 containerd[1596]: time="2025-10-24T12:59:07.804412371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 24 12:59:07.807089 containerd[1596]: time="2025-10-24T12:59:07.807064596Z" level=info msg="CreateContainer within sandbox \"3254790de9ceed68e48c5e82b8c69d02a79d8f5424bfb67aca9573b391f47b4e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 24 12:59:07.816238 containerd[1596]: time="2025-10-24T12:59:07.816195790Z" level=info msg="Container bf7329f36e004eac5b906e56ceeaa1dd53e74af8aeadb7dffd759ce7d2f8b15b: CDI devices from CRI Config.CDIDevices: []" Oct 24 12:59:07.825287 containerd[1596]: time="2025-10-24T12:59:07.825251684Z" level=info msg="CreateContainer within sandbox \"3254790de9ceed68e48c5e82b8c69d02a79d8f5424bfb67aca9573b391f47b4e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bf7329f36e004eac5b906e56ceeaa1dd53e74af8aeadb7dffd759ce7d2f8b15b\"" Oct 24 12:59:07.825948 containerd[1596]: time="2025-10-24T12:59:07.825903817Z" level=info msg="StartContainer for \"bf7329f36e004eac5b906e56ceeaa1dd53e74af8aeadb7dffd759ce7d2f8b15b\"" Oct 24 12:59:07.828374 containerd[1596]: time="2025-10-24T12:59:07.828315720Z" level=info msg="connecting to shim bf7329f36e004eac5b906e56ceeaa1dd53e74af8aeadb7dffd759ce7d2f8b15b" address="unix:///run/containerd/s/a58ea5c8a1520a48b87acd3f907f2f2b2fc8a7924855c543fdd1167c2c535c00" protocol=ttrpc version=3 Oct 24 12:59:07.846733 systemd[1]: Started cri-containerd-bf7329f36e004eac5b906e56ceeaa1dd53e74af8aeadb7dffd759ce7d2f8b15b.scope - libcontainer container bf7329f36e004eac5b906e56ceeaa1dd53e74af8aeadb7dffd759ce7d2f8b15b. Oct 24 12:59:07.888521 containerd[1596]: time="2025-10-24T12:59:07.888335873Z" level=info msg="StartContainer for \"bf7329f36e004eac5b906e56ceeaa1dd53e74af8aeadb7dffd759ce7d2f8b15b\" returns successfully" Oct 24 12:59:08.149322 kubelet[2771]: E1024 12:59:08.149164 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:59:09.076953 kubelet[2771]: E1024 12:59:09.076895 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2tsg" podUID="a5ee21c4-521a-4300-831d-bec9b2d7f45e" Oct 24 12:59:09.151059 kubelet[2771]: E1024 12:59:09.151021 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:59:09.161534 kubelet[2771]: I1024 12:59:09.161505 2771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 24 12:59:09.161890 kubelet[2771]: E1024 12:59:09.161853 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:59:09.905678 systemd[1]: cri-containerd-bf7329f36e004eac5b906e56ceeaa1dd53e74af8aeadb7dffd759ce7d2f8b15b.scope: Deactivated successfully. 
Oct 24 12:59:09.906036 systemd[1]: cri-containerd-bf7329f36e004eac5b906e56ceeaa1dd53e74af8aeadb7dffd759ce7d2f8b15b.scope: Consumed 624ms CPU time, 180M memory peak, 4.1M read from disk, 171.3M written to disk. Oct 24 12:59:09.908012 containerd[1596]: time="2025-10-24T12:59:09.907950861Z" level=info msg="received exit event container_id:\"bf7329f36e004eac5b906e56ceeaa1dd53e74af8aeadb7dffd759ce7d2f8b15b\" id:\"bf7329f36e004eac5b906e56ceeaa1dd53e74af8aeadb7dffd759ce7d2f8b15b\" pid:3574 exited_at:{seconds:1761310749 nanos:907755895}" Oct 24 12:59:09.908506 containerd[1596]: time="2025-10-24T12:59:09.908166686Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf7329f36e004eac5b906e56ceeaa1dd53e74af8aeadb7dffd759ce7d2f8b15b\" id:\"bf7329f36e004eac5b906e56ceeaa1dd53e74af8aeadb7dffd759ce7d2f8b15b\" pid:3574 exited_at:{seconds:1761310749 nanos:907755895}" Oct 24 12:59:09.931413 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf7329f36e004eac5b906e56ceeaa1dd53e74af8aeadb7dffd759ce7d2f8b15b-rootfs.mount: Deactivated successfully. Oct 24 12:59:09.962869 kubelet[2771]: I1024 12:59:09.962822 2771 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 24 12:59:10.152194 kubelet[2771]: E1024 12:59:10.152147 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:59:10.282266 systemd[1]: Created slice kubepods-besteffort-podf677db58_877c_40d7_99b3_8d17842e90a1.slice - libcontainer container kubepods-besteffort-podf677db58_877c_40d7_99b3_8d17842e90a1.slice. Oct 24 12:59:10.288250 kubelet[2771]: I1024 12:59:10.288211 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jg6m\" (UniqueName: \"kubernetes.io/projected/a76d5dfa-5b19-40ce-bd28-2e0cacf0ca27-kube-api-access-5jg6m\") pod \"coredns-668d6bf9bc-8rfp9\" (UID: \"a76d5dfa-5b19-40ce-bd28-2e0cacf0ca27\") " pod="kube-system/coredns-668d6bf9bc-8rfp9" Oct 24 12:59:10.288250 kubelet[2771]: I1024 12:59:10.288246 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0234b6fc-2fab-4065-b52e-c071b4ca25f6-goldmane-key-pair\") pod \"goldmane-666569f655-mbvsm\" (UID: \"0234b6fc-2fab-4065-b52e-c071b4ca25f6\") " pod="calico-system/goldmane-666569f655-mbvsm" Oct 24 12:59:10.288414 kubelet[2771]: I1024 12:59:10.288264 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2c88598-5e4d-4913-b78b-8bca3cd87835-tigera-ca-bundle\") pod \"calico-kube-controllers-7cf94597f6-kjgvp\" (UID: \"c2c88598-5e4d-4913-b78b-8bca3cd87835\") " pod="calico-system/calico-kube-controllers-7cf94597f6-kjgvp" Oct 24 12:59:10.288414 kubelet[2771]: I1024 12:59:10.288281 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1df758c2-0a15-4c25-ae50-92f07922f451-calico-apiserver-certs\") pod \"calico-apiserver-5f84755f96-kq7xf\" (UID: \"1df758c2-0a15-4c25-ae50-92f07922f451\") " pod="calico-apiserver/calico-apiserver-5f84755f96-kq7xf" Oct 24 12:59:10.288414 kubelet[2771]: I1024 12:59:10.288298 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f677db58-877c-40d7-99b3-8d17842e90a1-whisker-ca-bundle\") pod \"whisker-799c8fdf76-p9hnp\" (UID: \"f677db58-877c-40d7-99b3-8d17842e90a1\") " pod="calico-system/whisker-799c8fdf76-p9hnp" Oct 24 12:59:10.288414 kubelet[2771]: I1024 12:59:10.288311 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q496v\" (UniqueName: \"kubernetes.io/projected/f677db58-877c-40d7-99b3-8d17842e90a1-kube-api-access-q496v\") pod \"whisker-799c8fdf76-p9hnp\" (UID: \"f677db58-877c-40d7-99b3-8d17842e90a1\") " pod="calico-system/whisker-799c8fdf76-p9hnp" Oct 24 12:59:10.288414 kubelet[2771]: I1024 12:59:10.288328 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f677db58-877c-40d7-99b3-8d17842e90a1-whisker-backend-key-pair\") pod \"whisker-799c8fdf76-p9hnp\" (UID: \"f677db58-877c-40d7-99b3-8d17842e90a1\") " pod="calico-system/whisker-799c8fdf76-p9hnp" Oct 24 12:59:10.288534 kubelet[2771]: I1024 12:59:10.288344 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/172e0745-5548-410c-a502-38447aec237c-config-volume\") pod \"coredns-668d6bf9bc-sc6tk\" (UID: \"172e0745-5548-410c-a502-38447aec237c\") " pod="kube-system/coredns-668d6bf9bc-sc6tk" Oct 24 12:59:10.288534 kubelet[2771]: I1024 12:59:10.288360 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2vck\" (UniqueName: \"kubernetes.io/projected/172e0745-5548-410c-a502-38447aec237c-kube-api-access-q2vck\") pod \"coredns-668d6bf9bc-sc6tk\" (UID: \"172e0745-5548-410c-a502-38447aec237c\") " pod="kube-system/coredns-668d6bf9bc-sc6tk" Oct 24 12:59:10.288534 kubelet[2771]: I1024 12:59:10.288377 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e3fbadc6-9842-46a9-bf09-28743c711629-calico-apiserver-certs\") pod \"calico-apiserver-5f84755f96-zdlgj\" (UID: \"e3fbadc6-9842-46a9-bf09-28743c711629\") " pod="calico-apiserver/calico-apiserver-5f84755f96-zdlgj" Oct 24 12:59:10.288534 kubelet[2771]: I1024 12:59:10.288394 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a76d5dfa-5b19-40ce-bd28-2e0cacf0ca27-config-volume\") pod \"coredns-668d6bf9bc-8rfp9\" (UID: \"a76d5dfa-5b19-40ce-bd28-2e0cacf0ca27\") " pod="kube-system/coredns-668d6bf9bc-8rfp9" Oct 24 12:59:10.288534 kubelet[2771]: I1024 12:59:10.288410 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0234b6fc-2fab-4065-b52e-c071b4ca25f6-config\") pod \"goldmane-666569f655-mbvsm\" (UID: \"0234b6fc-2fab-4065-b52e-c071b4ca25f6\") " pod="calico-system/goldmane-666569f655-mbvsm" Oct 24 12:59:10.288675 kubelet[2771]: I1024 12:59:10.288424 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0234b6fc-2fab-4065-b52e-c071b4ca25f6-goldmane-ca-bundle\") pod \"goldmane-666569f655-mbvsm\" (UID: \"0234b6fc-2fab-4065-b52e-c071b4ca25f6\") " pod="calico-system/goldmane-666569f655-mbvsm" Oct 24 12:59:10.288675 kubelet[2771]: I1024 12:59:10.288457 2771 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqthm\" (UniqueName: \"kubernetes.io/projected/e3fbadc6-9842-46a9-bf09-28743c711629-kube-api-access-lqthm\") pod \"calico-apiserver-5f84755f96-zdlgj\" (UID: \"e3fbadc6-9842-46a9-bf09-28743c711629\") " pod="calico-apiserver/calico-apiserver-5f84755f96-zdlgj" Oct 24 12:59:10.288675 kubelet[2771]: I1024 12:59:10.288475 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gckmb\" (UniqueName: \"kubernetes.io/projected/0234b6fc-2fab-4065-b52e-c071b4ca25f6-kube-api-access-gckmb\") pod \"goldmane-666569f655-mbvsm\" (UID: \"0234b6fc-2fab-4065-b52e-c071b4ca25f6\") " pod="calico-system/goldmane-666569f655-mbvsm" Oct 24 12:59:10.288675 kubelet[2771]: I1024 12:59:10.288494 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkjtr\" (UniqueName: \"kubernetes.io/projected/c2c88598-5e4d-4913-b78b-8bca3cd87835-kube-api-access-bkjtr\") pod \"calico-kube-controllers-7cf94597f6-kjgvp\" (UID: \"c2c88598-5e4d-4913-b78b-8bca3cd87835\") " pod="calico-system/calico-kube-controllers-7cf94597f6-kjgvp" Oct 24 12:59:10.288675 kubelet[2771]: I1024 12:59:10.288510 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzkwd\" (UniqueName: \"kubernetes.io/projected/1df758c2-0a15-4c25-ae50-92f07922f451-kube-api-access-vzkwd\") pod \"calico-apiserver-5f84755f96-kq7xf\" (UID: \"1df758c2-0a15-4c25-ae50-92f07922f451\") " pod="calico-apiserver/calico-apiserver-5f84755f96-kq7xf" Oct 24 12:59:10.289795 systemd[1]: Created slice kubepods-burstable-pod172e0745_5548_410c_a502_38447aec237c.slice - libcontainer container kubepods-burstable-pod172e0745_5548_410c_a502_38447aec237c.slice. Oct 24 12:59:10.296486 systemd[1]: Created slice kubepods-burstable-poda76d5dfa_5b19_40ce_bd28_2e0cacf0ca27.slice - libcontainer container kubepods-burstable-poda76d5dfa_5b19_40ce_bd28_2e0cacf0ca27.slice. Oct 24 12:59:10.302317 systemd[1]: Created slice kubepods-besteffort-pode3fbadc6_9842_46a9_bf09_28743c711629.slice - libcontainer container kubepods-besteffort-pode3fbadc6_9842_46a9_bf09_28743c711629.slice. Oct 24 12:59:10.307274 systemd[1]: Created slice kubepods-besteffort-pod0234b6fc_2fab_4065_b52e_c071b4ca25f6.slice - libcontainer container kubepods-besteffort-pod0234b6fc_2fab_4065_b52e_c071b4ca25f6.slice. Oct 24 12:59:10.311337 systemd[1]: Created slice kubepods-besteffort-pod1df758c2_0a15_4c25_ae50_92f07922f451.slice - libcontainer container kubepods-besteffort-pod1df758c2_0a15_4c25_ae50_92f07922f451.slice. Oct 24 12:59:10.316026 systemd[1]: Created slice kubepods-besteffort-podc2c88598_5e4d_4913_b78b_8bca3cd87835.slice - libcontainer container kubepods-besteffort-podc2c88598_5e4d_4913_b78b_8bca3cd87835.slice. 
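Every RunPodSandbox attempt that follows fails with the same message: "stat /var/lib/calico/nodename: no such file or directory". The Calico CNI plugin checks for that file before adding or deleting a pod network, and calico-node only writes it after it has started and registered the node; since the calico/node image is still being pulled at this point, each sandbox create is rejected. A simplified sketch of that readiness gate, under the assumption of the standard nodename-file behavior and not Calico's actual implementation, is shown below:

```python
from pathlib import Path

# Simplified sketch of the gate behind the sandbox failures that follow:
# the CNI plugin refuses to set up (or tear down) a pod network until
# calico-node has written the node's name to this file.
NODENAME_FILE = Path("/var/lib/calico/nodename")


def ensure_calico_ready() -> str:
    if not NODENAME_FILE.is_file():
        raise RuntimeError(
            "stat /var/lib/calico/nodename: no such file or directory: "
            "check that the calico/node container is running and has mounted /var/lib/calico/"
        )
    return NODENAME_FILE.read_text().strip()
```

This is consistent with the later entries: once the calico-node container starts at 12:59:20, the retried calico-kube-controllers sandbox at 12:59:21 succeeds and the calie4634e4a62f interface comes up.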
Oct 24 12:59:10.588827 containerd[1596]: time="2025-10-24T12:59:10.588786828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-799c8fdf76-p9hnp,Uid:f677db58-877c-40d7-99b3-8d17842e90a1,Namespace:calico-system,Attempt:0,}" Oct 24 12:59:10.593888 kubelet[2771]: E1024 12:59:10.593815 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:59:10.596034 containerd[1596]: time="2025-10-24T12:59:10.595968836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sc6tk,Uid:172e0745-5548-410c-a502-38447aec237c,Namespace:kube-system,Attempt:0,}" Oct 24 12:59:10.599717 kubelet[2771]: E1024 12:59:10.599669 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:59:10.600893 containerd[1596]: time="2025-10-24T12:59:10.600844951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8rfp9,Uid:a76d5dfa-5b19-40ce-bd28-2e0cacf0ca27,Namespace:kube-system,Attempt:0,}" Oct 24 12:59:10.606271 containerd[1596]: time="2025-10-24T12:59:10.606225541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f84755f96-zdlgj,Uid:e3fbadc6-9842-46a9-bf09-28743c711629,Namespace:calico-apiserver,Attempt:0,}" Oct 24 12:59:10.611984 containerd[1596]: time="2025-10-24T12:59:10.611856111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mbvsm,Uid:0234b6fc-2fab-4065-b52e-c071b4ca25f6,Namespace:calico-system,Attempt:0,}" Oct 24 12:59:10.614883 containerd[1596]: time="2025-10-24T12:59:10.614812425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f84755f96-kq7xf,Uid:1df758c2-0a15-4c25-ae50-92f07922f451,Namespace:calico-apiserver,Attempt:0,}" Oct 24 12:59:10.619817 containerd[1596]: time="2025-10-24T12:59:10.619579444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cf94597f6-kjgvp,Uid:c2c88598-5e4d-4913-b78b-8bca3cd87835,Namespace:calico-system,Attempt:0,}" Oct 24 12:59:10.736616 containerd[1596]: time="2025-10-24T12:59:10.736535426Z" level=error msg="Failed to destroy network for sandbox \"184ba93fa9649ba412b091e1a7679e8cf3d37abbf27d0f692217715afb5a2a6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 24 12:59:10.736788 containerd[1596]: time="2025-10-24T12:59:10.736560954Z" level=error msg="Failed to destroy network for sandbox \"46fa91add97433f2c843a0bd040aede5ccd682428906809bec9b64e593c52b2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 24 12:59:10.737741 containerd[1596]: time="2025-10-24T12:59:10.737559506Z" level=error msg="Failed to destroy network for sandbox \"b029a496c31a5fcafef3ff02b9c9141b6e514fc4bb2293140a9f2982c7888a31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 24 12:59:10.738284 containerd[1596]: time="2025-10-24T12:59:10.738262094Z" level=error msg="Failed to destroy network for sandbox 
\"3e5605017549144966267213e05b75d2ec5b178e9497eccf7ea6d4806ef4b95a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 24 12:59:10.738962 containerd[1596]: time="2025-10-24T12:59:10.738829538Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f84755f96-kq7xf,Uid:1df758c2-0a15-4c25-ae50-92f07922f451,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"46fa91add97433f2c843a0bd040aede5ccd682428906809bec9b64e593c52b2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 24 12:59:10.739578 kubelet[2771]: E1024 12:59:10.739497 2771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46fa91add97433f2c843a0bd040aede5ccd682428906809bec9b64e593c52b2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 24 12:59:10.739795 kubelet[2771]: E1024 12:59:10.739770 2771 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46fa91add97433f2c843a0bd040aede5ccd682428906809bec9b64e593c52b2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f84755f96-kq7xf" Oct 24 12:59:10.740975 kubelet[2771]: E1024 12:59:10.739898 2771 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46fa91add97433f2c843a0bd040aede5ccd682428906809bec9b64e593c52b2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f84755f96-kq7xf" Oct 24 12:59:10.740975 kubelet[2771]: E1024 12:59:10.739973 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f84755f96-kq7xf_calico-apiserver(1df758c2-0a15-4c25-ae50-92f07922f451)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f84755f96-kq7xf_calico-apiserver(1df758c2-0a15-4c25-ae50-92f07922f451)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46fa91add97433f2c843a0bd040aede5ccd682428906809bec9b64e593c52b2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f84755f96-kq7xf" podUID="1df758c2-0a15-4c25-ae50-92f07922f451" Oct 24 12:59:10.743335 containerd[1596]: time="2025-10-24T12:59:10.743293680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8rfp9,Uid:a76d5dfa-5b19-40ce-bd28-2e0cacf0ca27,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"184ba93fa9649ba412b091e1a7679e8cf3d37abbf27d0f692217715afb5a2a6a\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 24 12:59:10.743726 kubelet[2771]: E1024 12:59:10.743632 2771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"184ba93fa9649ba412b091e1a7679e8cf3d37abbf27d0f692217715afb5a2a6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 24 12:59:10.743726 kubelet[2771]: E1024 12:59:10.743682 2771 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"184ba93fa9649ba412b091e1a7679e8cf3d37abbf27d0f692217715afb5a2a6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8rfp9" Oct 24 12:59:10.743726 kubelet[2771]: E1024 12:59:10.743711 2771 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"184ba93fa9649ba412b091e1a7679e8cf3d37abbf27d0f692217715afb5a2a6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8rfp9" Oct 24 12:59:10.743978 kubelet[2771]: E1024 12:59:10.743750 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8rfp9_kube-system(a76d5dfa-5b19-40ce-bd28-2e0cacf0ca27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8rfp9_kube-system(a76d5dfa-5b19-40ce-bd28-2e0cacf0ca27)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"184ba93fa9649ba412b091e1a7679e8cf3d37abbf27d0f692217715afb5a2a6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8rfp9" podUID="a76d5dfa-5b19-40ce-bd28-2e0cacf0ca27" Oct 24 12:59:10.745230 containerd[1596]: time="2025-10-24T12:59:10.745111128Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f84755f96-zdlgj,Uid:e3fbadc6-9842-46a9-bf09-28743c711629,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b029a496c31a5fcafef3ff02b9c9141b6e514fc4bb2293140a9f2982c7888a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 24 12:59:10.746490 containerd[1596]: time="2025-10-24T12:59:10.746465207Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-799c8fdf76-p9hnp,Uid:f677db58-877c-40d7-99b3-8d17842e90a1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e5605017549144966267213e05b75d2ec5b178e9497eccf7ea6d4806ef4b95a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 24 12:59:10.747132 kubelet[2771]: E1024 12:59:10.746904 2771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e5605017549144966267213e05b75d2ec5b178e9497eccf7ea6d4806ef4b95a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 24 12:59:10.747132 kubelet[2771]: E1024 12:59:10.747006 2771 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e5605017549144966267213e05b75d2ec5b178e9497eccf7ea6d4806ef4b95a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-799c8fdf76-p9hnp" Oct 24 12:59:10.747132 kubelet[2771]: E1024 12:59:10.747056 2771 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e5605017549144966267213e05b75d2ec5b178e9497eccf7ea6d4806ef4b95a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-799c8fdf76-p9hnp" Oct 24 12:59:10.747245 kubelet[2771]: E1024 12:59:10.747221 2771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b029a496c31a5fcafef3ff02b9c9141b6e514fc4bb2293140a9f2982c7888a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 24 12:59:10.747275 kubelet[2771]: E1024 12:59:10.747263 2771 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b029a496c31a5fcafef3ff02b9c9141b6e514fc4bb2293140a9f2982c7888a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f84755f96-zdlgj" Oct 24 12:59:10.747305 kubelet[2771]: E1024 12:59:10.747279 2771 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b029a496c31a5fcafef3ff02b9c9141b6e514fc4bb2293140a9f2982c7888a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f84755f96-zdlgj" Oct 24 12:59:10.747362 kubelet[2771]: E1024 12:59:10.747319 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f84755f96-zdlgj_calico-apiserver(e3fbadc6-9842-46a9-bf09-28743c711629)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f84755f96-zdlgj_calico-apiserver(e3fbadc6-9842-46a9-bf09-28743c711629)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b029a496c31a5fcafef3ff02b9c9141b6e514fc4bb2293140a9f2982c7888a31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f84755f96-zdlgj" podUID="e3fbadc6-9842-46a9-bf09-28743c711629" Oct 24 12:59:10.747362 kubelet[2771]: E1024 12:59:10.747348 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-799c8fdf76-p9hnp_calico-system(f677db58-877c-40d7-99b3-8d17842e90a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-799c8fdf76-p9hnp_calico-system(f677db58-877c-40d7-99b3-8d17842e90a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e5605017549144966267213e05b75d2ec5b178e9497eccf7ea6d4806ef4b95a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-799c8fdf76-p9hnp" podUID="f677db58-877c-40d7-99b3-8d17842e90a1" Oct 24 12:59:10.748669 containerd[1596]: time="2025-10-24T12:59:10.748630138Z" level=error msg="Failed to destroy network for sandbox \"8c050a06a0dcab4b52c6b60326b1a7b8dbcb4f953ea46072f98cca76151461db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 24 12:59:10.750006 containerd[1596]: time="2025-10-24T12:59:10.749968738Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sc6tk,Uid:172e0745-5548-410c-a502-38447aec237c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c050a06a0dcab4b52c6b60326b1a7b8dbcb4f953ea46072f98cca76151461db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 24 12:59:10.750280 kubelet[2771]: E1024 12:59:10.750238 2771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c050a06a0dcab4b52c6b60326b1a7b8dbcb4f953ea46072f98cca76151461db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 24 12:59:10.750340 kubelet[2771]: E1024 12:59:10.750291 2771 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c050a06a0dcab4b52c6b60326b1a7b8dbcb4f953ea46072f98cca76151461db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sc6tk" Oct 24 12:59:10.750340 kubelet[2771]: E1024 12:59:10.750309 2771 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c050a06a0dcab4b52c6b60326b1a7b8dbcb4f953ea46072f98cca76151461db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sc6tk" Oct 24 12:59:10.750396 kubelet[2771]: E1024 12:59:10.750345 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-sc6tk_kube-system(172e0745-5548-410c-a502-38447aec237c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-sc6tk_kube-system(172e0745-5548-410c-a502-38447aec237c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c050a06a0dcab4b52c6b60326b1a7b8dbcb4f953ea46072f98cca76151461db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-sc6tk" podUID="172e0745-5548-410c-a502-38447aec237c" Oct 24 12:59:10.753387 containerd[1596]: time="2025-10-24T12:59:10.753330983Z" level=error msg="Failed to destroy network for sandbox \"dd3a5428dc98a174d25786c4058e2e72bd922e77f106312828dde7c005e355da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 24 12:59:10.754899 containerd[1596]: time="2025-10-24T12:59:10.754827881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mbvsm,Uid:0234b6fc-2fab-4065-b52e-c071b4ca25f6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd3a5428dc98a174d25786c4058e2e72bd922e77f106312828dde7c005e355da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 24 12:59:10.755125 kubelet[2771]: E1024 12:59:10.755090 2771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd3a5428dc98a174d25786c4058e2e72bd922e77f106312828dde7c005e355da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 24 12:59:10.755167 kubelet[2771]: E1024 12:59:10.755151 2771 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd3a5428dc98a174d25786c4058e2e72bd922e77f106312828dde7c005e355da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-mbvsm" Oct 24 12:59:10.755194 kubelet[2771]: E1024 12:59:10.755168 2771 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd3a5428dc98a174d25786c4058e2e72bd922e77f106312828dde7c005e355da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-mbvsm" Oct 24 12:59:10.755223 kubelet[2771]: E1024 12:59:10.755206 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-mbvsm_calico-system(0234b6fc-2fab-4065-b52e-c071b4ca25f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-mbvsm_calico-system(0234b6fc-2fab-4065-b52e-c071b4ca25f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"dd3a5428dc98a174d25786c4058e2e72bd922e77f106312828dde7c005e355da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-mbvsm" podUID="0234b6fc-2fab-4065-b52e-c071b4ca25f6" Oct 24 12:59:10.764573 containerd[1596]: time="2025-10-24T12:59:10.764516490Z" level=error msg="Failed to destroy network for sandbox \"8ad910afcee8348181bafaa7744a42a4fc7a78922da3f3aafeadacca34667b15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 24 12:59:10.766005 containerd[1596]: time="2025-10-24T12:59:10.765924810Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cf94597f6-kjgvp,Uid:c2c88598-5e4d-4913-b78b-8bca3cd87835,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ad910afcee8348181bafaa7744a42a4fc7a78922da3f3aafeadacca34667b15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 24 12:59:10.766253 kubelet[2771]: E1024 12:59:10.766217 2771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ad910afcee8348181bafaa7744a42a4fc7a78922da3f3aafeadacca34667b15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 24 12:59:10.766320 kubelet[2771]: E1024 12:59:10.766275 2771 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ad910afcee8348181bafaa7744a42a4fc7a78922da3f3aafeadacca34667b15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cf94597f6-kjgvp" Oct 24 12:59:10.766320 kubelet[2771]: E1024 12:59:10.766296 2771 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ad910afcee8348181bafaa7744a42a4fc7a78922da3f3aafeadacca34667b15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cf94597f6-kjgvp" Oct 24 12:59:10.766400 kubelet[2771]: E1024 12:59:10.766348 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cf94597f6-kjgvp_calico-system(c2c88598-5e4d-4913-b78b-8bca3cd87835)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7cf94597f6-kjgvp_calico-system(c2c88598-5e4d-4913-b78b-8bca3cd87835)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ad910afcee8348181bafaa7744a42a4fc7a78922da3f3aafeadacca34667b15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-7cf94597f6-kjgvp" podUID="c2c88598-5e4d-4913-b78b-8bca3cd87835" Oct 24 12:59:11.084759 systemd[1]: Created slice kubepods-besteffort-poda5ee21c4_521a_4300_831d_bec9b2d7f45e.slice - libcontainer container kubepods-besteffort-poda5ee21c4_521a_4300_831d_bec9b2d7f45e.slice. Oct 24 12:59:11.088096 containerd[1596]: time="2025-10-24T12:59:11.088052486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s2tsg,Uid:a5ee21c4-521a-4300-831d-bec9b2d7f45e,Namespace:calico-system,Attempt:0,}" Oct 24 12:59:11.152811 containerd[1596]: time="2025-10-24T12:59:11.152726911Z" level=error msg="Failed to destroy network for sandbox \"934f15707dfba2d8242dd244def2941e73e329c1d61d5b6dcf06d567033224dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 24 12:59:11.154446 containerd[1596]: time="2025-10-24T12:59:11.154369390Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s2tsg,Uid:a5ee21c4-521a-4300-831d-bec9b2d7f45e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"934f15707dfba2d8242dd244def2941e73e329c1d61d5b6dcf06d567033224dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 24 12:59:11.156140 kubelet[2771]: E1024 12:59:11.155933 2771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"934f15707dfba2d8242dd244def2941e73e329c1d61d5b6dcf06d567033224dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 24 12:59:11.156140 kubelet[2771]: E1024 12:59:11.155985 2771 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"934f15707dfba2d8242dd244def2941e73e329c1d61d5b6dcf06d567033224dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s2tsg" Oct 24 12:59:11.156140 kubelet[2771]: E1024 12:59:11.156004 2771 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"934f15707dfba2d8242dd244def2941e73e329c1d61d5b6dcf06d567033224dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s2tsg" Oct 24 12:59:11.158135 kubelet[2771]: E1024 12:59:11.156044 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s2tsg_calico-system(a5ee21c4-521a-4300-831d-bec9b2d7f45e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s2tsg_calico-system(a5ee21c4-521a-4300-831d-bec9b2d7f45e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"934f15707dfba2d8242dd244def2941e73e329c1d61d5b6dcf06d567033224dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s2tsg" podUID="a5ee21c4-521a-4300-831d-bec9b2d7f45e" Oct 24 12:59:11.157582 systemd[1]: run-netns-cni\x2d2257b38c\x2d54fc\x2df086\x2d1ed6\x2d6879a8734cfc.mount: Deactivated successfully. Oct 24 12:59:11.161952 kubelet[2771]: E1024 12:59:11.161899 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:59:11.162828 containerd[1596]: time="2025-10-24T12:59:11.162786496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 24 12:59:18.207039 systemd[1]: Started sshd@9-10.0.0.145:22-10.0.0.1:59996.service - OpenSSH per-connection server daemon (10.0.0.1:59996). Oct 24 12:59:18.284654 sshd[3885]: Accepted publickey for core from 10.0.0.1 port 59996 ssh2: RSA SHA256:5QnmBGiMs+NgtP00jEHzaQJgePAYLYRQ869UILbwcj0 Oct 24 12:59:18.286681 sshd-session[3885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 24 12:59:18.292899 systemd-logind[1573]: New session 10 of user core. Oct 24 12:59:18.302772 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 24 12:59:18.442624 sshd[3888]: Connection closed by 10.0.0.1 port 59996 Oct 24 12:59:18.441453 sshd-session[3885]: pam_unix(sshd:session): session closed for user core Oct 24 12:59:18.447290 systemd[1]: sshd@9-10.0.0.145:22-10.0.0.1:59996.service: Deactivated successfully. Oct 24 12:59:18.450059 systemd[1]: session-10.scope: Deactivated successfully. Oct 24 12:59:18.451047 systemd-logind[1573]: Session 10 logged out. Waiting for processes to exit. Oct 24 12:59:18.452366 systemd-logind[1573]: Removed session 10. Oct 24 12:59:19.405453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount458881547.mount: Deactivated successfully. 
Oct 24 12:59:20.381512 containerd[1596]: time="2025-10-24T12:59:20.381445419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:59:20.382282 containerd[1596]: time="2025-10-24T12:59:20.382256721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 24 12:59:20.383501 containerd[1596]: time="2025-10-24T12:59:20.383439428Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:59:20.385341 containerd[1596]: time="2025-10-24T12:59:20.385307110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 24 12:59:20.386017 containerd[1596]: time="2025-10-24T12:59:20.385973020Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 9.223143774s" Oct 24 12:59:20.386017 containerd[1596]: time="2025-10-24T12:59:20.386010640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 24 12:59:20.397263 containerd[1596]: time="2025-10-24T12:59:20.397212816Z" level=info msg="CreateContainer within sandbox \"3254790de9ceed68e48c5e82b8c69d02a79d8f5424bfb67aca9573b391f47b4e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 24 12:59:20.414345 containerd[1596]: time="2025-10-24T12:59:20.414302252Z" level=info msg="Container 1bb8c3d10ff123d3fb374cbda75bf75dfbf9695059fb87524eb6a09cd848da24: CDI devices from CRI Config.CDIDevices: []" Oct 24 12:59:20.424489 containerd[1596]: time="2025-10-24T12:59:20.424450421Z" level=info msg="CreateContainer within sandbox \"3254790de9ceed68e48c5e82b8c69d02a79d8f5424bfb67aca9573b391f47b4e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1bb8c3d10ff123d3fb374cbda75bf75dfbf9695059fb87524eb6a09cd848da24\"" Oct 24 12:59:20.424977 containerd[1596]: time="2025-10-24T12:59:20.424941962Z" level=info msg="StartContainer for \"1bb8c3d10ff123d3fb374cbda75bf75dfbf9695059fb87524eb6a09cd848da24\"" Oct 24 12:59:20.426388 containerd[1596]: time="2025-10-24T12:59:20.426361515Z" level=info msg="connecting to shim 1bb8c3d10ff123d3fb374cbda75bf75dfbf9695059fb87524eb6a09cd848da24" address="unix:///run/containerd/s/a58ea5c8a1520a48b87acd3f907f2f2b2fc8a7924855c543fdd1167c2c535c00" protocol=ttrpc version=3 Oct 24 12:59:20.449755 systemd[1]: Started cri-containerd-1bb8c3d10ff123d3fb374cbda75bf75dfbf9695059fb87524eb6a09cd848da24.scope - libcontainer container 1bb8c3d10ff123d3fb374cbda75bf75dfbf9695059fb87524eb6a09cd848da24. Oct 24 12:59:20.506178 containerd[1596]: time="2025-10-24T12:59:20.506113173Z" level=info msg="StartContainer for \"1bb8c3d10ff123d3fb374cbda75bf75dfbf9695059fb87524eb6a09cd848da24\" returns successfully" Oct 24 12:59:20.575444 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 24 12:59:20.575581 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Oct 24 12:59:20.753326 kubelet[2771]: I1024 12:59:20.753178 2771 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q496v\" (UniqueName: \"kubernetes.io/projected/f677db58-877c-40d7-99b3-8d17842e90a1-kube-api-access-q496v\") pod \"f677db58-877c-40d7-99b3-8d17842e90a1\" (UID: \"f677db58-877c-40d7-99b3-8d17842e90a1\") " Oct 24 12:59:20.753326 kubelet[2771]: I1024 12:59:20.753230 2771 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f677db58-877c-40d7-99b3-8d17842e90a1-whisker-backend-key-pair\") pod \"f677db58-877c-40d7-99b3-8d17842e90a1\" (UID: \"f677db58-877c-40d7-99b3-8d17842e90a1\") " Oct 24 12:59:20.753326 kubelet[2771]: I1024 12:59:20.753256 2771 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f677db58-877c-40d7-99b3-8d17842e90a1-whisker-ca-bundle\") pod \"f677db58-877c-40d7-99b3-8d17842e90a1\" (UID: \"f677db58-877c-40d7-99b3-8d17842e90a1\") " Oct 24 12:59:20.754382 kubelet[2771]: I1024 12:59:20.754356 2771 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f677db58-877c-40d7-99b3-8d17842e90a1-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f677db58-877c-40d7-99b3-8d17842e90a1" (UID: "f677db58-877c-40d7-99b3-8d17842e90a1"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 24 12:59:20.758670 kubelet[2771]: I1024 12:59:20.758635 2771 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f677db58-877c-40d7-99b3-8d17842e90a1-kube-api-access-q496v" (OuterVolumeSpecName: "kube-api-access-q496v") pod "f677db58-877c-40d7-99b3-8d17842e90a1" (UID: "f677db58-877c-40d7-99b3-8d17842e90a1"). InnerVolumeSpecName "kube-api-access-q496v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 24 12:59:20.762331 systemd[1]: var-lib-kubelet-pods-f677db58\x2d877c\x2d40d7\x2d99b3\x2d8d17842e90a1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq496v.mount: Deactivated successfully. Oct 24 12:59:20.762459 systemd[1]: var-lib-kubelet-pods-f677db58\x2d877c\x2d40d7\x2d99b3\x2d8d17842e90a1-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 24 12:59:20.763474 kubelet[2771]: I1024 12:59:20.763437 2771 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f677db58-877c-40d7-99b3-8d17842e90a1-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f677db58-877c-40d7-99b3-8d17842e90a1" (UID: "f677db58-877c-40d7-99b3-8d17842e90a1"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 24 12:59:20.854269 kubelet[2771]: I1024 12:59:20.854209 2771 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q496v\" (UniqueName: \"kubernetes.io/projected/f677db58-877c-40d7-99b3-8d17842e90a1-kube-api-access-q496v\") on node \"localhost\" DevicePath \"\"" Oct 24 12:59:20.854269 kubelet[2771]: I1024 12:59:20.854251 2771 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f677db58-877c-40d7-99b3-8d17842e90a1-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 24 12:59:20.854269 kubelet[2771]: I1024 12:59:20.854261 2771 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f677db58-877c-40d7-99b3-8d17842e90a1-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 24 12:59:21.078012 containerd[1596]: time="2025-10-24T12:59:21.077894954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cf94597f6-kjgvp,Uid:c2c88598-5e4d-4913-b78b-8bca3cd87835,Namespace:calico-system,Attempt:0,}" Oct 24 12:59:21.192625 kubelet[2771]: E1024 12:59:21.191856 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:59:21.196143 systemd[1]: Removed slice kubepods-besteffort-podf677db58_877c_40d7_99b3_8d17842e90a1.slice - libcontainer container kubepods-besteffort-podf677db58_877c_40d7_99b3_8d17842e90a1.slice. Oct 24 12:59:21.206485 kubelet[2771]: I1024 12:59:21.206415 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4fs67" podStartSLOduration=1.7998273729999998 podStartE2EDuration="22.206387148s" podCreationTimestamp="2025-10-24 12:58:59 +0000 UTC" firstStartedPulling="2025-10-24 12:58:59.980196904 +0000 UTC m=+20.007985092" lastFinishedPulling="2025-10-24 12:59:20.386756679 +0000 UTC m=+40.414544867" observedRunningTime="2025-10-24 12:59:21.206060736 +0000 UTC m=+41.233848924" watchObservedRunningTime="2025-10-24 12:59:21.206387148 +0000 UTC m=+41.234175326" Oct 24 12:59:21.235649 systemd-networkd[1494]: calie4634e4a62f: Link UP Oct 24 12:59:21.236300 systemd-networkd[1494]: calie4634e4a62f: Gained carrier Oct 24 12:59:21.260626 kubelet[2771]: I1024 12:59:21.260276 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/84b48c97-55ac-463f-8b03-ea5f0339b62a-whisker-backend-key-pair\") pod \"whisker-548f66845d-v54tb\" (UID: \"84b48c97-55ac-463f-8b03-ea5f0339b62a\") " pod="calico-system/whisker-548f66845d-v54tb" Oct 24 12:59:21.260626 kubelet[2771]: I1024 12:59:21.260331 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jn4g\" (UniqueName: \"kubernetes.io/projected/84b48c97-55ac-463f-8b03-ea5f0339b62a-kube-api-access-6jn4g\") pod \"whisker-548f66845d-v54tb\" (UID: \"84b48c97-55ac-463f-8b03-ea5f0339b62a\") " pod="calico-system/whisker-548f66845d-v54tb" Oct 24 12:59:21.260626 kubelet[2771]: I1024 12:59:21.260370 2771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84b48c97-55ac-463f-8b03-ea5f0339b62a-whisker-ca-bundle\") pod \"whisker-548f66845d-v54tb\" (UID: 
\"84b48c97-55ac-463f-8b03-ea5f0339b62a\") " pod="calico-system/whisker-548f66845d-v54tb" Oct 24 12:59:21.260937 containerd[1596]: 2025-10-24 12:59:21.100 [INFO][3967] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 24 12:59:21.260937 containerd[1596]: 2025-10-24 12:59:21.115 [INFO][3967] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7cf94597f6--kjgvp-eth0 calico-kube-controllers-7cf94597f6- calico-system c2c88598-5e4d-4913-b78b-8bca3cd87835 881 0 2025-10-24 12:58:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7cf94597f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7cf94597f6-kjgvp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie4634e4a62f [] [] }} ContainerID="9476a648cb94c6f41c4b241f1883f8a128f86998ba490d7eeb7295eb7e18b259" Namespace="calico-system" Pod="calico-kube-controllers-7cf94597f6-kjgvp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf94597f6--kjgvp-" Oct 24 12:59:21.260937 containerd[1596]: 2025-10-24 12:59:21.115 [INFO][3967] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9476a648cb94c6f41c4b241f1883f8a128f86998ba490d7eeb7295eb7e18b259" Namespace="calico-system" Pod="calico-kube-controllers-7cf94597f6-kjgvp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf94597f6--kjgvp-eth0" Oct 24 12:59:21.260937 containerd[1596]: 2025-10-24 12:59:21.177 [INFO][3981] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9476a648cb94c6f41c4b241f1883f8a128f86998ba490d7eeb7295eb7e18b259" HandleID="k8s-pod-network.9476a648cb94c6f41c4b241f1883f8a128f86998ba490d7eeb7295eb7e18b259" Workload="localhost-k8s-calico--kube--controllers--7cf94597f6--kjgvp-eth0" Oct 24 12:59:21.261106 containerd[1596]: 2025-10-24 12:59:21.178 [INFO][3981] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9476a648cb94c6f41c4b241f1883f8a128f86998ba490d7eeb7295eb7e18b259" HandleID="k8s-pod-network.9476a648cb94c6f41c4b241f1883f8a128f86998ba490d7eeb7295eb7e18b259" Workload="localhost-k8s-calico--kube--controllers--7cf94597f6--kjgvp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000413cd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7cf94597f6-kjgvp", "timestamp":"2025-10-24 12:59:21.177528893 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 24 12:59:21.261106 containerd[1596]: 2025-10-24 12:59:21.178 [INFO][3981] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 24 12:59:21.261106 containerd[1596]: 2025-10-24 12:59:21.178 [INFO][3981] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 24 12:59:21.261106 containerd[1596]: 2025-10-24 12:59:21.178 [INFO][3981] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 24 12:59:21.261106 containerd[1596]: 2025-10-24 12:59:21.185 [INFO][3981] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9476a648cb94c6f41c4b241f1883f8a128f86998ba490d7eeb7295eb7e18b259" host="localhost" Oct 24 12:59:21.261106 containerd[1596]: 2025-10-24 12:59:21.190 [INFO][3981] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 24 12:59:21.261106 containerd[1596]: 2025-10-24 12:59:21.195 [INFO][3981] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 24 12:59:21.261106 containerd[1596]: 2025-10-24 12:59:21.201 [INFO][3981] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 24 12:59:21.261106 containerd[1596]: 2025-10-24 12:59:21.207 [INFO][3981] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 24 12:59:21.261106 containerd[1596]: 2025-10-24 12:59:21.207 [INFO][3981] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9476a648cb94c6f41c4b241f1883f8a128f86998ba490d7eeb7295eb7e18b259" host="localhost" Oct 24 12:59:21.261313 containerd[1596]: 2025-10-24 12:59:21.210 [INFO][3981] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9476a648cb94c6f41c4b241f1883f8a128f86998ba490d7eeb7295eb7e18b259 Oct 24 12:59:21.261313 containerd[1596]: 2025-10-24 12:59:21.215 [INFO][3981] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9476a648cb94c6f41c4b241f1883f8a128f86998ba490d7eeb7295eb7e18b259" host="localhost" Oct 24 12:59:21.261313 containerd[1596]: 2025-10-24 12:59:21.222 [INFO][3981] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9476a648cb94c6f41c4b241f1883f8a128f86998ba490d7eeb7295eb7e18b259" host="localhost" Oct 24 12:59:21.261313 containerd[1596]: 2025-10-24 12:59:21.222 [INFO][3981] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9476a648cb94c6f41c4b241f1883f8a128f86998ba490d7eeb7295eb7e18b259" host="localhost" Oct 24 12:59:21.261313 containerd[1596]: 2025-10-24 12:59:21.223 [INFO][3981] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 24 12:59:21.261313 containerd[1596]: 2025-10-24 12:59:21.223 [INFO][3981] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9476a648cb94c6f41c4b241f1883f8a128f86998ba490d7eeb7295eb7e18b259" HandleID="k8s-pod-network.9476a648cb94c6f41c4b241f1883f8a128f86998ba490d7eeb7295eb7e18b259" Workload="localhost-k8s-calico--kube--controllers--7cf94597f6--kjgvp-eth0" Oct 24 12:59:21.261438 containerd[1596]: 2025-10-24 12:59:21.227 [INFO][3967] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9476a648cb94c6f41c4b241f1883f8a128f86998ba490d7eeb7295eb7e18b259" Namespace="calico-system" Pod="calico-kube-controllers-7cf94597f6-kjgvp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf94597f6--kjgvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cf94597f6--kjgvp-eth0", GenerateName:"calico-kube-controllers-7cf94597f6-", Namespace:"calico-system", SelfLink:"", UID:"c2c88598-5e4d-4913-b78b-8bca3cd87835", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.October, 24, 12, 58, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cf94597f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7cf94597f6-kjgvp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie4634e4a62f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 24 12:59:21.261486 containerd[1596]: 2025-10-24 12:59:21.227 [INFO][3967] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9476a648cb94c6f41c4b241f1883f8a128f86998ba490d7eeb7295eb7e18b259" Namespace="calico-system" Pod="calico-kube-controllers-7cf94597f6-kjgvp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf94597f6--kjgvp-eth0" Oct 24 12:59:21.261486 containerd[1596]: 2025-10-24 12:59:21.227 [INFO][3967] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4634e4a62f ContainerID="9476a648cb94c6f41c4b241f1883f8a128f86998ba490d7eeb7295eb7e18b259" Namespace="calico-system" Pod="calico-kube-controllers-7cf94597f6-kjgvp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf94597f6--kjgvp-eth0" Oct 24 12:59:21.261486 containerd[1596]: 2025-10-24 12:59:21.240 [INFO][3967] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9476a648cb94c6f41c4b241f1883f8a128f86998ba490d7eeb7295eb7e18b259" Namespace="calico-system" Pod="calico-kube-controllers-7cf94597f6-kjgvp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf94597f6--kjgvp-eth0" Oct 24 12:59:21.261562 containerd[1596]: 2025-10-24 12:59:21.240 [INFO][3967] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="9476a648cb94c6f41c4b241f1883f8a128f86998ba490d7eeb7295eb7e18b259" Namespace="calico-system" Pod="calico-kube-controllers-7cf94597f6-kjgvp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf94597f6--kjgvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cf94597f6--kjgvp-eth0", GenerateName:"calico-kube-controllers-7cf94597f6-", Namespace:"calico-system", SelfLink:"", UID:"c2c88598-5e4d-4913-b78b-8bca3cd87835", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.October, 24, 12, 58, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cf94597f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9476a648cb94c6f41c4b241f1883f8a128f86998ba490d7eeb7295eb7e18b259", Pod:"calico-kube-controllers-7cf94597f6-kjgvp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie4634e4a62f", MAC:"12:43:fc:28:8a:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 24 12:59:21.262070 containerd[1596]: 2025-10-24 12:59:21.255 [INFO][3967] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9476a648cb94c6f41c4b241f1883f8a128f86998ba490d7eeb7295eb7e18b259" Namespace="calico-system" Pod="calico-kube-controllers-7cf94597f6-kjgvp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf94597f6--kjgvp-eth0" Oct 24 12:59:21.264562 systemd[1]: Created slice kubepods-besteffort-pod84b48c97_55ac_463f_8b03_ea5f0339b62a.slice - libcontainer container kubepods-besteffort-pod84b48c97_55ac_463f_8b03_ea5f0339b62a.slice. Oct 24 12:59:21.414153 containerd[1596]: time="2025-10-24T12:59:21.414079415Z" level=info msg="connecting to shim 9476a648cb94c6f41c4b241f1883f8a128f86998ba490d7eeb7295eb7e18b259" address="unix:///run/containerd/s/8bc6362fadf35ff8b1c84b49f7455e8d36b995797d27a40ac2d96b4bb90e2fb3" namespace=k8s.io protocol=ttrpc version=3 Oct 24 12:59:21.444743 systemd[1]: Started cri-containerd-9476a648cb94c6f41c4b241f1883f8a128f86998ba490d7eeb7295eb7e18b259.scope - libcontainer container 9476a648cb94c6f41c4b241f1883f8a128f86998ba490d7eeb7295eb7e18b259. 
Oct 24 12:59:21.457103 systemd-resolved[1295]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 24 12:59:21.490339 containerd[1596]: time="2025-10-24T12:59:21.490226123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cf94597f6-kjgvp,Uid:c2c88598-5e4d-4913-b78b-8bca3cd87835,Namespace:calico-system,Attempt:0,} returns sandbox id \"9476a648cb94c6f41c4b241f1883f8a128f86998ba490d7eeb7295eb7e18b259\"" Oct 24 12:59:21.494923 containerd[1596]: time="2025-10-24T12:59:21.494886090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 24 12:59:21.568318 containerd[1596]: time="2025-10-24T12:59:21.568278063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-548f66845d-v54tb,Uid:84b48c97-55ac-463f-8b03-ea5f0339b62a,Namespace:calico-system,Attempt:0,}" Oct 24 12:59:21.660389 systemd-networkd[1494]: cali3517d84f232: Link UP Oct 24 12:59:21.661097 systemd-networkd[1494]: cali3517d84f232: Gained carrier Oct 24 12:59:21.672266 containerd[1596]: 2025-10-24 12:59:21.591 [INFO][4043] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 24 12:59:21.672266 containerd[1596]: 2025-10-24 12:59:21.601 [INFO][4043] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--548f66845d--v54tb-eth0 whisker-548f66845d- calico-system 84b48c97-55ac-463f-8b03-ea5f0339b62a 1003 0 2025-10-24 12:59:21 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:548f66845d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-548f66845d-v54tb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3517d84f232 [] [] }} ContainerID="6bb19d20b2eaf7389555bb44a9114dbe7ceca6dfaf4cf74c48ea0279f2f800c3" Namespace="calico-system" Pod="whisker-548f66845d-v54tb" WorkloadEndpoint="localhost-k8s-whisker--548f66845d--v54tb-" Oct 24 12:59:21.672266 containerd[1596]: 2025-10-24 12:59:21.601 [INFO][4043] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6bb19d20b2eaf7389555bb44a9114dbe7ceca6dfaf4cf74c48ea0279f2f800c3" Namespace="calico-system" Pod="whisker-548f66845d-v54tb" WorkloadEndpoint="localhost-k8s-whisker--548f66845d--v54tb-eth0" Oct 24 12:59:21.672266 containerd[1596]: 2025-10-24 12:59:21.627 [INFO][4058] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6bb19d20b2eaf7389555bb44a9114dbe7ceca6dfaf4cf74c48ea0279f2f800c3" HandleID="k8s-pod-network.6bb19d20b2eaf7389555bb44a9114dbe7ceca6dfaf4cf74c48ea0279f2f800c3" Workload="localhost-k8s-whisker--548f66845d--v54tb-eth0" Oct 24 12:59:21.672467 containerd[1596]: 2025-10-24 12:59:21.627 [INFO][4058] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6bb19d20b2eaf7389555bb44a9114dbe7ceca6dfaf4cf74c48ea0279f2f800c3" HandleID="k8s-pod-network.6bb19d20b2eaf7389555bb44a9114dbe7ceca6dfaf4cf74c48ea0279f2f800c3" Workload="localhost-k8s-whisker--548f66845d--v54tb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df690), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-548f66845d-v54tb", "timestamp":"2025-10-24 12:59:21.627111495 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 24 12:59:21.672467 containerd[1596]: 2025-10-24 12:59:21.627 [INFO][4058] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 24 12:59:21.672467 containerd[1596]: 2025-10-24 12:59:21.627 [INFO][4058] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 24 12:59:21.672467 containerd[1596]: 2025-10-24 12:59:21.627 [INFO][4058] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 24 12:59:21.672467 containerd[1596]: 2025-10-24 12:59:21.634 [INFO][4058] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6bb19d20b2eaf7389555bb44a9114dbe7ceca6dfaf4cf74c48ea0279f2f800c3" host="localhost" Oct 24 12:59:21.672467 containerd[1596]: 2025-10-24 12:59:21.638 [INFO][4058] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 24 12:59:21.672467 containerd[1596]: 2025-10-24 12:59:21.642 [INFO][4058] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 24 12:59:21.672467 containerd[1596]: 2025-10-24 12:59:21.643 [INFO][4058] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 24 12:59:21.672467 containerd[1596]: 2025-10-24 12:59:21.645 [INFO][4058] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 24 12:59:21.672467 containerd[1596]: 2025-10-24 12:59:21.645 [INFO][4058] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6bb19d20b2eaf7389555bb44a9114dbe7ceca6dfaf4cf74c48ea0279f2f800c3" host="localhost" Oct 24 12:59:21.672873 containerd[1596]: 2025-10-24 12:59:21.646 [INFO][4058] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6bb19d20b2eaf7389555bb44a9114dbe7ceca6dfaf4cf74c48ea0279f2f800c3 Oct 24 12:59:21.672873 containerd[1596]: 2025-10-24 12:59:21.651 [INFO][4058] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6bb19d20b2eaf7389555bb44a9114dbe7ceca6dfaf4cf74c48ea0279f2f800c3" host="localhost" Oct 24 12:59:21.672873 containerd[1596]: 2025-10-24 12:59:21.655 [INFO][4058] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.6bb19d20b2eaf7389555bb44a9114dbe7ceca6dfaf4cf74c48ea0279f2f800c3" host="localhost" Oct 24 12:59:21.672873 containerd[1596]: 2025-10-24 12:59:21.655 [INFO][4058] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6bb19d20b2eaf7389555bb44a9114dbe7ceca6dfaf4cf74c48ea0279f2f800c3" host="localhost" Oct 24 12:59:21.672873 containerd[1596]: 2025-10-24 12:59:21.655 [INFO][4058] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 24 12:59:21.672873 containerd[1596]: 2025-10-24 12:59:21.655 [INFO][4058] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6bb19d20b2eaf7389555bb44a9114dbe7ceca6dfaf4cf74c48ea0279f2f800c3" HandleID="k8s-pod-network.6bb19d20b2eaf7389555bb44a9114dbe7ceca6dfaf4cf74c48ea0279f2f800c3" Workload="localhost-k8s-whisker--548f66845d--v54tb-eth0" Oct 24 12:59:21.672991 containerd[1596]: 2025-10-24 12:59:21.658 [INFO][4043] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6bb19d20b2eaf7389555bb44a9114dbe7ceca6dfaf4cf74c48ea0279f2f800c3" Namespace="calico-system" Pod="whisker-548f66845d-v54tb" WorkloadEndpoint="localhost-k8s-whisker--548f66845d--v54tb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--548f66845d--v54tb-eth0", GenerateName:"whisker-548f66845d-", Namespace:"calico-system", SelfLink:"", UID:"84b48c97-55ac-463f-8b03-ea5f0339b62a", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.October, 24, 12, 59, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"548f66845d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-548f66845d-v54tb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3517d84f232", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 24 12:59:21.672991 containerd[1596]: 2025-10-24 12:59:21.658 [INFO][4043] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6bb19d20b2eaf7389555bb44a9114dbe7ceca6dfaf4cf74c48ea0279f2f800c3" Namespace="calico-system" Pod="whisker-548f66845d-v54tb" WorkloadEndpoint="localhost-k8s-whisker--548f66845d--v54tb-eth0" Oct 24 12:59:21.673070 containerd[1596]: 2025-10-24 12:59:21.658 [INFO][4043] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3517d84f232 ContainerID="6bb19d20b2eaf7389555bb44a9114dbe7ceca6dfaf4cf74c48ea0279f2f800c3" Namespace="calico-system" Pod="whisker-548f66845d-v54tb" WorkloadEndpoint="localhost-k8s-whisker--548f66845d--v54tb-eth0" Oct 24 12:59:21.673070 containerd[1596]: 2025-10-24 12:59:21.661 [INFO][4043] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6bb19d20b2eaf7389555bb44a9114dbe7ceca6dfaf4cf74c48ea0279f2f800c3" Namespace="calico-system" Pod="whisker-548f66845d-v54tb" WorkloadEndpoint="localhost-k8s-whisker--548f66845d--v54tb-eth0" Oct 24 12:59:21.673111 containerd[1596]: 2025-10-24 12:59:21.661 [INFO][4043] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6bb19d20b2eaf7389555bb44a9114dbe7ceca6dfaf4cf74c48ea0279f2f800c3" Namespace="calico-system" Pod="whisker-548f66845d-v54tb" WorkloadEndpoint="localhost-k8s-whisker--548f66845d--v54tb-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--548f66845d--v54tb-eth0", GenerateName:"whisker-548f66845d-", Namespace:"calico-system", SelfLink:"", UID:"84b48c97-55ac-463f-8b03-ea5f0339b62a", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.October, 24, 12, 59, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"548f66845d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6bb19d20b2eaf7389555bb44a9114dbe7ceca6dfaf4cf74c48ea0279f2f800c3", Pod:"whisker-548f66845d-v54tb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3517d84f232", MAC:"72:4a:88:9a:b0:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 24 12:59:21.673162 containerd[1596]: 2025-10-24 12:59:21.668 [INFO][4043] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6bb19d20b2eaf7389555bb44a9114dbe7ceca6dfaf4cf74c48ea0279f2f800c3" Namespace="calico-system" Pod="whisker-548f66845d-v54tb" WorkloadEndpoint="localhost-k8s-whisker--548f66845d--v54tb-eth0" Oct 24 12:59:21.692310 containerd[1596]: time="2025-10-24T12:59:21.692255160Z" level=info msg="connecting to shim 6bb19d20b2eaf7389555bb44a9114dbe7ceca6dfaf4cf74c48ea0279f2f800c3" address="unix:///run/containerd/s/67816218240cd8d3716aeee25aba89ddd1e330c983093d3b12a196934194f7c2" namespace=k8s.io protocol=ttrpc version=3 Oct 24 12:59:21.724722 systemd[1]: Started cri-containerd-6bb19d20b2eaf7389555bb44a9114dbe7ceca6dfaf4cf74c48ea0279f2f800c3.scope - libcontainer container 6bb19d20b2eaf7389555bb44a9114dbe7ceca6dfaf4cf74c48ea0279f2f800c3. 
Oct 24 12:59:21.736910 systemd-resolved[1295]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 24 12:59:21.766276 containerd[1596]: time="2025-10-24T12:59:21.766229734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-548f66845d-v54tb,Uid:84b48c97-55ac-463f-8b03-ea5f0339b62a,Namespace:calico-system,Attempt:0,} returns sandbox id \"6bb19d20b2eaf7389555bb44a9114dbe7ceca6dfaf4cf74c48ea0279f2f800c3\"" Oct 24 12:59:21.857110 containerd[1596]: time="2025-10-24T12:59:21.857063282Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 24 12:59:21.862609 containerd[1596]: time="2025-10-24T12:59:21.859094291Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 24 12:59:21.879613 containerd[1596]: time="2025-10-24T12:59:21.879548436Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 24 12:59:21.880072 kubelet[2771]: E1024 12:59:21.880003 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 24 12:59:21.880405 kubelet[2771]: E1024 12:59:21.880090 2771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 24 12:59:21.882629 containerd[1596]: time="2025-10-24T12:59:21.881162292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 24 12:59:21.916512 kubelet[2771]: E1024 12:59:21.916382 2771 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bkjtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cf94597f6-kjgvp_calico-system(c2c88598-5e4d-4913-b78b-8bca3cd87835): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 24 12:59:21.917954 kubelet[2771]: E1024 12:59:21.917895 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cf94597f6-kjgvp" podUID="c2c88598-5e4d-4913-b78b-8bca3cd87835" Oct 24 12:59:22.078536 kubelet[2771]: E1024 12:59:22.077790 2771 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:59:22.078687 containerd[1596]: time="2025-10-24T12:59:22.078074644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mbvsm,Uid:0234b6fc-2fab-4065-b52e-c071b4ca25f6,Namespace:calico-system,Attempt:0,}" Oct 24 12:59:22.079313 containerd[1596]: time="2025-10-24T12:59:22.078922915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f84755f96-kq7xf,Uid:1df758c2-0a15-4c25-ae50-92f07922f451,Namespace:calico-apiserver,Attempt:0,}" Oct 24 12:59:22.079313 containerd[1596]: time="2025-10-24T12:59:22.079085970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8rfp9,Uid:a76d5dfa-5b19-40ce-bd28-2e0cacf0ca27,Namespace:kube-system,Attempt:0,}" Oct 24 12:59:22.080692 kubelet[2771]: I1024 12:59:22.080637 2771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f677db58-877c-40d7-99b3-8d17842e90a1" path="/var/lib/kubelet/pods/f677db58-877c-40d7-99b3-8d17842e90a1/volumes" Oct 24 12:59:22.194831 kubelet[2771]: E1024 12:59:22.194785 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cf94597f6-kjgvp" podUID="c2c88598-5e4d-4913-b78b-8bca3cd87835" Oct 24 12:59:22.297522 containerd[1596]: time="2025-10-24T12:59:22.297456369Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 24 12:59:22.299512 containerd[1596]: time="2025-10-24T12:59:22.299199858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 24 12:59:22.299568 containerd[1596]: time="2025-10-24T12:59:22.299438415Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 24 12:59:22.299856 kubelet[2771]: E1024 12:59:22.299805 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 24 12:59:22.299917 kubelet[2771]: E1024 12:59:22.299862 2771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 24 12:59:22.300036 kubelet[2771]: E1024 12:59:22.299985 2771 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f0d78bab2caa44dca4b33359ea14bbbb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6jn4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-548f66845d-v54tb_calico-system(84b48c97-55ac-463f-8b03-ea5f0339b62a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 24 12:59:22.303436 containerd[1596]: time="2025-10-24T12:59:22.303167919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 24 12:59:22.383584 systemd-networkd[1494]: vxlan.calico: Link UP Oct 24 12:59:22.383979 systemd-networkd[1494]: vxlan.calico: Gained carrier Oct 24 12:59:22.403088 systemd-networkd[1494]: cali5dd2c3ca93f: Link UP Oct 24 12:59:22.404917 systemd-networkd[1494]: cali5dd2c3ca93f: Gained carrier Oct 24 12:59:22.428819 containerd[1596]: 2025-10-24 12:59:22.305 [INFO][4261] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--8rfp9-eth0 coredns-668d6bf9bc- kube-system a76d5dfa-5b19-40ce-bd28-2e0cacf0ca27 882 0 2025-10-24 12:58:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-8rfp9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5dd2c3ca93f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8rfp9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8rfp9-" Oct 24 12:59:22.428819 containerd[1596]: 2025-10-24 12:59:22.305 [INFO][4261] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8rfp9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8rfp9-eth0" Oct 24 12:59:22.428819 containerd[1596]: 
2025-10-24 12:59:22.340 [INFO][4321] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c" HandleID="k8s-pod-network.79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c" Workload="localhost-k8s-coredns--668d6bf9bc--8rfp9-eth0" Oct 24 12:59:22.429300 containerd[1596]: 2025-10-24 12:59:22.341 [INFO][4321] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c" HandleID="k8s-pod-network.79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c" Workload="localhost-k8s-coredns--668d6bf9bc--8rfp9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f5f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-8rfp9", "timestamp":"2025-10-24 12:59:22.340529308 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 24 12:59:22.429300 containerd[1596]: 2025-10-24 12:59:22.342 [INFO][4321] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 24 12:59:22.429300 containerd[1596]: 2025-10-24 12:59:22.342 [INFO][4321] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 24 12:59:22.429300 containerd[1596]: 2025-10-24 12:59:22.342 [INFO][4321] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 24 12:59:22.429300 containerd[1596]: 2025-10-24 12:59:22.356 [INFO][4321] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c" host="localhost" Oct 24 12:59:22.429300 containerd[1596]: 2025-10-24 12:59:22.364 [INFO][4321] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 24 12:59:22.429300 containerd[1596]: 2025-10-24 12:59:22.370 [INFO][4321] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 24 12:59:22.429300 containerd[1596]: 2025-10-24 12:59:22.373 [INFO][4321] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 24 12:59:22.429300 containerd[1596]: 2025-10-24 12:59:22.376 [INFO][4321] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 24 12:59:22.429300 containerd[1596]: 2025-10-24 12:59:22.376 [INFO][4321] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c" host="localhost" Oct 24 12:59:22.429570 containerd[1596]: 2025-10-24 12:59:22.377 [INFO][4321] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c Oct 24 12:59:22.429570 containerd[1596]: 2025-10-24 12:59:22.382 [INFO][4321] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c" host="localhost" Oct 24 12:59:22.429570 containerd[1596]: 2025-10-24 12:59:22.390 [INFO][4321] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c" host="localhost" Oct 24 12:59:22.429570 containerd[1596]: 2025-10-24 12:59:22.390 [INFO][4321] 
ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c" host="localhost" Oct 24 12:59:22.429570 containerd[1596]: 2025-10-24 12:59:22.393 [INFO][4321] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 24 12:59:22.429570 containerd[1596]: 2025-10-24 12:59:22.393 [INFO][4321] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c" HandleID="k8s-pod-network.79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c" Workload="localhost-k8s-coredns--668d6bf9bc--8rfp9-eth0" Oct 24 12:59:22.429760 containerd[1596]: 2025-10-24 12:59:22.400 [INFO][4261] cni-plugin/k8s.go 418: Populated endpoint ContainerID="79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8rfp9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8rfp9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--8rfp9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a76d5dfa-5b19-40ce-bd28-2e0cacf0ca27", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.October, 24, 12, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-8rfp9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5dd2c3ca93f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 24 12:59:22.429834 containerd[1596]: 2025-10-24 12:59:22.400 [INFO][4261] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8rfp9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8rfp9-eth0" Oct 24 12:59:22.429834 containerd[1596]: 2025-10-24 12:59:22.400 [INFO][4261] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5dd2c3ca93f ContainerID="79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8rfp9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8rfp9-eth0" Oct 24 12:59:22.429834 containerd[1596]: 2025-10-24 
12:59:22.407 [INFO][4261] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8rfp9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8rfp9-eth0" Oct 24 12:59:22.429902 containerd[1596]: 2025-10-24 12:59:22.408 [INFO][4261] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8rfp9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8rfp9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--8rfp9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a76d5dfa-5b19-40ce-bd28-2e0cacf0ca27", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.October, 24, 12, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c", Pod:"coredns-668d6bf9bc-8rfp9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5dd2c3ca93f", MAC:"2e:fa:3b:4e:ba:56", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 24 12:59:22.429902 containerd[1596]: 2025-10-24 12:59:22.419 [INFO][4261] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8rfp9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8rfp9-eth0" Oct 24 12:59:22.458157 containerd[1596]: time="2025-10-24T12:59:22.458064053Z" level=info msg="connecting to shim 79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c" address="unix:///run/containerd/s/f28fadfb62ab0b62a51b0452d5caa88177c6032bc3631faa27090677e1fbd7ee" namespace=k8s.io protocol=ttrpc version=3 Oct 24 12:59:22.491033 systemd[1]: Started cri-containerd-79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c.scope - libcontainer container 79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c. 
Oct 24 12:59:22.503453 systemd-networkd[1494]: calib059ba1c1a8: Link UP Oct 24 12:59:22.504757 systemd-networkd[1494]: calib059ba1c1a8: Gained carrier Oct 24 12:59:22.510984 systemd-resolved[1295]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 24 12:59:22.525201 containerd[1596]: 2025-10-24 12:59:22.273 [INFO][4248] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--mbvsm-eth0 goldmane-666569f655- calico-system 0234b6fc-2fab-4065-b52e-c071b4ca25f6 880 0 2025-10-24 12:58:58 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-mbvsm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib059ba1c1a8 [] [] }} ContainerID="951b21774c0fd9704fe28d9474d216aa44ca6277743ebe85d73ceaa9120bb017" Namespace="calico-system" Pod="goldmane-666569f655-mbvsm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mbvsm-" Oct 24 12:59:22.525201 containerd[1596]: 2025-10-24 12:59:22.273 [INFO][4248] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="951b21774c0fd9704fe28d9474d216aa44ca6277743ebe85d73ceaa9120bb017" Namespace="calico-system" Pod="goldmane-666569f655-mbvsm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mbvsm-eth0" Oct 24 12:59:22.525201 containerd[1596]: 2025-10-24 12:59:22.343 [INFO][4292] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="951b21774c0fd9704fe28d9474d216aa44ca6277743ebe85d73ceaa9120bb017" HandleID="k8s-pod-network.951b21774c0fd9704fe28d9474d216aa44ca6277743ebe85d73ceaa9120bb017" Workload="localhost-k8s-goldmane--666569f655--mbvsm-eth0" Oct 24 12:59:22.525201 containerd[1596]: 2025-10-24 12:59:22.343 [INFO][4292] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="951b21774c0fd9704fe28d9474d216aa44ca6277743ebe85d73ceaa9120bb017" HandleID="k8s-pod-network.951b21774c0fd9704fe28d9474d216aa44ca6277743ebe85d73ceaa9120bb017" Workload="localhost-k8s-goldmane--666569f655--mbvsm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb5a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-mbvsm", "timestamp":"2025-10-24 12:59:22.343264236 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 24 12:59:22.525201 containerd[1596]: 2025-10-24 12:59:22.343 [INFO][4292] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 24 12:59:22.525201 containerd[1596]: 2025-10-24 12:59:22.391 [INFO][4292] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 24 12:59:22.525201 containerd[1596]: 2025-10-24 12:59:22.392 [INFO][4292] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 24 12:59:22.525201 containerd[1596]: 2025-10-24 12:59:22.457 [INFO][4292] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.951b21774c0fd9704fe28d9474d216aa44ca6277743ebe85d73ceaa9120bb017" host="localhost" Oct 24 12:59:22.525201 containerd[1596]: 2025-10-24 12:59:22.462 [INFO][4292] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 24 12:59:22.525201 containerd[1596]: 2025-10-24 12:59:22.473 [INFO][4292] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 24 12:59:22.525201 containerd[1596]: 2025-10-24 12:59:22.475 [INFO][4292] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 24 12:59:22.525201 containerd[1596]: 2025-10-24 12:59:22.477 [INFO][4292] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 24 12:59:22.525201 containerd[1596]: 2025-10-24 12:59:22.477 [INFO][4292] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.951b21774c0fd9704fe28d9474d216aa44ca6277743ebe85d73ceaa9120bb017" host="localhost" Oct 24 12:59:22.525201 containerd[1596]: 2025-10-24 12:59:22.479 [INFO][4292] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.951b21774c0fd9704fe28d9474d216aa44ca6277743ebe85d73ceaa9120bb017 Oct 24 12:59:22.525201 containerd[1596]: 2025-10-24 12:59:22.482 [INFO][4292] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.951b21774c0fd9704fe28d9474d216aa44ca6277743ebe85d73ceaa9120bb017" host="localhost" Oct 24 12:59:22.525201 containerd[1596]: 2025-10-24 12:59:22.490 [INFO][4292] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.951b21774c0fd9704fe28d9474d216aa44ca6277743ebe85d73ceaa9120bb017" host="localhost" Oct 24 12:59:22.525201 containerd[1596]: 2025-10-24 12:59:22.490 [INFO][4292] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.951b21774c0fd9704fe28d9474d216aa44ca6277743ebe85d73ceaa9120bb017" host="localhost" Oct 24 12:59:22.525201 containerd[1596]: 2025-10-24 12:59:22.490 [INFO][4292] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
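The IPAM sequence above takes the host-wide lock, confirms the host affinity for block 192.168.88.128/26, claims 192.168.88.132 for goldmane-666569f655-mbvsm, and releases the lock. A minimal sketch of the block arithmetic with Python's standard ipaddress module (the block and address values are taken from the log entries above):

    import ipaddress

    # The block Calico loads for host "localhost" in the entries above.
    block = ipaddress.ip_network("192.168.88.128/26")

    print(block.num_addresses)      # 64 addresses in a /26
    print(block[0], block[-1])      # 192.168.88.128 192.168.88.191

    # The address claimed for the goldmane pod sits inside that block,
    # as do the earlier assignments visible in this log (e.g. 192.168.88.131).
    print(ipaddress.ip_address("192.168.88.132") in block)   # True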
Oct 24 12:59:22.525201 containerd[1596]: 2025-10-24 12:59:22.490 [INFO][4292] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="951b21774c0fd9704fe28d9474d216aa44ca6277743ebe85d73ceaa9120bb017" HandleID="k8s-pod-network.951b21774c0fd9704fe28d9474d216aa44ca6277743ebe85d73ceaa9120bb017" Workload="localhost-k8s-goldmane--666569f655--mbvsm-eth0" Oct 24 12:59:22.525864 containerd[1596]: 2025-10-24 12:59:22.496 [INFO][4248] cni-plugin/k8s.go 418: Populated endpoint ContainerID="951b21774c0fd9704fe28d9474d216aa44ca6277743ebe85d73ceaa9120bb017" Namespace="calico-system" Pod="goldmane-666569f655-mbvsm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mbvsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--mbvsm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0234b6fc-2fab-4065-b52e-c071b4ca25f6", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.October, 24, 12, 58, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-mbvsm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib059ba1c1a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 24 12:59:22.525864 containerd[1596]: 2025-10-24 12:59:22.500 [INFO][4248] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="951b21774c0fd9704fe28d9474d216aa44ca6277743ebe85d73ceaa9120bb017" Namespace="calico-system" Pod="goldmane-666569f655-mbvsm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mbvsm-eth0" Oct 24 12:59:22.525864 containerd[1596]: 2025-10-24 12:59:22.500 [INFO][4248] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib059ba1c1a8 ContainerID="951b21774c0fd9704fe28d9474d216aa44ca6277743ebe85d73ceaa9120bb017" Namespace="calico-system" Pod="goldmane-666569f655-mbvsm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mbvsm-eth0" Oct 24 12:59:22.525864 containerd[1596]: 2025-10-24 12:59:22.506 [INFO][4248] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="951b21774c0fd9704fe28d9474d216aa44ca6277743ebe85d73ceaa9120bb017" Namespace="calico-system" Pod="goldmane-666569f655-mbvsm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mbvsm-eth0" Oct 24 12:59:22.525864 containerd[1596]: 2025-10-24 12:59:22.506 [INFO][4248] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="951b21774c0fd9704fe28d9474d216aa44ca6277743ebe85d73ceaa9120bb017" Namespace="calico-system" Pod="goldmane-666569f655-mbvsm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mbvsm-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--mbvsm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0234b6fc-2fab-4065-b52e-c071b4ca25f6", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.October, 24, 12, 58, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"951b21774c0fd9704fe28d9474d216aa44ca6277743ebe85d73ceaa9120bb017", Pod:"goldmane-666569f655-mbvsm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib059ba1c1a8", MAC:"da:ce:02:1d:3a:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 24 12:59:22.525864 containerd[1596]: 2025-10-24 12:59:22.519 [INFO][4248] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="951b21774c0fd9704fe28d9474d216aa44ca6277743ebe85d73ceaa9120bb017" Namespace="calico-system" Pod="goldmane-666569f655-mbvsm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--mbvsm-eth0" Oct 24 12:59:22.557312 containerd[1596]: time="2025-10-24T12:59:22.557239883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8rfp9,Uid:a76d5dfa-5b19-40ce-bd28-2e0cacf0ca27,Namespace:kube-system,Attempt:0,} returns sandbox id \"79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c\"" Oct 24 12:59:22.558292 kubelet[2771]: E1024 12:59:22.558252 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:59:22.561142 containerd[1596]: time="2025-10-24T12:59:22.561092117Z" level=info msg="CreateContainer within sandbox \"79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 24 12:59:22.571389 containerd[1596]: time="2025-10-24T12:59:22.571327028Z" level=info msg="connecting to shim 951b21774c0fd9704fe28d9474d216aa44ca6277743ebe85d73ceaa9120bb017" address="unix:///run/containerd/s/b4ccc6e497f9d23bc2ca1ecaf8bc75cbc16636179f284a182ef85cf35157f8cb" namespace=k8s.io protocol=ttrpc version=3 Oct 24 12:59:22.583632 containerd[1596]: time="2025-10-24T12:59:22.583258001Z" level=info msg="Container d8ed8fc2499a351ec4f6db47b85d07c23117d87e73d36938d1d0b52d9d37af7b: CDI devices from CRI Config.CDIDevices: []" Oct 24 12:59:22.592056 containerd[1596]: time="2025-10-24T12:59:22.592009130Z" level=info msg="CreateContainer within sandbox \"79cb2d69dd33672d7d5a082d81ac84c4c386b5b281aed636f0507af2e7cdc52c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d8ed8fc2499a351ec4f6db47b85d07c23117d87e73d36938d1d0b52d9d37af7b\"" Oct 24 12:59:22.593225 containerd[1596]: 
time="2025-10-24T12:59:22.593141324Z" level=info msg="StartContainer for \"d8ed8fc2499a351ec4f6db47b85d07c23117d87e73d36938d1d0b52d9d37af7b\"" Oct 24 12:59:22.593950 containerd[1596]: time="2025-10-24T12:59:22.593904735Z" level=info msg="connecting to shim d8ed8fc2499a351ec4f6db47b85d07c23117d87e73d36938d1d0b52d9d37af7b" address="unix:///run/containerd/s/f28fadfb62ab0b62a51b0452d5caa88177c6032bc3631faa27090677e1fbd7ee" protocol=ttrpc version=3 Oct 24 12:59:22.607957 systemd[1]: Started cri-containerd-951b21774c0fd9704fe28d9474d216aa44ca6277743ebe85d73ceaa9120bb017.scope - libcontainer container 951b21774c0fd9704fe28d9474d216aa44ca6277743ebe85d73ceaa9120bb017. Oct 24 12:59:22.611982 systemd-networkd[1494]: cali4d31499828a: Link UP Oct 24 12:59:22.613744 systemd-networkd[1494]: cali4d31499828a: Gained carrier Oct 24 12:59:22.614244 containerd[1596]: time="2025-10-24T12:59:22.613968137Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 24 12:59:22.616673 containerd[1596]: time="2025-10-24T12:59:22.616621322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 24 12:59:22.616762 containerd[1596]: time="2025-10-24T12:59:22.616707133Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 24 12:59:22.617253 systemd[1]: Started cri-containerd-d8ed8fc2499a351ec4f6db47b85d07c23117d87e73d36938d1d0b52d9d37af7b.scope - libcontainer container d8ed8fc2499a351ec4f6db47b85d07c23117d87e73d36938d1d0b52d9d37af7b. 
Oct 24 12:59:22.617839 kubelet[2771]: E1024 12:59:22.617765 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 24 12:59:22.617839 kubelet[2771]: E1024 12:59:22.617830 2771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 24 12:59:22.618081 kubelet[2771]: E1024 12:59:22.617971 2771 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jn4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-548f66845d-v54tb_calico-system(84b48c97-55ac-463f-8b03-ea5f0339b62a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 24 12:59:22.619789 kubelet[2771]: E1024 12:59:22.619689 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548f66845d-v54tb" podUID="84b48c97-55ac-463f-8b03-ea5f0339b62a" Oct 24 12:59:22.638197 containerd[1596]: 2025-10-24 12:59:22.283 [INFO][4258] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f84755f96--kq7xf-eth0 calico-apiserver-5f84755f96- calico-apiserver 1df758c2-0a15-4c25-ae50-92f07922f451 884 0 2025-10-24 12:58:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f84755f96 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f84755f96-kq7xf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4d31499828a [] [] }} ContainerID="2f2b3b43def255c674f938afe1a79754ce0304d6ea23dbb4ddfa3b2cb72b8076" Namespace="calico-apiserver" Pod="calico-apiserver-5f84755f96-kq7xf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84755f96--kq7xf-" Oct 24 12:59:22.638197 containerd[1596]: 2025-10-24 12:59:22.284 [INFO][4258] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f2b3b43def255c674f938afe1a79754ce0304d6ea23dbb4ddfa3b2cb72b8076" Namespace="calico-apiserver" Pod="calico-apiserver-5f84755f96-kq7xf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84755f96--kq7xf-eth0" Oct 24 12:59:22.638197 containerd[1596]: 2025-10-24 12:59:22.343 [INFO][4302] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f2b3b43def255c674f938afe1a79754ce0304d6ea23dbb4ddfa3b2cb72b8076" HandleID="k8s-pod-network.2f2b3b43def255c674f938afe1a79754ce0304d6ea23dbb4ddfa3b2cb72b8076" Workload="localhost-k8s-calico--apiserver--5f84755f96--kq7xf-eth0" Oct 24 12:59:22.638197 containerd[1596]: 2025-10-24 12:59:22.344 [INFO][4302] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2f2b3b43def255c674f938afe1a79754ce0304d6ea23dbb4ddfa3b2cb72b8076" HandleID="k8s-pod-network.2f2b3b43def255c674f938afe1a79754ce0304d6ea23dbb4ddfa3b2cb72b8076" Workload="localhost-k8s-calico--apiserver--5f84755f96--kq7xf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138da0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f84755f96-kq7xf", "timestamp":"2025-10-24 12:59:22.343364934 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 24 12:59:22.638197 containerd[1596]: 2025-10-24 12:59:22.344 [INFO][4302] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 24 12:59:22.638197 containerd[1596]: 2025-10-24 12:59:22.490 [INFO][4302] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 24 12:59:22.638197 containerd[1596]: 2025-10-24 12:59:22.490 [INFO][4302] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 24 12:59:22.638197 containerd[1596]: 2025-10-24 12:59:22.559 [INFO][4302] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f2b3b43def255c674f938afe1a79754ce0304d6ea23dbb4ddfa3b2cb72b8076" host="localhost" Oct 24 12:59:22.638197 containerd[1596]: 2025-10-24 12:59:22.571 [INFO][4302] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 24 12:59:22.638197 containerd[1596]: 2025-10-24 12:59:22.576 [INFO][4302] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 24 12:59:22.638197 containerd[1596]: 2025-10-24 12:59:22.578 [INFO][4302] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 24 12:59:22.638197 containerd[1596]: 2025-10-24 12:59:22.582 [INFO][4302] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 24 12:59:22.638197 containerd[1596]: 2025-10-24 12:59:22.583 [INFO][4302] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2f2b3b43def255c674f938afe1a79754ce0304d6ea23dbb4ddfa3b2cb72b8076" host="localhost" Oct 24 12:59:22.638197 containerd[1596]: 2025-10-24 12:59:22.587 [INFO][4302] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2f2b3b43def255c674f938afe1a79754ce0304d6ea23dbb4ddfa3b2cb72b8076 Oct 24 12:59:22.638197 containerd[1596]: 2025-10-24 12:59:22.592 [INFO][4302] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2f2b3b43def255c674f938afe1a79754ce0304d6ea23dbb4ddfa3b2cb72b8076" host="localhost" Oct 24 12:59:22.638197 containerd[1596]: 2025-10-24 12:59:22.599 [INFO][4302] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.2f2b3b43def255c674f938afe1a79754ce0304d6ea23dbb4ddfa3b2cb72b8076" host="localhost" Oct 24 12:59:22.638197 containerd[1596]: 2025-10-24 12:59:22.599 [INFO][4302] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.2f2b3b43def255c674f938afe1a79754ce0304d6ea23dbb4ddfa3b2cb72b8076" host="localhost" Oct 24 12:59:22.638197 containerd[1596]: 2025-10-24 12:59:22.599 [INFO][4302] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 24 12:59:22.638197 containerd[1596]: 2025-10-24 12:59:22.599 [INFO][4302] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="2f2b3b43def255c674f938afe1a79754ce0304d6ea23dbb4ddfa3b2cb72b8076" HandleID="k8s-pod-network.2f2b3b43def255c674f938afe1a79754ce0304d6ea23dbb4ddfa3b2cb72b8076" Workload="localhost-k8s-calico--apiserver--5f84755f96--kq7xf-eth0" Oct 24 12:59:22.638731 containerd[1596]: 2025-10-24 12:59:22.604 [INFO][4258] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f2b3b43def255c674f938afe1a79754ce0304d6ea23dbb4ddfa3b2cb72b8076" Namespace="calico-apiserver" Pod="calico-apiserver-5f84755f96-kq7xf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84755f96--kq7xf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f84755f96--kq7xf-eth0", GenerateName:"calico-apiserver-5f84755f96-", Namespace:"calico-apiserver", SelfLink:"", UID:"1df758c2-0a15-4c25-ae50-92f07922f451", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.October, 24, 12, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f84755f96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f84755f96-kq7xf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4d31499828a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 24 12:59:22.638731 containerd[1596]: 2025-10-24 12:59:22.604 [INFO][4258] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="2f2b3b43def255c674f938afe1a79754ce0304d6ea23dbb4ddfa3b2cb72b8076" Namespace="calico-apiserver" Pod="calico-apiserver-5f84755f96-kq7xf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84755f96--kq7xf-eth0" Oct 24 12:59:22.638731 containerd[1596]: 2025-10-24 12:59:22.604 [INFO][4258] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d31499828a ContainerID="2f2b3b43def255c674f938afe1a79754ce0304d6ea23dbb4ddfa3b2cb72b8076" Namespace="calico-apiserver" Pod="calico-apiserver-5f84755f96-kq7xf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84755f96--kq7xf-eth0" Oct 24 12:59:22.638731 containerd[1596]: 2025-10-24 12:59:22.615 [INFO][4258] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f2b3b43def255c674f938afe1a79754ce0304d6ea23dbb4ddfa3b2cb72b8076" Namespace="calico-apiserver" Pod="calico-apiserver-5f84755f96-kq7xf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84755f96--kq7xf-eth0" Oct 24 12:59:22.638731 containerd[1596]: 2025-10-24 12:59:22.617 [INFO][4258] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2f2b3b43def255c674f938afe1a79754ce0304d6ea23dbb4ddfa3b2cb72b8076" Namespace="calico-apiserver" Pod="calico-apiserver-5f84755f96-kq7xf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84755f96--kq7xf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f84755f96--kq7xf-eth0", GenerateName:"calico-apiserver-5f84755f96-", Namespace:"calico-apiserver", SelfLink:"", UID:"1df758c2-0a15-4c25-ae50-92f07922f451", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.October, 24, 12, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f84755f96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f2b3b43def255c674f938afe1a79754ce0304d6ea23dbb4ddfa3b2cb72b8076", Pod:"calico-apiserver-5f84755f96-kq7xf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4d31499828a", MAC:"36:a1:1b:77:2e:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 24 12:59:22.638731 containerd[1596]: 2025-10-24 12:59:22.632 [INFO][4258] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f2b3b43def255c674f938afe1a79754ce0304d6ea23dbb4ddfa3b2cb72b8076" Namespace="calico-apiserver" Pod="calico-apiserver-5f84755f96-kq7xf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84755f96--kq7xf-eth0" Oct 24 12:59:22.647117 systemd-resolved[1295]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 24 12:59:22.668404 containerd[1596]: time="2025-10-24T12:59:22.667899056Z" level=info msg="connecting to shim 2f2b3b43def255c674f938afe1a79754ce0304d6ea23dbb4ddfa3b2cb72b8076" address="unix:///run/containerd/s/a44b26ce14492f49fdfd55cbb33b4c1d5b9bdb0ebf624178358651190533d546" namespace=k8s.io protocol=ttrpc version=3 Oct 24 12:59:22.675379 containerd[1596]: time="2025-10-24T12:59:22.675351089Z" level=info msg="StartContainer for \"d8ed8fc2499a351ec4f6db47b85d07c23117d87e73d36938d1d0b52d9d37af7b\" returns successfully" Oct 24 12:59:22.693804 containerd[1596]: time="2025-10-24T12:59:22.693776630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mbvsm,Uid:0234b6fc-2fab-4065-b52e-c071b4ca25f6,Namespace:calico-system,Attempt:0,} returns sandbox id \"951b21774c0fd9704fe28d9474d216aa44ca6277743ebe85d73ceaa9120bb017\"" Oct 24 12:59:22.695688 containerd[1596]: time="2025-10-24T12:59:22.695651787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 24 12:59:22.707191 systemd[1]: Started cri-containerd-2f2b3b43def255c674f938afe1a79754ce0304d6ea23dbb4ddfa3b2cb72b8076.scope - libcontainer container 2f2b3b43def255c674f938afe1a79754ce0304d6ea23dbb4ddfa3b2cb72b8076. 
Oct 24 12:59:22.728710 systemd-resolved[1295]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 24 12:59:22.767557 containerd[1596]: time="2025-10-24T12:59:22.767447545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f84755f96-kq7xf,Uid:1df758c2-0a15-4c25-ae50-92f07922f451,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2f2b3b43def255c674f938afe1a79754ce0304d6ea23dbb4ddfa3b2cb72b8076\"" Oct 24 12:59:23.043463 containerd[1596]: time="2025-10-24T12:59:23.043315229Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 24 12:59:23.045515 containerd[1596]: time="2025-10-24T12:59:23.045463156Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 24 12:59:23.045606 containerd[1596]: time="2025-10-24T12:59:23.045551281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 24 12:59:23.045704 kubelet[2771]: E1024 12:59:23.045670 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 24 12:59:23.046161 kubelet[2771]: E1024 12:59:23.045713 2771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 24 12:59:23.046161 kubelet[2771]: E1024 12:59:23.045992 2771 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gckmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mbvsm_calico-system(0234b6fc-2fab-4065-b52e-c071b4ca25f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 24 12:59:23.046272 containerd[1596]: time="2025-10-24T12:59:23.046063091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 24 12:59:23.047508 kubelet[2771]: E1024 12:59:23.047447 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found\"" pod="calico-system/goldmane-666569f655-mbvsm" podUID="0234b6fc-2fab-4065-b52e-c071b4ca25f6" Oct 24 12:59:23.151782 systemd-networkd[1494]: calie4634e4a62f: Gained IPv6LL Oct 24 12:59:23.152229 systemd-networkd[1494]: cali3517d84f232: Gained IPv6LL Oct 24 12:59:23.198430 kubelet[2771]: E1024 12:59:23.198391 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mbvsm" podUID="0234b6fc-2fab-4065-b52e-c071b4ca25f6" Oct 24 12:59:23.200261 kubelet[2771]: E1024 12:59:23.200218 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:59:23.201119 kubelet[2771]: E1024 12:59:23.201029 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548f66845d-v54tb" podUID="84b48c97-55ac-463f-8b03-ea5f0339b62a" Oct 24 12:59:23.201215 kubelet[2771]: E1024 12:59:23.201062 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cf94597f6-kjgvp" podUID="c2c88598-5e4d-4913-b78b-8bca3cd87835" Oct 24 12:59:23.241807 kubelet[2771]: I1024 12:59:23.241739 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8rfp9" podStartSLOduration=37.241718684 podStartE2EDuration="37.241718684s" podCreationTimestamp="2025-10-24 12:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-24 12:59:23.240918242 +0000 UTC m=+43.268706430" watchObservedRunningTime="2025-10-24 12:59:23.241718684 +0000 UTC m=+43.269506872" Oct 24 12:59:23.359876 containerd[1596]: time="2025-10-24T12:59:23.359821875Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 
24 12:59:23.360958 containerd[1596]: time="2025-10-24T12:59:23.360905146Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 24 12:59:23.361137 containerd[1596]: time="2025-10-24T12:59:23.360960299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 24 12:59:23.361208 kubelet[2771]: E1024 12:59:23.361165 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 24 12:59:23.361261 kubelet[2771]: E1024 12:59:23.361216 2771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 24 12:59:23.361399 kubelet[2771]: E1024 12:59:23.361355 2771 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vzkwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod calico-apiserver-5f84755f96-kq7xf_calico-apiserver(1df758c2-0a15-4c25-ae50-92f07922f451): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 24 12:59:23.362652 kubelet[2771]: E1024 12:59:23.362536 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f84755f96-kq7xf" podUID="1df758c2-0a15-4c25-ae50-92f07922f451" Oct 24 12:59:23.453500 systemd[1]: Started sshd@10-10.0.0.145:22-10.0.0.1:54858.service - OpenSSH per-connection server daemon (10.0.0.1:54858). Oct 24 12:59:23.531977 sshd[4592]: Accepted publickey for core from 10.0.0.1 port 54858 ssh2: RSA SHA256:5QnmBGiMs+NgtP00jEHzaQJgePAYLYRQ869UILbwcj0 Oct 24 12:59:23.533637 sshd-session[4592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 24 12:59:23.538328 systemd-logind[1573]: New session 11 of user core. Oct 24 12:59:23.549753 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 24 12:59:23.600789 systemd-networkd[1494]: vxlan.calico: Gained IPv6LL Oct 24 12:59:23.689739 sshd[4596]: Connection closed by 10.0.0.1 port 54858 Oct 24 12:59:23.690004 sshd-session[4592]: pam_unix(sshd:session): session closed for user core Oct 24 12:59:23.695304 systemd[1]: sshd@10-10.0.0.145:22-10.0.0.1:54858.service: Deactivated successfully. Oct 24 12:59:23.697590 systemd[1]: session-11.scope: Deactivated successfully. Oct 24 12:59:23.698461 systemd-logind[1573]: Session 11 logged out. Waiting for processes to exit. Oct 24 12:59:23.700107 systemd-logind[1573]: Removed session 11. 
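Every ghcr.io pull in this window fails the same way: the registry answers 404 Not Found for the v3.30.4 tag, containerd reports NotFound, and kubelet surfaces ErrImagePull for the whisker, whisker-backend, goldmane and apiserver containers. A minimal sketch of how the missing tag could be confirmed from outside the node, assuming ghcr.io follows the standard anonymous bearer-token flow of the OCI distribution API (the helper and the flow are illustrative, not taken from the log):

    import json
    import urllib.error
    import urllib.request

    def tag_exists(registry: str, repository: str, tag: str) -> bool:
        """Return True if <registry>/<repository>:<tag> resolves, False on 404."""
        # Anonymous pull token (assumed standard token-endpoint behaviour).
        token_url = f"https://{registry}/token?scope=repository:{repository}:pull"
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]

        # HEAD the manifest; a missing tag comes back as HTTP 404, which is what
        # containerd translated into the NotFound errors in the log above.
        req = urllib.request.Request(
            f"https://{registry}/v2/{repository}/manifests/{tag}", method="HEAD"
        )
        req.add_header("Authorization", f"Bearer {token}")
        req.add_header(
            "Accept",
            "application/vnd.oci.image.index.v1+json, "
            "application/vnd.docker.distribution.manifest.list.v2+json",
        )
        try:
            with urllib.request.urlopen(req):
                return True
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    if __name__ == "__main__":
        print(tag_exists("ghcr.io", "flatcar/calico/whisker-backend", "v3.30.4"))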
Oct 24 12:59:23.727733 systemd-networkd[1494]: cali5dd2c3ca93f: Gained IPv6LL Oct 24 12:59:23.983802 systemd-networkd[1494]: cali4d31499828a: Gained IPv6LL Oct 24 12:59:24.201626 kubelet[2771]: E1024 12:59:24.201574 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:59:24.202330 kubelet[2771]: E1024 12:59:24.202085 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f84755f96-kq7xf" podUID="1df758c2-0a15-4c25-ae50-92f07922f451" Oct 24 12:59:24.203304 kubelet[2771]: E1024 12:59:24.203252 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mbvsm" podUID="0234b6fc-2fab-4065-b52e-c071b4ca25f6" Oct 24 12:59:24.239834 systemd-networkd[1494]: calib059ba1c1a8: Gained IPv6LL Oct 24 12:59:25.077547 containerd[1596]: time="2025-10-24T12:59:25.077453651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f84755f96-zdlgj,Uid:e3fbadc6-9842-46a9-bf09-28743c711629,Namespace:calico-apiserver,Attempt:0,}" Oct 24 12:59:25.180147 systemd-networkd[1494]: califa692d415f1: Link UP Oct 24 12:59:25.180354 systemd-networkd[1494]: califa692d415f1: Gained carrier Oct 24 12:59:25.195950 containerd[1596]: 2025-10-24 12:59:25.115 [INFO][4613] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f84755f96--zdlgj-eth0 calico-apiserver-5f84755f96- calico-apiserver e3fbadc6-9842-46a9-bf09-28743c711629 883 0 2025-10-24 12:58:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f84755f96 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f84755f96-zdlgj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califa692d415f1 [] [] }} ContainerID="d943380497c56668de1d87fd2d63a6e12863de6e98a2cb62d6aef9221f607a30" Namespace="calico-apiserver" Pod="calico-apiserver-5f84755f96-zdlgj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84755f96--zdlgj-" Oct 24 12:59:25.195950 containerd[1596]: 2025-10-24 12:59:25.115 [INFO][4613] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d943380497c56668de1d87fd2d63a6e12863de6e98a2cb62d6aef9221f607a30" Namespace="calico-apiserver" Pod="calico-apiserver-5f84755f96-zdlgj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84755f96--zdlgj-eth0" Oct 24 12:59:25.195950 containerd[1596]: 2025-10-24 12:59:25.144 
[INFO][4628] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d943380497c56668de1d87fd2d63a6e12863de6e98a2cb62d6aef9221f607a30" HandleID="k8s-pod-network.d943380497c56668de1d87fd2d63a6e12863de6e98a2cb62d6aef9221f607a30" Workload="localhost-k8s-calico--apiserver--5f84755f96--zdlgj-eth0" Oct 24 12:59:25.195950 containerd[1596]: 2025-10-24 12:59:25.144 [INFO][4628] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d943380497c56668de1d87fd2d63a6e12863de6e98a2cb62d6aef9221f607a30" HandleID="k8s-pod-network.d943380497c56668de1d87fd2d63a6e12863de6e98a2cb62d6aef9221f607a30" Workload="localhost-k8s-calico--apiserver--5f84755f96--zdlgj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f720), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f84755f96-zdlgj", "timestamp":"2025-10-24 12:59:25.144124047 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 24 12:59:25.195950 containerd[1596]: 2025-10-24 12:59:25.144 [INFO][4628] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 24 12:59:25.195950 containerd[1596]: 2025-10-24 12:59:25.144 [INFO][4628] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 24 12:59:25.195950 containerd[1596]: 2025-10-24 12:59:25.144 [INFO][4628] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 24 12:59:25.195950 containerd[1596]: 2025-10-24 12:59:25.152 [INFO][4628] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d943380497c56668de1d87fd2d63a6e12863de6e98a2cb62d6aef9221f607a30" host="localhost" Oct 24 12:59:25.195950 containerd[1596]: 2025-10-24 12:59:25.156 [INFO][4628] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 24 12:59:25.195950 containerd[1596]: 2025-10-24 12:59:25.160 [INFO][4628] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 24 12:59:25.195950 containerd[1596]: 2025-10-24 12:59:25.162 [INFO][4628] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 24 12:59:25.195950 containerd[1596]: 2025-10-24 12:59:25.164 [INFO][4628] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 24 12:59:25.195950 containerd[1596]: 2025-10-24 12:59:25.164 [INFO][4628] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d943380497c56668de1d87fd2d63a6e12863de6e98a2cb62d6aef9221f607a30" host="localhost" Oct 24 12:59:25.195950 containerd[1596]: 2025-10-24 12:59:25.165 [INFO][4628] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d943380497c56668de1d87fd2d63a6e12863de6e98a2cb62d6aef9221f607a30 Oct 24 12:59:25.195950 containerd[1596]: 2025-10-24 12:59:25.168 [INFO][4628] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d943380497c56668de1d87fd2d63a6e12863de6e98a2cb62d6aef9221f607a30" host="localhost" Oct 24 12:59:25.195950 containerd[1596]: 2025-10-24 12:59:25.174 [INFO][4628] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.d943380497c56668de1d87fd2d63a6e12863de6e98a2cb62d6aef9221f607a30" host="localhost" Oct 24 12:59:25.195950 containerd[1596]: 2025-10-24 12:59:25.174 
[INFO][4628] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d943380497c56668de1d87fd2d63a6e12863de6e98a2cb62d6aef9221f607a30" host="localhost" Oct 24 12:59:25.195950 containerd[1596]: 2025-10-24 12:59:25.174 [INFO][4628] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 24 12:59:25.195950 containerd[1596]: 2025-10-24 12:59:25.174 [INFO][4628] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d943380497c56668de1d87fd2d63a6e12863de6e98a2cb62d6aef9221f607a30" HandleID="k8s-pod-network.d943380497c56668de1d87fd2d63a6e12863de6e98a2cb62d6aef9221f607a30" Workload="localhost-k8s-calico--apiserver--5f84755f96--zdlgj-eth0" Oct 24 12:59:25.196643 containerd[1596]: 2025-10-24 12:59:25.177 [INFO][4613] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d943380497c56668de1d87fd2d63a6e12863de6e98a2cb62d6aef9221f607a30" Namespace="calico-apiserver" Pod="calico-apiserver-5f84755f96-zdlgj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84755f96--zdlgj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f84755f96--zdlgj-eth0", GenerateName:"calico-apiserver-5f84755f96-", Namespace:"calico-apiserver", SelfLink:"", UID:"e3fbadc6-9842-46a9-bf09-28743c711629", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.October, 24, 12, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f84755f96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f84755f96-zdlgj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califa692d415f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 24 12:59:25.196643 containerd[1596]: 2025-10-24 12:59:25.177 [INFO][4613] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d943380497c56668de1d87fd2d63a6e12863de6e98a2cb62d6aef9221f607a30" Namespace="calico-apiserver" Pod="calico-apiserver-5f84755f96-zdlgj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84755f96--zdlgj-eth0" Oct 24 12:59:25.196643 containerd[1596]: 2025-10-24 12:59:25.177 [INFO][4613] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa692d415f1 ContainerID="d943380497c56668de1d87fd2d63a6e12863de6e98a2cb62d6aef9221f607a30" Namespace="calico-apiserver" Pod="calico-apiserver-5f84755f96-zdlgj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84755f96--zdlgj-eth0" Oct 24 12:59:25.196643 containerd[1596]: 2025-10-24 12:59:25.180 [INFO][4613] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d943380497c56668de1d87fd2d63a6e12863de6e98a2cb62d6aef9221f607a30" 
Namespace="calico-apiserver" Pod="calico-apiserver-5f84755f96-zdlgj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84755f96--zdlgj-eth0" Oct 24 12:59:25.196643 containerd[1596]: 2025-10-24 12:59:25.181 [INFO][4613] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d943380497c56668de1d87fd2d63a6e12863de6e98a2cb62d6aef9221f607a30" Namespace="calico-apiserver" Pod="calico-apiserver-5f84755f96-zdlgj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84755f96--zdlgj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f84755f96--zdlgj-eth0", GenerateName:"calico-apiserver-5f84755f96-", Namespace:"calico-apiserver", SelfLink:"", UID:"e3fbadc6-9842-46a9-bf09-28743c711629", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.October, 24, 12, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f84755f96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d943380497c56668de1d87fd2d63a6e12863de6e98a2cb62d6aef9221f607a30", Pod:"calico-apiserver-5f84755f96-zdlgj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califa692d415f1", MAC:"b2:f2:49:eb:37:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 24 12:59:25.196643 containerd[1596]: 2025-10-24 12:59:25.190 [INFO][4613] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d943380497c56668de1d87fd2d63a6e12863de6e98a2cb62d6aef9221f607a30" Namespace="calico-apiserver" Pod="calico-apiserver-5f84755f96-zdlgj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84755f96--zdlgj-eth0" Oct 24 12:59:25.203069 kubelet[2771]: E1024 12:59:25.203021 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:59:25.237257 containerd[1596]: time="2025-10-24T12:59:25.237185979Z" level=info msg="connecting to shim d943380497c56668de1d87fd2d63a6e12863de6e98a2cb62d6aef9221f607a30" address="unix:///run/containerd/s/6d937969acb60b231ff439da2cd12ea722d9828c1902f0aad35c2e3de9fb69d0" namespace=k8s.io protocol=ttrpc version=3 Oct 24 12:59:25.266743 systemd[1]: Started cri-containerd-d943380497c56668de1d87fd2d63a6e12863de6e98a2cb62d6aef9221f607a30.scope - libcontainer container d943380497c56668de1d87fd2d63a6e12863de6e98a2cb62d6aef9221f607a30. 
Oct 24 12:59:25.280804 systemd-resolved[1295]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 24 12:59:25.313804 containerd[1596]: time="2025-10-24T12:59:25.313746781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f84755f96-zdlgj,Uid:e3fbadc6-9842-46a9-bf09-28743c711629,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d943380497c56668de1d87fd2d63a6e12863de6e98a2cb62d6aef9221f607a30\"" Oct 24 12:59:25.315493 containerd[1596]: time="2025-10-24T12:59:25.315385784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 24 12:59:25.628399 containerd[1596]: time="2025-10-24T12:59:25.628314095Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 24 12:59:25.887506 containerd[1596]: time="2025-10-24T12:59:25.887301448Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 24 12:59:25.887506 containerd[1596]: time="2025-10-24T12:59:25.887377812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 24 12:59:25.887714 kubelet[2771]: E1024 12:59:25.887616 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 24 12:59:25.887714 kubelet[2771]: E1024 12:59:25.887677 2771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 24 12:59:25.888107 kubelet[2771]: E1024 12:59:25.887815 2771 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqthm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f84755f96-zdlgj_calico-apiserver(e3fbadc6-9842-46a9-bf09-28743c711629): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 24 12:59:25.889045 kubelet[2771]: E1024 12:59:25.889003 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f84755f96-zdlgj" podUID="e3fbadc6-9842-46a9-bf09-28743c711629" Oct 24 12:59:26.077996 kubelet[2771]: E1024 12:59:26.077938 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:59:26.079930 containerd[1596]: time="2025-10-24T12:59:26.079546562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s2tsg,Uid:a5ee21c4-521a-4300-831d-bec9b2d7f45e,Namespace:calico-system,Attempt:0,}" Oct 24 12:59:26.079930 containerd[1596]: time="2025-10-24T12:59:26.079639667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sc6tk,Uid:172e0745-5548-410c-a502-38447aec237c,Namespace:kube-system,Attempt:0,}" Oct 24 12:59:26.206671 kubelet[2771]: E1024 12:59:26.206498 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f84755f96-zdlgj" podUID="e3fbadc6-9842-46a9-bf09-28743c711629" Oct 24 12:59:26.376654 systemd-networkd[1494]: cali3f878d2209f: Link UP Oct 24 12:59:26.376901 systemd-networkd[1494]: cali3f878d2209f: Gained carrier Oct 24 12:59:26.392754 containerd[1596]: 2025-10-24 12:59:26.126 [INFO][4695] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--s2tsg-eth0 csi-node-driver- calico-system 
a5ee21c4-521a-4300-831d-bec9b2d7f45e 758 0 2025-10-24 12:58:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-s2tsg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3f878d2209f [] [] }} ContainerID="b88275b7abc9ac07e68e5bbbe0b20e2614a4dea518ff306ae365a964a25d1be3" Namespace="calico-system" Pod="csi-node-driver-s2tsg" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2tsg-" Oct 24 12:59:26.392754 containerd[1596]: 2025-10-24 12:59:26.126 [INFO][4695] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b88275b7abc9ac07e68e5bbbe0b20e2614a4dea518ff306ae365a964a25d1be3" Namespace="calico-system" Pod="csi-node-driver-s2tsg" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2tsg-eth0" Oct 24 12:59:26.392754 containerd[1596]: 2025-10-24 12:59:26.156 [INFO][4718] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b88275b7abc9ac07e68e5bbbe0b20e2614a4dea518ff306ae365a964a25d1be3" HandleID="k8s-pod-network.b88275b7abc9ac07e68e5bbbe0b20e2614a4dea518ff306ae365a964a25d1be3" Workload="localhost-k8s-csi--node--driver--s2tsg-eth0" Oct 24 12:59:26.392754 containerd[1596]: 2025-10-24 12:59:26.156 [INFO][4718] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b88275b7abc9ac07e68e5bbbe0b20e2614a4dea518ff306ae365a964a25d1be3" HandleID="k8s-pod-network.b88275b7abc9ac07e68e5bbbe0b20e2614a4dea518ff306ae365a964a25d1be3" Workload="localhost-k8s-csi--node--driver--s2tsg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139730), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-s2tsg", "timestamp":"2025-10-24 12:59:26.156503696 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 24 12:59:26.392754 containerd[1596]: 2025-10-24 12:59:26.156 [INFO][4718] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 24 12:59:26.392754 containerd[1596]: 2025-10-24 12:59:26.156 [INFO][4718] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 24 12:59:26.392754 containerd[1596]: 2025-10-24 12:59:26.157 [INFO][4718] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 24 12:59:26.392754 containerd[1596]: 2025-10-24 12:59:26.163 [INFO][4718] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b88275b7abc9ac07e68e5bbbe0b20e2614a4dea518ff306ae365a964a25d1be3" host="localhost" Oct 24 12:59:26.392754 containerd[1596]: 2025-10-24 12:59:26.166 [INFO][4718] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 24 12:59:26.392754 containerd[1596]: 2025-10-24 12:59:26.170 [INFO][4718] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 24 12:59:26.392754 containerd[1596]: 2025-10-24 12:59:26.172 [INFO][4718] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 24 12:59:26.392754 containerd[1596]: 2025-10-24 12:59:26.174 [INFO][4718] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 24 12:59:26.392754 containerd[1596]: 2025-10-24 12:59:26.174 [INFO][4718] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b88275b7abc9ac07e68e5bbbe0b20e2614a4dea518ff306ae365a964a25d1be3" host="localhost" Oct 24 12:59:26.392754 containerd[1596]: 2025-10-24 12:59:26.175 [INFO][4718] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b88275b7abc9ac07e68e5bbbe0b20e2614a4dea518ff306ae365a964a25d1be3 Oct 24 12:59:26.392754 containerd[1596]: 2025-10-24 12:59:26.273 [INFO][4718] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b88275b7abc9ac07e68e5bbbe0b20e2614a4dea518ff306ae365a964a25d1be3" host="localhost" Oct 24 12:59:26.392754 containerd[1596]: 2025-10-24 12:59:26.365 [INFO][4718] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.b88275b7abc9ac07e68e5bbbe0b20e2614a4dea518ff306ae365a964a25d1be3" host="localhost" Oct 24 12:59:26.392754 containerd[1596]: 2025-10-24 12:59:26.365 [INFO][4718] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.b88275b7abc9ac07e68e5bbbe0b20e2614a4dea518ff306ae365a964a25d1be3" host="localhost" Oct 24 12:59:26.392754 containerd[1596]: 2025-10-24 12:59:26.365 [INFO][4718] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
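The IPAM sequence above claims 192.168.88.135/26 from the block 192.168.88.128/26 that this host already holds an affinity for; the apiserver pod earlier received .134 and the coredns pod below receives .136 from the same block. A quick, purely illustrative check with Python's ipaddress module confirms the block size and that the addresses seen in this log fall inside it:

```python
import ipaddress

# The affine block from the IPAM log lines above.
block = ipaddress.ip_network("192.168.88.128/26")

print(block.num_addresses)  # 64 addresses (.128 through .191)
for ip in ("192.168.88.134", "192.168.88.135", "192.168.88.136"):
    print(ip, ipaddress.ip_address(ip) in block)  # all True
```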
Oct 24 12:59:26.392754 containerd[1596]: 2025-10-24 12:59:26.365 [INFO][4718] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="b88275b7abc9ac07e68e5bbbe0b20e2614a4dea518ff306ae365a964a25d1be3" HandleID="k8s-pod-network.b88275b7abc9ac07e68e5bbbe0b20e2614a4dea518ff306ae365a964a25d1be3" Workload="localhost-k8s-csi--node--driver--s2tsg-eth0" Oct 24 12:59:26.393550 containerd[1596]: 2025-10-24 12:59:26.371 [INFO][4695] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b88275b7abc9ac07e68e5bbbe0b20e2614a4dea518ff306ae365a964a25d1be3" Namespace="calico-system" Pod="csi-node-driver-s2tsg" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2tsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s2tsg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5ee21c4-521a-4300-831d-bec9b2d7f45e", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2025, time.October, 24, 12, 58, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-s2tsg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3f878d2209f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 24 12:59:26.393550 containerd[1596]: 2025-10-24 12:59:26.371 [INFO][4695] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="b88275b7abc9ac07e68e5bbbe0b20e2614a4dea518ff306ae365a964a25d1be3" Namespace="calico-system" Pod="csi-node-driver-s2tsg" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2tsg-eth0" Oct 24 12:59:26.393550 containerd[1596]: 2025-10-24 12:59:26.372 [INFO][4695] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f878d2209f ContainerID="b88275b7abc9ac07e68e5bbbe0b20e2614a4dea518ff306ae365a964a25d1be3" Namespace="calico-system" Pod="csi-node-driver-s2tsg" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2tsg-eth0" Oct 24 12:59:26.393550 containerd[1596]: 2025-10-24 12:59:26.376 [INFO][4695] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b88275b7abc9ac07e68e5bbbe0b20e2614a4dea518ff306ae365a964a25d1be3" Namespace="calico-system" Pod="csi-node-driver-s2tsg" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2tsg-eth0" Oct 24 12:59:26.393550 containerd[1596]: 2025-10-24 12:59:26.377 [INFO][4695] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b88275b7abc9ac07e68e5bbbe0b20e2614a4dea518ff306ae365a964a25d1be3" Namespace="calico-system" Pod="csi-node-driver-s2tsg" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--s2tsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s2tsg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5ee21c4-521a-4300-831d-bec9b2d7f45e", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2025, time.October, 24, 12, 58, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b88275b7abc9ac07e68e5bbbe0b20e2614a4dea518ff306ae365a964a25d1be3", Pod:"csi-node-driver-s2tsg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3f878d2209f", MAC:"8a:d9:cb:7f:6b:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 24 12:59:26.393550 containerd[1596]: 2025-10-24 12:59:26.385 [INFO][4695] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b88275b7abc9ac07e68e5bbbe0b20e2614a4dea518ff306ae365a964a25d1be3" Namespace="calico-system" Pod="csi-node-driver-s2tsg" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2tsg-eth0" Oct 24 12:59:26.448513 containerd[1596]: time="2025-10-24T12:59:26.447160302Z" level=info msg="connecting to shim b88275b7abc9ac07e68e5bbbe0b20e2614a4dea518ff306ae365a964a25d1be3" address="unix:///run/containerd/s/6f0491cccd4433b0e049a05bc155b10c3a9ad64d40afd5298ba6121972770ad2" namespace=k8s.io protocol=ttrpc version=3 Oct 24 12:59:26.467128 systemd-networkd[1494]: calie885de8209c: Link UP Oct 24 12:59:26.474002 systemd-networkd[1494]: calie885de8209c: Gained carrier Oct 24 12:59:26.495748 systemd[1]: Started cri-containerd-b88275b7abc9ac07e68e5bbbe0b20e2614a4dea518ff306ae365a964a25d1be3.scope - libcontainer container b88275b7abc9ac07e68e5bbbe0b20e2614a4dea518ff306ae365a964a25d1be3. 
Oct 24 12:59:26.504069 containerd[1596]: 2025-10-24 12:59:26.131 [INFO][4703] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--sc6tk-eth0 coredns-668d6bf9bc- kube-system 172e0745-5548-410c-a502-38447aec237c 878 0 2025-10-24 12:58:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-sc6tk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie885de8209c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4" Namespace="kube-system" Pod="coredns-668d6bf9bc-sc6tk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sc6tk-" Oct 24 12:59:26.504069 containerd[1596]: 2025-10-24 12:59:26.131 [INFO][4703] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4" Namespace="kube-system" Pod="coredns-668d6bf9bc-sc6tk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sc6tk-eth0" Oct 24 12:59:26.504069 containerd[1596]: 2025-10-24 12:59:26.158 [INFO][4722] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4" HandleID="k8s-pod-network.49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4" Workload="localhost-k8s-coredns--668d6bf9bc--sc6tk-eth0" Oct 24 12:59:26.504069 containerd[1596]: 2025-10-24 12:59:26.158 [INFO][4722] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4" HandleID="k8s-pod-network.49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4" Workload="localhost-k8s-coredns--668d6bf9bc--sc6tk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c0940), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-sc6tk", "timestamp":"2025-10-24 12:59:26.15853746 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 24 12:59:26.504069 containerd[1596]: 2025-10-24 12:59:26.158 [INFO][4722] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 24 12:59:26.504069 containerd[1596]: 2025-10-24 12:59:26.365 [INFO][4722] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 24 12:59:26.504069 containerd[1596]: 2025-10-24 12:59:26.367 [INFO][4722] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 24 12:59:26.504069 containerd[1596]: 2025-10-24 12:59:26.373 [INFO][4722] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4" host="localhost" Oct 24 12:59:26.504069 containerd[1596]: 2025-10-24 12:59:26.383 [INFO][4722] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 24 12:59:26.504069 containerd[1596]: 2025-10-24 12:59:26.395 [INFO][4722] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 24 12:59:26.504069 containerd[1596]: 2025-10-24 12:59:26.398 [INFO][4722] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 24 12:59:26.504069 containerd[1596]: 2025-10-24 12:59:26.401 [INFO][4722] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 24 12:59:26.504069 containerd[1596]: 2025-10-24 12:59:26.401 [INFO][4722] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4" host="localhost" Oct 24 12:59:26.504069 containerd[1596]: 2025-10-24 12:59:26.404 [INFO][4722] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4 Oct 24 12:59:26.504069 containerd[1596]: 2025-10-24 12:59:26.411 [INFO][4722] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4" host="localhost" Oct 24 12:59:26.504069 containerd[1596]: 2025-10-24 12:59:26.426 [INFO][4722] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4" host="localhost" Oct 24 12:59:26.504069 containerd[1596]: 2025-10-24 12:59:26.426 [INFO][4722] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4" host="localhost" Oct 24 12:59:26.504069 containerd[1596]: 2025-10-24 12:59:26.426 [INFO][4722] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
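Note the serialization visible across the two IPAM runs: handler [4722] filed its request at 12:59:26.158 but only acquired the host-wide IPAM lock at 12:59:26.365, the same instant handler [4718] released it, so the coredns assignment waited roughly 200 ms behind the csi-node-driver one. A throwaway check of that gap using the timestamps printed above (the acquire time is taken from the coarser log prefix, so it is approximate):

```python
from datetime import datetime

requested = datetime.fromisoformat("2025-10-24 12:59:26.158537")  # from the AutoAssignArgs timestamp
acquired = datetime.fromisoformat("2025-10-24 12:59:26.365000")   # "Acquired host-wide IPAM lock"
print((acquired - requested).total_seconds())  # ~0.206 s spent waiting on the lock
```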
Oct 24 12:59:26.504069 containerd[1596]: 2025-10-24 12:59:26.426 [INFO][4722] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4" HandleID="k8s-pod-network.49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4" Workload="localhost-k8s-coredns--668d6bf9bc--sc6tk-eth0" Oct 24 12:59:26.504568 containerd[1596]: 2025-10-24 12:59:26.445 [INFO][4703] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4" Namespace="kube-system" Pod="coredns-668d6bf9bc-sc6tk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sc6tk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--sc6tk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"172e0745-5548-410c-a502-38447aec237c", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.October, 24, 12, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-sc6tk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie885de8209c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 24 12:59:26.504568 containerd[1596]: 2025-10-24 12:59:26.445 [INFO][4703] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4" Namespace="kube-system" Pod="coredns-668d6bf9bc-sc6tk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sc6tk-eth0" Oct 24 12:59:26.504568 containerd[1596]: 2025-10-24 12:59:26.445 [INFO][4703] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie885de8209c ContainerID="49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4" Namespace="kube-system" Pod="coredns-668d6bf9bc-sc6tk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sc6tk-eth0" Oct 24 12:59:26.504568 containerd[1596]: 2025-10-24 12:59:26.473 [INFO][4703] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4" Namespace="kube-system" Pod="coredns-668d6bf9bc-sc6tk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sc6tk-eth0" Oct 24 12:59:26.504568 
containerd[1596]: 2025-10-24 12:59:26.475 [INFO][4703] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4" Namespace="kube-system" Pod="coredns-668d6bf9bc-sc6tk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sc6tk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--sc6tk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"172e0745-5548-410c-a502-38447aec237c", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.October, 24, 12, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4", Pod:"coredns-668d6bf9bc-sc6tk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie885de8209c", MAC:"ae:4c:c5:92:d6:1a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 24 12:59:26.504568 containerd[1596]: 2025-10-24 12:59:26.485 [INFO][4703] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4" Namespace="kube-system" Pod="coredns-668d6bf9bc-sc6tk" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sc6tk-eth0" Oct 24 12:59:26.515661 systemd-resolved[1295]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 24 12:59:26.530392 containerd[1596]: time="2025-10-24T12:59:26.530194801Z" level=info msg="connecting to shim 49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4" address="unix:///run/containerd/s/79a9214bd49fbff90f56cddd8d1b28df2ed636a6ef1f7f823c495577a113b108" namespace=k8s.io protocol=ttrpc version=3 Oct 24 12:59:26.541830 containerd[1596]: time="2025-10-24T12:59:26.541787438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s2tsg,Uid:a5ee21c4-521a-4300-831d-bec9b2d7f45e,Namespace:calico-system,Attempt:0,} returns sandbox id \"b88275b7abc9ac07e68e5bbbe0b20e2614a4dea518ff306ae365a964a25d1be3\"" Oct 24 12:59:26.544623 containerd[1596]: time="2025-10-24T12:59:26.544167201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 24 12:59:26.563752 systemd[1]: Started 
cri-containerd-49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4.scope - libcontainer container 49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4. Oct 24 12:59:26.576967 systemd-resolved[1295]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 24 12:59:26.772324 containerd[1596]: time="2025-10-24T12:59:26.772186324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sc6tk,Uid:172e0745-5548-410c-a502-38447aec237c,Namespace:kube-system,Attempt:0,} returns sandbox id \"49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4\"" Oct 24 12:59:26.773209 kubelet[2771]: E1024 12:59:26.773166 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:59:26.774774 containerd[1596]: time="2025-10-24T12:59:26.774738830Z" level=info msg="CreateContainer within sandbox \"49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 24 12:59:26.793091 containerd[1596]: time="2025-10-24T12:59:26.793031171Z" level=info msg="Container d57356d942fc4521b43df7a46987226f5f58ea4c7319716186228bcbf3647346: CDI devices from CRI Config.CDIDevices: []" Oct 24 12:59:26.800927 containerd[1596]: time="2025-10-24T12:59:26.800866943Z" level=info msg="CreateContainer within sandbox \"49942219adbd2e958b4d72b5a54b8b0b420989a6ef2510148dd33caed0d7bcc4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d57356d942fc4521b43df7a46987226f5f58ea4c7319716186228bcbf3647346\"" Oct 24 12:59:26.801643 containerd[1596]: time="2025-10-24T12:59:26.801441662Z" level=info msg="StartContainer for \"d57356d942fc4521b43df7a46987226f5f58ea4c7319716186228bcbf3647346\"" Oct 24 12:59:26.802692 containerd[1596]: time="2025-10-24T12:59:26.802583533Z" level=info msg="connecting to shim d57356d942fc4521b43df7a46987226f5f58ea4c7319716186228bcbf3647346" address="unix:///run/containerd/s/79a9214bd49fbff90f56cddd8d1b28df2ed636a6ef1f7f823c495577a113b108" protocol=ttrpc version=3 Oct 24 12:59:26.833738 systemd[1]: Started cri-containerd-d57356d942fc4521b43df7a46987226f5f58ea4c7319716186228bcbf3647346.scope - libcontainer container d57356d942fc4521b43df7a46987226f5f58ea4c7319716186228bcbf3647346. 
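In the WorkloadEndpoint dumps above, the coredns ports are printed as Go struct fields with hexadecimal values (Port:0x35 and Port:0x23c1). Decoded, they are the usual CoreDNS service ports, which a one-off check confirms:

```python
# 0x35 -> 53 (dns / dns-tcp), 0x23c1 -> 9153 (metrics)
print(int("0x35", 16), int("0x23c1", 16))  # 53 9153
```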
Oct 24 12:59:26.866214 containerd[1596]: time="2025-10-24T12:59:26.866164464Z" level=info msg="StartContainer for \"d57356d942fc4521b43df7a46987226f5f58ea4c7319716186228bcbf3647346\" returns successfully" Oct 24 12:59:26.878050 containerd[1596]: time="2025-10-24T12:59:26.877994908Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 24 12:59:26.879575 containerd[1596]: time="2025-10-24T12:59:26.879540286Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 24 12:59:26.879719 containerd[1596]: time="2025-10-24T12:59:26.879665921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 24 12:59:26.879945 kubelet[2771]: E1024 12:59:26.879861 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 24 12:59:26.880006 kubelet[2771]: E1024 12:59:26.879955 2771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 24 12:59:26.880228 kubelet[2771]: E1024 12:59:26.880153 2771 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntf8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-s2tsg_calico-system(a5ee21c4-521a-4300-831d-bec9b2d7f45e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 24 12:59:26.882650 containerd[1596]: time="2025-10-24T12:59:26.882622244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 24 12:59:26.928770 systemd-networkd[1494]: califa692d415f1: Gained IPv6LL Oct 24 12:59:27.211016 kubelet[2771]: E1024 12:59:27.210870 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:59:27.213704 kubelet[2771]: E1024 12:59:27.213649 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f84755f96-zdlgj" podUID="e3fbadc6-9842-46a9-bf09-28743c711629" Oct 24 12:59:27.233734 containerd[1596]: time="2025-10-24T12:59:27.233660731Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 24 12:59:27.385645 containerd[1596]: time="2025-10-24T12:59:27.385444844Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 24 12:59:27.385645 containerd[1596]: time="2025-10-24T12:59:27.385514033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 24 12:59:27.385832 kubelet[2771]: E1024 12:59:27.385754 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 24 12:59:27.385832 kubelet[2771]: E1024 12:59:27.385792 2771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 24 12:59:27.385999 kubelet[2771]: E1024 12:59:27.385909 2771 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntf8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-s2tsg_calico-system(a5ee21c4-521a-4300-831d-bec9b2d7f45e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 24 12:59:27.387958 kubelet[2771]: E1024 12:59:27.387328 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s2tsg" podUID="a5ee21c4-521a-4300-831d-bec9b2d7f45e" Oct 24 12:59:27.549665 kubelet[2771]: I1024 12:59:27.548170 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-sc6tk" podStartSLOduration=41.547686836 podStartE2EDuration="41.547686836s" podCreationTimestamp="2025-10-24 12:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-24 12:59:27.388227943 +0000 UTC m=+47.416016131" watchObservedRunningTime="2025-10-24 12:59:27.547686836 +0000 UTC m=+47.575475024" Oct 24 12:59:28.081379 systemd-networkd[1494]: calie885de8209c: Gained IPv6LL Oct 24 12:59:28.215839 kubelet[2771]: E1024 12:59:28.215779 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:59:28.218502 kubelet[2771]: E1024 12:59:28.218453 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s2tsg" podUID="a5ee21c4-521a-4300-831d-bec9b2d7f45e" Oct 24 12:59:28.399844 systemd-networkd[1494]: cali3f878d2209f: Gained IPv6LL Oct 24 12:59:28.702729 systemd[1]: Started sshd@11-10.0.0.145:22-10.0.0.1:54870.service - OpenSSH per-connection server daemon (10.0.0.1:54870). 
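Every failed pull in this log follows the same pattern: containerd reports "fetch failed after status: 404 Not Found" from ghcr.io, and kubelet then surfaces ErrImagePull followed by ImagePullBackOff for the tag. A rough, stdlib-only sketch of the same registry lookup, assuming GHCR exposes the standard Docker Registry v2 anonymous token flow (the endpoints below are the generic ones, not taken from this log); for a tag that does not exist it should reproduce the 404:

```python
import json
import urllib.error
import urllib.request

IMAGE = "flatcar/calico/apiserver"  # repository path from the log above
TAG = "v3.30.4"

# Anonymous pull token (standard registry token flow, assumed for ghcr.io).
token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{IMAGE}:pull"
with urllib.request.urlopen(token_url) as resp:
    token = json.load(resp)["token"]

# Ask for the tag's manifest; a missing tag yields HTTP 404, which containerd
# surfaces as the "not found" errors seen in this log.
req = urllib.request.Request(
    f"https://ghcr.io/v2/{IMAGE}/manifests/{TAG}",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.oci.image.index.v1+json",
    },
)
try:
    with urllib.request.urlopen(req) as resp:
        print("tag exists, media type:", resp.headers.get("Content-Type"))
except urllib.error.HTTPError as err:
    print("registry returned", err.code)  # expect 404 for this tag
```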
Oct 24 12:59:28.762777 sshd[4895]: Accepted publickey for core from 10.0.0.1 port 54870 ssh2: RSA SHA256:5QnmBGiMs+NgtP00jEHzaQJgePAYLYRQ869UILbwcj0 Oct 24 12:59:28.765324 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 24 12:59:28.770493 systemd-logind[1573]: New session 12 of user core. Oct 24 12:59:28.778748 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 24 12:59:28.890972 sshd[4898]: Connection closed by 10.0.0.1 port 54870 Oct 24 12:59:28.891291 sshd-session[4895]: pam_unix(sshd:session): session closed for user core Oct 24 12:59:28.902454 systemd[1]: sshd@11-10.0.0.145:22-10.0.0.1:54870.service: Deactivated successfully. Oct 24 12:59:28.904472 systemd[1]: session-12.scope: Deactivated successfully. Oct 24 12:59:28.905482 systemd-logind[1573]: Session 12 logged out. Waiting for processes to exit. Oct 24 12:59:28.908899 systemd[1]: Started sshd@12-10.0.0.145:22-10.0.0.1:54880.service - OpenSSH per-connection server daemon (10.0.0.1:54880). Oct 24 12:59:28.909626 systemd-logind[1573]: Removed session 12. Oct 24 12:59:28.964290 sshd[4912]: Accepted publickey for core from 10.0.0.1 port 54880 ssh2: RSA SHA256:5QnmBGiMs+NgtP00jEHzaQJgePAYLYRQ869UILbwcj0 Oct 24 12:59:28.966172 sshd-session[4912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 24 12:59:28.970695 systemd-logind[1573]: New session 13 of user core. Oct 24 12:59:28.987746 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 24 12:59:29.133643 sshd[4915]: Connection closed by 10.0.0.1 port 54880 Oct 24 12:59:29.133730 sshd-session[4912]: pam_unix(sshd:session): session closed for user core Oct 24 12:59:29.144681 systemd[1]: sshd@12-10.0.0.145:22-10.0.0.1:54880.service: Deactivated successfully. Oct 24 12:59:29.148588 systemd[1]: session-13.scope: Deactivated successfully. Oct 24 12:59:29.152956 systemd-logind[1573]: Session 13 logged out. Waiting for processes to exit. Oct 24 12:59:29.157427 systemd[1]: Started sshd@13-10.0.0.145:22-10.0.0.1:54896.service - OpenSSH per-connection server daemon (10.0.0.1:54896). Oct 24 12:59:29.158549 systemd-logind[1573]: Removed session 13. Oct 24 12:59:29.218135 kubelet[2771]: E1024 12:59:29.218004 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:59:29.235298 sshd[4926]: Accepted publickey for core from 10.0.0.1 port 54896 ssh2: RSA SHA256:5QnmBGiMs+NgtP00jEHzaQJgePAYLYRQ869UILbwcj0 Oct 24 12:59:29.237221 sshd-session[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 24 12:59:29.241854 systemd-logind[1573]: New session 14 of user core. Oct 24 12:59:29.252748 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 24 12:59:29.374009 sshd[4929]: Connection closed by 10.0.0.1 port 54896 Oct 24 12:59:29.374368 sshd-session[4926]: pam_unix(sshd:session): session closed for user core Oct 24 12:59:29.379473 systemd[1]: sshd@13-10.0.0.145:22-10.0.0.1:54896.service: Deactivated successfully. Oct 24 12:59:29.381668 systemd[1]: session-14.scope: Deactivated successfully. Oct 24 12:59:29.382461 systemd-logind[1573]: Session 14 logged out. Waiting for processes to exit. Oct 24 12:59:29.383719 systemd-logind[1573]: Removed session 14. 
Oct 24 12:59:34.080203 containerd[1596]: time="2025-10-24T12:59:34.079395823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 24 12:59:34.386719 systemd[1]: Started sshd@14-10.0.0.145:22-10.0.0.1:56390.service - OpenSSH per-connection server daemon (10.0.0.1:56390). Oct 24 12:59:34.451670 sshd[4952]: Accepted publickey for core from 10.0.0.1 port 56390 ssh2: RSA SHA256:5QnmBGiMs+NgtP00jEHzaQJgePAYLYRQ869UILbwcj0 Oct 24 12:59:34.453004 sshd-session[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 24 12:59:34.457269 systemd-logind[1573]: New session 15 of user core. Oct 24 12:59:34.467736 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 24 12:59:34.567995 containerd[1596]: time="2025-10-24T12:59:34.567944827Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 24 12:59:34.601571 sshd[4955]: Connection closed by 10.0.0.1 port 56390 Oct 24 12:59:34.603797 sshd-session[4952]: pam_unix(sshd:session): session closed for user core Oct 24 12:59:34.609359 systemd[1]: sshd@14-10.0.0.145:22-10.0.0.1:56390.service: Deactivated successfully. Oct 24 12:59:34.611867 systemd[1]: session-15.scope: Deactivated successfully. Oct 24 12:59:34.613332 systemd-logind[1573]: Session 15 logged out. Waiting for processes to exit. Oct 24 12:59:34.614699 systemd-logind[1573]: Removed session 15. Oct 24 12:59:34.693812 containerd[1596]: time="2025-10-24T12:59:34.693648510Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 24 12:59:34.693812 containerd[1596]: time="2025-10-24T12:59:34.693710627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 24 12:59:34.693959 kubelet[2771]: E1024 12:59:34.693898 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 24 12:59:34.694297 kubelet[2771]: E1024 12:59:34.693959 2771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 24 12:59:34.694297 kubelet[2771]: E1024 12:59:34.694123 2771 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f0d78bab2caa44dca4b33359ea14bbbb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6jn4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-548f66845d-v54tb_calico-system(84b48c97-55ac-463f-8b03-ea5f0339b62a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 24 12:59:34.696200 containerd[1596]: time="2025-10-24T12:59:34.696173986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 24 12:59:35.110782 containerd[1596]: time="2025-10-24T12:59:35.110733423Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 24 12:59:35.203632 containerd[1596]: time="2025-10-24T12:59:35.203533429Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 24 12:59:35.203632 containerd[1596]: time="2025-10-24T12:59:35.203572022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 24 12:59:35.203841 kubelet[2771]: E1024 12:59:35.203779 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 24 12:59:35.203904 kubelet[2771]: E1024 12:59:35.203840 2771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 24 12:59:35.204417 kubelet[2771]: E1024 12:59:35.204078 2771 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jn4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-548f66845d-v54tb_calico-system(84b48c97-55ac-463f-8b03-ea5f0339b62a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 24 12:59:35.204566 containerd[1596]: time="2025-10-24T12:59:35.204184450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 24 12:59:35.205912 kubelet[2771]: E1024 12:59:35.205870 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548f66845d-v54tb" podUID="84b48c97-55ac-463f-8b03-ea5f0339b62a" Oct 24 12:59:35.670983 containerd[1596]: time="2025-10-24T12:59:35.670945688Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 
24 12:59:35.761505 containerd[1596]: time="2025-10-24T12:59:35.761456412Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 24 12:59:35.761561 containerd[1596]: time="2025-10-24T12:59:35.761537153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 24 12:59:35.761803 kubelet[2771]: E1024 12:59:35.761735 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 24 12:59:35.761803 kubelet[2771]: E1024 12:59:35.761798 2771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 24 12:59:35.762425 kubelet[2771]: E1024 12:59:35.761932 2771 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vzkwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod calico-apiserver-5f84755f96-kq7xf_calico-apiserver(1df758c2-0a15-4c25-ae50-92f07922f451): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 24 12:59:35.763237 kubelet[2771]: E1024 12:59:35.763186 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f84755f96-kq7xf" podUID="1df758c2-0a15-4c25-ae50-92f07922f451" Oct 24 12:59:36.078499 containerd[1596]: time="2025-10-24T12:59:36.078162245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 24 12:59:36.583419 containerd[1596]: time="2025-10-24T12:59:36.583350193Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 24 12:59:36.584630 containerd[1596]: time="2025-10-24T12:59:36.584556947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 24 12:59:36.584791 containerd[1596]: time="2025-10-24T12:59:36.584650763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 24 12:59:36.584856 kubelet[2771]: E1024 12:59:36.584811 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 24 12:59:36.584902 kubelet[2771]: E1024 12:59:36.584870 2771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 24 12:59:36.585085 kubelet[2771]: E1024 12:59:36.585037 2771 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bkjtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cf94597f6-kjgvp_calico-system(c2c88598-5e4d-4913-b78b-8bca3cd87835): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 24 12:59:36.586253 kubelet[2771]: E1024 12:59:36.586206 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cf94597f6-kjgvp" podUID="c2c88598-5e4d-4913-b78b-8bca3cd87835" Oct 24 12:59:37.077925 containerd[1596]: time="2025-10-24T12:59:37.077875178Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 24 12:59:37.388858 containerd[1596]: time="2025-10-24T12:59:37.388777077Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 24 12:59:37.389997 containerd[1596]: time="2025-10-24T12:59:37.389957602Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 24 12:59:37.390092 containerd[1596]: time="2025-10-24T12:59:37.389993359Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 24 12:59:37.390227 kubelet[2771]: E1024 12:59:37.390171 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 24 12:59:37.390675 kubelet[2771]: E1024 12:59:37.390231 2771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 24 12:59:37.390675 kubelet[2771]: E1024 12:59:37.390361 2771 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gckmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mbvsm_calico-system(0234b6fc-2fab-4065-b52e-c071b4ca25f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 24 12:59:37.391571 kubelet[2771]: E1024 12:59:37.391527 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mbvsm" podUID="0234b6fc-2fab-4065-b52e-c071b4ca25f6" Oct 24 12:59:39.618711 systemd[1]: Started sshd@15-10.0.0.145:22-10.0.0.1:56404.service - OpenSSH per-connection server daemon (10.0.0.1:56404). Oct 24 12:59:39.680209 sshd[4974]: Accepted publickey for core from 10.0.0.1 port 56404 ssh2: RSA SHA256:5QnmBGiMs+NgtP00jEHzaQJgePAYLYRQ869UILbwcj0 Oct 24 12:59:39.681534 sshd-session[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 24 12:59:39.686053 systemd-logind[1573]: New session 16 of user core. Oct 24 12:59:39.696731 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 24 12:59:39.816302 sshd[4977]: Connection closed by 10.0.0.1 port 56404 Oct 24 12:59:39.816719 sshd-session[4974]: pam_unix(sshd:session): session closed for user core Oct 24 12:59:39.821460 systemd[1]: sshd@15-10.0.0.145:22-10.0.0.1:56404.service: Deactivated successfully. Oct 24 12:59:39.823571 systemd[1]: session-16.scope: Deactivated successfully. Oct 24 12:59:39.824440 systemd-logind[1573]: Session 16 logged out. Waiting for processes to exit. Oct 24 12:59:39.825514 systemd-logind[1573]: Removed session 16. 
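The pull failures above (apiserver, kube-controllers, goldmane) all end in the same 404 from ghcr.io: containerd cannot resolve the v3.30.4 tag for the flatcar/calico repositories. The following is a minimal Python sketch of how one might confirm that from outside the node, assuming the repositories allow anonymous pulls and expose the standard Docker Registry v2 / OCI distribution API; the token endpoint and Accept headers are assumptions based on that API, not values taken from this log.

    # Minimal sketch: check whether an image tag resolves on ghcr.io, mirroring the
    # 404 the containerd entries above report. Assumes anonymous pull access and the
    # standard Docker Registry v2 / OCI distribution API; adjust for private repos.
    import json
    import urllib.error
    import urllib.request

    def tag_exists(repository: str, tag: str) -> bool:
        # 1) obtain an anonymous pull token for the repository
        token_url = (
            "https://ghcr.io/token?service=ghcr.io"
            f"&scope=repository:{repository}:pull"
        )
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]

        # 2) ask for the manifest; 200 means the tag exists, 404 matches the log
        manifest_url = f"https://ghcr.io/v2/{repository}/manifests/{tag}"
        req = urllib.request.Request(manifest_url, headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        })
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.status == 200
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    if __name__ == "__main__":
        print(tag_exists("flatcar/calico/apiserver", "v3.30.4"))

A 404 from the manifests endpoint corresponds to the "not found" the kubelet reports; a 200 would instead point at a node-side resolution problem.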
Oct 24 12:59:40.079882 containerd[1596]: time="2025-10-24T12:59:40.079258044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 24 12:59:40.495995 containerd[1596]: time="2025-10-24T12:59:40.495937628Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 24 12:59:40.497190 containerd[1596]: time="2025-10-24T12:59:40.497156072Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 24 12:59:40.497258 containerd[1596]: time="2025-10-24T12:59:40.497226847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 24 12:59:40.497363 kubelet[2771]: E1024 12:59:40.497316 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 24 12:59:40.497729 kubelet[2771]: E1024 12:59:40.497363 2771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 24 12:59:40.497729 kubelet[2771]: E1024 12:59:40.497486 2771 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntf8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-s2tsg_calico-system(a5ee21c4-521a-4300-831d-bec9b2d7f45e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 24 12:59:40.499339 containerd[1596]: time="2025-10-24T12:59:40.499297063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 24 12:59:40.813563 containerd[1596]: time="2025-10-24T12:59:40.813411642Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 24 12:59:40.857138 containerd[1596]: time="2025-10-24T12:59:40.857077656Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 24 12:59:40.861314 containerd[1596]: time="2025-10-24T12:59:40.861219028Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 24 12:59:40.861574 kubelet[2771]: E1024 12:59:40.861515 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 24 12:59:40.861574 kubelet[2771]: E1024 12:59:40.861572 2771 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 24 12:59:40.861767 kubelet[2771]: E1024 12:59:40.861713 2771 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntf8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-s2tsg_calico-system(a5ee21c4-521a-4300-831d-bec9b2d7f45e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 24 12:59:40.862921 kubelet[2771]: E1024 12:59:40.862886 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-s2tsg" podUID="a5ee21c4-521a-4300-831d-bec9b2d7f45e" Oct 24 12:59:41.077669 containerd[1596]: time="2025-10-24T12:59:41.077481374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 24 12:59:41.620721 containerd[1596]: time="2025-10-24T12:59:41.620667611Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 24 12:59:41.659045 containerd[1596]: time="2025-10-24T12:59:41.659003243Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 24 12:59:41.659123 containerd[1596]: time="2025-10-24T12:59:41.659053539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 24 12:59:41.659246 kubelet[2771]: E1024 12:59:41.659180 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 24 12:59:41.659498 kubelet[2771]: E1024 12:59:41.659249 2771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 24 12:59:41.659498 kubelet[2771]: E1024 12:59:41.659368 2771 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqthm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f84755f96-zdlgj_calico-apiserver(e3fbadc6-9842-46a9-bf09-28743c711629): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 24 12:59:41.660549 kubelet[2771]: E1024 12:59:41.660517 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f84755f96-zdlgj" podUID="e3fbadc6-9842-46a9-bf09-28743c711629" Oct 24 12:59:44.832729 systemd[1]: Started sshd@16-10.0.0.145:22-10.0.0.1:51518.service - OpenSSH per-connection server daemon (10.0.0.1:51518). Oct 24 12:59:44.904028 sshd[4998]: Accepted publickey for core from 10.0.0.1 port 51518 ssh2: RSA SHA256:5QnmBGiMs+NgtP00jEHzaQJgePAYLYRQ869UILbwcj0 Oct 24 12:59:44.905548 sshd-session[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 24 12:59:44.910254 systemd-logind[1573]: New session 17 of user core. Oct 24 12:59:44.919748 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 24 12:59:45.036686 sshd[5001]: Connection closed by 10.0.0.1 port 51518 Oct 24 12:59:45.037093 sshd-session[4998]: pam_unix(sshd:session): session closed for user core Oct 24 12:59:45.041320 systemd[1]: sshd@16-10.0.0.145:22-10.0.0.1:51518.service: Deactivated successfully. Oct 24 12:59:45.043566 systemd[1]: session-17.scope: Deactivated successfully. Oct 24 12:59:45.045279 systemd-logind[1573]: Session 17 logged out. Waiting for processes to exit. Oct 24 12:59:45.047127 systemd-logind[1573]: Removed session 17. 
Oct 24 12:59:48.078845 kubelet[2771]: E1024 12:59:48.078752 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548f66845d-v54tb" podUID="84b48c97-55ac-463f-8b03-ea5f0339b62a" Oct 24 12:59:50.049739 systemd[1]: Started sshd@17-10.0.0.145:22-10.0.0.1:51532.service - OpenSSH per-connection server daemon (10.0.0.1:51532). Oct 24 12:59:50.080613 kubelet[2771]: E1024 12:59:50.079086 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cf94597f6-kjgvp" podUID="c2c88598-5e4d-4913-b78b-8bca3cd87835" Oct 24 12:59:50.112767 sshd[5018]: Accepted publickey for core from 10.0.0.1 port 51532 ssh2: RSA SHA256:5QnmBGiMs+NgtP00jEHzaQJgePAYLYRQ869UILbwcj0 Oct 24 12:59:50.114355 sshd-session[5018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 24 12:59:50.118880 systemd-logind[1573]: New session 18 of user core. Oct 24 12:59:50.128758 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 24 12:59:50.245749 sshd[5021]: Connection closed by 10.0.0.1 port 51532 Oct 24 12:59:50.246181 sshd-session[5018]: pam_unix(sshd:session): session closed for user core Oct 24 12:59:50.258662 systemd[1]: sshd@17-10.0.0.145:22-10.0.0.1:51532.service: Deactivated successfully. Oct 24 12:59:50.260774 systemd[1]: session-18.scope: Deactivated successfully. Oct 24 12:59:50.261640 systemd-logind[1573]: Session 18 logged out. Waiting for processes to exit. Oct 24 12:59:50.264638 systemd[1]: Started sshd@18-10.0.0.145:22-10.0.0.1:51534.service - OpenSSH per-connection server daemon (10.0.0.1:51534). Oct 24 12:59:50.265333 systemd-logind[1573]: Removed session 18. Oct 24 12:59:50.323372 sshd[5035]: Accepted publickey for core from 10.0.0.1 port 51534 ssh2: RSA SHA256:5QnmBGiMs+NgtP00jEHzaQJgePAYLYRQ869UILbwcj0 Oct 24 12:59:50.324659 sshd-session[5035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 24 12:59:50.328833 systemd-logind[1573]: New session 19 of user core. Oct 24 12:59:50.340712 systemd[1]: Started session-19.scope - Session 19 of User core. 
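As the entries above show, the kubelet reports ErrImagePull on a fresh pull attempt and ImagePullBackOff on later pod syncs, retrying with an exponentially growing delay between pulls. Below is an illustrative sketch of that doubling-with-cap pattern; the 10-second base and 300-second cap are commonly cited kubelet defaults and are assumptions here, not values read from this node's configuration.

    # Illustrative sketch of the doubling backoff behind ImagePullBackOff:
    # each failed pull roughly doubles the wait before the next attempt, up to a cap.
    def backoff_delays(base: float = 10.0, cap: float = 300.0, attempts: int = 8):
        delay = base
        for _ in range(attempts):
            yield delay
            delay = min(delay * 2, cap)

    if __name__ == "__main__":
        # e.g. 10, 20, 40, 80, 160, 300, 300, 300 seconds between pull attempts
        print(list(backoff_delays()))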
Oct 24 12:59:50.530844 sshd[5038]: Connection closed by 10.0.0.1 port 51534 Oct 24 12:59:50.531207 sshd-session[5035]: pam_unix(sshd:session): session closed for user core Oct 24 12:59:50.540690 systemd[1]: sshd@18-10.0.0.145:22-10.0.0.1:51534.service: Deactivated successfully. Oct 24 12:59:50.542875 systemd[1]: session-19.scope: Deactivated successfully. Oct 24 12:59:50.543639 systemd-logind[1573]: Session 19 logged out. Waiting for processes to exit. Oct 24 12:59:50.547222 systemd[1]: Started sshd@19-10.0.0.145:22-10.0.0.1:51548.service - OpenSSH per-connection server daemon (10.0.0.1:51548). Oct 24 12:59:50.548306 systemd-logind[1573]: Removed session 19. Oct 24 12:59:50.621499 sshd[5049]: Accepted publickey for core from 10.0.0.1 port 51548 ssh2: RSA SHA256:5QnmBGiMs+NgtP00jEHzaQJgePAYLYRQ869UILbwcj0 Oct 24 12:59:50.622804 sshd-session[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 24 12:59:50.627193 systemd-logind[1573]: New session 20 of user core. Oct 24 12:59:50.634744 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 24 12:59:51.093617 kubelet[2771]: E1024 12:59:51.093528 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f84755f96-kq7xf" podUID="1df758c2-0a15-4c25-ae50-92f07922f451" Oct 24 12:59:51.191815 kubelet[2771]: E1024 12:59:51.191769 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:59:51.241940 sshd[5052]: Connection closed by 10.0.0.1 port 51548 Oct 24 12:59:51.244755 sshd-session[5049]: pam_unix(sshd:session): session closed for user core Oct 24 12:59:51.259545 systemd[1]: sshd@19-10.0.0.145:22-10.0.0.1:51548.service: Deactivated successfully. Oct 24 12:59:51.262287 systemd[1]: session-20.scope: Deactivated successfully. Oct 24 12:59:51.267177 systemd-logind[1573]: Session 20 logged out. Waiting for processes to exit. Oct 24 12:59:51.268924 systemd[1]: Started sshd@20-10.0.0.145:22-10.0.0.1:51556.service - OpenSSH per-connection server daemon (10.0.0.1:51556). Oct 24 12:59:51.275706 systemd-logind[1573]: Removed session 20. Oct 24 12:59:51.322537 sshd[5083]: Accepted publickey for core from 10.0.0.1 port 51556 ssh2: RSA SHA256:5QnmBGiMs+NgtP00jEHzaQJgePAYLYRQ869UILbwcj0 Oct 24 12:59:51.324353 sshd-session[5083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 24 12:59:51.330034 systemd-logind[1573]: New session 21 of user core. Oct 24 12:59:51.333769 systemd[1]: Started session-21.scope - Session 21 of User core. 
Oct 24 12:59:51.346175 containerd[1596]: time="2025-10-24T12:59:51.346065839Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1bb8c3d10ff123d3fb374cbda75bf75dfbf9695059fb87524eb6a09cd848da24\" id:\"5c494f29c4283b60b7058d3a7e1bfdefe13a753416482d1d260c5fa58e469ce5\" pid:5086 exited_at:{seconds:1761310791 nanos:345637974}" Oct 24 12:59:51.348934 kubelet[2771]: E1024 12:59:51.348898 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:59:51.436092 containerd[1596]: time="2025-10-24T12:59:51.436040461Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1bb8c3d10ff123d3fb374cbda75bf75dfbf9695059fb87524eb6a09cd848da24\" id:\"bf804f1b04f87db974144e4af3ffff4f9dd5ec0cdf15260af7bc6a03dd36aca1\" pid:5114 exited_at:{seconds:1761310791 nanos:435689112}" Oct 24 12:59:51.565277 sshd[5100]: Connection closed by 10.0.0.1 port 51556 Oct 24 12:59:51.565710 sshd-session[5083]: pam_unix(sshd:session): session closed for user core Oct 24 12:59:51.574800 systemd[1]: sshd@20-10.0.0.145:22-10.0.0.1:51556.service: Deactivated successfully. Oct 24 12:59:51.577073 systemd[1]: session-21.scope: Deactivated successfully. Oct 24 12:59:51.577942 systemd-logind[1573]: Session 21 logged out. Waiting for processes to exit. Oct 24 12:59:51.581567 systemd[1]: Started sshd@21-10.0.0.145:22-10.0.0.1:51564.service - OpenSSH per-connection server daemon (10.0.0.1:51564). Oct 24 12:59:51.582465 systemd-logind[1573]: Removed session 21. Oct 24 12:59:51.641176 sshd[5137]: Accepted publickey for core from 10.0.0.1 port 51564 ssh2: RSA SHA256:5QnmBGiMs+NgtP00jEHzaQJgePAYLYRQ869UILbwcj0 Oct 24 12:59:51.643129 sshd-session[5137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 24 12:59:51.648622 systemd-logind[1573]: New session 22 of user core. Oct 24 12:59:51.656730 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 24 12:59:51.772724 sshd[5140]: Connection closed by 10.0.0.1 port 51564 Oct 24 12:59:51.773056 sshd-session[5137]: pam_unix(sshd:session): session closed for user core Oct 24 12:59:51.778343 systemd[1]: sshd@21-10.0.0.145:22-10.0.0.1:51564.service: Deactivated successfully. Oct 24 12:59:51.780634 systemd[1]: session-22.scope: Deactivated successfully. Oct 24 12:59:51.781442 systemd-logind[1573]: Session 22 logged out. Waiting for processes to exit. Oct 24 12:59:51.782698 systemd-logind[1573]: Removed session 22. 
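The dns.go "Nameserver limits exceeded" entries above mean the node's resolv.conf lists more nameservers than the resolver limit of three, so the kubelet keeps only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) when building pod DNS configuration. A simplified sketch of that truncation follows; the parsing is illustrative, not the kubelet's actual resolv.conf handling.

    # Sketch of the truncation behind "Nameserver limits exceeded": only the first
    # three nameserver entries from resolv.conf are applied, matching the log line.
    MAX_NAMESERVERS = 3

    def effective_nameservers(resolv_conf_text: str) -> list[str]:
        servers = [
            line.split()[1]
            for line in resolv_conf_text.splitlines()
            if line.strip().startswith("nameserver") and len(line.split()) >= 2
        ]
        return servers[:MAX_NAMESERVERS]

    if __name__ == "__main__":
        sample = (
            "nameserver 1.1.1.1\n"
            "nameserver 1.0.0.1\n"
            "nameserver 8.8.8.8\n"
            "nameserver 9.9.9.9\n"
        )
        # -> ['1.1.1.1', '1.0.0.1', '8.8.8.8'], matching the applied line in the log
        print(effective_nameservers(sample))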
Oct 24 12:59:52.077673 kubelet[2771]: E1024 12:59:52.077543 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mbvsm" podUID="0234b6fc-2fab-4065-b52e-c071b4ca25f6" Oct 24 12:59:54.078620 kubelet[2771]: E1024 12:59:54.078415 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f84755f96-zdlgj" podUID="e3fbadc6-9842-46a9-bf09-28743c711629" Oct 24 12:59:54.080114 kubelet[2771]: E1024 12:59:54.078842 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s2tsg" podUID="a5ee21c4-521a-4300-831d-bec9b2d7f45e" Oct 24 12:59:55.077377 kubelet[2771]: E1024 12:59:55.077330 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 12:59:56.796736 systemd[1]: Started sshd@22-10.0.0.145:22-10.0.0.1:42256.service - OpenSSH per-connection server daemon (10.0.0.1:42256). Oct 24 12:59:56.863198 sshd[5158]: Accepted publickey for core from 10.0.0.1 port 42256 ssh2: RSA SHA256:5QnmBGiMs+NgtP00jEHzaQJgePAYLYRQ869UILbwcj0 Oct 24 12:59:56.865488 sshd-session[5158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 24 12:59:56.871651 systemd-logind[1573]: New session 23 of user core. Oct 24 12:59:56.875789 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 24 12:59:56.990973 sshd[5161]: Connection closed by 10.0.0.1 port 42256 Oct 24 12:59:56.991857 sshd-session[5158]: pam_unix(sshd:session): session closed for user core Oct 24 12:59:56.997853 systemd[1]: sshd@22-10.0.0.145:22-10.0.0.1:42256.service: Deactivated successfully. Oct 24 12:59:56.999893 systemd[1]: session-23.scope: Deactivated successfully. 
Oct 24 12:59:57.000985 systemd-logind[1573]: Session 23 logged out. Waiting for processes to exit. Oct 24 12:59:57.002914 systemd-logind[1573]: Removed session 23. Oct 24 13:00:01.084618 containerd[1596]: time="2025-10-24T13:00:01.084541866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 24 13:00:02.010680 systemd[1]: Started sshd@23-10.0.0.145:22-10.0.0.1:42258.service - OpenSSH per-connection server daemon (10.0.0.1:42258). Oct 24 13:00:02.077159 kubelet[2771]: E1024 13:00:02.077121 2771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 13:00:02.082425 sshd[5175]: Accepted publickey for core from 10.0.0.1 port 42258 ssh2: RSA SHA256:5QnmBGiMs+NgtP00jEHzaQJgePAYLYRQ869UILbwcj0 Oct 24 13:00:02.084309 sshd-session[5175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 24 13:00:02.089194 systemd-logind[1573]: New session 24 of user core. Oct 24 13:00:02.099741 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 24 13:00:02.216284 sshd[5178]: Connection closed by 10.0.0.1 port 42258 Oct 24 13:00:02.216963 sshd-session[5175]: pam_unix(sshd:session): session closed for user core Oct 24 13:00:02.221164 systemd[1]: sshd@23-10.0.0.145:22-10.0.0.1:42258.service: Deactivated successfully. Oct 24 13:00:02.223308 systemd[1]: session-24.scope: Deactivated successfully. Oct 24 13:00:02.225963 systemd-logind[1573]: Session 24 logged out. Waiting for processes to exit. Oct 24 13:00:02.226956 systemd-logind[1573]: Removed session 24. Oct 24 13:00:02.272529 containerd[1596]: time="2025-10-24T13:00:02.272421516Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 24 13:00:02.273686 containerd[1596]: time="2025-10-24T13:00:02.273645428Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 24 13:00:02.273789 containerd[1596]: time="2025-10-24T13:00:02.273730850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 24 13:00:02.273952 kubelet[2771]: E1024 13:00:02.273899 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 24 13:00:02.273999 kubelet[2771]: E1024 13:00:02.273964 2771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 24 13:00:02.274156 kubelet[2771]: E1024 13:00:02.274115 2771 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bkjtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cf94597f6-kjgvp_calico-system(c2c88598-5e4d-4913-b78b-8bca3cd87835): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 24 13:00:02.275306 kubelet[2771]: E1024 13:00:02.275266 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cf94597f6-kjgvp" podUID="c2c88598-5e4d-4913-b78b-8bca3cd87835" Oct 24 13:00:03.076951 kubelet[2771]: E1024 13:00:03.076839 2771 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 24 13:00:03.078433 containerd[1596]: time="2025-10-24T13:00:03.078392086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 24 13:00:06.076402 containerd[1596]: time="2025-10-24T13:00:06.076348646Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 24 13:00:06.203659 containerd[1596]: time="2025-10-24T13:00:06.203583298Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 24 13:00:06.203798 containerd[1596]: time="2025-10-24T13:00:06.203655124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 24 13:00:06.203828 kubelet[2771]: E1024 13:00:06.203787 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 24 13:00:06.203828 kubelet[2771]: E1024 13:00:06.203819 2771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 24 13:00:06.204177 containerd[1596]: time="2025-10-24T13:00:06.204047908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 24 13:00:06.210720 kubelet[2771]: E1024 13:00:06.204050 2771 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f0d78bab2caa44dca4b33359ea14bbbb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6jn4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod whisker-548f66845d-v54tb_calico-system(84b48c97-55ac-463f-8b03-ea5f0339b62a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 24 13:00:07.230885 systemd[1]: Started sshd@24-10.0.0.145:22-10.0.0.1:40538.service - OpenSSH per-connection server daemon (10.0.0.1:40538). Oct 24 13:00:07.293078 sshd[5199]: Accepted publickey for core from 10.0.0.1 port 40538 ssh2: RSA SHA256:5QnmBGiMs+NgtP00jEHzaQJgePAYLYRQ869UILbwcj0 Oct 24 13:00:07.294399 sshd-session[5199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 24 13:00:07.298950 systemd-logind[1573]: New session 25 of user core. Oct 24 13:00:07.308732 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 24 13:00:07.418489 sshd[5202]: Connection closed by 10.0.0.1 port 40538 Oct 24 13:00:07.418851 sshd-session[5199]: pam_unix(sshd:session): session closed for user core Oct 24 13:00:07.423186 systemd[1]: sshd@24-10.0.0.145:22-10.0.0.1:40538.service: Deactivated successfully. Oct 24 13:00:07.425318 systemd[1]: session-25.scope: Deactivated successfully. Oct 24 13:00:07.426148 systemd-logind[1573]: Session 25 logged out. Waiting for processes to exit. Oct 24 13:00:07.427436 systemd-logind[1573]: Removed session 25. Oct 24 13:00:08.582993 containerd[1596]: time="2025-10-24T13:00:08.582931929Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 24 13:00:08.599176 containerd[1596]: time="2025-10-24T13:00:08.599113241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 24 13:00:08.599357 containerd[1596]: time="2025-10-24T13:00:08.599187221Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 24 13:00:08.599501 kubelet[2771]: E1024 13:00:08.599435 2771 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 24 13:00:08.599931 kubelet[2771]: E1024 13:00:08.599501 2771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 24 13:00:08.599931 kubelet[2771]: E1024 13:00:08.599770 2771 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gckmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mbvsm_calico-system(0234b6fc-2fab-4065-b52e-c071b4ca25f6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 24 13:00:08.600098 containerd[1596]: time="2025-10-24T13:00:08.599961728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 24 13:00:08.601226 kubelet[2771]: E1024 13:00:08.601167 2771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found\"" pod="calico-system/goldmane-666569f655-mbvsm" podUID="0234b6fc-2fab-4065-b52e-c071b4ca25f6"