Nov 1 10:02:24.572037 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Sat Nov 1 08:12:41 -00 2025
Nov 1 10:02:24.572081 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=91cbcb3658f876d239d31cc29b206c4e950f20e536a8e14bd58a23c6f0ecf128
Nov 1 10:02:24.572092 kernel: BIOS-provided physical RAM map:
Nov 1 10:02:24.572099 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 1 10:02:24.572106 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 1 10:02:24.572113 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 1 10:02:24.572121 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 1 10:02:24.572128 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 1 10:02:24.572138 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 1 10:02:24.572144 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 1 10:02:24.572154 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 1 10:02:24.572161 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 1 10:02:24.572167 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 1 10:02:24.572174 kernel: NX (Execute Disable) protection: active
Nov 1 10:02:24.572183 kernel: APIC: Static calls initialized
Nov 1 10:02:24.572192 kernel: SMBIOS 2.8 present.
Nov 1 10:02:24.572202 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 1 10:02:24.572210 kernel: DMI: Memory slots populated: 1/1
Nov 1 10:02:24.572217 kernel: Hypervisor detected: KVM
Nov 1 10:02:24.572225 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 1 10:02:24.572232 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 10:02:24.572240 kernel: kvm-clock: using sched offset of 4725868272 cycles
Nov 1 10:02:24.572248 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 10:02:24.572256 kernel: tsc: Detected 2794.750 MHz processor
Nov 1 10:02:24.572267 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 10:02:24.572275 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 10:02:24.572283 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 1 10:02:24.572291 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 1 10:02:24.572299 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 10:02:24.572306 kernel: Using GB pages for direct mapping
Nov 1 10:02:24.572314 kernel: ACPI: Early table checksum verification disabled
Nov 1 10:02:24.572333 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 1 10:02:24.572341 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 10:02:24.572349 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 10:02:24.572356 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 10:02:24.572364 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 1 10:02:24.572384 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 10:02:24.572392 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 10:02:24.572402 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 10:02:24.572410 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 10:02:24.572422 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Nov 1 10:02:24.572430 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Nov 1 10:02:24.572438 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 1 10:02:24.572448 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Nov 1 10:02:24.572456 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Nov 1 10:02:24.572464 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Nov 1 10:02:24.572472 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Nov 1 10:02:24.572480 kernel: No NUMA configuration found
Nov 1 10:02:24.572488 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 1 10:02:24.572496 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Nov 1 10:02:24.572506 kernel: Zone ranges:
Nov 1 10:02:24.572514 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 10:02:24.572522 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 1 10:02:24.572530 kernel: Normal empty
Nov 1 10:02:24.572538 kernel: Device empty
Nov 1 10:02:24.572546 kernel: Movable zone start for each node
Nov 1 10:02:24.572553 kernel: Early memory node ranges
Nov 1 10:02:24.572564 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 1 10:02:24.572572 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 1 10:02:24.572580 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 1 10:02:24.572588 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 10:02:24.572596 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 1 10:02:24.572604 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 1 10:02:24.572615 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 1 10:02:24.572623 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 10:02:24.572633 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 10:02:24.572641 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 10:02:24.572651 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 10:02:24.572659 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 10:02:24.572667 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 10:02:24.572675 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 10:02:24.572683 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 10:02:24.572693 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 1 10:02:24.572701 kernel: TSC deadline timer available
Nov 1 10:02:24.572709 kernel: CPU topo: Max. logical packages: 1
Nov 1 10:02:24.572717 kernel: CPU topo: Max. logical dies: 1
Nov 1 10:02:24.572725 kernel: CPU topo: Max. dies per package: 1
Nov 1 10:02:24.572733 kernel: CPU topo: Max. threads per core: 1
Nov 1 10:02:24.572740 kernel: CPU topo: Num. cores per package: 4
Nov 1 10:02:24.572750 kernel: CPU topo: Num. threads per package: 4
Nov 1 10:02:24.572758 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 1 10:02:24.572766 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 1 10:02:24.572774 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 1 10:02:24.572782 kernel: kvm-guest: setup PV sched yield
Nov 1 10:02:24.572790 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 1 10:02:24.572798 kernel: Booting paravirtualized kernel on KVM
Nov 1 10:02:24.572806 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 10:02:24.572816 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 1 10:02:24.572824 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 1 10:02:24.572832 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 1 10:02:24.572840 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 1 10:02:24.572848 kernel: kvm-guest: PV spinlocks enabled
Nov 1 10:02:24.572855 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 1 10:02:24.572865 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=91cbcb3658f876d239d31cc29b206c4e950f20e536a8e14bd58a23c6f0ecf128
Nov 1 10:02:24.572875 kernel: random: crng init done
Nov 1 10:02:24.572883 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 1 10:02:24.572891 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 10:02:24.572899 kernel: Fallback order for Node 0: 0
Nov 1 10:02:24.572907 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Nov 1 10:02:24.572915 kernel: Policy zone: DMA32
Nov 1 10:02:24.572923 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 10:02:24.572933 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 1 10:02:24.572941 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 1 10:02:24.572949 kernel: ftrace: allocated 157 pages with 5 groups
Nov 1 10:02:24.572957 kernel: Dynamic Preempt: voluntary
Nov 1 10:02:24.572965 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 10:02:24.572976 kernel: rcu: RCU event tracing is enabled.
Nov 1 10:02:24.572984 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 1 10:02:24.572995 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 10:02:24.573005 kernel: Rude variant of Tasks RCU enabled.
Nov 1 10:02:24.573013 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 10:02:24.573021 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 10:02:24.573029 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 1 10:02:24.573037 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 10:02:24.573045 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 10:02:24.573055 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 10:02:24.573063 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 1 10:02:24.573072 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 1 10:02:24.573086 kernel: Console: colour VGA+ 80x25
Nov 1 10:02:24.573096 kernel: printk: legacy console [ttyS0] enabled
Nov 1 10:02:24.573105 kernel: ACPI: Core revision 20240827
Nov 1 10:02:24.573113 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 1 10:02:24.573121 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 10:02:24.573129 kernel: x2apic enabled
Nov 1 10:02:24.573138 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 1 10:02:24.573151 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 1 10:02:24.573159 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 1 10:02:24.573168 kernel: kvm-guest: setup PV IPIs
Nov 1 10:02:24.573178 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 1 10:02:24.573186 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Nov 1 10:02:24.573195 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Nov 1 10:02:24.573203 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 1 10:02:24.573211 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 1 10:02:24.573220 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 1 10:02:24.573228 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 10:02:24.573238 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 10:02:24.573247 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 10:02:24.573255 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 1 10:02:24.573263 kernel: active return thunk: retbleed_return_thunk
Nov 1 10:02:24.573272 kernel: RETBleed: Mitigation: untrained return thunk
Nov 1 10:02:24.573280 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 10:02:24.573288 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 1 10:02:24.573299 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 1 10:02:24.573308 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 1 10:02:24.573316 kernel: active return thunk: srso_return_thunk
Nov 1 10:02:24.573331 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 1 10:02:24.573340 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 10:02:24.573348 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 10:02:24.573356 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 10:02:24.573388 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 10:02:24.573397 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 1 10:02:24.573406 kernel: Freeing SMP alternatives memory: 32K
Nov 1 10:02:24.573414 kernel: pid_max: default: 32768 minimum: 301
Nov 1 10:02:24.573422 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 1 10:02:24.573430 kernel: landlock: Up and running.
Nov 1 10:02:24.573438 kernel: SELinux: Initializing.
Nov 1 10:02:24.573452 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 10:02:24.573460 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 10:02:24.573469 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 1 10:02:24.573477 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 1 10:02:24.573486 kernel: ... version: 0
Nov 1 10:02:24.573494 kernel: ... bit width: 48
Nov 1 10:02:24.573502 kernel: ... generic registers: 6
Nov 1 10:02:24.573512 kernel: ... value mask: 0000ffffffffffff
Nov 1 10:02:24.573521 kernel: ... max period: 00007fffffffffff
Nov 1 10:02:24.573529 kernel: ... fixed-purpose events: 0
Nov 1 10:02:24.573537 kernel: ... event mask: 000000000000003f
Nov 1 10:02:24.573545 kernel: signal: max sigframe size: 1776
Nov 1 10:02:24.573553 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 10:02:24.573562 kernel: rcu: Max phase no-delay instances is 400.
Nov 1 10:02:24.573570 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 1 10:02:24.573581 kernel: smp: Bringing up secondary CPUs ...
Nov 1 10:02:24.573589 kernel: smpboot: x86: Booting SMP configuration:
Nov 1 10:02:24.573597 kernel: .... node #0, CPUs: #1 #2 #3
Nov 1 10:02:24.573606 kernel: smp: Brought up 1 node, 4 CPUs
Nov 1 10:02:24.573614 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Nov 1 10:02:24.573623 kernel: Memory: 2447344K/2571752K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15356K init, 2688K bss, 118472K reserved, 0K cma-reserved)
Nov 1 10:02:24.573631 kernel: devtmpfs: initialized
Nov 1 10:02:24.573642 kernel: x86/mm: Memory block size: 128MB
Nov 1 10:02:24.573650 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 10:02:24.573659 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 1 10:02:24.573667 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 10:02:24.573675 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 10:02:24.573684 kernel: audit: initializing netlink subsys (disabled)
Nov 1 10:02:24.573692 kernel: audit: type=2000 audit(1761991341.015:1): state=initialized audit_enabled=0 res=1
Nov 1 10:02:24.573702 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 10:02:24.573710 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 10:02:24.573719 kernel: cpuidle: using governor menu
Nov 1 10:02:24.573727 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 10:02:24.573735 kernel: dca service started, version 1.12.1
Nov 1 10:02:24.573743 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 1 10:02:24.573752 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 1 10:02:24.573762 kernel: PCI: Using configuration type 1 for base access
Nov 1 10:02:24.573771 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 10:02:24.573779 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 10:02:24.573787 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 1 10:02:24.573796 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 10:02:24.573804 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 1 10:02:24.573812 kernel: ACPI: Added _OSI(Module Device)
Nov 1 10:02:24.573823 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 10:02:24.573831 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 10:02:24.573839 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 10:02:24.573847 kernel: ACPI: Interpreter enabled
Nov 1 10:02:24.573855 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 1 10:02:24.573864 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 10:02:24.573872 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 10:02:24.573882 kernel: PCI: Using E820 reservations for host bridge windows
Nov 1 10:02:24.573891 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 1 10:02:24.573899 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 10:02:24.574152 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 10:02:24.574345 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 1 10:02:24.574676 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 1 10:02:24.574723 kernel: PCI host bridge to bus 0000:00
Nov 1 10:02:24.574918 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 10:02:24.575085 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 10:02:24.575246 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 10:02:24.575914 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 1 10:02:24.576105 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 1 10:02:24.576289 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 1 10:02:24.576479 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 10:02:24.576685 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 1 10:02:24.576894 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 1 10:02:24.577073 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Nov 1 10:02:24.577259 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Nov 1 10:02:24.577792 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Nov 1 10:02:24.577982 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 10:02:24.578176 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 1 10:02:24.578386 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Nov 1 10:02:24.578568 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Nov 1 10:02:24.578779 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 1 10:02:24.578964 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 1 10:02:24.579145 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Nov 1 10:02:24.579318 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Nov 1 10:02:24.579572 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 1 10:02:24.579772 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 1 10:02:24.579963 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Nov 1 10:02:24.580136 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Nov 1 10:02:24.580314 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 1 10:02:24.580538 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Nov 1 10:02:24.580721 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 1 10:02:24.580913 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 1 10:02:24.581103 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 1 10:02:24.581302 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Nov 1 10:02:24.581505 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Nov 1 10:02:24.581697 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 1 10:02:24.581886 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Nov 1 10:02:24.581900 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 10:02:24.581910 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 10:02:24.581921 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 10:02:24.581930 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 10:02:24.581939 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 1 10:02:24.581948 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 1 10:02:24.581967 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 1 10:02:24.581976 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 1 10:02:24.581984 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 1 10:02:24.581993 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 1 10:02:24.582002 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 1 10:02:24.582011 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 1 10:02:24.582019 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 1 10:02:24.582035 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 1 10:02:24.582044 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 1 10:02:24.582053 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 1 10:02:24.582061 kernel: iommu: Default domain type: Translated
Nov 1 10:02:24.582070 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 10:02:24.582079 kernel: PCI: Using ACPI for IRQ routing
Nov 1 10:02:24.582087 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 10:02:24.582096 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 1 10:02:24.582111 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 1 10:02:24.582287 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 1 10:02:24.582496 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 1 10:02:24.582669 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 10:02:24.582680 kernel: vgaarb: loaded
Nov 1 10:02:24.582689 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 1 10:02:24.582711 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 1 10:02:24.582720 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 10:02:24.582729 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 10:02:24.582738 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 10:02:24.582747 kernel: pnp: PnP ACPI init
Nov 1 10:02:24.582933 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 1 10:02:24.582946 kernel: pnp: PnP ACPI: found 6 devices
Nov 1 10:02:24.582958 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 10:02:24.582967 kernel: NET: Registered PF_INET protocol family
Nov 1 10:02:24.582976 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 10:02:24.582985 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 1 10:02:24.582993 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 10:02:24.583002 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 10:02:24.583011 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 1 10:02:24.583022 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 1 10:02:24.583031 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 10:02:24.583040 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 10:02:24.583049 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 10:02:24.583058 kernel: NET: Registered PF_XDP protocol family
Nov 1 10:02:24.583228 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 10:02:24.583457 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 10:02:24.583630 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 10:02:24.583796 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 1 10:02:24.583958 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 1 10:02:24.584119 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 1 10:02:24.584130 kernel: PCI: CLS 0 bytes, default 64
Nov 1 10:02:24.584140 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Nov 1 10:02:24.584153 kernel: Initialise system trusted keyrings
Nov 1 10:02:24.584162 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 1 10:02:24.584171 kernel: Key type asymmetric registered
Nov 1 10:02:24.584179 kernel: Asymmetric key parser 'x509' registered
Nov 1 10:02:24.584188 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 1 10:02:24.584197 kernel: io scheduler mq-deadline registered
Nov 1 10:02:24.584206 kernel: io scheduler kyber registered
Nov 1 10:02:24.584217 kernel: io scheduler bfq registered
Nov 1 10:02:24.584226 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 10:02:24.584236 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 1 10:02:24.584244 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 1 10:02:24.584253 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 1 10:02:24.584262 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 10:02:24.584270 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 10:02:24.584279 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 1 10:02:24.584290 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 1 10:02:24.584299 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 1 10:02:24.584308 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 1 10:02:24.584521 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 1 10:02:24.584691 kernel: rtc_cmos 00:04: registered as rtc0
Nov 1 10:02:24.584856 kernel: rtc_cmos 00:04: setting system clock to 2025-11-01T10:02:22 UTC (1761991342)
Nov 1 10:02:24.585111 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 1 10:02:24.585124 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 1 10:02:24.585134 kernel: NET: Registered PF_INET6 protocol family
Nov 1 10:02:24.585142 kernel: Segment Routing with IPv6
Nov 1 10:02:24.585150 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 10:02:24.585159 kernel: NET: Registered PF_PACKET protocol family
Nov 1 10:02:24.585168 kernel: Key type dns_resolver registered
Nov 1 10:02:24.585180 kernel: IPI shorthand broadcast: enabled
Nov 1 10:02:24.585189 kernel: sched_clock: Marking stable (1954011211, 236173385)->(2255411977, -65227381)
Nov 1 10:02:24.585198 kernel: registered taskstats version 1
Nov 1 10:02:24.585206 kernel: Loading compiled-in X.509 certificates
Nov 1 10:02:24.585215 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: d8ad6d63e9d0f6e32055e659cacaf9092255a45e'
Nov 1 10:02:24.585224 kernel: Demotion targets for Node 0: null
Nov 1 10:02:24.585232 kernel: Key type .fscrypt registered
Nov 1 10:02:24.585251 kernel: Key type fscrypt-provisioning registered
Nov 1 10:02:24.585259 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 1 10:02:24.585268 kernel: ima: Allocated hash algorithm: sha1
Nov 1 10:02:24.585277 kernel: ima: No architecture policies found
Nov 1 10:02:24.585285 kernel: clk: Disabling unused clocks
Nov 1 10:02:24.585294 kernel: Freeing unused kernel image (initmem) memory: 15356K
Nov 1 10:02:24.585303 kernel: Write protecting the kernel read-only data: 45056k
Nov 1 10:02:24.585332 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K
Nov 1 10:02:24.585342 kernel: Run /init as init process
Nov 1 10:02:24.585351 kernel: with arguments:
Nov 1 10:02:24.585360 kernel: /init
Nov 1 10:02:24.585384 kernel: with environment:
Nov 1 10:02:24.585393 kernel: HOME=/
Nov 1 10:02:24.585401 kernel: TERM=linux
Nov 1 10:02:24.585410 kernel: SCSI subsystem initialized
Nov 1 10:02:24.585428 kernel: libata version 3.00 loaded.
Nov 1 10:02:24.585617 kernel: ahci 0000:00:1f.2: version 3.0
Nov 1 10:02:24.585683 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 1 10:02:24.585860 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 1 10:02:24.586037 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 1 10:02:24.586215 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 1 10:02:24.586458 kernel: scsi host0: ahci
Nov 1 10:02:24.586650 kernel: scsi host1: ahci
Nov 1 10:02:24.586848 kernel: scsi host2: ahci
Nov 1 10:02:24.587034 kernel: scsi host3: ahci
Nov 1 10:02:24.587220 kernel: scsi host4: ahci
Nov 1 10:02:24.587458 kernel: scsi host5: ahci
Nov 1 10:02:24.587473 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Nov 1 10:02:24.587482 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Nov 1 10:02:24.587491 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Nov 1 10:02:24.587500 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Nov 1 10:02:24.587509 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Nov 1 10:02:24.587530 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Nov 1 10:02:24.587539 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 1 10:02:24.587548 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 1 10:02:24.587557 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 1 10:02:24.587566 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 1 10:02:24.587575 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 1 10:02:24.587584 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 1 10:02:24.587605 kernel: ata3.00: LPM support broken, forcing max_power
Nov 1 10:02:24.587621 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 1 10:02:24.587633 kernel: ata3.00: applying bridge limits
Nov 1 10:02:24.587645 kernel: ata3.00: LPM support broken, forcing max_power
Nov 1 10:02:24.587656 kernel: ata3.00: configured for UDMA/100
Nov 1 10:02:24.587906 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 1 10:02:24.588120 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 1 10:02:24.588351 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Nov 1 10:02:24.588364 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 1 10:02:24.588397 kernel: GPT:16515071 != 27000831
Nov 1 10:02:24.588406 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 1 10:02:24.588415 kernel: GPT:16515071 != 27000831
Nov 1 10:02:24.588424 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 1 10:02:24.588444 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 10:02:24.588688 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 1 10:02:24.588705 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 1 10:02:24.589038 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 1 10:02:24.589067 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 10:02:24.589089 kernel: device-mapper: uevent: version 1.0.3
Nov 1 10:02:24.589106 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 1 10:02:24.589146 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 1 10:02:24.589176 kernel: raid6: avx2x4 gen() 29837 MB/s
Nov 1 10:02:24.589197 kernel: raid6: avx2x2 gen() 29282 MB/s
Nov 1 10:02:24.589222 kernel: raid6: avx2x1 gen() 24886 MB/s
Nov 1 10:02:24.589252 kernel: raid6: using algorithm avx2x4 gen() 29837 MB/s
Nov 1 10:02:24.589273 kernel: raid6: .... xor() 6708 MB/s, rmw enabled
Nov 1 10:02:24.589293 kernel: raid6: using avx2x2 recovery algorithm
Nov 1 10:02:24.589310 kernel: xor: automatically using best checksumming function avx
Nov 1 10:02:24.589334 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 1 10:02:24.589355 kernel: BTRFS: device fsid 8763e8a0-bf7f-4ffe-acc8-da149b03dd0b devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (181)
Nov 1 10:02:24.589397 kernel: BTRFS info (device dm-0): first mount of filesystem 8763e8a0-bf7f-4ffe-acc8-da149b03dd0b
Nov 1 10:02:24.589433 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 1 10:02:24.589459 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 1 10:02:24.589479 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 1 10:02:24.589500 kernel: loop: module loaded
Nov 1 10:02:24.589521 kernel: loop0: detected capacity change from 0 to 100136
Nov 1 10:02:24.589541 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 1 10:02:24.589563 systemd[1]: Successfully made /usr/ read-only.
Nov 1 10:02:24.589606 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 1 10:02:24.589627 systemd[1]: Detected virtualization kvm.
Nov 1 10:02:24.589649 systemd[1]: Detected architecture x86-64.
Nov 1 10:02:24.589670 systemd[1]: Running in initrd.
Nov 1 10:02:24.589691 systemd[1]: No hostname configured, using default hostname.
Nov 1 10:02:24.589716 systemd[1]: Hostname set to <localhost>.
Nov 1 10:02:24.589747 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 1 10:02:24.589772 systemd[1]: Queued start job for default target initrd.target.
Nov 1 10:02:24.589794 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 1 10:02:24.589803 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 10:02:24.589813 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 10:02:24.589823 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 1 10:02:24.589833 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 10:02:24.589851 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 1 10:02:24.589861 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 1 10:02:24.589871 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 10:02:24.589880 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 10:02:24.589889 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 1 10:02:24.589906 systemd[1]: Reached target paths.target - Path Units.
Nov 1 10:02:24.589915 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 10:02:24.589924 systemd[1]: Reached target swap.target - Swaps.
Nov 1 10:02:24.589934 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 10:02:24.589943 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 10:02:24.589953 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 10:02:24.589963 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 1 10:02:24.589980 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 1 10:02:24.589989 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 10:02:24.589999 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 10:02:24.590008 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 10:02:24.590018 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 10:02:24.590028 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 1 10:02:24.590037 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 1 10:02:24.590053 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 10:02:24.590063 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 1 10:02:24.590073 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 1 10:02:24.590083 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 10:02:24.590098 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 10:02:24.590108 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 10:02:24.590117 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 10:02:24.590134 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 1 10:02:24.590144 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 10:02:24.590153 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 10:02:24.590173 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 1 10:02:24.590213 systemd-journald[318]: Collecting audit messages is disabled.
Nov 1 10:02:24.590236 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 10:02:24.590255 systemd-journald[318]: Journal started
Nov 1 10:02:24.590277 systemd-journald[318]: Runtime Journal (/run/log/journal/a2bbfc43cc08424ab038f1de13ec0eaf) is 6M, max 48.2M, 42.2M free.
Nov 1 10:02:24.593394 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 10:02:24.596343 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 10:02:24.603384 kernel: Bridge firewalling registered
Nov 1 10:02:24.599319 systemd-modules-load[319]: Inserted module 'br_netfilter'
Nov 1 10:02:24.606041 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 10:02:24.676740 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 10:02:24.685011 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 10:02:24.688134 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 10:02:24.696354 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 10:02:24.698669 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 10:02:24.713531 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 10:02:24.717708 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 10:02:24.718440 systemd-tmpfiles[341]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 1 10:02:24.718685 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 10:02:24.728804 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 1 10:02:24.731287 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 10:02:24.735381 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 10:02:24.770020 dracut-cmdline[356]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=91cbcb3658f876d239d31cc29b206c4e950f20e536a8e14bd58a23c6f0ecf128
Nov 1 10:02:24.808291 systemd-resolved[357]: Positive Trust Anchors:
Nov 1 10:02:24.808307 systemd-resolved[357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 10:02:24.808311 systemd-resolved[357]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 1 10:02:24.808350 systemd-resolved[357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 10:02:24.845586 systemd-resolved[357]: Defaulting to hostname 'linux'.
Nov 1 10:02:24.847669 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 10:02:24.848423 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 10:02:24.925436 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 10:02:24.940417 kernel: iscsi: registered transport (tcp)
Nov 1 10:02:24.965456 kernel: iscsi: registered transport (qla4xxx)
Nov 1 10:02:24.965537 kernel: QLogic iSCSI HBA Driver
Nov 1 10:02:24.996174 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 1 10:02:25.025428 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 1 10:02:25.026458 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 1 10:02:25.099691 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 1 10:02:25.103175 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 1 10:02:25.105606 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 1 10:02:25.151953 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 10:02:25.157822 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 10:02:25.208963 systemd-udevd[601]: Using default interface naming scheme 'v257'.
Nov 1 10:02:25.228567 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 10:02:25.234703 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 1 10:02:25.266873 dracut-pre-trigger[668]: rd.md=0: removing MD RAID activation
Nov 1 10:02:25.275940 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 10:02:25.279303 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 1 10:02:25.313022 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 10:02:25.316614 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 10:02:25.346697 systemd-networkd[713]: lo: Link UP
Nov 1 10:02:25.346707 systemd-networkd[713]: lo: Gained carrier
Nov 1 10:02:25.347489 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 10:02:25.348346 systemd[1]: Reached target network.target - Network.
Nov 1 10:02:25.434541 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 10:02:25.442531 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 1 10:02:25.540087 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 1 10:02:25.560047 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 1 10:02:25.577791 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 1 10:02:25.589401 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 10:02:25.594571 systemd-networkd[713]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 1 10:02:25.594591 systemd-networkd[713]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 10:02:25.595467 systemd-networkd[713]: eth0: Link UP
Nov 1 10:02:25.597105 systemd-networkd[713]: eth0: Gained carrier
Nov 1 10:02:25.597118 systemd-networkd[713]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 1 10:02:25.613418 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 1 10:02:25.618472 systemd-networkd[713]: eth0: DHCPv4 address 10.0.0.64/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 1 10:02:25.624179 kernel: AES CTR mode by8 optimization enabled
Nov 1 10:02:25.619012 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 1 10:02:25.629113 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 1 10:02:25.634606 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 10:02:25.634864 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 10:02:25.641983 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 10:02:25.653292 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 10:02:25.666749 disk-uuid[835]: Primary Header is updated.
Nov 1 10:02:25.666749 disk-uuid[835]: Secondary Entries is updated.
Nov 1 10:02:25.666749 disk-uuid[835]: Secondary Header is updated.
Nov 1 10:02:25.678029 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 1 10:02:25.683915 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 10:02:25.690727 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 10:02:25.696485 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 10:02:25.701668 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 1 10:02:25.815703 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 10:02:25.835524 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 10:02:26.721011 disk-uuid[838]: Warning: The kernel is still using the old partition table.
Nov 1 10:02:26.721011 disk-uuid[838]: The new table will be used at the next reboot or after you
Nov 1 10:02:26.721011 disk-uuid[838]: run partprobe(8) or kpartx(8)
Nov 1 10:02:26.721011 disk-uuid[838]: The operation has completed successfully.
Nov 1 10:02:26.732125 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 1 10:02:26.732318 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 1 10:02:26.736522 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 1 10:02:26.777668 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (864)
Nov 1 10:02:26.777792 kernel: BTRFS info (device vda6): first mount of filesystem 75c18d9e-3deb-43e1-a433-af20f45ab517
Nov 1 10:02:26.777811 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 10:02:26.783205 kernel: BTRFS info (device vda6): turning on async discard
Nov 1 10:02:26.783256 kernel: BTRFS info (device vda6): enabling free space tree
Nov 1 10:02:26.792421 kernel: BTRFS info (device vda6): last unmount of filesystem 75c18d9e-3deb-43e1-a433-af20f45ab517
Nov 1 10:02:26.792937 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 1 10:02:26.798495 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 1 10:02:27.194662 ignition[883]: Ignition 2.22.0
Nov 1 10:02:27.194676 ignition[883]: Stage: fetch-offline
Nov 1 10:02:27.194730 ignition[883]: no configs at "/usr/lib/ignition/base.d"
Nov 1 10:02:27.194743 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 10:02:27.195090 ignition[883]: parsed url from cmdline: ""
Nov 1 10:02:27.195094 ignition[883]: no config URL provided
Nov 1 10:02:27.195103 ignition[883]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 10:02:27.195117 ignition[883]: no config at "/usr/lib/ignition/user.ign"
Nov 1 10:02:27.195183 ignition[883]: op(1): [started] loading QEMU firmware config module
Nov 1 10:02:27.195190 ignition[883]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 1 10:02:27.210155 ignition[883]: op(1): [finished] loading QEMU firmware config module
Nov 1 10:02:27.297916 ignition[883]: parsing config with SHA512: c0cf39d1e6db82eb2f9025473a7ee092b628680cf4c2833a46ba780a5702f999f23749d37dd950dec6c78eaa44dde982252752f768ad417db256e1218072117b
Nov 1 10:02:27.305461 unknown[883]: fetched base config from "system"
Nov 1 10:02:27.305476 unknown[883]: fetched user config from "qemu"
Nov 1 10:02:27.305973 ignition[883]: fetch-offline: fetch-offline passed
Nov 1 10:02:27.306088 ignition[883]: Ignition finished successfully
Nov 1 10:02:27.311096 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 10:02:27.314615 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 1 10:02:27.316308 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 1 10:02:27.375982 systemd-networkd[713]: eth0: Gained IPv6LL
Nov 1 10:02:27.414536 ignition[894]: Ignition 2.22.0
Nov 1 10:02:27.414552 ignition[894]: Stage: kargs
Nov 1 10:02:27.414747 ignition[894]: no configs at "/usr/lib/ignition/base.d"
Nov 1 10:02:27.414761 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 10:02:27.416195 ignition[894]: kargs: kargs passed
Nov 1 10:02:27.416271 ignition[894]: Ignition finished successfully
Nov 1 10:02:27.425930 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 1 10:02:27.428244 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 1 10:02:27.531288 ignition[902]: Ignition 2.22.0
Nov 1 10:02:27.531300 ignition[902]: Stage: disks
Nov 1 10:02:27.531463 ignition[902]: no configs at "/usr/lib/ignition/base.d"
Nov 1 10:02:27.531472 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 10:02:27.532415 ignition[902]: disks: disks passed
Nov 1 10:02:27.532466 ignition[902]: Ignition finished successfully
Nov 1 10:02:27.542992 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 1 10:02:27.543964 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 1 10:02:27.546857 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 1 10:02:27.550266 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 10:02:27.554051 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 10:02:27.557295 systemd[1]: Reached target basic.target - Basic System.
Nov 1 10:02:27.560763 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 1 10:02:27.632902 systemd-fsck[912]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Nov 1 10:02:27.647772 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 1 10:02:27.654069 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 1 10:02:27.841430 kernel: EXT4-fs (vda9): mounted filesystem 9a0b584a-8c68-48a6-a0f9-92613ad0f15d r/w with ordered data mode. Quota mode: none.
Nov 1 10:02:27.843343 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 1 10:02:27.845061 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 1 10:02:27.848906 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 10:02:27.853344 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 1 10:02:27.855361 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 1 10:02:27.855438 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 10:02:27.855510 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 10:02:27.874037 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (920)
Nov 1 10:02:27.865793 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 1 10:02:27.878811 kernel: BTRFS info (device vda6): first mount of filesystem 75c18d9e-3deb-43e1-a433-af20f45ab517
Nov 1 10:02:27.878840 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 10:02:27.869870 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 1 10:02:27.883644 kernel: BTRFS info (device vda6): turning on async discard
Nov 1 10:02:27.883668 kernel: BTRFS info (device vda6): enabling free space tree
Nov 1 10:02:27.885628 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 10:02:27.938790 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 10:02:27.945711 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory
Nov 1 10:02:27.953159 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 10:02:27.958847 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 10:02:28.092726 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 1 10:02:28.096659 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 1 10:02:28.099437 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 1 10:02:28.123130 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 1 10:02:28.126036 kernel: BTRFS info (device vda6): last unmount of filesystem 75c18d9e-3deb-43e1-a433-af20f45ab517
Nov 1 10:02:28.149733 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 1 10:02:28.179494 ignition[1034]: INFO : Ignition 2.22.0
Nov 1 10:02:28.179494 ignition[1034]: INFO : Stage: mount
Nov 1 10:02:28.182472 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 10:02:28.182472 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 10:02:28.182472 ignition[1034]: INFO : mount: mount passed
Nov 1 10:02:28.182472 ignition[1034]: INFO : Ignition finished successfully
Nov 1 10:02:28.191897 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 1 10:02:28.196675 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 1 10:02:28.844654 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 10:02:28.876395 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1046)
Nov 1 10:02:28.879875 kernel: BTRFS info (device vda6): first mount of filesystem 75c18d9e-3deb-43e1-a433-af20f45ab517
Nov 1 10:02:28.879899 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 10:02:28.884195 kernel: BTRFS info (device vda6): turning on async discard
Nov 1 10:02:28.884217 kernel: BTRFS info (device vda6): enabling free space tree
Nov 1 10:02:28.886389 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 10:02:28.980956 ignition[1063]: INFO : Ignition 2.22.0
Nov 1 10:02:28.980956 ignition[1063]: INFO : Stage: files
Nov 1 10:02:28.983874 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 10:02:28.983874 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 10:02:28.983874 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 10:02:28.983874 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 10:02:28.983874 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 10:02:28.994076 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 10:02:28.994076 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 10:02:28.994076 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 10:02:28.994076 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 1 10:02:28.994076 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 1 10:02:28.988426 unknown[1063]: wrote ssh authorized keys file for user: core
Nov 1 10:02:29.060115 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 1 10:02:29.108695 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 1 10:02:29.108695 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 10:02:29.116274 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 10:02:29.116274 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 10:02:29.116274 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 10:02:29.116274 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 10:02:29.116274 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 10:02:29.116274 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 10:02:29.116274 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 10:02:29.116274 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 10:02:29.116274 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 10:02:29.116274 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 10:02:29.147925 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 10:02:29.147925 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 10:02:29.147925 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 1 10:02:29.530626 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 1 10:02:30.152493 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 10:02:30.152493 ignition[1063]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 1 10:02:30.158947 ignition[1063]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 10:02:30.164562 ignition[1063]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 10:02:30.164562 ignition[1063]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 1 10:02:30.164562 ignition[1063]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 1 10:02:30.173957 ignition[1063]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 1 10:02:30.173957 ignition[1063]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 1 10:02:30.173957 ignition[1063]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 1 10:02:30.173957 ignition[1063]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Nov 1 10:02:30.195516 ignition[1063]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 1 10:02:30.200458 ignition[1063]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 1 10:02:30.203440 ignition[1063]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Nov 1 10:02:30.203440 ignition[1063]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 1 10:02:30.203440 ignition[1063]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 10:02:30.203440 ignition[1063]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 10:02:30.203440 ignition[1063]: INFO : files: createResultFile: createFiles: 
op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 10:02:30.203440 ignition[1063]: INFO : files: files passed Nov 1 10:02:30.203440 ignition[1063]: INFO : Ignition finished successfully Nov 1 10:02:30.210774 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 1 10:02:30.214059 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 1 10:02:30.215502 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 1 10:02:30.243851 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 10:02:30.244037 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 1 10:02:30.249429 initrd-setup-root-after-ignition[1093]: grep: /sysroot/oem/oem-release: No such file or directory Nov 1 10:02:30.251755 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 10:02:30.251755 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 1 10:02:30.257057 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 10:02:30.254655 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 10:02:30.259361 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 1 10:02:30.265877 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 1 10:02:30.327305 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 10:02:30.327527 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 1 10:02:30.328649 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 1 10:02:30.333872 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 1 10:02:30.337353 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 1 10:02:30.339639 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 1 10:02:30.376904 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 10:02:30.379617 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 1 10:02:30.409517 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 1 10:02:30.409666 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 1 10:02:30.410903 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 10:02:30.411885 systemd[1]: Stopped target timers.target - Timer Units. Nov 1 10:02:30.419866 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 10:02:30.420004 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 10:02:30.425350 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 1 10:02:30.426263 systemd[1]: Stopped target basic.target - Basic System. Nov 1 10:02:30.431822 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 1 10:02:30.433052 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 10:02:30.433904 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 1 10:02:30.440859 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
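Every write in the files stage above is driven by the Ignition config fetched earlier in boot. A hypothetical Butane source that would produce the "core" user and helm steps seen in the log (only the user name and helm URL are taken from the log; the key and everything else are illustrative):

    # config.bu (Butane YAML):
    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA...   # placeholder key
    storage:
      files:
        - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz

    # compile to the JSON Ignition consumes, then check it:
    butane --strict config.bu > config.ign
    ignition-validate config.ign

After boot, the outcome recorded by op(12) can be audited with jq . /etc/.ignition-result.json, and the INFO lines above can be replayed with journalctl -t ignition.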
Nov 1 10:02:30.444412 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 1 10:02:30.447932 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 10:02:30.448805 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 1 10:02:30.455026 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 1 10:02:30.458270 systemd[1]: Stopped target swap.target - Swaps. Nov 1 10:02:30.461311 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 10:02:30.461496 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 1 10:02:30.466219 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 1 10:02:30.467088 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 10:02:30.472884 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 1 10:02:30.474898 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 10:02:30.476255 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 10:02:30.476433 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 1 10:02:30.477980 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 10:02:30.478101 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 10:02:30.478990 systemd[1]: Stopped target paths.target - Path Units. Nov 1 10:02:30.489162 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 10:02:30.495443 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 10:02:30.496244 systemd[1]: Stopped target slices.target - Slice Units. Nov 1 10:02:30.500890 systemd[1]: Stopped target sockets.target - Socket Units. Nov 1 10:02:30.503445 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 10:02:30.503541 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 10:02:30.506460 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 10:02:30.506549 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 10:02:30.509333 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 10:02:30.509468 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 10:02:30.512428 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 10:02:30.512537 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 1 10:02:30.520323 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 1 10:02:30.521092 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 10:02:30.521209 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 10:02:30.522585 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 1 10:02:30.530066 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 1 10:02:30.530201 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 10:02:30.531087 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 10:02:30.531198 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 10:02:30.540728 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 10:02:30.540929 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Nov 1 10:02:30.554572 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 10:02:30.554738 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 1 10:02:30.580705 ignition[1120]: INFO : Ignition 2.22.0 Nov 1 10:02:30.582305 ignition[1120]: INFO : Stage: umount Nov 1 10:02:30.582305 ignition[1120]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 10:02:30.582305 ignition[1120]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 10:02:30.587747 ignition[1120]: INFO : umount: umount passed Nov 1 10:02:30.587747 ignition[1120]: INFO : Ignition finished successfully Nov 1 10:02:30.583620 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 10:02:30.586197 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 10:02:30.586346 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 1 10:02:30.588878 systemd[1]: Stopped target network.target - Network. Nov 1 10:02:30.589781 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 10:02:30.589874 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 1 10:02:30.594212 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 10:02:30.594280 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 1 10:02:30.597179 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 10:02:30.597253 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 1 10:02:30.602964 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 1 10:02:30.603040 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 1 10:02:30.606014 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 1 10:02:30.608937 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 1 10:02:30.625793 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 10:02:30.626043 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 1 10:02:30.631714 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 1 10:02:30.632406 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 10:02:30.632452 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 1 10:02:30.636774 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 1 10:02:30.639911 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 10:02:30.640075 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 10:02:30.649865 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 10:02:30.654866 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 10:02:30.663608 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 1 10:02:30.668331 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 10:02:30.668466 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 1 10:02:30.673281 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 10:02:30.673427 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 1 10:02:30.674267 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 10:02:30.674321 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 1 10:02:30.675064 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Nov 1 10:02:30.675113 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 1 10:02:30.682071 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 10:02:30.682270 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 10:02:30.685316 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 10:02:30.685421 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 1 10:02:30.688239 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 10:02:30.688286 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 10:02:30.692488 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 10:02:30.692546 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 1 10:02:30.697365 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 10:02:30.697436 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 1 10:02:30.702106 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 10:02:30.702176 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 10:02:30.707901 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 1 10:02:30.709021 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 1 10:02:30.709079 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 1 10:02:30.712303 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 10:02:30.712390 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 10:02:30.712783 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 10:02:30.712831 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 10:02:30.729876 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 10:02:30.729990 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 1 10:02:30.752388 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 10:02:30.752581 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 1 10:02:30.754019 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 1 10:02:30.758836 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 1 10:02:30.788332 systemd[1]: Switching root. Nov 1 10:02:30.823683 systemd-journald[318]: Journal stopped Nov 1 10:02:32.352947 systemd-journald[318]: Received SIGTERM from PID 1 (systemd). 
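The "Switching root" / "Journal stopped" / "Received SIGTERM" triple marks the hand-off from the initramfs to the real root filesystem: PID 1 re-executes itself on the new root and the initrd's journald instance exits. Mechanically the pivot is close to (a sketch, not the exact code path):

    systemctl switch-root /sysroot   # PID 1 moves onto the new root; the old journald gets SIGTERM

The roughly 1.5 s gap before the next journald message is the new instance coming up under the real root.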
Nov 1 10:02:32.353022 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 10:02:32.353037 kernel: SELinux: policy capability open_perms=1 Nov 1 10:02:32.353053 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 10:02:32.353066 kernel: SELinux: policy capability always_check_network=0 Nov 1 10:02:32.353177 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 10:02:32.353191 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 10:02:32.353203 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 10:02:32.353215 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 10:02:32.353226 kernel: SELinux: policy capability userspace_initial_context=0 Nov 1 10:02:32.353241 kernel: audit: type=1403 audit(1761991351.325:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 10:02:32.353263 systemd[1]: Successfully loaded SELinux policy in 67.289ms. Nov 1 10:02:32.353296 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.189ms. Nov 1 10:02:32.353319 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 1 10:02:32.353333 systemd[1]: Detected virtualization kvm. Nov 1 10:02:32.353346 systemd[1]: Detected architecture x86-64. Nov 1 10:02:32.353358 systemd[1]: Detected first boot. Nov 1 10:02:32.353393 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 1 10:02:32.353411 zram_generator::config[1166]: No configuration found. Nov 1 10:02:32.353436 kernel: Guest personality initialized and is inactive Nov 1 10:02:32.353448 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 1 10:02:32.353460 kernel: Initialized host personality Nov 1 10:02:32.353474 kernel: NET: Registered PF_VSOCK protocol family Nov 1 10:02:32.353487 systemd[1]: Populated /etc with preset unit settings. Nov 1 10:02:32.353499 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 1 10:02:32.353512 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 1 10:02:32.353533 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 1 10:02:32.353546 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 1 10:02:32.353559 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 1 10:02:32.353575 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 1 10:02:32.353591 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 1 10:02:32.353605 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 1 10:02:32.353618 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 1 10:02:32.353639 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 1 10:02:32.353652 systemd[1]: Created slice user.slice - User and Session Slice. Nov 1 10:02:32.353665 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 10:02:32.353680 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
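The per-capability values the kernel logs above are exported through selinuxfs and can be re-read at runtime. A small loop, assuming /sys/fs/selinux is mounted (it is once the policy loads):

    # print each SELinux policy capability and its 0/1 state, matching the kernel lines above
    for cap in /sys/fs/selinux/policy_capabilities/*; do
      printf '%s=%s\n' "${cap##*/}" "$(cat "$cap")"
    done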
Nov 1 10:02:32.353693 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 1 10:02:32.353706 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 1 10:02:32.353718 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 1 10:02:32.353739 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 10:02:32.353752 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 1 10:02:32.353765 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 10:02:32.353777 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 10:02:32.353789 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 1 10:02:32.353805 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 1 10:02:32.353825 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 1 10:02:32.353838 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 1 10:02:32.353850 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 10:02:32.353865 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 10:02:32.353880 systemd[1]: Reached target slices.target - Slice Units. Nov 1 10:02:32.353893 systemd[1]: Reached target swap.target - Swaps. Nov 1 10:02:32.353907 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 1 10:02:32.353927 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 1 10:02:32.353939 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 1 10:02:32.353955 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 10:02:32.353967 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 10:02:32.353980 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 10:02:32.353993 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 1 10:02:32.354005 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 1 10:02:32.354025 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 1 10:02:32.354038 systemd[1]: Mounting media.mount - External Media Directory... Nov 1 10:02:32.354051 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 10:02:32.354064 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 1 10:02:32.354076 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 1 10:02:32.354093 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 1 10:02:32.354106 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 10:02:32.354127 systemd[1]: Reached target machines.target - Containers. Nov 1 10:02:32.354140 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 1 10:02:32.354165 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 10:02:32.354178 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Nov 1 10:02:32.354193 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 1 10:02:32.354205 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 10:02:32.354220 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 10:02:32.354240 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 10:02:32.354254 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 1 10:02:32.354266 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 10:02:32.354280 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 10:02:32.354292 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 1 10:02:32.354304 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 1 10:02:32.354324 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 1 10:02:32.354337 systemd[1]: Stopped systemd-fsck-usr.service. Nov 1 10:02:32.354350 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 1 10:02:32.354363 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 10:02:32.354401 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 10:02:32.354415 kernel: ACPI: bus type drm_connector registered Nov 1 10:02:32.354427 kernel: fuse: init (API version 7.41) Nov 1 10:02:32.354450 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 1 10:02:32.354463 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 1 10:02:32.354476 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 1 10:02:32.354488 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 10:02:32.354509 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 10:02:32.354542 systemd-journald[1244]: Collecting audit messages is disabled. Nov 1 10:02:32.354566 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 1 10:02:32.354589 systemd-journald[1244]: Journal started Nov 1 10:02:32.354611 systemd-journald[1244]: Runtime Journal (/run/log/journal/a2bbfc43cc08424ab038f1de13ec0eaf) is 6M, max 48.2M, 42.2M free. Nov 1 10:02:32.359545 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 1 10:02:32.359583 systemd[1]: Mounted media.mount - External Media Directory. Nov 1 10:02:31.925975 systemd[1]: Queued start job for default target multi-user.target. Nov 1 10:02:31.951578 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 1 10:02:31.952132 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 1 10:02:32.364634 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 10:02:32.367882 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 1 10:02:32.369866 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 1 10:02:32.371955 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Nov 1 10:02:32.373937 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 1 10:02:32.376240 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 10:02:32.378669 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 10:02:32.378992 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 1 10:02:32.381229 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 10:02:32.381524 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 10:02:32.383969 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 10:02:32.384242 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 10:02:32.386736 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 10:02:32.387071 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 10:02:32.389673 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 10:02:32.389985 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 1 10:02:32.392187 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 10:02:32.392563 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 10:02:32.394845 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 10:02:32.397288 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 1 10:02:32.400948 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 1 10:02:32.403603 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 1 10:02:32.427418 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 1 10:02:32.429694 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 1 10:02:32.433331 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 1 10:02:32.436745 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 1 10:02:32.438709 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 10:02:32.438806 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 10:02:32.441624 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 1 10:02:32.444169 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 10:02:32.448008 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 1 10:02:32.451903 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 1 10:02:32.453803 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 10:02:32.461499 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 1 10:02:32.463641 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 10:02:32.465551 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 10:02:32.467775 systemd-journald[1244]: Time spent on flushing to /var/log/journal/a2bbfc43cc08424ab038f1de13ec0eaf is 20.867ms for 962 entries. 
Nov 1 10:02:32.467775 systemd-journald[1244]: System Journal (/var/log/journal/a2bbfc43cc08424ab038f1de13ec0eaf) is 8M, max 163.5M, 155.5M free. Nov 1 10:02:32.512389 systemd-journald[1244]: Received client request to flush runtime journal. Nov 1 10:02:32.512483 kernel: loop1: detected capacity change from 0 to 219144 Nov 1 10:02:32.470007 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 1 10:02:32.474255 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 1 10:02:32.477590 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 10:02:32.481095 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 1 10:02:32.483539 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 1 10:02:32.486046 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 1 10:02:32.496246 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 1 10:02:32.500565 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 1 10:02:32.515335 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 1 10:02:32.521794 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 10:02:32.536451 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 1 10:02:32.540421 kernel: loop2: detected capacity change from 0 to 119080 Nov 1 10:02:32.550106 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 1 10:02:32.554523 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 10:02:32.557612 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 10:02:32.572453 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 1 10:02:32.574866 kernel: loop3: detected capacity change from 0 to 111544 Nov 1 10:02:32.597961 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. Nov 1 10:02:32.597988 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. Nov 1 10:02:32.605401 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 10:02:32.612401 kernel: loop4: detected capacity change from 0 to 219144 Nov 1 10:02:32.621401 kernel: loop5: detected capacity change from 0 to 119080 Nov 1 10:02:32.629552 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 1 10:02:32.635921 kernel: loop6: detected capacity change from 0 to 111544 Nov 1 10:02:32.644632 (sd-merge)[1308]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Nov 1 10:02:32.649296 (sd-merge)[1308]: Merged extensions into '/usr'. Nov 1 10:02:32.654508 systemd[1]: Reload requested from client PID 1285 ('systemd-sysext') (unit systemd-sysext.service)... Nov 1 10:02:32.654600 systemd[1]: Reloading... Nov 1 10:02:32.737409 zram_generator::config[1341]: No configuration found. Nov 1 10:02:32.787416 systemd-resolved[1302]: Positive Trust Anchors: Nov 1 10:02:32.787894 systemd-resolved[1302]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 10:02:32.787963 systemd-resolved[1302]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 1 10:02:32.788048 systemd-resolved[1302]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 10:02:32.793039 systemd-resolved[1302]: Defaulting to hostname 'linux'. Nov 1 10:02:32.978791 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 10:02:32.979344 systemd[1]: Reloading finished in 324 ms. Nov 1 10:02:33.010925 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 10:02:33.013647 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 1 10:02:33.019283 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 10:02:33.050429 systemd[1]: Starting ensure-sysext.service... Nov 1 10:02:33.053396 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 10:02:33.166699 systemd[1]: Reload requested from client PID 1378 ('systemctl') (unit ensure-sysext.service)... Nov 1 10:02:33.166726 systemd[1]: Reloading... Nov 1 10:02:33.174138 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 1 10:02:33.174182 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 1 10:02:33.174553 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 10:02:33.174849 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 1 10:02:33.175989 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 10:02:33.176397 systemd-tmpfiles[1379]: ACLs are not supported, ignoring. Nov 1 10:02:33.176490 systemd-tmpfiles[1379]: ACLs are not supported, ignoring. Nov 1 10:02:33.186201 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 10:02:33.186218 systemd-tmpfiles[1379]: Skipping /boot Nov 1 10:02:33.199057 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 10:02:33.199077 systemd-tmpfiles[1379]: Skipping /boot Nov 1 10:02:33.283404 zram_generator::config[1415]: No configuration found. Nov 1 10:02:33.508926 systemd[1]: Reloading finished in 341 ms. Nov 1 10:02:33.522818 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 1 10:02:33.525429 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 10:02:33.563653 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 1 10:02:33.573038 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 1 10:02:33.584488 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 1 10:02:33.588104 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
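The two positive anchors above are the root zone's KSK DS records (key tags 20326 and 38696), and the negative list covers locally served private zones and other special-use names; both sets are systemd-resolved's built-in DNSSEC defaults. Once the service is up they can be exercised with:

    resolvectl status             # shows the DNSSEC setting and the active DNS servers
    resolvectl query example.com  # runs a lookup through the resolver started above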
Nov 1 10:02:33.592635 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 10:02:33.595921 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 1 10:02:33.601603 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 10:02:33.602289 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 10:02:33.606440 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 10:02:33.613578 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 10:02:33.618441 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 10:02:33.620637 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 10:02:33.620751 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 1 10:02:33.620845 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 10:02:33.628746 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 10:02:33.630056 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 10:02:33.633515 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 10:02:33.633769 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 10:02:33.634012 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 10:02:33.634163 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 1 10:02:33.634313 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 10:02:33.636413 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 10:02:33.636637 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 10:02:33.639414 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 10:02:33.639762 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 10:02:33.652496 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 1 10:02:33.661864 systemd-udevd[1453]: Using default interface naming scheme 'v257'. Nov 1 10:02:33.669380 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 1 10:02:33.681793 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 10:02:33.682218 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 10:02:33.685359 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Nov 1 10:02:33.689411 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 10:02:33.694818 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 10:02:33.700872 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 10:02:33.702545 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 10:02:33.702628 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 1 10:02:33.702749 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 10:02:33.704883 systemd[1]: Finished ensure-sysext.service. Nov 1 10:02:33.706905 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 10:02:33.707563 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 10:02:33.710704 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 10:02:33.711915 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 10:02:33.716229 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 10:02:33.716733 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 10:02:33.720098 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 10:02:33.720532 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 10:02:33.727949 augenrules[1486]: No rules Nov 1 10:02:33.729869 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 10:02:33.730206 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 1 10:02:33.737195 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 10:02:33.743085 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 10:02:33.745352 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 10:02:33.745456 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 10:02:33.747780 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 1 10:02:33.774305 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 1 10:02:33.784045 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 10:02:33.873666 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 1 10:02:33.909316 systemd-networkd[1500]: lo: Link UP Nov 1 10:02:33.909333 systemd-networkd[1500]: lo: Gained carrier Nov 1 10:02:33.911358 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 10:02:33.914250 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 1 10:02:33.917196 systemd[1]: Reached target network.target - Network. Nov 1 10:02:33.919466 systemd[1]: Reached target time-set.target - System Time Set. 
Nov 1 10:02:33.923626 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 1 10:02:33.928462 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 1 10:02:33.968235 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 1 10:02:34.019901 systemd-networkd[1500]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 1 10:02:34.019924 systemd-networkd[1500]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 10:02:34.020833 systemd-networkd[1500]: eth0: Link UP Nov 1 10:02:34.021089 systemd-networkd[1500]: eth0: Gained carrier Nov 1 10:02:34.021106 systemd-networkd[1500]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 1 10:02:34.030400 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 1 10:02:34.036405 kernel: ACPI: button: Power Button [PWRF] Nov 1 10:02:34.036486 systemd-networkd[1500]: eth0: DHCPv4 address 10.0.0.64/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 1 10:02:34.037501 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. Nov 1 10:02:34.636828 systemd-resolved[1302]: Clock change detected. Flushing caches. Nov 1 10:02:34.636848 systemd-timesyncd[1501]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 1 10:02:34.636891 systemd-timesyncd[1501]: Initial clock synchronization to Sat 2025-11-01 10:02:34.636761 UTC. Nov 1 10:02:34.642606 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 1 10:02:34.648828 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 1 10:02:34.654746 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 10:02:34.659744 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 1 10:02:34.660090 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 1 10:02:34.703703 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 1 10:02:34.924976 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 10:02:35.049664 ldconfig[1450]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 10:02:35.058033 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 1 10:02:35.062957 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 1 10:02:35.090299 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 1 10:02:35.103946 kernel: kvm_amd: TSC scaling supported Nov 1 10:02:35.104002 kernel: kvm_amd: Nested Virtualization enabled Nov 1 10:02:35.104017 kernel: kvm_amd: Nested Paging enabled Nov 1 10:02:35.106303 kernel: kvm_amd: LBR virtualization supported Nov 1 10:02:35.106341 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 1 10:02:35.107485 kernel: kvm_amd: Virtual GIF supported Nov 1 10:02:35.139728 kernel: EDAC MC: Ver: 3.0.0 Nov 1 10:02:35.224795 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 10:02:35.228975 systemd[1]: Reached target sysinit.target - System Initialization. 
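The jump from 10:02:34.037 to 10:02:34.636 in the timestamps above is systemd-timesyncd stepping the clock after its first NTP exchange with 10.0.0.1, which is why systemd-resolved reports "Clock change detected" and flushes its caches. Lease and sync state can be confirmed with (the interface name and 10.0.0.x addresses are specific to this boot):

    networkctl status eth0        # shows the DHCPv4 lease 10.0.0.64/16 via gateway 10.0.0.1
    timedatectl timesync-status   # shows the 10.0.0.1 NTP server and the current offset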
Nov 1 10:02:35.231050 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 1 10:02:35.233432 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 1 10:02:35.235663 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 1 10:02:35.237891 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 1 10:02:35.239899 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 1 10:02:35.242197 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 1 10:02:35.244370 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 10:02:35.244403 systemd[1]: Reached target paths.target - Path Units. Nov 1 10:02:35.245989 systemd[1]: Reached target timers.target - Timer Units. Nov 1 10:02:35.248458 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 1 10:02:35.251821 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 1 10:02:35.255523 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 1 10:02:35.257842 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 1 10:02:35.259995 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 1 10:02:35.264222 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 1 10:02:35.266253 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 1 10:02:35.268873 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 1 10:02:35.271405 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 10:02:35.273053 systemd[1]: Reached target basic.target - Basic System. Nov 1 10:02:35.274712 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 1 10:02:35.274756 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 1 10:02:35.275805 systemd[1]: Starting containerd.service - containerd container runtime... Nov 1 10:02:35.278957 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 1 10:02:35.281948 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 1 10:02:35.291978 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 1 10:02:35.295555 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 1 10:02:35.297446 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 1 10:02:35.298885 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 1 10:02:35.303557 jq[1569]: false Nov 1 10:02:35.304376 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 1 10:02:35.308385 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 1 10:02:35.312118 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
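Most of what is reached here is socket- or path-activated rather than a running daemon; the sockets and watchers listed above can be enumerated on the live system with:

    systemctl list-sockets --all   # docker.socket, sshd.socket, systemd-hostnamed.socket, ...
    systemctl list-paths           # motdgen.path and the other path watchers started above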
Nov 1 10:02:35.316178 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Refreshing passwd entry cache Nov 1 10:02:35.316194 oslogin_cache_refresh[1571]: Refreshing passwd entry cache Nov 1 10:02:35.317853 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 1 10:02:35.322745 extend-filesystems[1570]: Found /dev/vda6 Nov 1 10:02:35.326222 extend-filesystems[1570]: Found /dev/vda9 Nov 1 10:02:35.327874 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Failure getting users, quitting Nov 1 10:02:35.327869 oslogin_cache_refresh[1571]: Failure getting users, quitting Nov 1 10:02:35.327959 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 1 10:02:35.327959 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Refreshing group entry cache Nov 1 10:02:35.327897 oslogin_cache_refresh[1571]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 1 10:02:35.327958 oslogin_cache_refresh[1571]: Refreshing group entry cache Nov 1 10:02:35.330171 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 1 10:02:35.331290 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 10:02:35.331981 extend-filesystems[1570]: Checking size of /dev/vda9 Nov 1 10:02:35.332012 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 10:02:35.333488 oslogin_cache_refresh[1571]: Failure getting groups, quitting Nov 1 10:02:35.334910 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Failure getting groups, quitting Nov 1 10:02:35.334910 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 1 10:02:35.333499 oslogin_cache_refresh[1571]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 1 10:02:35.335011 systemd[1]: Starting update-engine.service - Update Engine... Nov 1 10:02:35.340865 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 1 10:02:35.348307 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 1 10:02:35.351729 jq[1592]: true Nov 1 10:02:35.351197 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 10:02:35.351964 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 1 10:02:35.352343 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 1 10:02:35.352619 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 1 10:02:35.355336 update_engine[1587]: I20251101 10:02:35.355245 1587 main.cc:92] Flatcar Update Engine starting Nov 1 10:02:35.356085 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 10:02:35.358535 extend-filesystems[1570]: Resized partition /dev/vda9 Nov 1 10:02:35.361942 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 1 10:02:35.365146 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 10:02:35.365408 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Nov 1 10:02:35.387779 jq[1607]: true Nov 1 10:02:35.398826 extend-filesystems[1620]: resize2fs 1.47.3 (8-Jul-2025) Nov 1 10:02:35.413821 tar[1604]: linux-amd64/LICENSE Nov 1 10:02:35.413821 tar[1604]: linux-amd64/helm Nov 1 10:02:35.414727 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Nov 1 10:02:35.452594 systemd-logind[1586]: Watching system buttons on /dev/input/event2 (Power Button) Nov 1 10:02:35.453738 systemd-logind[1586]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 10:02:35.457596 systemd-logind[1586]: New seat seat0. Nov 1 10:02:35.468227 systemd[1]: Started systemd-logind.service - User Login Management. Nov 1 10:02:35.470762 dbus-daemon[1567]: [system] SELinux support is enabled Nov 1 10:02:35.471185 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 1 10:02:35.474056 update_engine[1587]: I20251101 10:02:35.474006 1587 update_check_scheduler.cc:74] Next update check in 9m10s Nov 1 10:02:35.477145 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 10:02:35.477171 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 1 10:02:35.477731 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Nov 1 10:02:35.480885 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 10:02:35.480907 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 1 10:02:35.483840 dbus-daemon[1567]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 1 10:02:35.487082 systemd[1]: Started update-engine.service - Update Engine. Nov 1 10:02:35.631290 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 1 10:02:35.650122 extend-filesystems[1620]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 1 10:02:35.650122 extend-filesystems[1620]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 1 10:02:35.650122 extend-filesystems[1620]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Nov 1 10:02:35.657042 extend-filesystems[1570]: Resized filesystem in /dev/vda9 Nov 1 10:02:35.658605 bash[1635]: Updated "/home/core/.ssh/authorized_keys" Nov 1 10:02:35.662087 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 10:02:35.662390 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 1 10:02:35.665031 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 1 10:02:35.667935 sshd_keygen[1593]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 10:02:35.669159 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 1 10:02:35.726781 locksmithd[1636]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 10:02:35.728963 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 1 10:02:35.733664 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 1 10:02:35.757454 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 10:02:35.757763 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 1 10:02:35.761678 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
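The extend-filesystems entries above grow ROOT in place from 456704 to 1784827 4 KiB blocks, i.e. from roughly 1.7 GiB to roughly 6.8 GiB, while / stays mounted (ext4 supports online growth; the partition itself is assumed to have been enlarged earlier in first boot). The manual equivalent for the same layout:

    resize2fs /dev/vda9                          # online grow to fill the enlarged partition
    dumpe2fs -h /dev/vda9 | grep 'Block count'   # expect 1784827 afterwards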
Nov 1 10:02:35.792706 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 1 10:02:35.810719 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 1 10:02:35.818009 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 1 10:02:35.820111 systemd[1]: Reached target getty.target - Login Prompts. Nov 1 10:02:36.021154 containerd[1608]: time="2025-11-01T10:02:36Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 1 10:02:36.021976 containerd[1608]: time="2025-11-01T10:02:36.021679777Z" level=info msg="starting containerd" revision=75cb2b7193e4e490e9fbdc236c0e811ccaba3376 version=v2.1.4 Nov 1 10:02:36.088054 containerd[1608]: time="2025-11-01T10:02:36.087969681Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="17.974µs" Nov 1 10:02:36.088054 containerd[1608]: time="2025-11-01T10:02:36.088036626Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 1 10:02:36.088197 containerd[1608]: time="2025-11-01T10:02:36.088093182Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 1 10:02:36.088197 containerd[1608]: time="2025-11-01T10:02:36.088109994Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 1 10:02:36.088353 containerd[1608]: time="2025-11-01T10:02:36.088331880Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 1 10:02:36.088401 containerd[1608]: time="2025-11-01T10:02:36.088351998Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 1 10:02:36.088460 containerd[1608]: time="2025-11-01T10:02:36.088437588Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 1 10:02:36.088460 containerd[1608]: time="2025-11-01T10:02:36.088454099Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 1 10:02:36.088778 containerd[1608]: time="2025-11-01T10:02:36.088752949Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 1 10:02:36.088778 containerd[1608]: time="2025-11-01T10:02:36.088769741Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 1 10:02:36.088837 containerd[1608]: time="2025-11-01T10:02:36.088781313Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 1 10:02:36.088837 containerd[1608]: time="2025-11-01T10:02:36.088789768Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Nov 1 10:02:36.089078 containerd[1608]: time="2025-11-01T10:02:36.089055096Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Nov 1 10:02:36.089078 containerd[1608]: 
time="2025-11-01T10:02:36.089071627Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 1 10:02:36.089259 containerd[1608]: time="2025-11-01T10:02:36.089240163Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 1 10:02:36.090581 containerd[1608]: time="2025-11-01T10:02:36.089642097Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 1 10:02:36.090632 containerd[1608]: time="2025-11-01T10:02:36.090607757Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 1 10:02:36.090632 containerd[1608]: time="2025-11-01T10:02:36.090619289Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 1 10:02:36.090682 containerd[1608]: time="2025-11-01T10:02:36.090659063Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 1 10:02:36.090929 containerd[1608]: time="2025-11-01T10:02:36.090907209Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 1 10:02:36.091129 containerd[1608]: time="2025-11-01T10:02:36.091077408Z" level=info msg="metadata content store policy set" policy=shared Nov 1 10:02:36.098230 containerd[1608]: time="2025-11-01T10:02:36.098179144Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 1 10:02:36.098278 containerd[1608]: time="2025-11-01T10:02:36.098242743Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Nov 1 10:02:36.098409 containerd[1608]: time="2025-11-01T10:02:36.098373819Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Nov 1 10:02:36.098409 containerd[1608]: time="2025-11-01T10:02:36.098402041Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 1 10:02:36.098455 containerd[1608]: time="2025-11-01T10:02:36.098415988Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 1 10:02:36.098455 containerd[1608]: time="2025-11-01T10:02:36.098428711Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 1 10:02:36.098455 containerd[1608]: time="2025-11-01T10:02:36.098441135Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 1 10:02:36.098455 containerd[1608]: time="2025-11-01T10:02:36.098452636Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 1 10:02:36.098539 containerd[1608]: time="2025-11-01T10:02:36.098466653Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 1 10:02:36.098539 containerd[1608]: time="2025-11-01T10:02:36.098480499Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 1 10:02:36.098539 containerd[1608]: time="2025-11-01T10:02:36.098492221Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service 
type=io.containerd.service.v1 Nov 1 10:02:36.098539 containerd[1608]: time="2025-11-01T10:02:36.098503211Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 1 10:02:36.098539 containerd[1608]: time="2025-11-01T10:02:36.098533067Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 1 10:02:36.098631 containerd[1608]: time="2025-11-01T10:02:36.098559086Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 1 10:02:36.098787 containerd[1608]: time="2025-11-01T10:02:36.098758870Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 1 10:02:36.098820 containerd[1608]: time="2025-11-01T10:02:36.098792423Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 1 10:02:36.098820 containerd[1608]: time="2025-11-01T10:02:36.098812321Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 1 10:02:36.098864 containerd[1608]: time="2025-11-01T10:02:36.098829323Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 1 10:02:36.098864 containerd[1608]: time="2025-11-01T10:02:36.098840844Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 1 10:02:36.098864 containerd[1608]: time="2025-11-01T10:02:36.098852055Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 1 10:02:36.098924 containerd[1608]: time="2025-11-01T10:02:36.098866061Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 1 10:02:36.098924 containerd[1608]: time="2025-11-01T10:02:36.098877803Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 1 10:02:36.098924 containerd[1608]: time="2025-11-01T10:02:36.098889836Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 1 10:02:36.098924 containerd[1608]: time="2025-11-01T10:02:36.098900787Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 1 10:02:36.098924 containerd[1608]: time="2025-11-01T10:02:36.098911907Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 1 10:02:36.099022 containerd[1608]: time="2025-11-01T10:02:36.098948346Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 1 10:02:36.099022 containerd[1608]: time="2025-11-01T10:02:36.099005473Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 1 10:02:36.099068 containerd[1608]: time="2025-11-01T10:02:36.099030730Z" level=info msg="Start snapshots syncer" Nov 1 10:02:36.099089 containerd[1608]: time="2025-11-01T10:02:36.099077458Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 1 10:02:36.099407 containerd[1608]: time="2025-11-01T10:02:36.099358485Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 1 10:02:36.099988 containerd[1608]: time="2025-11-01T10:02:36.099429508Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 1 10:02:36.099988 containerd[1608]: time="2025-11-01T10:02:36.099501052Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 1 10:02:36.099988 containerd[1608]: time="2025-11-01T10:02:36.099607371Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 1 10:02:36.099988 containerd[1608]: time="2025-11-01T10:02:36.099627459Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 1 10:02:36.099988 containerd[1608]: time="2025-11-01T10:02:36.099637849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 1 10:02:36.099988 containerd[1608]: time="2025-11-01T10:02:36.099647407Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 1 10:02:36.099988 containerd[1608]: time="2025-11-01T10:02:36.099661152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 1 10:02:36.099988 containerd[1608]: time="2025-11-01T10:02:36.099671772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 1 10:02:36.099988 containerd[1608]: time="2025-11-01T10:02:36.099684015Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 1 10:02:36.099988 containerd[1608]: time="2025-11-01T10:02:36.099712078Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 1 10:02:36.099988 
containerd[1608]: time="2025-11-01T10:02:36.099727907Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 1 10:02:36.099988 containerd[1608]: time="2025-11-01T10:02:36.099760759Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 1 10:02:36.099988 containerd[1608]: time="2025-11-01T10:02:36.099772211Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 1 10:02:36.099988 containerd[1608]: time="2025-11-01T10:02:36.099780757Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 1 10:02:36.100449 containerd[1608]: time="2025-11-01T10:02:36.099804832Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 1 10:02:36.100449 containerd[1608]: time="2025-11-01T10:02:36.099814630Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 1 10:02:36.100449 containerd[1608]: time="2025-11-01T10:02:36.099843234Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 1 10:02:36.100449 containerd[1608]: time="2025-11-01T10:02:36.099856058Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 1 10:02:36.100449 containerd[1608]: time="2025-11-01T10:02:36.099886034Z" level=info msg="runtime interface created" Nov 1 10:02:36.100449 containerd[1608]: time="2025-11-01T10:02:36.099891594Z" level=info msg="created NRI interface" Nov 1 10:02:36.100449 containerd[1608]: time="2025-11-01T10:02:36.099900190Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 1 10:02:36.100449 containerd[1608]: time="2025-11-01T10:02:36.099910610Z" level=info msg="Connect containerd service" Nov 1 10:02:36.100449 containerd[1608]: time="2025-11-01T10:02:36.100056203Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 1 10:02:36.101125 containerd[1608]: time="2025-11-01T10:02:36.101091765Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 10:02:36.184895 tar[1604]: linux-amd64/README.md Nov 1 10:02:36.240017 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 1 10:02:36.321776 containerd[1608]: time="2025-11-01T10:02:36.321616849Z" level=info msg="Start subscribing containerd event" Nov 1 10:02:36.321895 containerd[1608]: time="2025-11-01T10:02:36.321822545Z" level=info msg="Start recovering state" Nov 1 10:02:36.322084 containerd[1608]: time="2025-11-01T10:02:36.322051023Z" level=info msg="Start event monitor" Nov 1 10:02:36.322127 containerd[1608]: time="2025-11-01T10:02:36.322065290Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 10:02:36.322191 containerd[1608]: time="2025-11-01T10:02:36.322164035Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Nov 1 10:02:36.322284 containerd[1608]: time="2025-11-01T10:02:36.322086450Z" level=info msg="Start cni network conf syncer for default" Nov 1 10:02:36.322317 containerd[1608]: time="2025-11-01T10:02:36.322286445Z" level=info msg="Start streaming server" Nov 1 10:02:36.322345 containerd[1608]: time="2025-11-01T10:02:36.322322993Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 1 10:02:36.322372 containerd[1608]: time="2025-11-01T10:02:36.322339534Z" level=info msg="runtime interface starting up..." Nov 1 10:02:36.322372 containerd[1608]: time="2025-11-01T10:02:36.322364070Z" level=info msg="starting plugins..." Nov 1 10:02:36.322437 containerd[1608]: time="2025-11-01T10:02:36.322412561Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 1 10:02:36.322773 containerd[1608]: time="2025-11-01T10:02:36.322739745Z" level=info msg="containerd successfully booted in 0.302289s" Nov 1 10:02:36.322972 systemd[1]: Started containerd.service - containerd container runtime. Nov 1 10:02:36.371277 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 1 10:02:36.374437 systemd[1]: Started sshd@0-10.0.0.64:22-10.0.0.1:55488.service - OpenSSH per-connection server daemon (10.0.0.1:55488). Nov 1 10:02:36.463944 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 55488 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:02:36.466167 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:02:36.473861 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 1 10:02:36.477072 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 1 10:02:36.486103 systemd-logind[1586]: New session 1 of user core. Nov 1 10:02:36.503943 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 1 10:02:36.509719 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 1 10:02:36.529511 (systemd)[1692]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 10:02:36.532434 systemd-logind[1586]: New session c1 of user core. Nov 1 10:02:36.676959 systemd-networkd[1500]: eth0: Gained IPv6LL Nov 1 10:02:36.680153 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 1 10:02:36.683374 systemd[1]: Reached target network-online.target - Network is Online. Nov 1 10:02:36.687419 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 1 10:02:36.690921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 10:02:36.693988 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 1 10:02:36.723902 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 1 10:02:36.796415 systemd[1692]: Queued start job for default target default.target. Nov 1 10:02:36.937464 systemd[1692]: Created slice app.slice - User Application Slice. Nov 1 10:02:36.937494 systemd[1692]: Reached target paths.target - Paths. Nov 1 10:02:36.937565 systemd[1692]: Reached target timers.target - Timers. Nov 1 10:02:36.939281 systemd[1692]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 1 10:02:36.940774 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 1 10:02:36.941074 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Nov 1 10:02:36.943657 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 1 10:02:36.953321 systemd[1692]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 1 10:02:36.953506 systemd[1692]: Reached target sockets.target - Sockets. Nov 1 10:02:36.953548 systemd[1692]: Reached target basic.target - Basic System. Nov 1 10:02:36.953608 systemd[1692]: Reached target default.target - Main User Target. Nov 1 10:02:36.953644 systemd[1692]: Startup finished in 338ms. Nov 1 10:02:36.954286 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 1 10:02:36.970942 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 1 10:02:37.012041 systemd[1]: Started sshd@1-10.0.0.64:22-10.0.0.1:55500.service - OpenSSH per-connection server daemon (10.0.0.1:55500). Nov 1 10:02:37.133444 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 55500 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:02:37.135034 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:02:37.140122 systemd-logind[1586]: New session 2 of user core. Nov 1 10:02:37.149826 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 1 10:02:37.166589 sshd[1725]: Connection closed by 10.0.0.1 port 55500 Nov 1 10:02:37.166887 sshd-session[1722]: pam_unix(sshd:session): session closed for user core Nov 1 10:02:37.179112 systemd[1]: sshd@1-10.0.0.64:22-10.0.0.1:55500.service: Deactivated successfully. Nov 1 10:02:37.181058 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 10:02:37.181785 systemd-logind[1586]: Session 2 logged out. Waiting for processes to exit. Nov 1 10:02:37.184613 systemd[1]: Started sshd@2-10.0.0.64:22-10.0.0.1:55504.service - OpenSSH per-connection server daemon (10.0.0.1:55504). Nov 1 10:02:37.188177 systemd-logind[1586]: Removed session 2. Nov 1 10:02:37.263998 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 55504 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:02:37.265972 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:02:37.271402 systemd-logind[1586]: New session 3 of user core. Nov 1 10:02:37.278887 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 1 10:02:37.296783 sshd[1734]: Connection closed by 10.0.0.1 port 55504 Nov 1 10:02:37.299611 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Nov 1 10:02:37.304211 systemd[1]: sshd@2-10.0.0.64:22-10.0.0.1:55504.service: Deactivated successfully. Nov 1 10:02:37.306201 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 10:02:37.306974 systemd-logind[1586]: Session 3 logged out. Waiting for processes to exit. Nov 1 10:02:37.308192 systemd-logind[1586]: Removed session 3. Nov 1 10:02:38.168635 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 10:02:38.171543 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 1 10:02:38.173854 systemd[1]: Startup finished in 3.407s (kernel) + 7.279s (initrd) + 6.314s (userspace) = 17.001s. 
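A note on the startup summary above: the three rounded figures sum to 17.000s (3.407 + 7.279 + 6.314), while the printed total is 17.001s. systemd derives the total from the raw microsecond-resolution monotonic timestamps and rounds each span independently for display, so a 1 ms discrepancy in the visible sum is expected rather than a logging error.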
Nov 1 10:02:38.184052 (kubelet)[1744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 10:02:38.710563 kubelet[1744]: E1101 10:02:38.710464 1744 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 10:02:38.714759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 10:02:38.714972 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 10:02:38.715380 systemd[1]: kubelet.service: Consumed 1.822s CPU time, 255.7M memory peak. Nov 1 10:02:47.315531 systemd[1]: Started sshd@3-10.0.0.64:22-10.0.0.1:59078.service - OpenSSH per-connection server daemon (10.0.0.1:59078). Nov 1 10:02:47.391527 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 59078 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:02:47.393465 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:02:47.399388 systemd-logind[1586]: New session 4 of user core. Nov 1 10:02:47.406841 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 1 10:02:47.423270 sshd[1760]: Connection closed by 10.0.0.1 port 59078 Nov 1 10:02:47.423667 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Nov 1 10:02:47.443824 systemd[1]: sshd@3-10.0.0.64:22-10.0.0.1:59078.service: Deactivated successfully. Nov 1 10:02:47.446199 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 10:02:47.447109 systemd-logind[1586]: Session 4 logged out. Waiting for processes to exit. Nov 1 10:02:47.450675 systemd[1]: Started sshd@4-10.0.0.64:22-10.0.0.1:59090.service - OpenSSH per-connection server daemon (10.0.0.1:59090). Nov 1 10:02:47.451460 systemd-logind[1586]: Removed session 4. Nov 1 10:02:47.515320 sshd[1766]: Accepted publickey for core from 10.0.0.1 port 59090 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:02:47.517104 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:02:47.521871 systemd-logind[1586]: New session 5 of user core. Nov 1 10:02:47.529807 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 1 10:02:47.540622 sshd[1769]: Connection closed by 10.0.0.1 port 59090 Nov 1 10:02:47.541009 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Nov 1 10:02:47.554346 systemd[1]: sshd@4-10.0.0.64:22-10.0.0.1:59090.service: Deactivated successfully. Nov 1 10:02:47.555955 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 10:02:47.556671 systemd-logind[1586]: Session 5 logged out. Waiting for processes to exit. Nov 1 10:02:47.559110 systemd[1]: Started sshd@5-10.0.0.64:22-10.0.0.1:59104.service - OpenSSH per-connection server daemon (10.0.0.1:59104). Nov 1 10:02:47.559653 systemd-logind[1586]: Removed session 5. Nov 1 10:02:47.634387 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 59104 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:02:47.635765 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:02:47.640355 systemd-logind[1586]: New session 6 of user core. Nov 1 10:02:47.658819 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 1 10:02:47.674318 sshd[1778]: Connection closed by 10.0.0.1 port 59104 Nov 1 10:02:47.674599 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Nov 1 10:02:47.687441 systemd[1]: sshd@5-10.0.0.64:22-10.0.0.1:59104.service: Deactivated successfully. Nov 1 10:02:47.689253 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 10:02:47.689990 systemd-logind[1586]: Session 6 logged out. Waiting for processes to exit. Nov 1 10:02:47.692762 systemd[1]: Started sshd@6-10.0.0.64:22-10.0.0.1:59112.service - OpenSSH per-connection server daemon (10.0.0.1:59112). Nov 1 10:02:47.693351 systemd-logind[1586]: Removed session 6. Nov 1 10:02:47.759726 sshd[1784]: Accepted publickey for core from 10.0.0.1 port 59112 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:02:47.761162 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:02:47.766698 systemd-logind[1586]: New session 7 of user core. Nov 1 10:02:47.777836 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 1 10:02:47.807071 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 10:02:47.807404 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 10:02:47.829371 sudo[1788]: pam_unix(sudo:session): session closed for user root Nov 1 10:02:47.831280 sshd[1787]: Connection closed by 10.0.0.1 port 59112 Nov 1 10:02:47.831591 sshd-session[1784]: pam_unix(sshd:session): session closed for user core Nov 1 10:02:47.844384 systemd[1]: sshd@6-10.0.0.64:22-10.0.0.1:59112.service: Deactivated successfully. Nov 1 10:02:47.846218 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 10:02:47.846977 systemd-logind[1586]: Session 7 logged out. Waiting for processes to exit. Nov 1 10:02:47.849644 systemd[1]: Started sshd@7-10.0.0.64:22-10.0.0.1:59126.service - OpenSSH per-connection server daemon (10.0.0.1:59126). Nov 1 10:02:47.850409 systemd-logind[1586]: Removed session 7. Nov 1 10:02:47.905013 sshd[1794]: Accepted publickey for core from 10.0.0.1 port 59126 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:02:47.906436 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:02:47.910915 systemd-logind[1586]: New session 8 of user core. Nov 1 10:02:47.925853 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 1 10:02:47.942443 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 10:02:47.943406 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 10:02:47.950021 sudo[1799]: pam_unix(sudo:session): session closed for user root Nov 1 10:02:47.958203 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 1 10:02:47.958524 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 10:02:47.968790 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 1 10:02:48.013485 augenrules[1821]: No rules Nov 1 10:02:48.015231 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 10:02:48.015562 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Nov 1 10:02:48.016731 sudo[1798]: pam_unix(sudo:session): session closed for user root Nov 1 10:02:48.018703 sshd[1797]: Connection closed by 10.0.0.1 port 59126 Nov 1 10:02:48.019153 sshd-session[1794]: pam_unix(sshd:session): session closed for user core Nov 1 10:02:48.030261 systemd[1]: sshd@7-10.0.0.64:22-10.0.0.1:59126.service: Deactivated successfully. Nov 1 10:02:48.032733 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 10:02:48.033496 systemd-logind[1586]: Session 8 logged out. Waiting for processes to exit. Nov 1 10:02:48.037198 systemd[1]: Started sshd@8-10.0.0.64:22-10.0.0.1:59142.service - OpenSSH per-connection server daemon (10.0.0.1:59142). Nov 1 10:02:48.037964 systemd-logind[1586]: Removed session 8. Nov 1 10:02:48.104650 sshd[1830]: Accepted publickey for core from 10.0.0.1 port 59142 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:02:48.105974 sshd-session[1830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:02:48.110594 systemd-logind[1586]: New session 9 of user core. Nov 1 10:02:48.131847 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 1 10:02:48.146339 sudo[1834]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 10:02:48.146685 sudo[1834]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 10:02:48.719294 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 10:02:48.722958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 10:02:49.020588 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 1 10:02:49.040977 (dockerd)[1859]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 1 10:02:49.045121 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 10:02:49.067154 (kubelet)[1863]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 10:02:49.172286 kubelet[1863]: E1101 10:02:49.172202 1863 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 10:02:49.180486 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 10:02:49.180734 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 10:02:49.181191 systemd[1]: kubelet.service: Consumed 383ms CPU time, 111.1M memory peak. Nov 1 10:02:49.700999 dockerd[1859]: time="2025-11-01T10:02:49.700892548Z" level=info msg="Starting up" Nov 1 10:02:49.702143 dockerd[1859]: time="2025-11-01T10:02:49.701846707Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 1 10:02:49.724737 dockerd[1859]: time="2025-11-01T10:02:49.724638291Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 1 10:02:50.021088 systemd[1]: var-lib-docker-metacopy\x2dcheck961868138-merged.mount: Deactivated successfully. Nov 1 10:02:50.052026 dockerd[1859]: time="2025-11-01T10:02:50.051962053Z" level=info msg="Loading containers: start." 
Nov 1 10:02:50.065727 kernel: Initializing XFRM netlink socket Nov 1 10:02:50.367476 systemd-networkd[1500]: docker0: Link UP Nov 1 10:02:50.373425 dockerd[1859]: time="2025-11-01T10:02:50.373385151Z" level=info msg="Loading containers: done." Nov 1 10:02:50.394070 dockerd[1859]: time="2025-11-01T10:02:50.393998491Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 10:02:50.394234 dockerd[1859]: time="2025-11-01T10:02:50.394144284Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 1 10:02:50.394430 dockerd[1859]: time="2025-11-01T10:02:50.394402178Z" level=info msg="Initializing buildkit" Nov 1 10:02:50.426515 dockerd[1859]: time="2025-11-01T10:02:50.426455378Z" level=info msg="Completed buildkit initialization" Nov 1 10:02:50.433324 dockerd[1859]: time="2025-11-01T10:02:50.433268733Z" level=info msg="Daemon has completed initialization" Nov 1 10:02:50.433452 dockerd[1859]: time="2025-11-01T10:02:50.433370754Z" level=info msg="API listen on /run/docker.sock" Nov 1 10:02:50.433596 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 1 10:02:51.010020 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2141410783-merged.mount: Deactivated successfully. Nov 1 10:02:51.376633 containerd[1608]: time="2025-11-01T10:02:51.376337236Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 1 10:02:52.071995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2176612206.mount: Deactivated successfully. Nov 1 10:02:53.398574 containerd[1608]: time="2025-11-01T10:02:53.398490893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:02:53.399340 containerd[1608]: time="2025-11-01T10:02:53.399296815Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=25393225" Nov 1 10:02:53.400833 containerd[1608]: time="2025-11-01T10:02:53.400767973Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:02:53.403316 containerd[1608]: time="2025-11-01T10:02:53.403280203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:02:53.404200 containerd[1608]: time="2025-11-01T10:02:53.404168359Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.027748047s" Nov 1 10:02:53.404263 containerd[1608]: time="2025-11-01T10:02:53.404208975Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 1 10:02:53.405106 containerd[1608]: time="2025-11-01T10:02:53.404933754Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 1 10:02:54.822727 
containerd[1608]: time="2025-11-01T10:02:54.822645814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:02:54.823602 containerd[1608]: time="2025-11-01T10:02:54.823529140Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21151604" Nov 1 10:02:54.824725 containerd[1608]: time="2025-11-01T10:02:54.824663497Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:02:54.827503 containerd[1608]: time="2025-11-01T10:02:54.827454219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:02:54.828746 containerd[1608]: time="2025-11-01T10:02:54.828709493Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.423724884s" Nov 1 10:02:54.828746 containerd[1608]: time="2025-11-01T10:02:54.828745080Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 1 10:02:54.829349 containerd[1608]: time="2025-11-01T10:02:54.829286485Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 1 10:02:55.954888 containerd[1608]: time="2025-11-01T10:02:55.954795033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:02:55.955729 containerd[1608]: time="2025-11-01T10:02:55.955664644Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=0" Nov 1 10:02:55.957218 containerd[1608]: time="2025-11-01T10:02:55.957184563Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:02:55.959857 containerd[1608]: time="2025-11-01T10:02:55.959783206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:02:55.960643 containerd[1608]: time="2025-11-01T10:02:55.960609716Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.131259932s" Nov 1 10:02:55.960643 containerd[1608]: time="2025-11-01T10:02:55.960641225Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 1 10:02:55.961225 containerd[1608]: time="2025-11-01T10:02:55.961189382Z" 
level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 1 10:02:57.929070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2663034920.mount: Deactivated successfully. Nov 1 10:02:58.289993 containerd[1608]: time="2025-11-01T10:02:58.289923889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:02:58.290649 containerd[1608]: time="2025-11-01T10:02:58.290619052Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25961619" Nov 1 10:02:58.291851 containerd[1608]: time="2025-11-01T10:02:58.291806990Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:02:58.293592 containerd[1608]: time="2025-11-01T10:02:58.293563964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:02:58.294108 containerd[1608]: time="2025-11-01T10:02:58.294074612Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 2.332849702s" Nov 1 10:02:58.294158 containerd[1608]: time="2025-11-01T10:02:58.294106511Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 1 10:02:58.294602 containerd[1608]: time="2025-11-01T10:02:58.294566895Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 1 10:02:58.929370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1547917619.mount: Deactivated successfully. Nov 1 10:02:59.219215 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 10:02:59.222828 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 10:02:59.629615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 10:02:59.634532 (kubelet)[2190]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 10:02:59.881997 kubelet[2190]: E1101 10:02:59.881820 2190 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 10:02:59.886355 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 10:02:59.886565 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 10:02:59.887028 systemd[1]: kubelet.service: Consumed 455ms CPU time, 110.9M memory peak. 
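The recurring kubelet failures above (restart counter 1 and 2) are the expected pre-initialization state on a kubeadm-provisioned node: the unit starts before kubeadm init or kubeadm join has written /var/lib/kubelet/config.yaml, kubelet exits with status 1, and systemd schedules another restart. A minimal Go sketch of the failing load path, with the path constant and error wrapping as illustrative assumptions rather than kubelet's actual code:

package main

import (
	"fmt"
	"os"
)

// Path kubelet is looking for in the log above; kubeadm writes it during init/join.
const kubeletConfigPath = "/var/lib/kubelet/config.yaml"

// loadKubeletConfig mirrors the failure mode in the log: a missing file turns
// into a wrapped "failed to load Kubelet config file" error.
func loadKubeletConfig(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("failed to load Kubelet config file %s, error: %w", path, err)
	}
	return data, nil
}

func main() {
	if _, err := loadKubeletConfig(kubeletConfigPath); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1) // matches the unit's status=1/FAILURE above
	}
}

Once the file exists the same unit proceeds normally; later in this log the kubelet started at 10:03:09 loads its config and moves on to API-server bootstrapping, where the subsequent connection-refused errors are the separate, equally expected wait for the control plane to come up.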
Nov 1 10:03:00.648561 containerd[1608]: time="2025-11-01T10:03:00.648487676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:03:00.649441 containerd[1608]: time="2025-11-01T10:03:00.649410666Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=21821241" Nov 1 10:03:00.650762 containerd[1608]: time="2025-11-01T10:03:00.650717256Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:03:00.653709 containerd[1608]: time="2025-11-01T10:03:00.653651227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:03:00.655256 containerd[1608]: time="2025-11-01T10:03:00.655122616Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.360525194s" Nov 1 10:03:00.655310 containerd[1608]: time="2025-11-01T10:03:00.655257429Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 1 10:03:00.655790 containerd[1608]: time="2025-11-01T10:03:00.655767686Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 1 10:03:01.336402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1297009942.mount: Deactivated successfully. 
Nov 1 10:03:01.342030 containerd[1608]: time="2025-11-01T10:03:01.341941541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:03:01.343043 containerd[1608]: time="2025-11-01T10:03:01.342968286Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Nov 1 10:03:01.344298 containerd[1608]: time="2025-11-01T10:03:01.344234781Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:03:01.346963 containerd[1608]: time="2025-11-01T10:03:01.346907232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:03:01.347966 containerd[1608]: time="2025-11-01T10:03:01.347918007Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 692.12257ms" Nov 1 10:03:01.348023 containerd[1608]: time="2025-11-01T10:03:01.347970836Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 1 10:03:01.348548 containerd[1608]: time="2025-11-01T10:03:01.348513173Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 1 10:03:05.295491 containerd[1608]: time="2025-11-01T10:03:05.295417648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:03:05.296242 containerd[1608]: time="2025-11-01T10:03:05.296178865Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=61186606" Nov 1 10:03:05.297441 containerd[1608]: time="2025-11-01T10:03:05.297398352Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:03:05.300382 containerd[1608]: time="2025-11-01T10:03:05.300329167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:03:05.301320 containerd[1608]: time="2025-11-01T10:03:05.301280481Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.952738825s" Nov 1 10:03:05.301320 containerd[1608]: time="2025-11-01T10:03:05.301318672Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 1 10:03:08.633361 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 10:03:08.633546 systemd[1]: kubelet.service: Consumed 455ms CPU time, 110.9M memory peak. 
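The per-image durations in the Pulled messages (e.g. "in 3.952738825s" for etcd above) are wall-clock deltas measured inside containerd between the PullImage request and completion, and they line up with the surrounding log timestamps. A quick cross-check in Go, with both timestamps copied from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// PullImage request and Pulled completion for registry.k8s.io/etcd:3.6.4-0,
	// taken from the two log entries above.
	start, err := time.Parse(time.RFC3339Nano, "2025-11-01T10:03:01.348513173Z")
	if err != nil {
		panic(err)
	}
	done, err := time.Parse(time.RFC3339Nano, "2025-11-01T10:03:05.301280481Z")
	if err != nil {
		panic(err)
	}
	fmt.Println(done.Sub(start)) // prints 3.952767308s, vs. the reported 3.952738825s
}

The residual difference of roughly 30 microseconds is presumably measurement and log-emission overhead: the duration is computed before the completion entry is written.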
Nov 1 10:03:08.635923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 10:03:08.667365 systemd[1]: Reload requested from client PID 2315 ('systemctl') (unit session-9.scope)... Nov 1 10:03:08.667397 systemd[1]: Reloading... Nov 1 10:03:08.787733 zram_generator::config[2359]: No configuration found. Nov 1 10:03:09.290611 systemd[1]: Reloading finished in 622 ms. Nov 1 10:03:09.365380 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 10:03:09.365485 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 10:03:09.365854 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 10:03:09.365902 systemd[1]: kubelet.service: Consumed 173ms CPU time, 98.1M memory peak. Nov 1 10:03:09.367595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 10:03:09.550842 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 10:03:09.556157 (kubelet)[2407]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 10:03:09.601046 kubelet[2407]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 10:03:09.601046 kubelet[2407]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 10:03:09.601385 kubelet[2407]: I1101 10:03:09.601137 2407 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 10:03:10.018129 kubelet[2407]: I1101 10:03:10.018088 2407 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 1 10:03:10.018129 kubelet[2407]: I1101 10:03:10.018120 2407 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 10:03:10.018262 kubelet[2407]: I1101 10:03:10.018157 2407 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 1 10:03:10.018262 kubelet[2407]: I1101 10:03:10.018169 2407 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 10:03:10.018484 kubelet[2407]: I1101 10:03:10.018461 2407 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 10:03:10.569262 kubelet[2407]: I1101 10:03:10.569090 2407 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 10:03:10.569448 kubelet[2407]: E1101 10:03:10.569412 2407 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.64:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 10:03:10.573287 kubelet[2407]: I1101 10:03:10.573262 2407 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 1 10:03:10.579126 kubelet[2407]: I1101 10:03:10.579089 2407 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 1 10:03:10.580504 kubelet[2407]: I1101 10:03:10.580456 2407 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 10:03:10.580684 kubelet[2407]: I1101 10:03:10.580494 2407 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 10:03:10.580866 kubelet[2407]: I1101 10:03:10.580713 2407 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 10:03:10.580866 kubelet[2407]: I1101 10:03:10.580725 2407 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 10:03:10.580914 kubelet[2407]: I1101 10:03:10.580872 2407 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 1 10:03:10.584698 kubelet[2407]: I1101 10:03:10.584646 2407 state_mem.go:36] "Initialized new in-memory state store" Nov 1 10:03:10.584920 kubelet[2407]: I1101 10:03:10.584894 2407 kubelet.go:475] "Attempting to sync node with API server" Nov 1 10:03:10.584920 kubelet[2407]: I1101 10:03:10.584918 2407 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 10:03:10.584983 kubelet[2407]: I1101 10:03:10.584954 2407 kubelet.go:387] "Adding apiserver pod source" Nov 1 10:03:10.584983 kubelet[2407]: I1101 10:03:10.584984 2407 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 10:03:10.585558 kubelet[2407]: E1101 10:03:10.585522 2407 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 10:03:10.585938 kubelet[2407]: E1101 10:03:10.585898 2407 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 10:03:10.587904 kubelet[2407]: I1101 10:03:10.587882 2407 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 1 10:03:10.588437 kubelet[2407]: I1101 10:03:10.588412 2407 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 10:03:10.588475 kubelet[2407]: I1101 10:03:10.588443 2407 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 10:03:10.588529 kubelet[2407]: W1101 10:03:10.588516 2407 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 10:03:10.592300 kubelet[2407]: I1101 10:03:10.592278 2407 server.go:1262] "Started kubelet" Nov 1 10:03:10.592371 kubelet[2407]: I1101 10:03:10.592350 2407 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 10:03:10.592914 kubelet[2407]: I1101 10:03:10.592798 2407 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 10:03:10.592914 kubelet[2407]: I1101 10:03:10.592884 2407 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 1 10:03:10.593469 kubelet[2407]: I1101 10:03:10.593450 2407 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 10:03:10.597721 kubelet[2407]: I1101 10:03:10.597679 2407 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 10:03:10.598051 kubelet[2407]: I1101 10:03:10.598015 2407 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 10:03:10.600355 kubelet[2407]: E1101 10:03:10.599170 2407 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.64:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.64:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1873d9d5958c2c52 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-01 10:03:10.592248914 +0000 UTC m=+1.031891688,LastTimestamp:2025-11-01 10:03:10.592248914 +0000 UTC m=+1.031891688,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 1 10:03:10.602492 kubelet[2407]: E1101 10:03:10.602028 2407 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 10:03:10.602492 kubelet[2407]: I1101 10:03:10.602104 2407 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 10:03:10.602492 kubelet[2407]: I1101 10:03:10.602486 2407 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 10:03:10.602912 kubelet[2407]: I1101 10:03:10.602623 2407 reconciler.go:29] "Reconciler: start to sync state" Nov 1 10:03:10.605734 kubelet[2407]: E1101 10:03:10.603338 2407 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 10:03:10.605734 kubelet[2407]: I1101 10:03:10.603588 2407 server.go:310] "Adding debug handlers to kubelet server" Nov 1 10:03:10.699436 kubelet[2407]: E1101 10:03:10.699019 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="200ms" Nov 1 10:03:10.699436 kubelet[2407]: E1101 10:03:10.699353 2407 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 10:03:10.702161 kubelet[2407]: E1101 10:03:10.702129 2407 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 10:03:10.703064 kubelet[2407]: I1101 10:03:10.703044 2407 factory.go:223] Registration of the containerd container factory successfully Nov 1 10:03:10.703064 kubelet[2407]: I1101 10:03:10.703060 2407 factory.go:223] Registration of the systemd container factory successfully Nov 1 10:03:10.703146 kubelet[2407]: I1101 10:03:10.703136 2407 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 10:03:10.715709 kubelet[2407]: I1101 10:03:10.715608 2407 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 1 10:03:10.717325 kubelet[2407]: I1101 10:03:10.717278 2407 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 1 10:03:10.717325 kubelet[2407]: I1101 10:03:10.717315 2407 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 10:03:10.717434 kubelet[2407]: I1101 10:03:10.717347 2407 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 10:03:10.717434 kubelet[2407]: E1101 10:03:10.717387 2407 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 10:03:10.721813 kubelet[2407]: E1101 10:03:10.721779 2407 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 10:03:10.722348 kubelet[2407]: I1101 10:03:10.722328 2407 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 10:03:10.722348 kubelet[2407]: I1101 10:03:10.722345 2407 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 10:03:10.722446 kubelet[2407]: I1101 10:03:10.722375 2407 state_mem.go:36] "Initialized new in-memory state store" Nov 1 10:03:10.725226 kubelet[2407]: I1101 10:03:10.725197 2407 policy_none.go:49] "None policy: Start" Nov 1 10:03:10.725226 kubelet[2407]: I1101 10:03:10.725227 2407 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 10:03:10.725330 kubelet[2407]: I1101 10:03:10.725241 2407 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 10:03:10.727421 kubelet[2407]: I1101 10:03:10.727400 2407 policy_none.go:47] "Start" Nov 1 10:03:10.732136 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 1 10:03:10.748942 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 1 10:03:10.752627 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 1 10:03:10.772792 kubelet[2407]: E1101 10:03:10.772762 2407 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 10:03:10.773000 kubelet[2407]: I1101 10:03:10.772974 2407 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 10:03:10.773033 kubelet[2407]: I1101 10:03:10.772989 2407 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 10:03:10.774208 kubelet[2407]: E1101 10:03:10.774076 2407 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 10:03:10.774208 kubelet[2407]: E1101 10:03:10.774154 2407 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 1 10:03:10.899874 kubelet[2407]: E1101 10:03:10.899652 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="400ms" Nov 1 10:03:10.958558 kubelet[2407]: I1101 10:03:10.875055 2407 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 10:03:10.958558 kubelet[2407]: I1101 10:03:10.958362 2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1ec20e517aba91a26cdeca58a533303-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c1ec20e517aba91a26cdeca58a533303\") " pod="kube-system/kube-apiserver-localhost" Nov 1 10:03:10.958909 kubelet[2407]: E1101 10:03:10.958727 2407 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Nov 1 10:03:10.960505 kubelet[2407]: I1101 10:03:10.960474 2407 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 10:03:10.967978 systemd[1]: Created slice kubepods-burstable-podc1ec20e517aba91a26cdeca58a533303.slice - libcontainer container kubepods-burstable-podc1ec20e517aba91a26cdeca58a533303.slice. Nov 1 10:03:10.996160 kubelet[2407]: E1101 10:03:10.996115 2407 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:03:10.999223 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Nov 1 10:03:11.010999 kubelet[2407]: E1101 10:03:11.010960 2407 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:03:11.013858 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. 
Nov 1 10:03:11.015479 kubelet[2407]: E1101 10:03:11.015453 2407 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:03:11.058529 kubelet[2407]: I1101 10:03:11.058505 2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1ec20e517aba91a26cdeca58a533303-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c1ec20e517aba91a26cdeca58a533303\") " pod="kube-system/kube-apiserver-localhost" Nov 1 10:03:11.058573 kubelet[2407]: I1101 10:03:11.058533 2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:03:11.058573 kubelet[2407]: I1101 10:03:11.058550 2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:03:11.058573 kubelet[2407]: I1101 10:03:11.058564 2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 1 10:03:11.058648 kubelet[2407]: I1101 10:03:11.058580 2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:03:11.058648 kubelet[2407]: I1101 10:03:11.058596 2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:03:11.058648 kubelet[2407]: I1101 10:03:11.058616 2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:03:11.058744 kubelet[2407]: I1101 10:03:11.058663 2407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1ec20e517aba91a26cdeca58a533303-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c1ec20e517aba91a26cdeca58a533303\") " pod="kube-system/kube-apiserver-localhost" Nov 1 10:03:11.160903 kubelet[2407]: I1101 10:03:11.160775 2407 kubelet_node_status.go:75] "Attempting to register node" 
node="localhost" Nov 1 10:03:11.161140 kubelet[2407]: E1101 10:03:11.161115 2407 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Nov 1 10:03:11.299758 kubelet[2407]: E1101 10:03:11.299723 2407 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:11.300060 kubelet[2407]: E1101 10:03:11.300010 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="800ms" Nov 1 10:03:11.300513 containerd[1608]: time="2025-11-01T10:03:11.300476484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c1ec20e517aba91a26cdeca58a533303,Namespace:kube-system,Attempt:0,}" Nov 1 10:03:11.314067 kubelet[2407]: E1101 10:03:11.314020 2407 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:11.314359 containerd[1608]: time="2025-11-01T10:03:11.314330724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Nov 1 10:03:11.318660 kubelet[2407]: E1101 10:03:11.318635 2407 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:11.318959 containerd[1608]: time="2025-11-01T10:03:11.318932416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Nov 1 10:03:11.543841 kubelet[2407]: E1101 10:03:11.543780 2407 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 10:03:11.563250 kubelet[2407]: I1101 10:03:11.563196 2407 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 10:03:11.563655 kubelet[2407]: E1101 10:03:11.563611 2407 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Nov 1 10:03:11.589920 kubelet[2407]: E1101 10:03:11.589851 2407 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 10:03:11.755372 kubelet[2407]: E1101 10:03:11.755317 2407 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 10:03:11.837853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1824723062.mount: Deactivated successfully. Nov 1 10:03:11.843846 containerd[1608]: time="2025-11-01T10:03:11.843790336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 10:03:11.845777 containerd[1608]: time="2025-11-01T10:03:11.845707324Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 1 10:03:11.848771 containerd[1608]: time="2025-11-01T10:03:11.848727819Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 10:03:11.849576 containerd[1608]: time="2025-11-01T10:03:11.849540700Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 10:03:11.851414 containerd[1608]: time="2025-11-01T10:03:11.851338862Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 1 10:03:11.852404 containerd[1608]: time="2025-11-01T10:03:11.852325134Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 10:03:11.853036 kubelet[2407]: E1101 10:03:11.852991 2407 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 10:03:11.853245 containerd[1608]: time="2025-11-01T10:03:11.853222617Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 1 10:03:11.854227 containerd[1608]: time="2025-11-01T10:03:11.854191476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 10:03:11.854998 containerd[1608]: time="2025-11-01T10:03:11.854959883Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 538.667004ms" Nov 1 10:03:11.857558 containerd[1608]: time="2025-11-01T10:03:11.857533514Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 536.681475ms" Nov 1 10:03:11.860217 containerd[1608]: time="2025-11-01T10:03:11.860180937Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 556.446846ms" Nov 1 10:03:11.920310 containerd[1608]: time="2025-11-01T10:03:11.920253134Z" level=info msg="connecting to shim d0a9a7762378a9afca081032b40bf929ab4d00f4044c499e537f96cee373a3dc" address="unix:///run/containerd/s/07c3f8b0238d0edc92168de048c51d18c1c7c32de7b3e2490db44f3591c50f39" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:03:11.921734 containerd[1608]: time="2025-11-01T10:03:11.921668344Z" level=info msg="connecting to shim 883f552b485b048d7b7ce482d65f9741d209e8128805f222852ad890a782d6c6" address="unix:///run/containerd/s/4f4ac84da37e8bcaea3af663373fa3b0345111f9e060323f4af452cf65f305b9" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:03:11.927719 containerd[1608]: time="2025-11-01T10:03:11.927642246Z" level=info msg="connecting to shim 8d856775e28563af8a7212c7330beb0ba493e66ed7000fc69b80b46aecb55c65" address="unix:///run/containerd/s/b4ed2731ca95c4a7ea5e94dbd97a97fb1c7c7ff68700b5abeca2aa710bf54bfc" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:03:11.995496 systemd[1]: Started cri-containerd-d0a9a7762378a9afca081032b40bf929ab4d00f4044c499e537f96cee373a3dc.scope - libcontainer container d0a9a7762378a9afca081032b40bf929ab4d00f4044c499e537f96cee373a3dc. Nov 1 10:03:12.008312 systemd[1]: Started cri-containerd-883f552b485b048d7b7ce482d65f9741d209e8128805f222852ad890a782d6c6.scope - libcontainer container 883f552b485b048d7b7ce482d65f9741d209e8128805f222852ad890a782d6c6. Nov 1 10:03:12.013854 systemd[1]: Started cri-containerd-8d856775e28563af8a7212c7330beb0ba493e66ed7000fc69b80b46aecb55c65.scope - libcontainer container 8d856775e28563af8a7212c7330beb0ba493e66ed7000fc69b80b46aecb55c65. 
Nov 1 10:03:12.101889 kubelet[2407]: E1101 10:03:12.101171 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="1.6s" Nov 1 10:03:12.155607 containerd[1608]: time="2025-11-01T10:03:12.154184006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0a9a7762378a9afca081032b40bf929ab4d00f4044c499e537f96cee373a3dc\"" Nov 1 10:03:12.156316 kubelet[2407]: E1101 10:03:12.156282 2407 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:12.166506 containerd[1608]: time="2025-11-01T10:03:12.166462443Z" level=info msg="CreateContainer within sandbox \"d0a9a7762378a9afca081032b40bf929ab4d00f4044c499e537f96cee373a3dc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 10:03:12.167446 containerd[1608]: time="2025-11-01T10:03:12.167409068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"883f552b485b048d7b7ce482d65f9741d209e8128805f222852ad890a782d6c6\"" Nov 1 10:03:12.168612 kubelet[2407]: E1101 10:03:12.168583 2407 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:12.173437 containerd[1608]: time="2025-11-01T10:03:12.173382486Z" level=info msg="CreateContainer within sandbox \"883f552b485b048d7b7ce482d65f9741d209e8128805f222852ad890a782d6c6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 10:03:12.180928 containerd[1608]: time="2025-11-01T10:03:12.180881202Z" level=info msg="Container b5c3c14ed513fbdef9e4b51e0e507481badcffed6c3de6599e4ea92956d2e6a5: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:03:12.182829 containerd[1608]: time="2025-11-01T10:03:12.182801722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c1ec20e517aba91a26cdeca58a533303,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d856775e28563af8a7212c7330beb0ba493e66ed7000fc69b80b46aecb55c65\"" Nov 1 10:03:12.183615 kubelet[2407]: E1101 10:03:12.183590 2407 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:12.187420 containerd[1608]: time="2025-11-01T10:03:12.187386012Z" level=info msg="Container 56dea64ea0d1237f67a2c2051fa307edcbb471e6b9083f3c88c887586ab5bcca: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:03:12.187982 containerd[1608]: time="2025-11-01T10:03:12.187950708Z" level=info msg="CreateContainer within sandbox \"8d856775e28563af8a7212c7330beb0ba493e66ed7000fc69b80b46aecb55c65\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 10:03:12.193424 containerd[1608]: time="2025-11-01T10:03:12.193370351Z" level=info msg="CreateContainer within sandbox \"d0a9a7762378a9afca081032b40bf929ab4d00f4044c499e537f96cee373a3dc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"b5c3c14ed513fbdef9e4b51e0e507481badcffed6c3de6599e4ea92956d2e6a5\"" Nov 1 10:03:12.194098 containerd[1608]: time="2025-11-01T10:03:12.194046329Z" level=info msg="StartContainer for \"b5c3c14ed513fbdef9e4b51e0e507481badcffed6c3de6599e4ea92956d2e6a5\"" Nov 1 10:03:12.195491 containerd[1608]: time="2025-11-01T10:03:12.195440087Z" level=info msg="connecting to shim b5c3c14ed513fbdef9e4b51e0e507481badcffed6c3de6599e4ea92956d2e6a5" address="unix:///run/containerd/s/07c3f8b0238d0edc92168de048c51d18c1c7c32de7b3e2490db44f3591c50f39" protocol=ttrpc version=3 Nov 1 10:03:12.199564 containerd[1608]: time="2025-11-01T10:03:12.199524452Z" level=info msg="CreateContainer within sandbox \"883f552b485b048d7b7ce482d65f9741d209e8128805f222852ad890a782d6c6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"56dea64ea0d1237f67a2c2051fa307edcbb471e6b9083f3c88c887586ab5bcca\"" Nov 1 10:03:12.200172 containerd[1608]: time="2025-11-01T10:03:12.200112333Z" level=info msg="StartContainer for \"56dea64ea0d1237f67a2c2051fa307edcbb471e6b9083f3c88c887586ab5bcca\"" Nov 1 10:03:12.201563 containerd[1608]: time="2025-11-01T10:03:12.201529074Z" level=info msg="connecting to shim 56dea64ea0d1237f67a2c2051fa307edcbb471e6b9083f3c88c887586ab5bcca" address="unix:///run/containerd/s/4f4ac84da37e8bcaea3af663373fa3b0345111f9e060323f4af452cf65f305b9" protocol=ttrpc version=3 Nov 1 10:03:12.204964 containerd[1608]: time="2025-11-01T10:03:12.204898888Z" level=info msg="Container 1106069ea4e7325f5d19713d24e755e19fd8caa33c37426ce275148147032fde: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:03:12.217901 systemd[1]: Started cri-containerd-b5c3c14ed513fbdef9e4b51e0e507481badcffed6c3de6599e4ea92956d2e6a5.scope - libcontainer container b5c3c14ed513fbdef9e4b51e0e507481badcffed6c3de6599e4ea92956d2e6a5. Nov 1 10:03:12.218091 containerd[1608]: time="2025-11-01T10:03:12.217964617Z" level=info msg="CreateContainer within sandbox \"8d856775e28563af8a7212c7330beb0ba493e66ed7000fc69b80b46aecb55c65\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1106069ea4e7325f5d19713d24e755e19fd8caa33c37426ce275148147032fde\"" Nov 1 10:03:12.218489 containerd[1608]: time="2025-11-01T10:03:12.218455031Z" level=info msg="StartContainer for \"1106069ea4e7325f5d19713d24e755e19fd8caa33c37426ce275148147032fde\"" Nov 1 10:03:12.220382 containerd[1608]: time="2025-11-01T10:03:12.220347489Z" level=info msg="connecting to shim 1106069ea4e7325f5d19713d24e755e19fd8caa33c37426ce275148147032fde" address="unix:///run/containerd/s/b4ed2731ca95c4a7ea5e94dbd97a97fb1c7c7ff68700b5abeca2aa710bf54bfc" protocol=ttrpc version=3 Nov 1 10:03:12.223000 systemd[1]: Started cri-containerd-56dea64ea0d1237f67a2c2051fa307edcbb471e6b9083f3c88c887586ab5bcca.scope - libcontainer container 56dea64ea0d1237f67a2c2051fa307edcbb471e6b9083f3c88c887586ab5bcca. Nov 1 10:03:12.250849 systemd[1]: Started cri-containerd-1106069ea4e7325f5d19713d24e755e19fd8caa33c37426ce275148147032fde.scope - libcontainer container 1106069ea4e7325f5d19713d24e755e19fd8caa33c37426ce275148147032fde. 
Nov 1 10:03:12.289982 containerd[1608]: time="2025-11-01T10:03:12.289932976Z" level=info msg="StartContainer for \"b5c3c14ed513fbdef9e4b51e0e507481badcffed6c3de6599e4ea92956d2e6a5\" returns successfully" Nov 1 10:03:12.295173 containerd[1608]: time="2025-11-01T10:03:12.295121918Z" level=info msg="StartContainer for \"56dea64ea0d1237f67a2c2051fa307edcbb471e6b9083f3c88c887586ab5bcca\" returns successfully" Nov 1 10:03:12.324551 containerd[1608]: time="2025-11-01T10:03:12.324510835Z" level=info msg="StartContainer for \"1106069ea4e7325f5d19713d24e755e19fd8caa33c37426ce275148147032fde\" returns successfully" Nov 1 10:03:12.437204 kubelet[2407]: I1101 10:03:12.437083 2407 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 10:03:12.437412 kubelet[2407]: E1101 10:03:12.437388 2407 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Nov 1 10:03:12.731678 kubelet[2407]: E1101 10:03:12.731627 2407 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:03:12.732969 kubelet[2407]: E1101 10:03:12.732940 2407 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:03:12.733094 kubelet[2407]: E1101 10:03:12.733071 2407 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:12.733241 kubelet[2407]: E1101 10:03:12.733218 2407 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:12.737112 kubelet[2407]: E1101 10:03:12.737083 2407 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:03:12.737211 kubelet[2407]: E1101 10:03:12.737187 2407 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:13.740138 kubelet[2407]: E1101 10:03:13.740092 2407 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:03:13.740866 kubelet[2407]: E1101 10:03:13.740245 2407 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:13.740866 kubelet[2407]: E1101 10:03:13.740558 2407 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 10:03:13.740866 kubelet[2407]: E1101 10:03:13.740647 2407 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:14.040676 kubelet[2407]: I1101 10:03:14.040451 2407 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 10:03:14.355340 kubelet[2407]: E1101 10:03:14.355195 2407 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes 
\"localhost\" not found" node="localhost" Nov 1 10:03:14.454625 kubelet[2407]: I1101 10:03:14.454574 2407 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 10:03:14.497020 kubelet[2407]: E1101 10:03:14.496913 2407 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1873d9d5958c2c52 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-01 10:03:10.592248914 +0000 UTC m=+1.031891688,LastTimestamp:2025-11-01 10:03:10.592248914 +0000 UTC m=+1.031891688,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 1 10:03:14.511205 kubelet[2407]: I1101 10:03:14.511161 2407 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 10:03:14.578726 kubelet[2407]: E1101 10:03:14.576640 2407 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 1 10:03:14.578726 kubelet[2407]: I1101 10:03:14.576759 2407 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 10:03:14.587457 kubelet[2407]: E1101 10:03:14.585252 2407 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 1 10:03:14.587879 kubelet[2407]: I1101 10:03:14.587836 2407 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 10:03:14.588381 kubelet[2407]: I1101 10:03:14.588344 2407 apiserver.go:52] "Watching apiserver" Nov 1 10:03:14.595205 kubelet[2407]: E1101 10:03:14.595176 2407 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 1 10:03:14.603422 kubelet[2407]: I1101 10:03:14.603354 2407 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 10:03:14.740322 kubelet[2407]: I1101 10:03:14.740283 2407 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 10:03:14.742845 kubelet[2407]: E1101 10:03:14.742802 2407 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 1 10:03:14.743030 kubelet[2407]: E1101 10:03:14.742970 2407 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:16.510037 kubelet[2407]: I1101 10:03:16.509984 2407 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 10:03:16.517514 kubelet[2407]: E1101 10:03:16.517480 2407 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:16.744197 kubelet[2407]: E1101 10:03:16.744154 2407 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:17.006660 systemd[1]: Reload requested from client PID 2690 ('systemctl') (unit session-9.scope)... Nov 1 10:03:17.006678 systemd[1]: Reloading... Nov 1 10:03:17.103426 zram_generator::config[2733]: No configuration found. Nov 1 10:03:17.345833 systemd[1]: Reloading finished in 338 ms. Nov 1 10:03:17.371739 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 10:03:17.399980 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 10:03:17.400299 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 10:03:17.400362 systemd[1]: kubelet.service: Consumed 1.671s CPU time, 125.1M memory peak. Nov 1 10:03:17.402532 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 10:03:17.637150 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 10:03:17.647084 (kubelet)[2779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 10:03:17.691406 kubelet[2779]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 10:03:17.691406 kubelet[2779]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 10:03:17.691854 kubelet[2779]: I1101 10:03:17.691435 2779 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 10:03:17.698153 kubelet[2779]: I1101 10:03:17.698126 2779 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 1 10:03:17.698153 kubelet[2779]: I1101 10:03:17.698145 2779 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 10:03:17.698236 kubelet[2779]: I1101 10:03:17.698169 2779 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 1 10:03:17.698236 kubelet[2779]: I1101 10:03:17.698179 2779 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 10:03:17.698340 kubelet[2779]: I1101 10:03:17.698328 2779 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 10:03:17.699333 kubelet[2779]: I1101 10:03:17.699311 2779 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 1 10:03:17.702442 kubelet[2779]: I1101 10:03:17.701209 2779 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 10:03:17.705763 kubelet[2779]: I1101 10:03:17.705742 2779 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 1 10:03:17.710505 kubelet[2779]: I1101 10:03:17.710458 2779 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 1 10:03:17.710821 kubelet[2779]: I1101 10:03:17.710771 2779 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 10:03:17.711105 kubelet[2779]: I1101 10:03:17.710810 2779 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 10:03:17.711196 kubelet[2779]: I1101 10:03:17.711106 2779 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 10:03:17.711196 kubelet[2779]: I1101 10:03:17.711118 2779 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 10:03:17.711196 kubelet[2779]: I1101 10:03:17.711146 2779 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 1 10:03:17.712249 kubelet[2779]: I1101 10:03:17.712221 2779 state_mem.go:36] "Initialized new in-memory state store" Nov 1 10:03:17.712439 kubelet[2779]: I1101 10:03:17.712413 2779 kubelet.go:475] "Attempting to sync node with API server" Nov 1 10:03:17.712439 kubelet[2779]: I1101 10:03:17.712429 2779 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 10:03:17.712503 kubelet[2779]: I1101 10:03:17.712454 2779 kubelet.go:387] "Adding apiserver pod source" Nov 1 10:03:17.712503 kubelet[2779]: I1101 10:03:17.712479 2779 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 10:03:17.714826 kubelet[2779]: I1101 10:03:17.713488 2779 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 1 10:03:17.714826 kubelet[2779]: I1101 10:03:17.714031 2779 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 10:03:17.714826 kubelet[2779]: I1101 10:03:17.714059 2779 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 10:03:17.852746 kubelet[2779]: I1101 
10:03:17.852601 2779 server.go:1262] "Started kubelet" Nov 1 10:03:17.855794 kubelet[2779]: I1101 10:03:17.855747 2779 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 10:03:17.857145 kubelet[2779]: I1101 10:03:17.857113 2779 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 10:03:17.857276 kubelet[2779]: I1101 10:03:17.857250 2779 server.go:310] "Adding debug handlers to kubelet server" Nov 1 10:03:17.862630 kubelet[2779]: I1101 10:03:17.861605 2779 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 10:03:17.863000 kubelet[2779]: I1101 10:03:17.862978 2779 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 10:03:17.863367 kubelet[2779]: E1101 10:03:17.863347 2779 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 10:03:17.863653 kubelet[2779]: I1101 10:03:17.863639 2779 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 10:03:17.863889 kubelet[2779]: I1101 10:03:17.863876 2779 reconciler.go:29] "Reconciler: start to sync state" Nov 1 10:03:17.865550 kubelet[2779]: I1101 10:03:17.865489 2779 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 10:03:17.865623 kubelet[2779]: I1101 10:03:17.865557 2779 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 1 10:03:17.865906 kubelet[2779]: I1101 10:03:17.865878 2779 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 10:03:17.866799 kubelet[2779]: E1101 10:03:17.866767 2779 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 10:03:17.867624 kubelet[2779]: I1101 10:03:17.867015 2779 factory.go:223] Registration of the systemd container factory successfully Nov 1 10:03:17.867624 kubelet[2779]: I1101 10:03:17.867428 2779 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 10:03:17.870339 kubelet[2779]: I1101 10:03:17.870309 2779 factory.go:223] Registration of the containerd container factory successfully Nov 1 10:03:17.886654 kubelet[2779]: I1101 10:03:17.886599 2779 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 1 10:03:17.888769 kubelet[2779]: I1101 10:03:17.888327 2779 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 1 10:03:17.888769 kubelet[2779]: I1101 10:03:17.888365 2779 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 10:03:17.888769 kubelet[2779]: I1101 10:03:17.888411 2779 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 10:03:17.888769 kubelet[2779]: E1101 10:03:17.888464 2779 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 10:03:17.915479 kubelet[2779]: I1101 10:03:17.915418 2779 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 10:03:17.915479 kubelet[2779]: I1101 10:03:17.915449 2779 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 10:03:17.915479 kubelet[2779]: I1101 10:03:17.915476 2779 state_mem.go:36] "Initialized new in-memory state store" Nov 1 10:03:17.915684 kubelet[2779]: I1101 10:03:17.915648 2779 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 10:03:17.915684 kubelet[2779]: I1101 10:03:17.915667 2779 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 10:03:17.915763 kubelet[2779]: I1101 10:03:17.915713 2779 policy_none.go:49] "None policy: Start" Nov 1 10:03:17.915763 kubelet[2779]: I1101 10:03:17.915738 2779 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 10:03:17.915763 kubelet[2779]: I1101 10:03:17.915753 2779 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 10:03:17.916716 kubelet[2779]: I1101 10:03:17.915872 2779 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 1 10:03:17.916716 kubelet[2779]: I1101 10:03:17.915890 2779 policy_none.go:47] "Start" Nov 1 10:03:17.928474 kubelet[2779]: E1101 10:03:17.928337 2779 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 10:03:17.928604 kubelet[2779]: I1101 10:03:17.928538 2779 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 10:03:17.928604 kubelet[2779]: I1101 10:03:17.928550 2779 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 10:03:17.928798 kubelet[2779]: I1101 10:03:17.928771 2779 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 10:03:17.932206 kubelet[2779]: E1101 10:03:17.932178 2779 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 10:03:17.990565 kubelet[2779]: I1101 10:03:17.989930 2779 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 10:03:17.990565 kubelet[2779]: I1101 10:03:17.989956 2779 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 10:03:17.990565 kubelet[2779]: I1101 10:03:17.989969 2779 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 10:03:17.997437 kubelet[2779]: E1101 10:03:17.997382 2779 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 1 10:03:18.033712 kubelet[2779]: I1101 10:03:18.033649 2779 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 10:03:18.042359 kubelet[2779]: I1101 10:03:18.042244 2779 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 1 10:03:18.042359 kubelet[2779]: I1101 10:03:18.042326 2779 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 10:03:18.066078 kubelet[2779]: I1101 10:03:18.066025 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1ec20e517aba91a26cdeca58a533303-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c1ec20e517aba91a26cdeca58a533303\") " pod="kube-system/kube-apiserver-localhost" Nov 1 10:03:18.066078 kubelet[2779]: I1101 10:03:18.066087 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:03:18.066336 kubelet[2779]: I1101 10:03:18.066114 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:03:18.066336 kubelet[2779]: I1101 10:03:18.066184 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:03:18.066336 kubelet[2779]: I1101 10:03:18.066236 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 1 10:03:18.066336 kubelet[2779]: I1101 10:03:18.066281 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1ec20e517aba91a26cdeca58a533303-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c1ec20e517aba91a26cdeca58a533303\") " 
pod="kube-system/kube-apiserver-localhost" Nov 1 10:03:18.066336 kubelet[2779]: I1101 10:03:18.066305 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1ec20e517aba91a26cdeca58a533303-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c1ec20e517aba91a26cdeca58a533303\") " pod="kube-system/kube-apiserver-localhost" Nov 1 10:03:18.066484 kubelet[2779]: I1101 10:03:18.066320 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:03:18.066484 kubelet[2779]: I1101 10:03:18.066333 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 10:03:18.295097 kubelet[2779]: E1101 10:03:18.295050 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:18.296120 kubelet[2779]: E1101 10:03:18.296090 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:18.298236 kubelet[2779]: E1101 10:03:18.298193 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:18.713422 kubelet[2779]: I1101 10:03:18.713287 2779 apiserver.go:52] "Watching apiserver" Nov 1 10:03:18.763944 kubelet[2779]: I1101 10:03:18.763892 2779 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 10:03:18.901466 kubelet[2779]: I1101 10:03:18.901407 2779 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 10:03:18.901608 kubelet[2779]: I1101 10:03:18.901407 2779 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 10:03:18.901608 kubelet[2779]: I1101 10:03:18.901498 2779 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 10:03:18.908757 kubelet[2779]: E1101 10:03:18.908716 2779 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 1 10:03:18.909049 kubelet[2779]: E1101 10:03:18.908912 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:18.910124 kubelet[2779]: E1101 10:03:18.910091 2779 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 1 10:03:18.910244 kubelet[2779]: E1101 10:03:18.910218 2779 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already 
exists" pod="kube-system/kube-apiserver-localhost" Nov 1 10:03:18.910420 kubelet[2779]: E1101 10:03:18.910289 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:18.910420 kubelet[2779]: E1101 10:03:18.910331 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:18.922971 kubelet[2779]: I1101 10:03:18.922891 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.922864042 podStartE2EDuration="2.922864042s" podCreationTimestamp="2025-11-01 10:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:03:18.92285203 +0000 UTC m=+1.269288738" watchObservedRunningTime="2025-11-01 10:03:18.922864042 +0000 UTC m=+1.269300740" Nov 1 10:03:18.930318 kubelet[2779]: I1101 10:03:18.930252 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.9302330159999999 podStartE2EDuration="1.930233016s" podCreationTimestamp="2025-11-01 10:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:03:18.930022137 +0000 UTC m=+1.276458845" watchObservedRunningTime="2025-11-01 10:03:18.930233016 +0000 UTC m=+1.276669714" Nov 1 10:03:18.936673 kubelet[2779]: I1101 10:03:18.936624 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.936611643 podStartE2EDuration="1.936611643s" podCreationTimestamp="2025-11-01 10:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:03:18.936413085 +0000 UTC m=+1.282849773" watchObservedRunningTime="2025-11-01 10:03:18.936611643 +0000 UTC m=+1.283048341" Nov 1 10:03:19.903315 kubelet[2779]: E1101 10:03:19.903254 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:19.903883 kubelet[2779]: E1101 10:03:19.903348 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:19.903883 kubelet[2779]: E1101 10:03:19.903387 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:20.905195 kubelet[2779]: E1101 10:03:20.905154 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:20.905672 kubelet[2779]: E1101 10:03:20.905404 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:21.176268 update_engine[1587]: I20251101 10:03:21.176052 1587 update_attempter.cc:509] Updating boot flags... 
Nov 1 10:03:22.810307 kubelet[2779]: E1101 10:03:22.810254 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:22.908910 kubelet[2779]: E1101 10:03:22.908860 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:22.977517 kubelet[2779]: I1101 10:03:22.977473 2779 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 10:03:22.977997 containerd[1608]: time="2025-11-01T10:03:22.977943806Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 10:03:22.978574 kubelet[2779]: I1101 10:03:22.978216 2779 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 10:03:23.784381 systemd[1]: Created slice kubepods-besteffort-podccbf578b_44fb_4aac_a520_11b032556bf6.slice - libcontainer container kubepods-besteffort-podccbf578b_44fb_4aac_a520_11b032556bf6.slice. Nov 1 10:03:23.801396 kubelet[2779]: I1101 10:03:23.801357 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ccbf578b-44fb-4aac-a520-11b032556bf6-kube-proxy\") pod \"kube-proxy-d4pcl\" (UID: \"ccbf578b-44fb-4aac-a520-11b032556bf6\") " pod="kube-system/kube-proxy-d4pcl" Nov 1 10:03:23.801396 kubelet[2779]: I1101 10:03:23.801391 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ccbf578b-44fb-4aac-a520-11b032556bf6-xtables-lock\") pod \"kube-proxy-d4pcl\" (UID: \"ccbf578b-44fb-4aac-a520-11b032556bf6\") " pod="kube-system/kube-proxy-d4pcl" Nov 1 10:03:23.801557 kubelet[2779]: I1101 10:03:23.801409 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ccbf578b-44fb-4aac-a520-11b032556bf6-lib-modules\") pod \"kube-proxy-d4pcl\" (UID: \"ccbf578b-44fb-4aac-a520-11b032556bf6\") " pod="kube-system/kube-proxy-d4pcl" Nov 1 10:03:23.801557 kubelet[2779]: I1101 10:03:23.801430 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrcx9\" (UniqueName: \"kubernetes.io/projected/ccbf578b-44fb-4aac-a520-11b032556bf6-kube-api-access-lrcx9\") pod \"kube-proxy-d4pcl\" (UID: \"ccbf578b-44fb-4aac-a520-11b032556bf6\") " pod="kube-system/kube-proxy-d4pcl" Nov 1 10:03:23.910718 kubelet[2779]: E1101 10:03:23.910225 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:24.100344 kubelet[2779]: E1101 10:03:24.100234 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:24.101058 containerd[1608]: time="2025-11-01T10:03:24.100881970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d4pcl,Uid:ccbf578b-44fb-4aac-a520-11b032556bf6,Namespace:kube-system,Attempt:0,}" Nov 1 10:03:24.126452 containerd[1608]: time="2025-11-01T10:03:24.126387302Z" level=info msg="connecting to shim 
084fa26c61f8802a17342b9085f2e3f81b9403dbea1ab4dd57c69f7995cab14d" address="unix:///run/containerd/s/7bec1cc08b0c938e19eae45af231c6570e50ac69e736e9cef5b42295c59c1464" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:03:24.156382 systemd[1]: Created slice kubepods-besteffort-pod4200fc79_8d2c_4861_bd7f_75fbabaf49fc.slice - libcontainer container kubepods-besteffort-pod4200fc79_8d2c_4861_bd7f_75fbabaf49fc.slice. Nov 1 10:03:24.187896 systemd[1]: Started cri-containerd-084fa26c61f8802a17342b9085f2e3f81b9403dbea1ab4dd57c69f7995cab14d.scope - libcontainer container 084fa26c61f8802a17342b9085f2e3f81b9403dbea1ab4dd57c69f7995cab14d. Nov 1 10:03:24.203674 kubelet[2779]: I1101 10:03:24.203637 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwgwp\" (UniqueName: \"kubernetes.io/projected/4200fc79-8d2c-4861-bd7f-75fbabaf49fc-kube-api-access-kwgwp\") pod \"tigera-operator-65cdcdfd6d-wdm29\" (UID: \"4200fc79-8d2c-4861-bd7f-75fbabaf49fc\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-wdm29" Nov 1 10:03:24.203674 kubelet[2779]: I1101 10:03:24.203672 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4200fc79-8d2c-4861-bd7f-75fbabaf49fc-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-wdm29\" (UID: \"4200fc79-8d2c-4861-bd7f-75fbabaf49fc\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-wdm29" Nov 1 10:03:24.307726 containerd[1608]: time="2025-11-01T10:03:24.307610848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d4pcl,Uid:ccbf578b-44fb-4aac-a520-11b032556bf6,Namespace:kube-system,Attempt:0,} returns sandbox id \"084fa26c61f8802a17342b9085f2e3f81b9403dbea1ab4dd57c69f7995cab14d\"" Nov 1 10:03:24.308449 kubelet[2779]: E1101 10:03:24.308417 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:24.319149 containerd[1608]: time="2025-11-01T10:03:24.319083465Z" level=info msg="CreateContainer within sandbox \"084fa26c61f8802a17342b9085f2e3f81b9403dbea1ab4dd57c69f7995cab14d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 10:03:24.331480 containerd[1608]: time="2025-11-01T10:03:24.331427619Z" level=info msg="Container 3b23b8c3ae9d68e378036b63a03cbb2f6b11f63ff7fcde265f84290521063186: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:03:24.340841 containerd[1608]: time="2025-11-01T10:03:24.340789899Z" level=info msg="CreateContainer within sandbox \"084fa26c61f8802a17342b9085f2e3f81b9403dbea1ab4dd57c69f7995cab14d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3b23b8c3ae9d68e378036b63a03cbb2f6b11f63ff7fcde265f84290521063186\"" Nov 1 10:03:24.341544 containerd[1608]: time="2025-11-01T10:03:24.341513345Z" level=info msg="StartContainer for \"3b23b8c3ae9d68e378036b63a03cbb2f6b11f63ff7fcde265f84290521063186\"" Nov 1 10:03:24.343228 containerd[1608]: time="2025-11-01T10:03:24.343173101Z" level=info msg="connecting to shim 3b23b8c3ae9d68e378036b63a03cbb2f6b11f63ff7fcde265f84290521063186" address="unix:///run/containerd/s/7bec1cc08b0c938e19eae45af231c6570e50ac69e736e9cef5b42295c59c1464" protocol=ttrpc version=3 Nov 1 10:03:24.369864 systemd[1]: Started cri-containerd-3b23b8c3ae9d68e378036b63a03cbb2f6b11f63ff7fcde265f84290521063186.scope - libcontainer container 3b23b8c3ae9d68e378036b63a03cbb2f6b11f63ff7fcde265f84290521063186. 
Nov 1 10:03:24.421303 containerd[1608]: time="2025-11-01T10:03:24.421256928Z" level=info msg="StartContainer for \"3b23b8c3ae9d68e378036b63a03cbb2f6b11f63ff7fcde265f84290521063186\" returns successfully" Nov 1 10:03:24.464337 containerd[1608]: time="2025-11-01T10:03:24.464288936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-wdm29,Uid:4200fc79-8d2c-4861-bd7f-75fbabaf49fc,Namespace:tigera-operator,Attempt:0,}" Nov 1 10:03:24.514499 containerd[1608]: time="2025-11-01T10:03:24.514425037Z" level=info msg="connecting to shim 9abd6703fcc4747a4f520dcbfbb1ce6627a28253bb6502bffbc67d9ec7a82b20" address="unix:///run/containerd/s/f5553c5f54343ca7876ce138e7a43dde0f00e2d9828362df0afc113755562ed4" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:03:24.560856 systemd[1]: Started cri-containerd-9abd6703fcc4747a4f520dcbfbb1ce6627a28253bb6502bffbc67d9ec7a82b20.scope - libcontainer container 9abd6703fcc4747a4f520dcbfbb1ce6627a28253bb6502bffbc67d9ec7a82b20. Nov 1 10:03:24.608446 containerd[1608]: time="2025-11-01T10:03:24.608394790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-wdm29,Uid:4200fc79-8d2c-4861-bd7f-75fbabaf49fc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9abd6703fcc4747a4f520dcbfbb1ce6627a28253bb6502bffbc67d9ec7a82b20\"" Nov 1 10:03:24.610273 containerd[1608]: time="2025-11-01T10:03:24.610210180Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 10:03:24.922336 kubelet[2779]: E1101 10:03:24.922288 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:24.952572 kubelet[2779]: I1101 10:03:24.952496 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d4pcl" podStartSLOduration=1.951956838 podStartE2EDuration="1.951956838s" podCreationTimestamp="2025-11-01 10:03:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:03:24.951622045 +0000 UTC m=+7.298058744" watchObservedRunningTime="2025-11-01 10:03:24.951956838 +0000 UTC m=+7.298393536" Nov 1 10:03:26.461407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1882267970.mount: Deactivated successfully. 
Nov 1 10:03:26.795488 containerd[1608]: time="2025-11-01T10:03:26.795416609Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:03:26.796273 containerd[1608]: time="2025-11-01T10:03:26.796229504Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=0" Nov 1 10:03:26.797317 containerd[1608]: time="2025-11-01T10:03:26.797272483Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:03:26.799147 containerd[1608]: time="2025-11-01T10:03:26.799108989Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:03:26.799720 containerd[1608]: time="2025-11-01T10:03:26.799661712Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.189415104s" Nov 1 10:03:26.799720 containerd[1608]: time="2025-11-01T10:03:26.799732406Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 10:03:26.808337 containerd[1608]: time="2025-11-01T10:03:26.808300510Z" level=info msg="CreateContainer within sandbox \"9abd6703fcc4747a4f520dcbfbb1ce6627a28253bb6502bffbc67d9ec7a82b20\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 10:03:26.818477 containerd[1608]: time="2025-11-01T10:03:26.816798503Z" level=info msg="Container fd91ccdea1272b4c7f017d04aa5c12fd1c3e75bea521e830756ad718d32c6d13: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:03:26.820227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount900455191.mount: Deactivated successfully. Nov 1 10:03:26.824296 containerd[1608]: time="2025-11-01T10:03:26.824257222Z" level=info msg="CreateContainer within sandbox \"9abd6703fcc4747a4f520dcbfbb1ce6627a28253bb6502bffbc67d9ec7a82b20\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fd91ccdea1272b4c7f017d04aa5c12fd1c3e75bea521e830756ad718d32c6d13\"" Nov 1 10:03:26.824848 containerd[1608]: time="2025-11-01T10:03:26.824806399Z" level=info msg="StartContainer for \"fd91ccdea1272b4c7f017d04aa5c12fd1c3e75bea521e830756ad718d32c6d13\"" Nov 1 10:03:26.825782 containerd[1608]: time="2025-11-01T10:03:26.825749299Z" level=info msg="connecting to shim fd91ccdea1272b4c7f017d04aa5c12fd1c3e75bea521e830756ad718d32c6d13" address="unix:///run/containerd/s/f5553c5f54343ca7876ce138e7a43dde0f00e2d9828362df0afc113755562ed4" protocol=ttrpc version=3 Nov 1 10:03:26.849856 systemd[1]: Started cri-containerd-fd91ccdea1272b4c7f017d04aa5c12fd1c3e75bea521e830756ad718d32c6d13.scope - libcontainer container fd91ccdea1272b4c7f017d04aa5c12fd1c3e75bea521e830756ad718d32c6d13. 
Nov 1 10:03:26.882372 containerd[1608]: time="2025-11-01T10:03:26.882325408Z" level=info msg="StartContainer for \"fd91ccdea1272b4c7f017d04aa5c12fd1c3e75bea521e830756ad718d32c6d13\" returns successfully" Nov 1 10:03:26.937958 kubelet[2779]: I1101 10:03:26.937901 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-wdm29" podStartSLOduration=0.742909207 podStartE2EDuration="2.937884598s" podCreationTimestamp="2025-11-01 10:03:24 +0000 UTC" firstStartedPulling="2025-11-01 10:03:24.609835141 +0000 UTC m=+6.956271829" lastFinishedPulling="2025-11-01 10:03:26.804810522 +0000 UTC m=+9.151247220" observedRunningTime="2025-11-01 10:03:26.937213562 +0000 UTC m=+9.283650260" watchObservedRunningTime="2025-11-01 10:03:26.937884598 +0000 UTC m=+9.284321296" Nov 1 10:03:27.532455 kubelet[2779]: E1101 10:03:27.532401 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:27.929804 kubelet[2779]: E1101 10:03:27.929659 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:28.932994 kubelet[2779]: E1101 10:03:28.932922 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:28.963859 systemd[1]: cri-containerd-fd91ccdea1272b4c7f017d04aa5c12fd1c3e75bea521e830756ad718d32c6d13.scope: Deactivated successfully. Nov 1 10:03:28.967014 containerd[1608]: time="2025-11-01T10:03:28.966953370Z" level=info msg="received exit event container_id:\"fd91ccdea1272b4c7f017d04aa5c12fd1c3e75bea521e830756ad718d32c6d13\" id:\"fd91ccdea1272b4c7f017d04aa5c12fd1c3e75bea521e830756ad718d32c6d13\" pid:3130 exit_status:1 exited_at:{seconds:1761991408 nanos:965601670}" Nov 1 10:03:28.997929 kubelet[2779]: E1101 10:03:28.997874 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:29.003568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd91ccdea1272b4c7f017d04aa5c12fd1c3e75bea521e830756ad718d32c6d13-rootfs.mount: Deactivated successfully. Nov 1 10:03:29.936343 kubelet[2779]: I1101 10:03:29.936302 2779 scope.go:117] "RemoveContainer" containerID="fd91ccdea1272b4c7f017d04aa5c12fd1c3e75bea521e830756ad718d32c6d13" Nov 1 10:03:29.938417 containerd[1608]: time="2025-11-01T10:03:29.937991274Z" level=info msg="CreateContainer within sandbox \"9abd6703fcc4747a4f520dcbfbb1ce6627a28253bb6502bffbc67d9ec7a82b20\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 1 10:03:29.949496 containerd[1608]: time="2025-11-01T10:03:29.949448817Z" level=info msg="Container 78d8b0471f06957bec293d33fa60c81f85832d8b111213e806074272819104b3: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:03:29.952787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1853129024.mount: Deactivated successfully. 
Nov 1 10:03:29.960480 containerd[1608]: time="2025-11-01T10:03:29.960433798Z" level=info msg="CreateContainer within sandbox \"9abd6703fcc4747a4f520dcbfbb1ce6627a28253bb6502bffbc67d9ec7a82b20\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"78d8b0471f06957bec293d33fa60c81f85832d8b111213e806074272819104b3\"" Nov 1 10:03:29.961660 containerd[1608]: time="2025-11-01T10:03:29.961630415Z" level=info msg="StartContainer for \"78d8b0471f06957bec293d33fa60c81f85832d8b111213e806074272819104b3\"" Nov 1 10:03:29.963846 containerd[1608]: time="2025-11-01T10:03:29.963820184Z" level=info msg="connecting to shim 78d8b0471f06957bec293d33fa60c81f85832d8b111213e806074272819104b3" address="unix:///run/containerd/s/f5553c5f54343ca7876ce138e7a43dde0f00e2d9828362df0afc113755562ed4" protocol=ttrpc version=3 Nov 1 10:03:29.992831 systemd[1]: Started cri-containerd-78d8b0471f06957bec293d33fa60c81f85832d8b111213e806074272819104b3.scope - libcontainer container 78d8b0471f06957bec293d33fa60c81f85832d8b111213e806074272819104b3. Nov 1 10:03:30.029005 containerd[1608]: time="2025-11-01T10:03:30.028957550Z" level=info msg="StartContainer for \"78d8b0471f06957bec293d33fa60c81f85832d8b111213e806074272819104b3\" returns successfully" Nov 1 10:03:32.170227 sudo[1834]: pam_unix(sudo:session): session closed for user root Nov 1 10:03:32.172029 sshd[1833]: Connection closed by 10.0.0.1 port 59142 Nov 1 10:03:32.172477 sshd-session[1830]: pam_unix(sshd:session): session closed for user core Nov 1 10:03:32.177815 systemd[1]: sshd@8-10.0.0.64:22-10.0.0.1:59142.service: Deactivated successfully. Nov 1 10:03:32.180059 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 10:03:32.180279 systemd[1]: session-9.scope: Consumed 6.477s CPU time, 224.3M memory peak. Nov 1 10:03:32.181648 systemd-logind[1586]: Session 9 logged out. Waiting for processes to exit. Nov 1 10:03:32.182980 systemd-logind[1586]: Removed session 9. Nov 1 10:03:37.338977 systemd[1]: Created slice kubepods-besteffort-pod75d67c41_6d11_4103_825d_a3c69da7b471.slice - libcontainer container kubepods-besteffort-pod75d67c41_6d11_4103_825d_a3c69da7b471.slice. 
Nov 1 10:03:37.391049 kubelet[2779]: I1101 10:03:37.390993 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75d67c41-6d11-4103-825d-a3c69da7b471-tigera-ca-bundle\") pod \"calico-typha-6f7cf8db75-r2fdv\" (UID: \"75d67c41-6d11-4103-825d-a3c69da7b471\") " pod="calico-system/calico-typha-6f7cf8db75-r2fdv" Nov 1 10:03:37.391049 kubelet[2779]: I1101 10:03:37.391043 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/75d67c41-6d11-4103-825d-a3c69da7b471-typha-certs\") pod \"calico-typha-6f7cf8db75-r2fdv\" (UID: \"75d67c41-6d11-4103-825d-a3c69da7b471\") " pod="calico-system/calico-typha-6f7cf8db75-r2fdv" Nov 1 10:03:37.391049 kubelet[2779]: I1101 10:03:37.391060 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hn84\" (UniqueName: \"kubernetes.io/projected/75d67c41-6d11-4103-825d-a3c69da7b471-kube-api-access-8hn84\") pod \"calico-typha-6f7cf8db75-r2fdv\" (UID: \"75d67c41-6d11-4103-825d-a3c69da7b471\") " pod="calico-system/calico-typha-6f7cf8db75-r2fdv" Nov 1 10:03:37.521310 systemd[1]: Created slice kubepods-besteffort-pod7ed0574f_ca61_43fe_a02d_0ca0754687f3.slice - libcontainer container kubepods-besteffort-pod7ed0574f_ca61_43fe_a02d_0ca0754687f3.slice. Nov 1 10:03:37.592536 kubelet[2779]: I1101 10:03:37.592418 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7ed0574f-ca61-43fe-a02d-0ca0754687f3-cni-net-dir\") pod \"calico-node-4ht4n\" (UID: \"7ed0574f-ca61-43fe-a02d-0ca0754687f3\") " pod="calico-system/calico-node-4ht4n" Nov 1 10:03:37.592536 kubelet[2779]: I1101 10:03:37.592455 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7ed0574f-ca61-43fe-a02d-0ca0754687f3-node-certs\") pod \"calico-node-4ht4n\" (UID: \"7ed0574f-ca61-43fe-a02d-0ca0754687f3\") " pod="calico-system/calico-node-4ht4n" Nov 1 10:03:37.592536 kubelet[2779]: I1101 10:03:37.592473 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7ed0574f-ca61-43fe-a02d-0ca0754687f3-var-lib-calico\") pod \"calico-node-4ht4n\" (UID: \"7ed0574f-ca61-43fe-a02d-0ca0754687f3\") " pod="calico-system/calico-node-4ht4n" Nov 1 10:03:37.592536 kubelet[2779]: I1101 10:03:37.592490 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmffl\" (UniqueName: \"kubernetes.io/projected/7ed0574f-ca61-43fe-a02d-0ca0754687f3-kube-api-access-gmffl\") pod \"calico-node-4ht4n\" (UID: \"7ed0574f-ca61-43fe-a02d-0ca0754687f3\") " pod="calico-system/calico-node-4ht4n" Nov 1 10:03:37.592536 kubelet[2779]: I1101 10:03:37.592524 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7ed0574f-ca61-43fe-a02d-0ca0754687f3-cni-bin-dir\") pod \"calico-node-4ht4n\" (UID: \"7ed0574f-ca61-43fe-a02d-0ca0754687f3\") " pod="calico-system/calico-node-4ht4n" Nov 1 10:03:37.592765 kubelet[2779]: I1101 10:03:37.592538 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/7ed0574f-ca61-43fe-a02d-0ca0754687f3-cni-log-dir\") pod \"calico-node-4ht4n\" (UID: \"7ed0574f-ca61-43fe-a02d-0ca0754687f3\") " pod="calico-system/calico-node-4ht4n" Nov 1 10:03:37.592765 kubelet[2779]: I1101 10:03:37.592558 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ed0574f-ca61-43fe-a02d-0ca0754687f3-tigera-ca-bundle\") pod \"calico-node-4ht4n\" (UID: \"7ed0574f-ca61-43fe-a02d-0ca0754687f3\") " pod="calico-system/calico-node-4ht4n" Nov 1 10:03:37.592765 kubelet[2779]: I1101 10:03:37.592640 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7ed0574f-ca61-43fe-a02d-0ca0754687f3-var-run-calico\") pod \"calico-node-4ht4n\" (UID: \"7ed0574f-ca61-43fe-a02d-0ca0754687f3\") " pod="calico-system/calico-node-4ht4n" Nov 1 10:03:37.592765 kubelet[2779]: I1101 10:03:37.592676 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7ed0574f-ca61-43fe-a02d-0ca0754687f3-flexvol-driver-host\") pod \"calico-node-4ht4n\" (UID: \"7ed0574f-ca61-43fe-a02d-0ca0754687f3\") " pod="calico-system/calico-node-4ht4n" Nov 1 10:03:37.592765 kubelet[2779]: I1101 10:03:37.592726 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7ed0574f-ca61-43fe-a02d-0ca0754687f3-policysync\") pod \"calico-node-4ht4n\" (UID: \"7ed0574f-ca61-43fe-a02d-0ca0754687f3\") " pod="calico-system/calico-node-4ht4n" Nov 1 10:03:37.592890 kubelet[2779]: I1101 10:03:37.592746 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ed0574f-ca61-43fe-a02d-0ca0754687f3-lib-modules\") pod \"calico-node-4ht4n\" (UID: \"7ed0574f-ca61-43fe-a02d-0ca0754687f3\") " pod="calico-system/calico-node-4ht4n" Nov 1 10:03:37.592890 kubelet[2779]: I1101 10:03:37.592762 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ed0574f-ca61-43fe-a02d-0ca0754687f3-xtables-lock\") pod \"calico-node-4ht4n\" (UID: \"7ed0574f-ca61-43fe-a02d-0ca0754687f3\") " pod="calico-system/calico-node-4ht4n" Nov 1 10:03:37.648977 kubelet[2779]: E1101 10:03:37.648918 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:37.649932 containerd[1608]: time="2025-11-01T10:03:37.649883394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f7cf8db75-r2fdv,Uid:75d67c41-6d11-4103-825d-a3c69da7b471,Namespace:calico-system,Attempt:0,}" Nov 1 10:03:37.664541 kubelet[2779]: E1101 10:03:37.664335 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-87p4w" podUID="f1319238-e7a7-4b12-ace8-ba38b42b1817" Nov 1 10:03:37.688542 containerd[1608]: time="2025-11-01T10:03:37.688395144Z" level=info msg="connecting to shim 10f31170eb8ffa23e4d6f1aba0e3e7aec54d8f5b6819b30a58217a021ba083bb" 
address="unix:///run/containerd/s/9b10b6e47a1b1e3c1b21bd4adbe62d503961c15eff6f5c18ca3e2609266c5386" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:03:37.694449 kubelet[2779]: I1101 10:03:37.693650 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f1319238-e7a7-4b12-ace8-ba38b42b1817-registration-dir\") pod \"csi-node-driver-87p4w\" (UID: \"f1319238-e7a7-4b12-ace8-ba38b42b1817\") " pod="calico-system/csi-node-driver-87p4w" Nov 1 10:03:37.694449 kubelet[2779]: I1101 10:03:37.694003 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f1319238-e7a7-4b12-ace8-ba38b42b1817-socket-dir\") pod \"csi-node-driver-87p4w\" (UID: \"f1319238-e7a7-4b12-ace8-ba38b42b1817\") " pod="calico-system/csi-node-driver-87p4w" Nov 1 10:03:37.696914 kubelet[2779]: I1101 10:03:37.695155 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rfcg\" (UniqueName: \"kubernetes.io/projected/f1319238-e7a7-4b12-ace8-ba38b42b1817-kube-api-access-8rfcg\") pod \"csi-node-driver-87p4w\" (UID: \"f1319238-e7a7-4b12-ace8-ba38b42b1817\") " pod="calico-system/csi-node-driver-87p4w" Nov 1 10:03:37.696914 kubelet[2779]: I1101 10:03:37.695285 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1319238-e7a7-4b12-ace8-ba38b42b1817-kubelet-dir\") pod \"csi-node-driver-87p4w\" (UID: \"f1319238-e7a7-4b12-ace8-ba38b42b1817\") " pod="calico-system/csi-node-driver-87p4w" Nov 1 10:03:37.696914 kubelet[2779]: I1101 10:03:37.696062 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f1319238-e7a7-4b12-ace8-ba38b42b1817-varrun\") pod \"csi-node-driver-87p4w\" (UID: \"f1319238-e7a7-4b12-ace8-ba38b42b1817\") " pod="calico-system/csi-node-driver-87p4w" Nov 1 10:03:37.705155 kubelet[2779]: E1101 10:03:37.704628 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.705155 kubelet[2779]: W1101 10:03:37.704656 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.705155 kubelet[2779]: E1101 10:03:37.704710 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:03:37.719933 kubelet[2779]: E1101 10:03:37.719876 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.719933 kubelet[2779]: W1101 10:03:37.719904 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.719933 kubelet[2779]: E1101 10:03:37.719927 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:03:37.738208 systemd[1]: Started cri-containerd-10f31170eb8ffa23e4d6f1aba0e3e7aec54d8f5b6819b30a58217a021ba083bb.scope - libcontainer container 10f31170eb8ffa23e4d6f1aba0e3e7aec54d8f5b6819b30a58217a021ba083bb. Nov 1 10:03:37.791078 containerd[1608]: time="2025-11-01T10:03:37.791030868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f7cf8db75-r2fdv,Uid:75d67c41-6d11-4103-825d-a3c69da7b471,Namespace:calico-system,Attempt:0,} returns sandbox id \"10f31170eb8ffa23e4d6f1aba0e3e7aec54d8f5b6819b30a58217a021ba083bb\"" Nov 1 10:03:37.797501 kubelet[2779]: E1101 10:03:37.797463 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.797501 kubelet[2779]: W1101 10:03:37.797486 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.797501 kubelet[2779]: E1101 10:03:37.797507 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:03:37.797776 kubelet[2779]: E1101 10:03:37.797758 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.797776 kubelet[2779]: W1101 10:03:37.797772 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.797830 kubelet[2779]: E1101 10:03:37.797783 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:03:37.798042 kubelet[2779]: E1101 10:03:37.798024 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.798042 kubelet[2779]: W1101 10:03:37.798038 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.798108 kubelet[2779]: E1101 10:03:37.798049 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:03:37.798244 kubelet[2779]: E1101 10:03:37.798216 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.798244 kubelet[2779]: W1101 10:03:37.798233 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.798244 kubelet[2779]: E1101 10:03:37.798241 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:03:37.798590 kubelet[2779]: E1101 10:03:37.798468 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.798590 kubelet[2779]: W1101 10:03:37.798493 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.798590 kubelet[2779]: E1101 10:03:37.798521 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:03:37.798765 kubelet[2779]: E1101 10:03:37.798647 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:37.798848 kubelet[2779]: E1101 10:03:37.798827 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.798848 kubelet[2779]: W1101 10:03:37.798846 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.798905 kubelet[2779]: E1101 10:03:37.798860 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:03:37.799214 kubelet[2779]: E1101 10:03:37.799190 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.799214 kubelet[2779]: W1101 10:03:37.799205 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.799272 kubelet[2779]: E1101 10:03:37.799216 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:03:37.799409 kubelet[2779]: E1101 10:03:37.799392 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.799409 kubelet[2779]: W1101 10:03:37.799404 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.799460 kubelet[2779]: E1101 10:03:37.799413 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:03:37.799639 kubelet[2779]: E1101 10:03:37.799610 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.799639 kubelet[2779]: W1101 10:03:37.799625 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.799639 kubelet[2779]: E1101 10:03:37.799638 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:03:37.799858 containerd[1608]: time="2025-11-01T10:03:37.799642319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 10:03:37.800382 kubelet[2779]: E1101 10:03:37.800356 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.800382 kubelet[2779]: W1101 10:03:37.800375 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.800495 kubelet[2779]: E1101 10:03:37.800395 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:03:37.800653 kubelet[2779]: E1101 10:03:37.800634 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.800653 kubelet[2779]: W1101 10:03:37.800646 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.800776 kubelet[2779]: E1101 10:03:37.800657 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:03:37.800983 kubelet[2779]: E1101 10:03:37.800960 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.800983 kubelet[2779]: W1101 10:03:37.800974 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.801054 kubelet[2779]: E1101 10:03:37.800988 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:03:37.801227 kubelet[2779]: E1101 10:03:37.801205 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.801227 kubelet[2779]: W1101 10:03:37.801220 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.801284 kubelet[2779]: E1101 10:03:37.801229 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:03:37.801550 kubelet[2779]: E1101 10:03:37.801526 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.801550 kubelet[2779]: W1101 10:03:37.801539 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.801550 kubelet[2779]: E1101 10:03:37.801549 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:03:37.801854 kubelet[2779]: E1101 10:03:37.801786 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.801854 kubelet[2779]: W1101 10:03:37.801795 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.801854 kubelet[2779]: E1101 10:03:37.801804 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:03:37.802065 kubelet[2779]: E1101 10:03:37.802044 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.802065 kubelet[2779]: W1101 10:03:37.802062 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.802132 kubelet[2779]: E1101 10:03:37.802076 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:03:37.802382 kubelet[2779]: E1101 10:03:37.802352 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.802417 kubelet[2779]: W1101 10:03:37.802393 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.802417 kubelet[2779]: E1101 10:03:37.802405 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:03:37.802642 kubelet[2779]: E1101 10:03:37.802619 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.802642 kubelet[2779]: W1101 10:03:37.802634 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.802642 kubelet[2779]: E1101 10:03:37.802645 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:03:37.802929 kubelet[2779]: E1101 10:03:37.802911 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.802929 kubelet[2779]: W1101 10:03:37.802925 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.802994 kubelet[2779]: E1101 10:03:37.802936 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:03:37.803219 kubelet[2779]: E1101 10:03:37.803200 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.803219 kubelet[2779]: W1101 10:03:37.803214 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.803291 kubelet[2779]: E1101 10:03:37.803226 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:03:37.803502 kubelet[2779]: E1101 10:03:37.803484 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.803502 kubelet[2779]: W1101 10:03:37.803498 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.803580 kubelet[2779]: E1101 10:03:37.803511 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:03:37.803833 kubelet[2779]: E1101 10:03:37.803811 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.803833 kubelet[2779]: W1101 10:03:37.803827 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.803914 kubelet[2779]: E1101 10:03:37.803839 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:03:37.804170 kubelet[2779]: E1101 10:03:37.804146 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.804170 kubelet[2779]: W1101 10:03:37.804162 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.804287 kubelet[2779]: E1101 10:03:37.804175 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:03:37.804474 kubelet[2779]: E1101 10:03:37.804443 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.804474 kubelet[2779]: W1101 10:03:37.804458 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.804474 kubelet[2779]: E1101 10:03:37.804470 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:03:37.804746 kubelet[2779]: E1101 10:03:37.804731 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.804746 kubelet[2779]: W1101 10:03:37.804744 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.804825 kubelet[2779]: E1101 10:03:37.804755 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:03:37.812394 kubelet[2779]: E1101 10:03:37.812376 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:37.812394 kubelet[2779]: W1101 10:03:37.812390 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:37.812483 kubelet[2779]: E1101 10:03:37.812402 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:03:37.827631 kubelet[2779]: E1101 10:03:37.827598 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:37.828206 containerd[1608]: time="2025-11-01T10:03:37.828152624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4ht4n,Uid:7ed0574f-ca61-43fe-a02d-0ca0754687f3,Namespace:calico-system,Attempt:0,}" Nov 1 10:03:37.853829 containerd[1608]: time="2025-11-01T10:03:37.850088911Z" level=info msg="connecting to shim 7e42a220b103e424ada9694d04e8a5ac9dd3aadaf5b4265e62050975d121b74f" address="unix:///run/containerd/s/47c74d4efc82bf5b3e10b16932771b70750c489e9503805bffa154878802a55e" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:03:37.888471 systemd[1]: Started cri-containerd-7e42a220b103e424ada9694d04e8a5ac9dd3aadaf5b4265e62050975d121b74f.scope - libcontainer container 7e42a220b103e424ada9694d04e8a5ac9dd3aadaf5b4265e62050975d121b74f. 
Nov 1 10:03:37.932947 containerd[1608]: time="2025-11-01T10:03:37.932890813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4ht4n,Uid:7ed0574f-ca61-43fe-a02d-0ca0754687f3,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e42a220b103e424ada9694d04e8a5ac9dd3aadaf5b4265e62050975d121b74f\"" Nov 1 10:03:37.933605 kubelet[2779]: E1101 10:03:37.933560 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:39.251855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3761971934.mount: Deactivated successfully. Nov 1 10:03:39.891034 kubelet[2779]: E1101 10:03:39.890977 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-87p4w" podUID="f1319238-e7a7-4b12-ace8-ba38b42b1817" Nov 1 10:03:40.426873 containerd[1608]: time="2025-11-01T10:03:40.426794908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:03:40.427873 containerd[1608]: time="2025-11-01T10:03:40.427820837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Nov 1 10:03:40.428968 containerd[1608]: time="2025-11-01T10:03:40.428930934Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:03:40.431383 containerd[1608]: time="2025-11-01T10:03:40.431292514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:03:40.432234 containerd[1608]: time="2025-11-01T10:03:40.432204179Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.632521544s" Nov 1 10:03:40.432289 containerd[1608]: time="2025-11-01T10:03:40.432239635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 10:03:40.433967 containerd[1608]: time="2025-11-01T10:03:40.433929733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 10:03:40.447204 containerd[1608]: time="2025-11-01T10:03:40.447169014Z" level=info msg="CreateContainer within sandbox \"10f31170eb8ffa23e4d6f1aba0e3e7aec54d8f5b6819b30a58217a021ba083bb\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 10:03:40.454389 containerd[1608]: time="2025-11-01T10:03:40.454351579Z" level=info msg="Container be6763c8b36a536fd0aa6b6c81f09e28425961d6b440c61fd57cf67df12bbc37: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:03:40.462446 containerd[1608]: time="2025-11-01T10:03:40.462416192Z" level=info msg="CreateContainer within sandbox \"10f31170eb8ffa23e4d6f1aba0e3e7aec54d8f5b6819b30a58217a021ba083bb\" for 
&ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"be6763c8b36a536fd0aa6b6c81f09e28425961d6b440c61fd57cf67df12bbc37\"" Nov 1 10:03:40.463099 containerd[1608]: time="2025-11-01T10:03:40.463060813Z" level=info msg="StartContainer for \"be6763c8b36a536fd0aa6b6c81f09e28425961d6b440c61fd57cf67df12bbc37\"" Nov 1 10:03:40.464252 containerd[1608]: time="2025-11-01T10:03:40.464226064Z" level=info msg="connecting to shim be6763c8b36a536fd0aa6b6c81f09e28425961d6b440c61fd57cf67df12bbc37" address="unix:///run/containerd/s/9b10b6e47a1b1e3c1b21bd4adbe62d503961c15eff6f5c18ca3e2609266c5386" protocol=ttrpc version=3 Nov 1 10:03:40.483960 systemd[1]: Started cri-containerd-be6763c8b36a536fd0aa6b6c81f09e28425961d6b440c61fd57cf67df12bbc37.scope - libcontainer container be6763c8b36a536fd0aa6b6c81f09e28425961d6b440c61fd57cf67df12bbc37. Nov 1 10:03:40.541175 containerd[1608]: time="2025-11-01T10:03:40.541130336Z" level=info msg="StartContainer for \"be6763c8b36a536fd0aa6b6c81f09e28425961d6b440c61fd57cf67df12bbc37\" returns successfully" Nov 1 10:03:40.977873 kubelet[2779]: E1101 10:03:40.977835 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:40.993349 kubelet[2779]: I1101 10:03:40.993267 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6f7cf8db75-r2fdv" podStartSLOduration=1.358963207 podStartE2EDuration="3.993244499s" podCreationTimestamp="2025-11-01 10:03:37 +0000 UTC" firstStartedPulling="2025-11-01 10:03:37.799251043 +0000 UTC m=+20.145687741" lastFinishedPulling="2025-11-01 10:03:40.433532335 +0000 UTC m=+22.779969033" observedRunningTime="2025-11-01 10:03:40.993092564 +0000 UTC m=+23.339529272" watchObservedRunningTime="2025-11-01 10:03:40.993244499 +0000 UTC m=+23.339681197" Nov 1 10:03:40.998125 kubelet[2779]: E1101 10:03:40.998099 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:40.998125 kubelet[2779]: W1101 10:03:40.998118 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:40.998217 kubelet[2779]: E1101 10:03:40.998138 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 10:03:40.998343 kubelet[2779]: E1101 10:03:40.998319 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:40.998343 kubelet[2779]: W1101 10:03:40.998338 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:40.998403 kubelet[2779]: E1101 10:03:40.998348 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 10:03:40.998572 kubelet[2779]: E1101 10:03:40.998555 2779 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 10:03:40.998572 kubelet[2779]: W1101 10:03:40.998567 2779 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 10:03:40.998641 kubelet[2779]: E1101 10:03:40.998577 2779 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[kubelet[2779] emitted the same driver-call.go/plugins.go FlexVolume probe error triplet many more times, last at 10:03:41.027302; the duplicates are omitted here]
Nov 1 10:03:41.665271 containerd[1608]: time="2025-11-01T10:03:41.665209982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:03:41.666082 containerd[1608]: time="2025-11-01T10:03:41.666033870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Nov 1 10:03:41.667243 containerd[1608]: time="2025-11-01T10:03:41.667188461Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:03:41.670035 containerd[1608]: time="2025-11-01T10:03:41.670000458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:03:41.670713 containerd[1608]: time="2025-11-01T10:03:41.670651010Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.236684478s" Nov 1 10:03:41.670742 containerd[1608]: time="2025-11-01T10:03:41.670716774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 10:03:41.675141 containerd[1608]: time="2025-11-01T10:03:41.675110633Z" level=info msg="CreateContainer within sandbox \"7e42a220b103e424ada9694d04e8a5ac9dd3aadaf5b4265e62050975d121b74f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 10:03:41.683763 containerd[1608]: time="2025-11-01T10:03:41.683733423Z" level=info msg="Container 5ca559533e2c2806d68f0cb5b312e6fcbf4e44cdd753dd648dc15908af1754d2: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:03:41.691020 containerd[1608]: time="2025-11-01T10:03:41.690993991Z" level=info msg="CreateContainer within sandbox \"7e42a220b103e424ada9694d04e8a5ac9dd3aadaf5b4265e62050975d121b74f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5ca559533e2c2806d68f0cb5b312e6fcbf4e44cdd753dd648dc15908af1754d2\"" Nov 1 10:03:41.691508 containerd[1608]: time="2025-11-01T10:03:41.691475336Z" level=info msg="StartContainer for \"5ca559533e2c2806d68f0cb5b312e6fcbf4e44cdd753dd648dc15908af1754d2\"" Nov 1 10:03:41.692964 containerd[1608]: time="2025-11-01T10:03:41.692932996Z" level=info msg="connecting to shim 
5ca559533e2c2806d68f0cb5b312e6fcbf4e44cdd753dd648dc15908af1754d2" address="unix:///run/containerd/s/47c74d4efc82bf5b3e10b16932771b70750c489e9503805bffa154878802a55e" protocol=ttrpc version=3 Nov 1 10:03:41.715836 systemd[1]: Started cri-containerd-5ca559533e2c2806d68f0cb5b312e6fcbf4e44cdd753dd648dc15908af1754d2.scope - libcontainer container 5ca559533e2c2806d68f0cb5b312e6fcbf4e44cdd753dd648dc15908af1754d2. Nov 1 10:03:41.758957 containerd[1608]: time="2025-11-01T10:03:41.758908229Z" level=info msg="StartContainer for \"5ca559533e2c2806d68f0cb5b312e6fcbf4e44cdd753dd648dc15908af1754d2\" returns successfully" Nov 1 10:03:41.781626 systemd[1]: cri-containerd-5ca559533e2c2806d68f0cb5b312e6fcbf4e44cdd753dd648dc15908af1754d2.scope: Deactivated successfully. Nov 1 10:03:41.782055 systemd[1]: cri-containerd-5ca559533e2c2806d68f0cb5b312e6fcbf4e44cdd753dd648dc15908af1754d2.scope: Consumed 40ms CPU time, 6.5M memory peak, 4.6M written to disk. Nov 1 10:03:41.786178 containerd[1608]: time="2025-11-01T10:03:41.786128438Z" level=info msg="received exit event container_id:\"5ca559533e2c2806d68f0cb5b312e6fcbf4e44cdd753dd648dc15908af1754d2\" id:\"5ca559533e2c2806d68f0cb5b312e6fcbf4e44cdd753dd648dc15908af1754d2\" pid:3489 exited_at:{seconds:1761991421 nanos:785824356}" Nov 1 10:03:41.815018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ca559533e2c2806d68f0cb5b312e6fcbf4e44cdd753dd648dc15908af1754d2-rootfs.mount: Deactivated successfully. Nov 1 10:03:41.889869 kubelet[2779]: E1101 10:03:41.889804 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-87p4w" podUID="f1319238-e7a7-4b12-ace8-ba38b42b1817" Nov 1 10:03:41.980773 kubelet[2779]: I1101 10:03:41.980730 2779 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 10:03:41.981284 kubelet[2779]: E1101 10:03:41.981098 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:41.981316 kubelet[2779]: E1101 10:03:41.981293 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:42.983848 kubelet[2779]: E1101 10:03:42.983809 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:42.984842 containerd[1608]: time="2025-11-01T10:03:42.984803055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 10:03:43.888779 kubelet[2779]: E1101 10:03:43.888718 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-87p4w" podUID="f1319238-e7a7-4b12-ace8-ba38b42b1817" Nov 1 10:03:45.785616 containerd[1608]: time="2025-11-01T10:03:45.785530472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:03:45.786549 containerd[1608]: time="2025-11-01T10:03:45.786504281Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Nov 1 10:03:45.787939 containerd[1608]: time="2025-11-01T10:03:45.787877430Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:03:45.790910 containerd[1608]: time="2025-11-01T10:03:45.790831801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:03:45.791876 containerd[1608]: time="2025-11-01T10:03:45.791836759Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.806989652s" Nov 1 10:03:45.791931 containerd[1608]: time="2025-11-01T10:03:45.791879379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 10:03:45.796501 containerd[1608]: time="2025-11-01T10:03:45.796463161Z" level=info msg="CreateContainer within sandbox \"7e42a220b103e424ada9694d04e8a5ac9dd3aadaf5b4265e62050975d121b74f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 10:03:45.807504 containerd[1608]: time="2025-11-01T10:03:45.807428381Z" level=info msg="Container 127e92e6682366de0156018ae581f1d9858fec123f70a168ca287a0473d6b895: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:03:45.816231 containerd[1608]: time="2025-11-01T10:03:45.816179442Z" level=info msg="CreateContainer within sandbox \"7e42a220b103e424ada9694d04e8a5ac9dd3aadaf5b4265e62050975d121b74f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"127e92e6682366de0156018ae581f1d9858fec123f70a168ca287a0473d6b895\"" Nov 1 10:03:45.816937 containerd[1608]: time="2025-11-01T10:03:45.816892973Z" level=info msg="StartContainer for \"127e92e6682366de0156018ae581f1d9858fec123f70a168ca287a0473d6b895\"" Nov 1 10:03:45.818401 containerd[1608]: time="2025-11-01T10:03:45.818371671Z" level=info msg="connecting to shim 127e92e6682366de0156018ae581f1d9858fec123f70a168ca287a0473d6b895" address="unix:///run/containerd/s/47c74d4efc82bf5b3e10b16932771b70750c489e9503805bffa154878802a55e" protocol=ttrpc version=3 Nov 1 10:03:45.842847 systemd[1]: Started cri-containerd-127e92e6682366de0156018ae581f1d9858fec123f70a168ca287a0473d6b895.scope - libcontainer container 127e92e6682366de0156018ae581f1d9858fec123f70a168ca287a0473d6b895. 
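[editor's note] The "unexpected end of JSON input" flood earlier in this log is the generic encoding/json error for empty input: kubelet's FlexVolume driver-call expects the driver binary to print a JSON status object on stdout, but since /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist yet, the call produces no output at all. A minimal sketch reproducing the error (the driverStatus fields are illustrative, not kubelet's actual types):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // driverStatus loosely mirrors the JSON object a FlexVolume driver is
    // expected to print on stdout; the field names here are an assumption.
    type driverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    func main() {
        var st driverStatus
        // A missing driver binary yields no stdout, so kubelet effectively
        // unmarshals an empty byte slice:
        err := json.Unmarshal([]byte(""), &st)
        fmt.Println(err) // prints: unexpected end of JSON input
    }

Once the flexvol-driver init container installs the binary (the image pull and container start above), the probe errors stop.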
Nov 1 10:03:45.891019 kubelet[2779]: E1101 10:03:45.890958 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-87p4w" podUID="f1319238-e7a7-4b12-ace8-ba38b42b1817" Nov 1 10:03:45.896771 containerd[1608]: time="2025-11-01T10:03:45.896673249Z" level=info msg="StartContainer for \"127e92e6682366de0156018ae581f1d9858fec123f70a168ca287a0473d6b895\" returns successfully" Nov 1 10:03:45.994186 kubelet[2779]: E1101 10:03:45.994119 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:46.995561 kubelet[2779]: E1101 10:03:46.995523 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:47.446686 systemd[1]: cri-containerd-127e92e6682366de0156018ae581f1d9858fec123f70a168ca287a0473d6b895.scope: Deactivated successfully. Nov 1 10:03:47.447138 systemd[1]: cri-containerd-127e92e6682366de0156018ae581f1d9858fec123f70a168ca287a0473d6b895.scope: Consumed 697ms CPU time, 176.2M memory peak, 2.4M read from disk, 171.3M written to disk. Nov 1 10:03:47.456532 containerd[1608]: time="2025-11-01T10:03:47.456478193Z" level=info msg="received exit event container_id:\"127e92e6682366de0156018ae581f1d9858fec123f70a168ca287a0473d6b895\" id:\"127e92e6682366de0156018ae581f1d9858fec123f70a168ca287a0473d6b895\" pid:3550 exited_at:{seconds:1761991427 nanos:447371768}" Nov 1 10:03:47.480979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-127e92e6682366de0156018ae581f1d9858fec123f70a168ca287a0473d6b895-rootfs.mount: Deactivated successfully. Nov 1 10:03:47.539487 kubelet[2779]: I1101 10:03:47.539445 2779 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 1 10:03:47.895972 systemd[1]: Created slice kubepods-besteffort-podf1319238_e7a7_4b12_ace8_ba38b42b1817.slice - libcontainer container kubepods-besteffort-podf1319238_e7a7_4b12_ace8_ba38b42b1817.slice. Nov 1 10:03:48.028756 containerd[1608]: time="2025-11-01T10:03:48.028217112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-87p4w,Uid:f1319238-e7a7-4b12-ace8-ba38b42b1817,Namespace:calico-system,Attempt:0,}" Nov 1 10:03:48.072257 systemd[1]: Created slice kubepods-burstable-podb60ca747_ed05_425c_b136_cbad03ffb49a.slice - libcontainer container kubepods-burstable-podb60ca747_ed05_425c_b136_cbad03ffb49a.slice. Nov 1 10:03:48.080638 systemd[1]: Created slice kubepods-besteffort-pod73e3568f_83c0_4547_b599_b88c34a1197a.slice - libcontainer container kubepods-besteffort-pod73e3568f_83c0_4547_b599_b88c34a1197a.slice. Nov 1 10:03:48.092065 systemd[1]: Created slice kubepods-besteffort-pod36a5d4ac_e857_4a98_81db_164d84811165.slice - libcontainer container kubepods-besteffort-pod36a5d4ac_e857_4a98_81db_164d84811165.slice. Nov 1 10:03:48.105349 systemd[1]: Created slice kubepods-besteffort-pod23971a0d_bbad_4a54_8dd4_48e851f76667.slice - libcontainer container kubepods-besteffort-pod23971a0d_bbad_4a54_8dd4_48e851f76667.slice. Nov 1 10:03:48.112460 systemd[1]: Created slice kubepods-besteffort-podc31bf260_9897_44ba_bd03_511f60db4011.slice - libcontainer container kubepods-besteffort-podc31bf260_9897_44ba_bd03_511f60db4011.slice. 
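[editor's note] The recurring "Nameserver limits exceeded" warnings come from kubelet's resolv.conf handling: glibc honours at most three nameserver entries (MAXNS), so kubelet truncates the list and logs the survivors, here 1.1.1.1, 1.0.0.1, and 8.8.8.8. A rough sketch of that truncation, assuming a plain string slice rather than kubelet's internal types:

    package main

    import "fmt"

    // maxNameservers matches the classic resolv.conf limit (glibc MAXNS).
    const maxNameservers = 3

    // truncateNameservers keeps only the first maxNameservers entries and
    // reports whether anything was dropped (which triggers the warning).
    func truncateNameservers(ns []string) ([]string, bool) {
        if len(ns) <= maxNameservers {
            return ns, false
        }
        return ns[:maxNameservers], true
    }

    func main() {
        applied, dropped := truncateNameservers(
            []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
        fmt.Println(applied, dropped) // [1.1.1.1 1.0.0.1 8.8.8.8] true
    }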
Nov 1 10:03:48.129099 systemd[1]: Created slice kubepods-besteffort-pod99cd5c6d_98ce_4f16_8916_17196a6ab807.slice - libcontainer container kubepods-besteffort-pod99cd5c6d_98ce_4f16_8916_17196a6ab807.slice. Nov 1 10:03:48.135297 systemd[1]: Created slice kubepods-burstable-podffd8f8af_8b24_4377_881c_64726e81556e.slice - libcontainer container kubepods-burstable-podffd8f8af_8b24_4377_881c_64726e81556e.slice. Nov 1 10:03:48.161395 kubelet[2779]: I1101 10:03:48.161190 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23971a0d-bbad-4a54-8dd4-48e851f76667-whisker-ca-bundle\") pod \"whisker-7b8b55f966-9xfp9\" (UID: \"23971a0d-bbad-4a54-8dd4-48e851f76667\") " pod="calico-system/whisker-7b8b55f966-9xfp9" Nov 1 10:03:48.161395 kubelet[2779]: I1101 10:03:48.161252 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcnbd\" (UniqueName: \"kubernetes.io/projected/36a5d4ac-e857-4a98-81db-164d84811165-kube-api-access-qcnbd\") pod \"calico-apiserver-7f4f8b5f58-5vt8f\" (UID: \"36a5d4ac-e857-4a98-81db-164d84811165\") " pod="calico-apiserver/calico-apiserver-7f4f8b5f58-5vt8f" Nov 1 10:03:48.161395 kubelet[2779]: I1101 10:03:48.161278 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73e3568f-83c0-4547-b599-b88c34a1197a-config\") pod \"goldmane-7c778bb748-h2vs7\" (UID: \"73e3568f-83c0-4547-b599-b88c34a1197a\") " pod="calico-system/goldmane-7c778bb748-h2vs7" Nov 1 10:03:48.161395 kubelet[2779]: I1101 10:03:48.161300 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73e3568f-83c0-4547-b599-b88c34a1197a-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-h2vs7\" (UID: \"73e3568f-83c0-4547-b599-b88c34a1197a\") " pod="calico-system/goldmane-7c778bb748-h2vs7" Nov 1 10:03:48.161395 kubelet[2779]: I1101 10:03:48.161326 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bh9q\" (UniqueName: \"kubernetes.io/projected/99cd5c6d-98ce-4f16-8916-17196a6ab807-kube-api-access-2bh9q\") pod \"calico-kube-controllers-799ff88558-vv4cn\" (UID: \"99cd5c6d-98ce-4f16-8916-17196a6ab807\") " pod="calico-system/calico-kube-controllers-799ff88558-vv4cn" Nov 1 10:03:48.162009 kubelet[2779]: I1101 10:03:48.161345 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b60ca747-ed05-425c-b136-cbad03ffb49a-config-volume\") pod \"coredns-66bc5c9577-n7rkn\" (UID: \"b60ca747-ed05-425c-b136-cbad03ffb49a\") " pod="kube-system/coredns-66bc5c9577-n7rkn" Nov 1 10:03:48.162009 kubelet[2779]: I1101 10:03:48.161366 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9mkx\" (UniqueName: \"kubernetes.io/projected/b60ca747-ed05-425c-b136-cbad03ffb49a-kube-api-access-q9mkx\") pod \"coredns-66bc5c9577-n7rkn\" (UID: \"b60ca747-ed05-425c-b136-cbad03ffb49a\") " pod="kube-system/coredns-66bc5c9577-n7rkn" Nov 1 10:03:48.162009 kubelet[2779]: I1101 10:03:48.161401 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvrth\" (UniqueName: 
\"kubernetes.io/projected/c31bf260-9897-44ba-bd03-511f60db4011-kube-api-access-tvrth\") pod \"calico-apiserver-7f4f8b5f58-dt86s\" (UID: \"c31bf260-9897-44ba-bd03-511f60db4011\") " pod="calico-apiserver/calico-apiserver-7f4f8b5f58-dt86s" Nov 1 10:03:48.162009 kubelet[2779]: I1101 10:03:48.161423 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmd2t\" (UniqueName: \"kubernetes.io/projected/73e3568f-83c0-4547-b599-b88c34a1197a-kube-api-access-mmd2t\") pod \"goldmane-7c778bb748-h2vs7\" (UID: \"73e3568f-83c0-4547-b599-b88c34a1197a\") " pod="calico-system/goldmane-7c778bb748-h2vs7" Nov 1 10:03:48.162009 kubelet[2779]: I1101 10:03:48.161450 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c31bf260-9897-44ba-bd03-511f60db4011-calico-apiserver-certs\") pod \"calico-apiserver-7f4f8b5f58-dt86s\" (UID: \"c31bf260-9897-44ba-bd03-511f60db4011\") " pod="calico-apiserver/calico-apiserver-7f4f8b5f58-dt86s" Nov 1 10:03:48.162203 kubelet[2779]: I1101 10:03:48.161471 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/36a5d4ac-e857-4a98-81db-164d84811165-calico-apiserver-certs\") pod \"calico-apiserver-7f4f8b5f58-5vt8f\" (UID: \"36a5d4ac-e857-4a98-81db-164d84811165\") " pod="calico-apiserver/calico-apiserver-7f4f8b5f58-5vt8f" Nov 1 10:03:48.162203 kubelet[2779]: I1101 10:03:48.161496 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99cd5c6d-98ce-4f16-8916-17196a6ab807-tigera-ca-bundle\") pod \"calico-kube-controllers-799ff88558-vv4cn\" (UID: \"99cd5c6d-98ce-4f16-8916-17196a6ab807\") " pod="calico-system/calico-kube-controllers-799ff88558-vv4cn" Nov 1 10:03:48.162203 kubelet[2779]: I1101 10:03:48.161515 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/73e3568f-83c0-4547-b599-b88c34a1197a-goldmane-key-pair\") pod \"goldmane-7c778bb748-h2vs7\" (UID: \"73e3568f-83c0-4547-b599-b88c34a1197a\") " pod="calico-system/goldmane-7c778bb748-h2vs7" Nov 1 10:03:48.162203 kubelet[2779]: I1101 10:03:48.161555 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/23971a0d-bbad-4a54-8dd4-48e851f76667-whisker-backend-key-pair\") pod \"whisker-7b8b55f966-9xfp9\" (UID: \"23971a0d-bbad-4a54-8dd4-48e851f76667\") " pod="calico-system/whisker-7b8b55f966-9xfp9" Nov 1 10:03:48.162203 kubelet[2779]: I1101 10:03:48.161579 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb2m5\" (UniqueName: \"kubernetes.io/projected/23971a0d-bbad-4a54-8dd4-48e851f76667-kube-api-access-wb2m5\") pod \"whisker-7b8b55f966-9xfp9\" (UID: \"23971a0d-bbad-4a54-8dd4-48e851f76667\") " pod="calico-system/whisker-7b8b55f966-9xfp9" Nov 1 10:03:48.219561 containerd[1608]: time="2025-11-01T10:03:48.219485709Z" level=error msg="Failed to destroy network for sandbox \"389f245131efc29a6855ad6f83dfe97efd9eb7cdb96e1759d7427e61d7c60089\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Nov 1 10:03:48.222097 systemd[1]: run-netns-cni\x2d83199867\x2d0b6d\x2dad6d\x2deb83\x2d3ec7f6d6c500.mount: Deactivated successfully. Nov 1 10:03:48.224157 containerd[1608]: time="2025-11-01T10:03:48.224079237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-87p4w,Uid:f1319238-e7a7-4b12-ace8-ba38b42b1817,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"389f245131efc29a6855ad6f83dfe97efd9eb7cdb96e1759d7427e61d7c60089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:03:48.235168 kubelet[2779]: E1101 10:03:48.235086 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"389f245131efc29a6855ad6f83dfe97efd9eb7cdb96e1759d7427e61d7c60089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:03:48.235267 kubelet[2779]: E1101 10:03:48.235188 2779 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"389f245131efc29a6855ad6f83dfe97efd9eb7cdb96e1759d7427e61d7c60089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-87p4w" Nov 1 10:03:48.235267 kubelet[2779]: E1101 10:03:48.235209 2779 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"389f245131efc29a6855ad6f83dfe97efd9eb7cdb96e1759d7427e61d7c60089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-87p4w" Nov 1 10:03:48.235342 kubelet[2779]: E1101 10:03:48.235269 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-87p4w_calico-system(f1319238-e7a7-4b12-ace8-ba38b42b1817)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-87p4w_calico-system(f1319238-e7a7-4b12-ace8-ba38b42b1817)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"389f245131efc29a6855ad6f83dfe97efd9eb7cdb96e1759d7427e61d7c60089\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-87p4w" podUID="f1319238-e7a7-4b12-ace8-ba38b42b1817" Nov 1 10:03:48.263723 kubelet[2779]: I1101 10:03:48.262654 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kf8v\" (UniqueName: \"kubernetes.io/projected/ffd8f8af-8b24-4377-881c-64726e81556e-kube-api-access-7kf8v\") pod \"coredns-66bc5c9577-ll4v8\" (UID: \"ffd8f8af-8b24-4377-881c-64726e81556e\") " pod="kube-system/coredns-66bc5c9577-ll4v8" Nov 1 10:03:48.263723 kubelet[2779]: I1101 10:03:48.262900 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ffd8f8af-8b24-4377-881c-64726e81556e-config-volume\") pod \"coredns-66bc5c9577-ll4v8\" (UID: \"ffd8f8af-8b24-4377-881c-64726e81556e\") " pod="kube-system/coredns-66bc5c9577-ll4v8" Nov 1 10:03:48.379802 kubelet[2779]: E1101 10:03:48.379758 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:48.380417 containerd[1608]: time="2025-11-01T10:03:48.380347710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-n7rkn,Uid:b60ca747-ed05-425c-b136-cbad03ffb49a,Namespace:kube-system,Attempt:0,}" Nov 1 10:03:48.387682 containerd[1608]: time="2025-11-01T10:03:48.387629245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-h2vs7,Uid:73e3568f-83c0-4547-b599-b88c34a1197a,Namespace:calico-system,Attempt:0,}" Nov 1 10:03:48.402590 containerd[1608]: time="2025-11-01T10:03:48.400895290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f4f8b5f58-5vt8f,Uid:36a5d4ac-e857-4a98-81db-164d84811165,Namespace:calico-apiserver,Attempt:0,}" Nov 1 10:03:48.426535 containerd[1608]: time="2025-11-01T10:03:48.426414627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f4f8b5f58-dt86s,Uid:c31bf260-9897-44ba-bd03-511f60db4011,Namespace:calico-apiserver,Attempt:0,}" Nov 1 10:03:48.427930 containerd[1608]: time="2025-11-01T10:03:48.427902331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b8b55f966-9xfp9,Uid:23971a0d-bbad-4a54-8dd4-48e851f76667,Namespace:calico-system,Attempt:0,}" Nov 1 10:03:48.435708 containerd[1608]: time="2025-11-01T10:03:48.435626347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-799ff88558-vv4cn,Uid:99cd5c6d-98ce-4f16-8916-17196a6ab807,Namespace:calico-system,Attempt:0,}" Nov 1 10:03:48.440436 kubelet[2779]: E1101 10:03:48.440323 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:48.441094 containerd[1608]: time="2025-11-01T10:03:48.440774125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ll4v8,Uid:ffd8f8af-8b24-4377-881c-64726e81556e,Namespace:kube-system,Attempt:0,}" Nov 1 10:03:48.456727 containerd[1608]: time="2025-11-01T10:03:48.456627199Z" level=error msg="Failed to destroy network for sandbox \"59d20ef47efaabf27e63e7ee197be26b3d0843149daed811ba9a5341721a53a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:03:48.463120 containerd[1608]: time="2025-11-01T10:03:48.463045984Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-n7rkn,Uid:b60ca747-ed05-425c-b136-cbad03ffb49a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"59d20ef47efaabf27e63e7ee197be26b3d0843149daed811ba9a5341721a53a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:03:48.465463 kubelet[2779]: E1101 10:03:48.465379 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"59d20ef47efaabf27e63e7ee197be26b3d0843149daed811ba9a5341721a53a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:03:48.465556 kubelet[2779]: E1101 10:03:48.465469 2779 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59d20ef47efaabf27e63e7ee197be26b3d0843149daed811ba9a5341721a53a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-n7rkn" Nov 1 10:03:48.465556 kubelet[2779]: E1101 10:03:48.465500 2779 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59d20ef47efaabf27e63e7ee197be26b3d0843149daed811ba9a5341721a53a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-n7rkn" Nov 1 10:03:48.465644 kubelet[2779]: E1101 10:03:48.465567 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-n7rkn_kube-system(b60ca747-ed05-425c-b136-cbad03ffb49a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-n7rkn_kube-system(b60ca747-ed05-425c-b136-cbad03ffb49a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59d20ef47efaabf27e63e7ee197be26b3d0843149daed811ba9a5341721a53a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-n7rkn" podUID="b60ca747-ed05-425c-b136-cbad03ffb49a" Nov 1 10:03:48.513150 containerd[1608]: time="2025-11-01T10:03:48.513071806Z" level=error msg="Failed to destroy network for sandbox \"98d2f2c4ae560b8b03bd6be914a3ef8ff127c465b5f315928d95664b51f7366e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:03:48.516680 systemd[1]: run-netns-cni\x2df58b4c2e\x2d002d\x2d2192\x2d0bd9\x2de006e48f85ea.mount: Deactivated successfully. 
Nov 1 10:03:48.520381 containerd[1608]: time="2025-11-01T10:03:48.520321431Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-h2vs7,Uid:73e3568f-83c0-4547-b599-b88c34a1197a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"98d2f2c4ae560b8b03bd6be914a3ef8ff127c465b5f315928d95664b51f7366e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:03:48.520815 kubelet[2779]: E1101 10:03:48.520702 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98d2f2c4ae560b8b03bd6be914a3ef8ff127c465b5f315928d95664b51f7366e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:03:48.520944 kubelet[2779]: E1101 10:03:48.520831 2779 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98d2f2c4ae560b8b03bd6be914a3ef8ff127c465b5f315928d95664b51f7366e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-h2vs7" Nov 1 10:03:48.520944 kubelet[2779]: E1101 10:03:48.520858 2779 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98d2f2c4ae560b8b03bd6be914a3ef8ff127c465b5f315928d95664b51f7366e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-h2vs7" Nov 1 10:03:48.521934 kubelet[2779]: E1101 10:03:48.520939 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-h2vs7_calico-system(73e3568f-83c0-4547-b599-b88c34a1197a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-h2vs7_calico-system(73e3568f-83c0-4547-b599-b88c34a1197a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98d2f2c4ae560b8b03bd6be914a3ef8ff127c465b5f315928d95664b51f7366e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-h2vs7" podUID="73e3568f-83c0-4547-b599-b88c34a1197a" Nov 1 10:03:48.528226 containerd[1608]: time="2025-11-01T10:03:48.528141818Z" level=error msg="Failed to destroy network for sandbox \"96ab1b1033a40ad1b2c97733cf82b5c525c1ee4f2725d01305d4d9ca9346916e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:03:48.531424 systemd[1]: run-netns-cni\x2ddc3b4318\x2d146e\x2dc19a\x2d5724\x2d52df809480d4.mount: Deactivated successfully. 
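[editor's note] The same root error appears four times per pod (log.go, kuberuntime_sandbox.go, kuberuntime_manager.go, pod_workers.go) because each kubelet layer re-logs the error from the layer below before the pod worker gives up with CreatePodSandboxError. A toy illustration of that style of error wrapping, not kubelet's real call chain:

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
    )

    // runPodSandbox stands in for the CNI plugin's failing stat call.
    func runPodSandbox() error {
        return fmt.Errorf("plugin type=%q failed (add): %w", "calico", fs.ErrNotExist)
    }

    func createSandbox() error {
        if err := runPodSandbox(); err != nil {
            return fmt.Errorf("failed to setup network for sandbox: %w", err)
        }
        return nil
    }

    func syncPod() error {
        if err := createSandbox(); err != nil {
            return fmt.Errorf("CreatePodSandboxError: %w", err)
        }
        return nil
    }

    func main() {
        err := syncPod()
        fmt.Println(err)                            // full wrapped chain
        fmt.Println(errors.Is(err, fs.ErrNotExist)) // true: root cause survives
    }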
Nov 1 10:03:48.532284 containerd[1608]: time="2025-11-01T10:03:48.532225558Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f4f8b5f58-5vt8f,Uid:36a5d4ac-e857-4a98-81db-164d84811165,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"96ab1b1033a40ad1b2c97733cf82b5c525c1ee4f2725d01305d4d9ca9346916e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:03:48.533929 kubelet[2779]: E1101 10:03:48.532567 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96ab1b1033a40ad1b2c97733cf82b5c525c1ee4f2725d01305d4d9ca9346916e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:03:48.533929 kubelet[2779]: E1101 10:03:48.532646 2779 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96ab1b1033a40ad1b2c97733cf82b5c525c1ee4f2725d01305d4d9ca9346916e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f4f8b5f58-5vt8f" Nov 1 10:03:48.533929 kubelet[2779]: E1101 10:03:48.532685 2779 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96ab1b1033a40ad1b2c97733cf82b5c525c1ee4f2725d01305d4d9ca9346916e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f4f8b5f58-5vt8f" Nov 1 10:03:48.534081 kubelet[2779]: E1101 10:03:48.532774 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f4f8b5f58-5vt8f_calico-apiserver(36a5d4ac-e857-4a98-81db-164d84811165)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f4f8b5f58-5vt8f_calico-apiserver(36a5d4ac-e857-4a98-81db-164d84811165)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96ab1b1033a40ad1b2c97733cf82b5c525c1ee4f2725d01305d4d9ca9346916e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f4f8b5f58-5vt8f" podUID="36a5d4ac-e857-4a98-81db-164d84811165" Nov 1 10:03:48.565522 containerd[1608]: time="2025-11-01T10:03:48.565431992Z" level=error msg="Failed to destroy network for sandbox \"0b36ff3acb83632d7e6fc51c22305558e292cfada59357f740cdc9661202dfaa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:03:48.569176 systemd[1]: run-netns-cni\x2d45fd5a47\x2dfbdb\x2d8032\x2db8e1\x2d3c6648625f09.mount: Deactivated successfully. 
Nov 1 10:03:48.569537 containerd[1608]: time="2025-11-01T10:03:48.569492858Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f4f8b5f58-dt86s,Uid:c31bf260-9897-44ba-bd03-511f60db4011,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b36ff3acb83632d7e6fc51c22305558e292cfada59357f740cdc9661202dfaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:03:48.570501 kubelet[2779]: E1101 10:03:48.570428 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b36ff3acb83632d7e6fc51c22305558e292cfada59357f740cdc9661202dfaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:03:48.570632 kubelet[2779]: E1101 10:03:48.570513 2779 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b36ff3acb83632d7e6fc51c22305558e292cfada59357f740cdc9661202dfaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f4f8b5f58-dt86s" Nov 1 10:03:48.570632 kubelet[2779]: E1101 10:03:48.570539 2779 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b36ff3acb83632d7e6fc51c22305558e292cfada59357f740cdc9661202dfaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f4f8b5f58-dt86s" Nov 1 10:03:48.572805 kubelet[2779]: E1101 10:03:48.570617 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f4f8b5f58-dt86s_calico-apiserver(c31bf260-9897-44ba-bd03-511f60db4011)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f4f8b5f58-dt86s_calico-apiserver(c31bf260-9897-44ba-bd03-511f60db4011)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b36ff3acb83632d7e6fc51c22305558e292cfada59357f740cdc9661202dfaa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f4f8b5f58-dt86s" podUID="c31bf260-9897-44ba-bd03-511f60db4011" Nov 1 10:03:48.582294 containerd[1608]: time="2025-11-01T10:03:48.582223528Z" level=error msg="Failed to destroy network for sandbox \"227de54cf569ac9b1764cdceda645b2cfe2c5022465e86511e1717bceafe4c2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:03:48.583973 containerd[1608]: time="2025-11-01T10:03:48.583916577Z" level=error msg="Failed to destroy network for sandbox \"342cb4d327b192c396baa54eb1a097cdf26733cf0d799ba94ca32638dbef92eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:03:48.585111 systemd[1]: run-netns-cni\x2d7a6fd5f0\x2da368\x2d7ef3\x2d9658\x2dd5831fa01a50.mount: Deactivated successfully. Nov 1 10:03:48.585978 containerd[1608]: time="2025-11-01T10:03:48.585851802Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b8b55f966-9xfp9,Uid:23971a0d-bbad-4a54-8dd4-48e851f76667,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"227de54cf569ac9b1764cdceda645b2cfe2c5022465e86511e1717bceafe4c2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:03:48.586257 kubelet[2779]: E1101 10:03:48.586198 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"227de54cf569ac9b1764cdceda645b2cfe2c5022465e86511e1717bceafe4c2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:03:48.586342 kubelet[2779]: E1101 10:03:48.586307 2779 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"227de54cf569ac9b1764cdceda645b2cfe2c5022465e86511e1717bceafe4c2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b8b55f966-9xfp9" Nov 1 10:03:48.586405 kubelet[2779]: E1101 10:03:48.586343 2779 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"227de54cf569ac9b1764cdceda645b2cfe2c5022465e86511e1717bceafe4c2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b8b55f966-9xfp9" Nov 1 10:03:48.586595 kubelet[2779]: E1101 10:03:48.586431 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b8b55f966-9xfp9_calico-system(23971a0d-bbad-4a54-8dd4-48e851f76667)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b8b55f966-9xfp9_calico-system(23971a0d-bbad-4a54-8dd4-48e851f76667)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"227de54cf569ac9b1764cdceda645b2cfe2c5022465e86511e1717bceafe4c2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b8b55f966-9xfp9" podUID="23971a0d-bbad-4a54-8dd4-48e851f76667" Nov 1 10:03:48.588144 containerd[1608]: time="2025-11-01T10:03:48.588079867Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-799ff88558-vv4cn,Uid:99cd5c6d-98ce-4f16-8916-17196a6ab807,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"342cb4d327b192c396baa54eb1a097cdf26733cf0d799ba94ca32638dbef92eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:03:48.588425 kubelet[2779]: E1101 10:03:48.588318 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"342cb4d327b192c396baa54eb1a097cdf26733cf0d799ba94ca32638dbef92eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:03:48.588425 kubelet[2779]: E1101 10:03:48.588411 2779 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"342cb4d327b192c396baa54eb1a097cdf26733cf0d799ba94ca32638dbef92eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-799ff88558-vv4cn" Nov 1 10:03:48.588594 kubelet[2779]: E1101 10:03:48.588432 2779 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"342cb4d327b192c396baa54eb1a097cdf26733cf0d799ba94ca32638dbef92eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-799ff88558-vv4cn" Nov 1 10:03:48.588594 kubelet[2779]: E1101 10:03:48.588497 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-799ff88558-vv4cn_calico-system(99cd5c6d-98ce-4f16-8916-17196a6ab807)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-799ff88558-vv4cn_calico-system(99cd5c6d-98ce-4f16-8916-17196a6ab807)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"342cb4d327b192c396baa54eb1a097cdf26733cf0d799ba94ca32638dbef92eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-799ff88558-vv4cn" podUID="99cd5c6d-98ce-4f16-8916-17196a6ab807" Nov 1 10:03:48.606261 containerd[1608]: time="2025-11-01T10:03:48.606178256Z" level=error msg="Failed to destroy network for sandbox \"d11056f173f20ef82967d039b84f586afe6a9c516ac60d177e14a4b93156c162\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:03:48.635121 containerd[1608]: time="2025-11-01T10:03:48.635046783Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ll4v8,Uid:ffd8f8af-8b24-4377-881c-64726e81556e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d11056f173f20ef82967d039b84f586afe6a9c516ac60d177e14a4b93156c162\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:03:48.635427 kubelet[2779]: E1101 10:03:48.635352 2779 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"d11056f173f20ef82967d039b84f586afe6a9c516ac60d177e14a4b93156c162\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 10:03:48.635486 kubelet[2779]: E1101 10:03:48.635435 2779 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d11056f173f20ef82967d039b84f586afe6a9c516ac60d177e14a4b93156c162\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ll4v8" Nov 1 10:03:48.635486 kubelet[2779]: E1101 10:03:48.635457 2779 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d11056f173f20ef82967d039b84f586afe6a9c516ac60d177e14a4b93156c162\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ll4v8" Nov 1 10:03:48.635554 kubelet[2779]: E1101 10:03:48.635517 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-ll4v8_kube-system(ffd8f8af-8b24-4377-881c-64726e81556e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ll4v8_kube-system(ffd8f8af-8b24-4377-881c-64726e81556e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d11056f173f20ef82967d039b84f586afe6a9c516ac60d177e14a4b93156c162\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ll4v8" podUID="ffd8f8af-8b24-4377-881c-64726e81556e" Nov 1 10:03:49.010724 kubelet[2779]: E1101 10:03:49.010649 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:49.011524 containerd[1608]: time="2025-11-01T10:03:49.011465314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 10:03:49.481438 systemd[1]: run-netns-cni\x2d31e8921b\x2db2b2\x2d6182\x2d24db\x2d9b7be483a763.mount: Deactivated successfully. Nov 1 10:03:49.481570 systemd[1]: run-netns-cni\x2d5b44bbee\x2d462b\x2ddb9e\x2d91b2\x2da8e03e253a3a.mount: Deactivated successfully. Nov 1 10:03:56.589439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount987184163.mount: Deactivated successfully. Nov 1 10:03:57.610895 systemd[1]: Started sshd@9-10.0.0.64:22-10.0.0.1:45792.service - OpenSSH per-connection server daemon (10.0.0.1:45792). 
Nov 1 10:03:57.883161 containerd[1608]: time="2025-11-01T10:03:57.882890968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:03:57.901521 containerd[1608]: time="2025-11-01T10:03:57.901467751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880766" Nov 1 10:03:57.942088 containerd[1608]: time="2025-11-01T10:03:57.941313079Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:03:57.947721 containerd[1608]: time="2025-11-01T10:03:57.946803934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 10:03:57.949708 containerd[1608]: time="2025-11-01T10:03:57.948437841Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.936922664s" Nov 1 10:03:57.949820 containerd[1608]: time="2025-11-01T10:03:57.949799496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 10:03:57.986415 containerd[1608]: time="2025-11-01T10:03:57.986350943Z" level=info msg="CreateContainer within sandbox \"7e42a220b103e424ada9694d04e8a5ac9dd3aadaf5b4265e62050975d121b74f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 10:03:58.014788 containerd[1608]: time="2025-11-01T10:03:58.014671681Z" level=info msg="Container 9e6c5578435394befc55a7fc1422d5e51e76cd601a80193ea04e51e1b47c838c: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:03:58.018581 sshd[3858]: Accepted publickey for core from 10.0.0.1 port 45792 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:03:58.021055 sshd-session[3858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:03:58.028034 systemd-logind[1586]: New session 10 of user core. Nov 1 10:03:58.029294 containerd[1608]: time="2025-11-01T10:03:58.029247385Z" level=info msg="CreateContainer within sandbox \"7e42a220b103e424ada9694d04e8a5ac9dd3aadaf5b4265e62050975d121b74f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9e6c5578435394befc55a7fc1422d5e51e76cd601a80193ea04e51e1b47c838c\"" Nov 1 10:03:58.030110 containerd[1608]: time="2025-11-01T10:03:58.030080267Z" level=info msg="StartContainer for \"9e6c5578435394befc55a7fc1422d5e51e76cd601a80193ea04e51e1b47c838c\"" Nov 1 10:03:58.034003 containerd[1608]: time="2025-11-01T10:03:58.033819244Z" level=info msg="connecting to shim 9e6c5578435394befc55a7fc1422d5e51e76cd601a80193ea04e51e1b47c838c" address="unix:///run/containerd/s/47c74d4efc82bf5b3e10b16932771b70750c489e9503805bffa154878802a55e" protocol=ttrpc version=3 Nov 1 10:03:58.035989 systemd[1]: Started session-10.scope - Session 10 of User core. 
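The pull above fetched roughly 157 MB for ghcr.io/flatcar/calico/node:v3.30.4 in about 8.9 s and registered it in the CRI image store. A hedged sketch of an equivalent pull through containerd's v1 Go client; the socket path and the "k8s.io" namespace are taken from the log, and options are kept minimal:

```go
// Illustrative pull via the containerd Go client (github.com/containerd/containerd,
// v1 API); must run on the node with access to the containerd socket.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	start := time.Now()
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.4", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s (%s) in %s\n", img.Name(), img.Target().Digest, time.Since(start))
}
```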
Nov 1 10:03:58.066064 systemd[1]: Started cri-containerd-9e6c5578435394befc55a7fc1422d5e51e76cd601a80193ea04e51e1b47c838c.scope - libcontainer container 9e6c5578435394befc55a7fc1422d5e51e76cd601a80193ea04e51e1b47c838c. Nov 1 10:03:58.126439 containerd[1608]: time="2025-11-01T10:03:58.126382149Z" level=info msg="StartContainer for \"9e6c5578435394befc55a7fc1422d5e51e76cd601a80193ea04e51e1b47c838c\" returns successfully" Nov 1 10:03:58.154316 sshd[3864]: Connection closed by 10.0.0.1 port 45792 Nov 1 10:03:58.155881 sshd-session[3858]: pam_unix(sshd:session): session closed for user core Nov 1 10:03:58.160252 systemd[1]: sshd@9-10.0.0.64:22-10.0.0.1:45792.service: Deactivated successfully. Nov 1 10:03:58.162734 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 10:03:58.164062 systemd-logind[1586]: Session 10 logged out. Waiting for processes to exit. Nov 1 10:03:58.165544 systemd-logind[1586]: Removed session 10. Nov 1 10:03:58.216297 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 10:03:58.216557 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Nov 1 10:03:58.432750 kubelet[2779]: I1101 10:03:58.432573 2779 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23971a0d-bbad-4a54-8dd4-48e851f76667-whisker-ca-bundle\") pod \"23971a0d-bbad-4a54-8dd4-48e851f76667\" (UID: \"23971a0d-bbad-4a54-8dd4-48e851f76667\") " Nov 1 10:03:58.432750 kubelet[2779]: I1101 10:03:58.432622 2779 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/23971a0d-bbad-4a54-8dd4-48e851f76667-whisker-backend-key-pair\") pod \"23971a0d-bbad-4a54-8dd4-48e851f76667\" (UID: \"23971a0d-bbad-4a54-8dd4-48e851f76667\") " Nov 1 10:03:58.432750 kubelet[2779]: I1101 10:03:58.432654 2779 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb2m5\" (UniqueName: \"kubernetes.io/projected/23971a0d-bbad-4a54-8dd4-48e851f76667-kube-api-access-wb2m5\") pod \"23971a0d-bbad-4a54-8dd4-48e851f76667\" (UID: \"23971a0d-bbad-4a54-8dd4-48e851f76667\") " Nov 1 10:03:58.433812 kubelet[2779]: I1101 10:03:58.433778 2779 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23971a0d-bbad-4a54-8dd4-48e851f76667-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "23971a0d-bbad-4a54-8dd4-48e851f76667" (UID: "23971a0d-bbad-4a54-8dd4-48e851f76667"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 10:03:58.437203 kubelet[2779]: I1101 10:03:58.437115 2779 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23971a0d-bbad-4a54-8dd4-48e851f76667-kube-api-access-wb2m5" (OuterVolumeSpecName: "kube-api-access-wb2m5") pod "23971a0d-bbad-4a54-8dd4-48e851f76667" (UID: "23971a0d-bbad-4a54-8dd4-48e851f76667"). InnerVolumeSpecName "kube-api-access-wb2m5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 10:03:58.438321 kubelet[2779]: I1101 10:03:58.438276 2779 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23971a0d-bbad-4a54-8dd4-48e851f76667-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "23971a0d-bbad-4a54-8dd4-48e851f76667" (UID: "23971a0d-bbad-4a54-8dd4-48e851f76667"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 10:03:58.533789 kubelet[2779]: I1101 10:03:58.533720 2779 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wb2m5\" (UniqueName: \"kubernetes.io/projected/23971a0d-bbad-4a54-8dd4-48e851f76667-kube-api-access-wb2m5\") on node \"localhost\" DevicePath \"\"" Nov 1 10:03:58.533789 kubelet[2779]: I1101 10:03:58.533763 2779 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23971a0d-bbad-4a54-8dd4-48e851f76667-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 1 10:03:58.533789 kubelet[2779]: I1101 10:03:58.533771 2779 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/23971a0d-bbad-4a54-8dd4-48e851f76667-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 1 10:03:58.965436 systemd[1]: var-lib-kubelet-pods-23971a0d\x2dbbad\x2d4a54\x2d8dd4\x2d48e851f76667-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwb2m5.mount: Deactivated successfully. Nov 1 10:03:58.965573 systemd[1]: var-lib-kubelet-pods-23971a0d\x2dbbad\x2d4a54\x2d8dd4\x2d48e851f76667-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 1 10:03:59.048749 kubelet[2779]: E1101 10:03:59.048710 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:03:59.054556 systemd[1]: Removed slice kubepods-besteffort-pod23971a0d_bbad_4a54_8dd4_48e851f76667.slice - libcontainer container kubepods-besteffort-pod23971a0d_bbad_4a54_8dd4_48e851f76667.slice. Nov 1 10:03:59.064152 kubelet[2779]: I1101 10:03:59.064021 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4ht4n" podStartSLOduration=2.044532983 podStartE2EDuration="22.064000729s" podCreationTimestamp="2025-11-01 10:03:37 +0000 UTC" firstStartedPulling="2025-11-01 10:03:37.935336804 +0000 UTC m=+20.281773522" lastFinishedPulling="2025-11-01 10:03:57.95480458 +0000 UTC m=+40.301241268" observedRunningTime="2025-11-01 10:03:59.0628239 +0000 UTC m=+41.409260618" watchObservedRunningTime="2025-11-01 10:03:59.064000729 +0000 UTC m=+41.410437427" Nov 1 10:03:59.110193 systemd[1]: Created slice kubepods-besteffort-pode3758e25_c85f_48dd_a940_aa84442da027.slice - libcontainer container kubepods-besteffort-pode3758e25_c85f_48dd_a940_aa84442da027.slice. 
Nov 1 10:03:59.137189 kubelet[2779]: I1101 10:03:59.137123 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3758e25-c85f-48dd-a940-aa84442da027-whisker-ca-bundle\") pod \"whisker-b7474bcb8-zhrsz\" (UID: \"e3758e25-c85f-48dd-a940-aa84442da027\") " pod="calico-system/whisker-b7474bcb8-zhrsz" Nov 1 10:03:59.137189 kubelet[2779]: I1101 10:03:59.137202 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e3758e25-c85f-48dd-a940-aa84442da027-whisker-backend-key-pair\") pod \"whisker-b7474bcb8-zhrsz\" (UID: \"e3758e25-c85f-48dd-a940-aa84442da027\") " pod="calico-system/whisker-b7474bcb8-zhrsz" Nov 1 10:03:59.137452 kubelet[2779]: I1101 10:03:59.137237 2779 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwlt2\" (UniqueName: \"kubernetes.io/projected/e3758e25-c85f-48dd-a940-aa84442da027-kube-api-access-dwlt2\") pod \"whisker-b7474bcb8-zhrsz\" (UID: \"e3758e25-c85f-48dd-a940-aa84442da027\") " pod="calico-system/whisker-b7474bcb8-zhrsz" Nov 1 10:03:59.415978 containerd[1608]: time="2025-11-01T10:03:59.415902375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b7474bcb8-zhrsz,Uid:e3758e25-c85f-48dd-a940-aa84442da027,Namespace:calico-system,Attempt:0,}" Nov 1 10:03:59.614758 systemd-networkd[1500]: cali5828b1505e4: Link UP Nov 1 10:03:59.615033 systemd-networkd[1500]: cali5828b1505e4: Gained carrier Nov 1 10:03:59.653800 containerd[1608]: 2025-11-01 10:03:59.440 [INFO][3947] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 10:03:59.653800 containerd[1608]: 2025-11-01 10:03:59.458 [INFO][3947] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--b7474bcb8--zhrsz-eth0 whisker-b7474bcb8- calico-system e3758e25-c85f-48dd-a940-aa84442da027 1005 0 2025-11-01 10:03:59 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:b7474bcb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-b7474bcb8-zhrsz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5828b1505e4 [] [] }} ContainerID="c4ac6b7d5e131b503b58f677a63e7933a3b0462200ae0e3e96d2700bbfed0bc9" Namespace="calico-system" Pod="whisker-b7474bcb8-zhrsz" WorkloadEndpoint="localhost-k8s-whisker--b7474bcb8--zhrsz-" Nov 1 10:03:59.653800 containerd[1608]: 2025-11-01 10:03:59.458 [INFO][3947] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4ac6b7d5e131b503b58f677a63e7933a3b0462200ae0e3e96d2700bbfed0bc9" Namespace="calico-system" Pod="whisker-b7474bcb8-zhrsz" WorkloadEndpoint="localhost-k8s-whisker--b7474bcb8--zhrsz-eth0" Nov 1 10:03:59.653800 containerd[1608]: 2025-11-01 10:03:59.552 [INFO][3962] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4ac6b7d5e131b503b58f677a63e7933a3b0462200ae0e3e96d2700bbfed0bc9" HandleID="k8s-pod-network.c4ac6b7d5e131b503b58f677a63e7933a3b0462200ae0e3e96d2700bbfed0bc9" Workload="localhost-k8s-whisker--b7474bcb8--zhrsz-eth0" Nov 1 10:03:59.654117 containerd[1608]: 2025-11-01 10:03:59.553 [INFO][3962] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c4ac6b7d5e131b503b58f677a63e7933a3b0462200ae0e3e96d2700bbfed0bc9" 
HandleID="k8s-pod-network.c4ac6b7d5e131b503b58f677a63e7933a3b0462200ae0e3e96d2700bbfed0bc9" Workload="localhost-k8s-whisker--b7474bcb8--zhrsz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bec50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-b7474bcb8-zhrsz", "timestamp":"2025-11-01 10:03:59.552672517 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:03:59.654117 containerd[1608]: 2025-11-01 10:03:59.553 [INFO][3962] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:03:59.654117 containerd[1608]: 2025-11-01 10:03:59.554 [INFO][3962] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 10:03:59.654117 containerd[1608]: 2025-11-01 10:03:59.554 [INFO][3962] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:03:59.654117 containerd[1608]: 2025-11-01 10:03:59.566 [INFO][3962] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c4ac6b7d5e131b503b58f677a63e7933a3b0462200ae0e3e96d2700bbfed0bc9" host="localhost" Nov 1 10:03:59.654117 containerd[1608]: 2025-11-01 10:03:59.574 [INFO][3962] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:03:59.654117 containerd[1608]: 2025-11-01 10:03:59.580 [INFO][3962] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:03:59.654117 containerd[1608]: 2025-11-01 10:03:59.581 [INFO][3962] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:03:59.654117 containerd[1608]: 2025-11-01 10:03:59.584 [INFO][3962] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:03:59.654117 containerd[1608]: 2025-11-01 10:03:59.584 [INFO][3962] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c4ac6b7d5e131b503b58f677a63e7933a3b0462200ae0e3e96d2700bbfed0bc9" host="localhost" Nov 1 10:03:59.658073 containerd[1608]: 2025-11-01 10:03:59.586 [INFO][3962] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c4ac6b7d5e131b503b58f677a63e7933a3b0462200ae0e3e96d2700bbfed0bc9 Nov 1 10:03:59.658073 containerd[1608]: 2025-11-01 10:03:59.590 [INFO][3962] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c4ac6b7d5e131b503b58f677a63e7933a3b0462200ae0e3e96d2700bbfed0bc9" host="localhost" Nov 1 10:03:59.658073 containerd[1608]: 2025-11-01 10:03:59.598 [INFO][3962] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.c4ac6b7d5e131b503b58f677a63e7933a3b0462200ae0e3e96d2700bbfed0bc9" host="localhost" Nov 1 10:03:59.658073 containerd[1608]: 2025-11-01 10:03:59.598 [INFO][3962] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c4ac6b7d5e131b503b58f677a63e7933a3b0462200ae0e3e96d2700bbfed0bc9" host="localhost" Nov 1 10:03:59.658073 containerd[1608]: 2025-11-01 10:03:59.598 [INFO][3962] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 10:03:59.658073 containerd[1608]: 2025-11-01 10:03:59.598 [INFO][3962] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c4ac6b7d5e131b503b58f677a63e7933a3b0462200ae0e3e96d2700bbfed0bc9" HandleID="k8s-pod-network.c4ac6b7d5e131b503b58f677a63e7933a3b0462200ae0e3e96d2700bbfed0bc9" Workload="localhost-k8s-whisker--b7474bcb8--zhrsz-eth0" Nov 1 10:03:59.658271 containerd[1608]: 2025-11-01 10:03:59.602 [INFO][3947] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4ac6b7d5e131b503b58f677a63e7933a3b0462200ae0e3e96d2700bbfed0bc9" Namespace="calico-system" Pod="whisker-b7474bcb8-zhrsz" WorkloadEndpoint="localhost-k8s-whisker--b7474bcb8--zhrsz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--b7474bcb8--zhrsz-eth0", GenerateName:"whisker-b7474bcb8-", Namespace:"calico-system", SelfLink:"", UID:"e3758e25-c85f-48dd-a940-aa84442da027", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 3, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b7474bcb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-b7474bcb8-zhrsz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5828b1505e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:03:59.658271 containerd[1608]: 2025-11-01 10:03:59.603 [INFO][3947] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c4ac6b7d5e131b503b58f677a63e7933a3b0462200ae0e3e96d2700bbfed0bc9" Namespace="calico-system" Pod="whisker-b7474bcb8-zhrsz" WorkloadEndpoint="localhost-k8s-whisker--b7474bcb8--zhrsz-eth0" Nov 1 10:03:59.658395 containerd[1608]: 2025-11-01 10:03:59.603 [INFO][3947] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5828b1505e4 ContainerID="c4ac6b7d5e131b503b58f677a63e7933a3b0462200ae0e3e96d2700bbfed0bc9" Namespace="calico-system" Pod="whisker-b7474bcb8-zhrsz" WorkloadEndpoint="localhost-k8s-whisker--b7474bcb8--zhrsz-eth0" Nov 1 10:03:59.658395 containerd[1608]: 2025-11-01 10:03:59.615 [INFO][3947] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4ac6b7d5e131b503b58f677a63e7933a3b0462200ae0e3e96d2700bbfed0bc9" Namespace="calico-system" Pod="whisker-b7474bcb8-zhrsz" WorkloadEndpoint="localhost-k8s-whisker--b7474bcb8--zhrsz-eth0" Nov 1 10:03:59.658462 containerd[1608]: 2025-11-01 10:03:59.619 [INFO][3947] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c4ac6b7d5e131b503b58f677a63e7933a3b0462200ae0e3e96d2700bbfed0bc9" Namespace="calico-system" Pod="whisker-b7474bcb8-zhrsz" WorkloadEndpoint="localhost-k8s-whisker--b7474bcb8--zhrsz-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--b7474bcb8--zhrsz-eth0", GenerateName:"whisker-b7474bcb8-", Namespace:"calico-system", SelfLink:"", UID:"e3758e25-c85f-48dd-a940-aa84442da027", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 3, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b7474bcb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c4ac6b7d5e131b503b58f677a63e7933a3b0462200ae0e3e96d2700bbfed0bc9", Pod:"whisker-b7474bcb8-zhrsz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5828b1505e4", MAC:"e6:d3:58:67:c6:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:03:59.658536 containerd[1608]: 2025-11-01 10:03:59.648 [INFO][3947] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c4ac6b7d5e131b503b58f677a63e7933a3b0462200ae0e3e96d2700bbfed0bc9" Namespace="calico-system" Pod="whisker-b7474bcb8-zhrsz" WorkloadEndpoint="localhost-k8s-whisker--b7474bcb8--zhrsz-eth0" Nov 1 10:03:59.892509 kubelet[2779]: I1101 10:03:59.892449 2779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23971a0d-bbad-4a54-8dd4-48e851f76667" path="/var/lib/kubelet/pods/23971a0d-bbad-4a54-8dd4-48e851f76667/volumes" Nov 1 10:03:59.991720 containerd[1608]: time="2025-11-01T10:03:59.990255546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-799ff88558-vv4cn,Uid:99cd5c6d-98ce-4f16-8916-17196a6ab807,Namespace:calico-system,Attempt:0,}" Nov 1 10:04:00.105163 systemd-networkd[1500]: cali7033904c3d8: Link UP Nov 1 10:04:00.106067 systemd-networkd[1500]: cali7033904c3d8: Gained carrier Nov 1 10:04:00.120079 containerd[1608]: 2025-11-01 10:04:00.026 [INFO][4078] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 10:04:00.120079 containerd[1608]: 2025-11-01 10:04:00.036 [INFO][4078] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--799ff88558--vv4cn-eth0 calico-kube-controllers-799ff88558- calico-system 99cd5c6d-98ce-4f16-8916-17196a6ab807 892 0 2025-11-01 10:03:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:799ff88558 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-799ff88558-vv4cn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7033904c3d8 [] [] }} ContainerID="61ad4488f46c7803bf021b523e9d050db7402d01ccd3820c4b41bb48762f9cf3" Namespace="calico-system" 
Pod="calico-kube-controllers-799ff88558-vv4cn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--799ff88558--vv4cn-" Nov 1 10:04:00.120079 containerd[1608]: 2025-11-01 10:04:00.036 [INFO][4078] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="61ad4488f46c7803bf021b523e9d050db7402d01ccd3820c4b41bb48762f9cf3" Namespace="calico-system" Pod="calico-kube-controllers-799ff88558-vv4cn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--799ff88558--vv4cn-eth0" Nov 1 10:04:00.120079 containerd[1608]: 2025-11-01 10:04:00.066 [INFO][4093] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61ad4488f46c7803bf021b523e9d050db7402d01ccd3820c4b41bb48762f9cf3" HandleID="k8s-pod-network.61ad4488f46c7803bf021b523e9d050db7402d01ccd3820c4b41bb48762f9cf3" Workload="localhost-k8s-calico--kube--controllers--799ff88558--vv4cn-eth0" Nov 1 10:04:00.120384 containerd[1608]: 2025-11-01 10:04:00.066 [INFO][4093] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="61ad4488f46c7803bf021b523e9d050db7402d01ccd3820c4b41bb48762f9cf3" HandleID="k8s-pod-network.61ad4488f46c7803bf021b523e9d050db7402d01ccd3820c4b41bb48762f9cf3" Workload="localhost-k8s-calico--kube--controllers--799ff88558--vv4cn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004eca0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-799ff88558-vv4cn", "timestamp":"2025-11-01 10:04:00.066313353 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:04:00.120384 containerd[1608]: 2025-11-01 10:04:00.066 [INFO][4093] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:04:00.120384 containerd[1608]: 2025-11-01 10:04:00.066 [INFO][4093] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 10:04:00.120384 containerd[1608]: 2025-11-01 10:04:00.066 [INFO][4093] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:04:00.120384 containerd[1608]: 2025-11-01 10:04:00.073 [INFO][4093] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.61ad4488f46c7803bf021b523e9d050db7402d01ccd3820c4b41bb48762f9cf3" host="localhost" Nov 1 10:04:00.120384 containerd[1608]: 2025-11-01 10:04:00.076 [INFO][4093] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:04:00.120384 containerd[1608]: 2025-11-01 10:04:00.081 [INFO][4093] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:04:00.120384 containerd[1608]: 2025-11-01 10:04:00.082 [INFO][4093] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:04:00.120384 containerd[1608]: 2025-11-01 10:04:00.085 [INFO][4093] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:04:00.120384 containerd[1608]: 2025-11-01 10:04:00.085 [INFO][4093] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.61ad4488f46c7803bf021b523e9d050db7402d01ccd3820c4b41bb48762f9cf3" host="localhost" Nov 1 10:04:00.120781 containerd[1608]: 2025-11-01 10:04:00.086 [INFO][4093] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.61ad4488f46c7803bf021b523e9d050db7402d01ccd3820c4b41bb48762f9cf3 Nov 1 10:04:00.120781 containerd[1608]: 2025-11-01 10:04:00.091 [INFO][4093] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.61ad4488f46c7803bf021b523e9d050db7402d01ccd3820c4b41bb48762f9cf3" host="localhost" Nov 1 10:04:00.120781 containerd[1608]: 2025-11-01 10:04:00.095 [INFO][4093] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.61ad4488f46c7803bf021b523e9d050db7402d01ccd3820c4b41bb48762f9cf3" host="localhost" Nov 1 10:04:00.120781 containerd[1608]: 2025-11-01 10:04:00.095 [INFO][4093] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.61ad4488f46c7803bf021b523e9d050db7402d01ccd3820c4b41bb48762f9cf3" host="localhost" Nov 1 10:04:00.120781 containerd[1608]: 2025-11-01 10:04:00.096 [INFO][4093] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
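This second trace reuses the same affine block and gets 192.168.88.130. A quick standard-library check that every address handed out in these traces masks into 192.168.88.128/26, a 64-address block:

```go
// Verify the block membership and size implied by the IPAM traces.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	for _, s := range []string{"192.168.88.129", "192.168.88.130", "192.168.88.131", "192.168.88.132"} {
		a := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", a, block, block.Contains(a))
	}
	fmt.Println("block size:", 1<<(32-block.Bits())) // 64 addresses in a /26
}
```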
Nov 1 10:04:00.120781 containerd[1608]: 2025-11-01 10:04:00.096 [INFO][4093] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="61ad4488f46c7803bf021b523e9d050db7402d01ccd3820c4b41bb48762f9cf3" HandleID="k8s-pod-network.61ad4488f46c7803bf021b523e9d050db7402d01ccd3820c4b41bb48762f9cf3" Workload="localhost-k8s-calico--kube--controllers--799ff88558--vv4cn-eth0" Nov 1 10:04:00.120964 containerd[1608]: 2025-11-01 10:04:00.100 [INFO][4078] cni-plugin/k8s.go 418: Populated endpoint ContainerID="61ad4488f46c7803bf021b523e9d050db7402d01ccd3820c4b41bb48762f9cf3" Namespace="calico-system" Pod="calico-kube-controllers-799ff88558-vv4cn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--799ff88558--vv4cn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--799ff88558--vv4cn-eth0", GenerateName:"calico-kube-controllers-799ff88558-", Namespace:"calico-system", SelfLink:"", UID:"99cd5c6d-98ce-4f16-8916-17196a6ab807", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 3, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"799ff88558", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-799ff88558-vv4cn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7033904c3d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:04:00.121036 containerd[1608]: 2025-11-01 10:04:00.101 [INFO][4078] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="61ad4488f46c7803bf021b523e9d050db7402d01ccd3820c4b41bb48762f9cf3" Namespace="calico-system" Pod="calico-kube-controllers-799ff88558-vv4cn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--799ff88558--vv4cn-eth0" Nov 1 10:04:00.121036 containerd[1608]: 2025-11-01 10:04:00.101 [INFO][4078] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7033904c3d8 ContainerID="61ad4488f46c7803bf021b523e9d050db7402d01ccd3820c4b41bb48762f9cf3" Namespace="calico-system" Pod="calico-kube-controllers-799ff88558-vv4cn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--799ff88558--vv4cn-eth0" Nov 1 10:04:00.121036 containerd[1608]: 2025-11-01 10:04:00.105 [INFO][4078] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61ad4488f46c7803bf021b523e9d050db7402d01ccd3820c4b41bb48762f9cf3" Namespace="calico-system" Pod="calico-kube-controllers-799ff88558-vv4cn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--799ff88558--vv4cn-eth0" Nov 1 10:04:00.121140 containerd[1608]: 2025-11-01 10:04:00.106 [INFO][4078] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="61ad4488f46c7803bf021b523e9d050db7402d01ccd3820c4b41bb48762f9cf3" Namespace="calico-system" Pod="calico-kube-controllers-799ff88558-vv4cn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--799ff88558--vv4cn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--799ff88558--vv4cn-eth0", GenerateName:"calico-kube-controllers-799ff88558-", Namespace:"calico-system", SelfLink:"", UID:"99cd5c6d-98ce-4f16-8916-17196a6ab807", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 3, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"799ff88558", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"61ad4488f46c7803bf021b523e9d050db7402d01ccd3820c4b41bb48762f9cf3", Pod:"calico-kube-controllers-799ff88558-vv4cn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7033904c3d8", MAC:"26:65:9b:41:8f:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:04:00.121222 containerd[1608]: 2025-11-01 10:04:00.114 [INFO][4078] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="61ad4488f46c7803bf021b523e9d050db7402d01ccd3820c4b41bb48762f9cf3" Namespace="calico-system" Pod="calico-kube-controllers-799ff88558-vv4cn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--799ff88558--vv4cn-eth0" Nov 1 10:04:00.171948 containerd[1608]: time="2025-11-01T10:04:00.171795271Z" level=info msg="connecting to shim 61ad4488f46c7803bf021b523e9d050db7402d01ccd3820c4b41bb48762f9cf3" address="unix:///run/containerd/s/1099a768e2baae366b3ca9e38811a999692ed37211fee91a0a50f4b0dcbb43d6" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:04:00.173735 containerd[1608]: time="2025-11-01T10:04:00.173578999Z" level=info msg="connecting to shim c4ac6b7d5e131b503b58f677a63e7933a3b0462200ae0e3e96d2700bbfed0bc9" address="unix:///run/containerd/s/2356919ae548187f208d5fcd6856627efec7c46233b746ae8abe6e5ac0788cff" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:04:00.199821 systemd[1]: Started cri-containerd-c4ac6b7d5e131b503b58f677a63e7933a3b0462200ae0e3e96d2700bbfed0bc9.scope - libcontainer container c4ac6b7d5e131b503b58f677a63e7933a3b0462200ae0e3e96d2700bbfed0bc9. Nov 1 10:04:00.204881 systemd[1]: Started cri-containerd-61ad4488f46c7803bf021b523e9d050db7402d01ccd3820c4b41bb48762f9cf3.scope - libcontainer container 61ad4488f46c7803bf021b523e9d050db7402d01ccd3820c4b41bb48762f9cf3. 
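The "connecting to shim ... protocol=ttrpc version=3" entries above mean containerd dials the shim's unix socket and speaks ttrpc (a lightweight gRPC variant, github.com/containerd/ttrpc) over it. A minimal sketch of that dial; the socket path is copied from the log, the socket is only reachable as root on the node, and the actual task RPCs need generated service stubs that are omitted here:

```go
// Illustrative ttrpc dial to a containerd shim socket.
package main

import (
	"fmt"
	"log"
	"net"

	"github.com/containerd/ttrpc"
)

func main() {
	conn, err := net.Dial("unix", "/run/containerd/s/1099a768e2baae366b3ca9e38811a999692ed37211fee91a0a50f4b0dcbb43d6")
	if err != nil {
		log.Fatal(err)
	}
	client := ttrpc.NewClient(conn)
	defer client.Close()
	fmt.Println("ttrpc client connected; shim task RPCs (Create/Start/...) would go through it")
}
```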
Nov 1 10:04:00.217021 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:04:00.219905 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:04:00.363067 containerd[1608]: time="2025-11-01T10:04:00.363010842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b7474bcb8-zhrsz,Uid:e3758e25-c85f-48dd-a940-aa84442da027,Namespace:calico-system,Attempt:0,} returns sandbox id \"c4ac6b7d5e131b503b58f677a63e7933a3b0462200ae0e3e96d2700bbfed0bc9\"" Nov 1 10:04:00.364684 containerd[1608]: time="2025-11-01T10:04:00.364650268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 10:04:00.475384 containerd[1608]: time="2025-11-01T10:04:00.475342051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-799ff88558-vv4cn,Uid:99cd5c6d-98ce-4f16-8916-17196a6ab807,Namespace:calico-system,Attempt:0,} returns sandbox id \"61ad4488f46c7803bf021b523e9d050db7402d01ccd3820c4b41bb48762f9cf3\"" Nov 1 10:04:00.888049 containerd[1608]: time="2025-11-01T10:04:00.887896055Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:04:00.894716 containerd[1608]: time="2025-11-01T10:04:00.894648815Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 10:04:00.894779 containerd[1608]: time="2025-11-01T10:04:00.894723055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 1 10:04:00.894998 kubelet[2779]: E1101 10:04:00.894950 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 10:04:00.895402 kubelet[2779]: E1101 10:04:00.895011 2779 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 10:04:00.895402 kubelet[2779]: E1101 10:04:00.895121 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:00.895402 kubelet[2779]: E1101 10:04:00.895199 2779 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-b7474bcb8-zhrsz_calico-system(e3758e25-c85f-48dd-a940-aa84442da027): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 10:04:00.895664 containerd[1608]: time="2025-11-01T10:04:00.895605351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ll4v8,Uid:ffd8f8af-8b24-4377-881c-64726e81556e,Namespace:kube-system,Attempt:0,}" Nov 1 10:04:00.895839 containerd[1608]: time="2025-11-01T10:04:00.895810445Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 10:04:00.897753 containerd[1608]: time="2025-11-01T10:04:00.897560800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-87p4w,Uid:f1319238-e7a7-4b12-ace8-ba38b42b1817,Namespace:calico-system,Attempt:0,}" Nov 1 10:04:00.899522 kubelet[2779]: E1101 10:04:00.899438 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:00.899995 containerd[1608]: time="2025-11-01T10:04:00.899951867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-n7rkn,Uid:b60ca747-ed05-425c-b136-cbad03ffb49a,Namespace:kube-system,Attempt:0,}" Nov 1 10:04:01.027527 systemd-networkd[1500]: cali74aa82912a9: Link UP Nov 1 10:04:01.028496 systemd-networkd[1500]: cali74aa82912a9: Gained carrier Nov 1 10:04:01.028809 systemd-networkd[1500]: cali5828b1505e4: Gained IPv6LL Nov 1 10:04:01.046117 containerd[1608]: 2025-11-01 10:04:00.935 [INFO][4226] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 10:04:01.046117 containerd[1608]: 2025-11-01 10:04:00.950 [INFO][4226] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--ll4v8-eth0 coredns-66bc5c9577- kube-system ffd8f8af-8b24-4377-881c-64726e81556e 893 0 2025-11-01 10:03:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-ll4v8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali74aa82912a9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778" Namespace="kube-system" Pod="coredns-66bc5c9577-ll4v8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ll4v8-" Nov 1 10:04:01.046117 containerd[1608]: 2025-11-01 10:04:00.950 [INFO][4226] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778" Namespace="kube-system" Pod="coredns-66bc5c9577-ll4v8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ll4v8-eth0" Nov 1 10:04:01.046117 containerd[1608]: 2025-11-01 10:04:00.982 [INFO][4269] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778" HandleID="k8s-pod-network.18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778" Workload="localhost-k8s-coredns--66bc5c9577--ll4v8-eth0" Nov 1 10:04:01.046590 containerd[1608]: 2025-11-01 10:04:00.982 [INFO][4269] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778" HandleID="k8s-pod-network.18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778" Workload="localhost-k8s-coredns--66bc5c9577--ll4v8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c72c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-ll4v8", "timestamp":"2025-11-01 10:04:00.982324242 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:04:01.046590 containerd[1608]: 2025-11-01 10:04:00.982 [INFO][4269] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:04:01.046590 containerd[1608]: 2025-11-01 10:04:00.982 [INFO][4269] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 10:04:01.046590 containerd[1608]: 2025-11-01 10:04:00.982 [INFO][4269] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:04:01.046590 containerd[1608]: 2025-11-01 10:04:00.992 [INFO][4269] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778" host="localhost" Nov 1 10:04:01.046590 containerd[1608]: 2025-11-01 10:04:01.000 [INFO][4269] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:04:01.046590 containerd[1608]: 2025-11-01 10:04:01.004 [INFO][4269] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:04:01.046590 containerd[1608]: 2025-11-01 10:04:01.006 [INFO][4269] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:04:01.046590 containerd[1608]: 2025-11-01 10:04:01.007 [INFO][4269] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:04:01.046590 containerd[1608]: 2025-11-01 10:04:01.007 [INFO][4269] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778" host="localhost" Nov 1 10:04:01.046955 containerd[1608]: 2025-11-01 10:04:01.009 [INFO][4269] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778 Nov 1 10:04:01.046955 containerd[1608]: 2025-11-01 10:04:01.012 [INFO][4269] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778" host="localhost" Nov 1 10:04:01.046955 containerd[1608]: 2025-11-01 10:04:01.018 [INFO][4269] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778" host="localhost" Nov 1 10:04:01.046955 containerd[1608]: 2025-11-01 10:04:01.018 [INFO][4269] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778" host="localhost" Nov 1 10:04:01.046955 containerd[1608]: 2025-11-01 10:04:01.018 [INFO][4269] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 10:04:01.046955 containerd[1608]: 2025-11-01 10:04:01.018 [INFO][4269] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778" HandleID="k8s-pod-network.18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778" Workload="localhost-k8s-coredns--66bc5c9577--ll4v8-eth0" Nov 1 10:04:01.047259 containerd[1608]: 2025-11-01 10:04:01.023 [INFO][4226] cni-plugin/k8s.go 418: Populated endpoint ContainerID="18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778" Namespace="kube-system" Pod="coredns-66bc5c9577-ll4v8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ll4v8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--ll4v8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ffd8f8af-8b24-4377-881c-64726e81556e", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 3, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-ll4v8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali74aa82912a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:04:01.047259 containerd[1608]: 2025-11-01 10:04:01.024 [INFO][4226] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778" Namespace="kube-system" Pod="coredns-66bc5c9577-ll4v8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ll4v8-eth0" Nov 1 10:04:01.047259 containerd[1608]: 2025-11-01 10:04:01.024 [INFO][4226] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74aa82912a9 ContainerID="18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778" Namespace="kube-system" Pod="coredns-66bc5c9577-ll4v8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ll4v8-eth0" Nov 1 10:04:01.047259 containerd[1608]: 2025-11-01 10:04:01.028 
[INFO][4226] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778" Namespace="kube-system" Pod="coredns-66bc5c9577-ll4v8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ll4v8-eth0" Nov 1 10:04:01.047259 containerd[1608]: 2025-11-01 10:04:01.031 [INFO][4226] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778" Namespace="kube-system" Pod="coredns-66bc5c9577-ll4v8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ll4v8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--ll4v8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ffd8f8af-8b24-4377-881c-64726e81556e", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 3, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778", Pod:"coredns-66bc5c9577-ll4v8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali74aa82912a9", MAC:"2e:45:66:42:d7:ba", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:04:01.047259 containerd[1608]: 2025-11-01 10:04:01.041 [INFO][4226] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778" Namespace="kube-system" Pod="coredns-66bc5c9577-ll4v8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ll4v8-eth0" Nov 1 10:04:01.076555 containerd[1608]: time="2025-11-01T10:04:01.076513613Z" level=info msg="connecting to shim 18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778" address="unix:///run/containerd/s/702d22870cda7aa526df0babe8d409b5bf355bbc06ca46806f4231729b9f25bf" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:04:01.103113 systemd[1]: Started 
cri-containerd-18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778.scope - libcontainer container 18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778. Nov 1 10:04:01.128261 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:04:01.131310 systemd-networkd[1500]: calic8eea12937e: Link UP Nov 1 10:04:01.132213 systemd-networkd[1500]: calic8eea12937e: Gained carrier Nov 1 10:04:01.152668 containerd[1608]: 2025-11-01 10:04:00.934 [INFO][4236] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 10:04:01.152668 containerd[1608]: 2025-11-01 10:04:00.955 [INFO][4236] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--87p4w-eth0 csi-node-driver- calico-system f1319238-e7a7-4b12-ace8-ba38b42b1817 767 0 2025-11-01 10:03:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-87p4w eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic8eea12937e [] [] }} ContainerID="8f6c4497f47d68d384ebac38eee5b2573291dd643c8ba24d8349bc4e20297c2e" Namespace="calico-system" Pod="csi-node-driver-87p4w" WorkloadEndpoint="localhost-k8s-csi--node--driver--87p4w-" Nov 1 10:04:01.152668 containerd[1608]: 2025-11-01 10:04:00.955 [INFO][4236] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f6c4497f47d68d384ebac38eee5b2573291dd643c8ba24d8349bc4e20297c2e" Namespace="calico-system" Pod="csi-node-driver-87p4w" WorkloadEndpoint="localhost-k8s-csi--node--driver--87p4w-eth0" Nov 1 10:04:01.152668 containerd[1608]: 2025-11-01 10:04:00.994 [INFO][4275] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f6c4497f47d68d384ebac38eee5b2573291dd643c8ba24d8349bc4e20297c2e" HandleID="k8s-pod-network.8f6c4497f47d68d384ebac38eee5b2573291dd643c8ba24d8349bc4e20297c2e" Workload="localhost-k8s-csi--node--driver--87p4w-eth0" Nov 1 10:04:01.152668 containerd[1608]: 2025-11-01 10:04:00.994 [INFO][4275] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8f6c4497f47d68d384ebac38eee5b2573291dd643c8ba24d8349bc4e20297c2e" HandleID="k8s-pod-network.8f6c4497f47d68d384ebac38eee5b2573291dd643c8ba24d8349bc4e20297c2e" Workload="localhost-k8s-csi--node--driver--87p4w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019e7c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-87p4w", "timestamp":"2025-11-01 10:04:00.994094798 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:04:01.152668 containerd[1608]: 2025-11-01 10:04:00.994 [INFO][4275] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:04:01.152668 containerd[1608]: 2025-11-01 10:04:01.018 [INFO][4275] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 10:04:01.152668 containerd[1608]: 2025-11-01 10:04:01.019 [INFO][4275] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:04:01.152668 containerd[1608]: 2025-11-01 10:04:01.093 [INFO][4275] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8f6c4497f47d68d384ebac38eee5b2573291dd643c8ba24d8349bc4e20297c2e" host="localhost" Nov 1 10:04:01.152668 containerd[1608]: 2025-11-01 10:04:01.101 [INFO][4275] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:04:01.152668 containerd[1608]: 2025-11-01 10:04:01.107 [INFO][4275] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:04:01.152668 containerd[1608]: 2025-11-01 10:04:01.108 [INFO][4275] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:04:01.152668 containerd[1608]: 2025-11-01 10:04:01.110 [INFO][4275] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:04:01.152668 containerd[1608]: 2025-11-01 10:04:01.110 [INFO][4275] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8f6c4497f47d68d384ebac38eee5b2573291dd643c8ba24d8349bc4e20297c2e" host="localhost" Nov 1 10:04:01.152668 containerd[1608]: 2025-11-01 10:04:01.112 [INFO][4275] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8f6c4497f47d68d384ebac38eee5b2573291dd643c8ba24d8349bc4e20297c2e Nov 1 10:04:01.152668 containerd[1608]: 2025-11-01 10:04:01.116 [INFO][4275] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8f6c4497f47d68d384ebac38eee5b2573291dd643c8ba24d8349bc4e20297c2e" host="localhost" Nov 1 10:04:01.152668 containerd[1608]: 2025-11-01 10:04:01.121 [INFO][4275] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.8f6c4497f47d68d384ebac38eee5b2573291dd643c8ba24d8349bc4e20297c2e" host="localhost" Nov 1 10:04:01.152668 containerd[1608]: 2025-11-01 10:04:01.121 [INFO][4275] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8f6c4497f47d68d384ebac38eee5b2573291dd643c8ba24d8349bc4e20297c2e" host="localhost" Nov 1 10:04:01.152668 containerd[1608]: 2025-11-01 10:04:01.121 [INFO][4275] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
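The [4275] trace above is one complete pass through Calico's block-affinity IPAM: take the host-wide lock, find the block affine to this host (192.168.88.128/26), claim the next free address in it, write the block back, release the lock. That is why the pods on this node receive consecutive addresses (.131, .132, .133, ...). Below is a minimal Go sketch of that scheme; the types and names are illustrative only, not Calico's actual ipam package.

package main

import (
	"fmt"
	"net"
	"sync"
)

// block models one /26 IPAM block affine to a host, with a used-bit per ordinal.
type block struct {
	cidr *net.IPNet
	used [64]bool // a /26 holds 64 addresses (ordinals 0-63)
}

type allocator struct {
	mu     sync.Mutex // stands in for the "host-wide IPAM lock" in the log
	blocks map[string]*block
}

// autoAssign mirrors the logged steps: lock, try the host's affine block,
// claim the first free ordinal, "write the block", release the lock.
func (a *allocator) autoAssign(host string) (net.IP, error) {
	a.mu.Lock()         // "About to acquire host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."

	b, ok := a.blocks[host] // "Trying affinity for 192.168.88.128/26"
	if !ok {
		return nil, fmt.Errorf("no block affine to host %s", host)
	}
	for ord := range b.used { // "Attempting to assign 1 addresses from block"
		if b.used[ord] {
			continue
		}
		b.used[ord] = true // "Writing block in order to claim IPs"
		ip := make(net.IP, 4)
		copy(ip, b.cidr.IP.To4())
		ip[3] += byte(ord)
		return ip, nil // "Successfully claimed IPs"
	}
	return nil, fmt.Errorf("block exhausted for host %s", host)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	a := &allocator{blocks: map[string]*block{"localhost": {cidr: cidr}}}
	// Reserve ordinals 0-2 so the next grant matches the .131 seen in the log.
	for i := 0; i < 3; i++ {
		a.blocks["localhost"].used[i] = true
	}
	ip, _ := a.autoAssign("localhost")
	fmt.Println(ip) // 192.168.88.131
}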
Nov 1 10:04:01.152668 containerd[1608]: 2025-11-01 10:04:01.121 [INFO][4275] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8f6c4497f47d68d384ebac38eee5b2573291dd643c8ba24d8349bc4e20297c2e" HandleID="k8s-pod-network.8f6c4497f47d68d384ebac38eee5b2573291dd643c8ba24d8349bc4e20297c2e" Workload="localhost-k8s-csi--node--driver--87p4w-eth0" Nov 1 10:04:01.153530 containerd[1608]: 2025-11-01 10:04:01.125 [INFO][4236] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f6c4497f47d68d384ebac38eee5b2573291dd643c8ba24d8349bc4e20297c2e" Namespace="calico-system" Pod="csi-node-driver-87p4w" WorkloadEndpoint="localhost-k8s-csi--node--driver--87p4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--87p4w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f1319238-e7a7-4b12-ace8-ba38b42b1817", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 3, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-87p4w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic8eea12937e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:04:01.153530 containerd[1608]: 2025-11-01 10:04:01.125 [INFO][4236] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8f6c4497f47d68d384ebac38eee5b2573291dd643c8ba24d8349bc4e20297c2e" Namespace="calico-system" Pod="csi-node-driver-87p4w" WorkloadEndpoint="localhost-k8s-csi--node--driver--87p4w-eth0" Nov 1 10:04:01.153530 containerd[1608]: 2025-11-01 10:04:01.125 [INFO][4236] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic8eea12937e ContainerID="8f6c4497f47d68d384ebac38eee5b2573291dd643c8ba24d8349bc4e20297c2e" Namespace="calico-system" Pod="csi-node-driver-87p4w" WorkloadEndpoint="localhost-k8s-csi--node--driver--87p4w-eth0" Nov 1 10:04:01.153530 containerd[1608]: 2025-11-01 10:04:01.132 [INFO][4236] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f6c4497f47d68d384ebac38eee5b2573291dd643c8ba24d8349bc4e20297c2e" Namespace="calico-system" Pod="csi-node-driver-87p4w" WorkloadEndpoint="localhost-k8s-csi--node--driver--87p4w-eth0" Nov 1 10:04:01.153530 containerd[1608]: 2025-11-01 10:04:01.134 [INFO][4236] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8f6c4497f47d68d384ebac38eee5b2573291dd643c8ba24d8349bc4e20297c2e" Namespace="calico-system" Pod="csi-node-driver-87p4w" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--87p4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--87p4w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f1319238-e7a7-4b12-ace8-ba38b42b1817", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 3, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8f6c4497f47d68d384ebac38eee5b2573291dd643c8ba24d8349bc4e20297c2e", Pod:"csi-node-driver-87p4w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic8eea12937e", MAC:"8a:3d:b3:4e:23:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:04:01.153530 containerd[1608]: 2025-11-01 10:04:01.147 [INFO][4236] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8f6c4497f47d68d384ebac38eee5b2573291dd643c8ba24d8349bc4e20297c2e" Namespace="calico-system" Pod="csi-node-driver-87p4w" WorkloadEndpoint="localhost-k8s-csi--node--driver--87p4w-eth0" Nov 1 10:04:01.177186 containerd[1608]: time="2025-11-01T10:04:01.177118611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ll4v8,Uid:ffd8f8af-8b24-4377-881c-64726e81556e,Namespace:kube-system,Attempt:0,} returns sandbox id \"18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778\"" Nov 1 10:04:01.178248 kubelet[2779]: E1101 10:04:01.178196 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:01.182597 containerd[1608]: time="2025-11-01T10:04:01.182525055Z" level=info msg="CreateContainer within sandbox \"18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 10:04:01.204302 containerd[1608]: time="2025-11-01T10:04:01.204223589Z" level=info msg="connecting to shim 8f6c4497f47d68d384ebac38eee5b2573291dd643c8ba24d8349bc4e20297c2e" address="unix:///run/containerd/s/feee612075552b80027cdc1361bb2c28a4004fa2e3feaf4ef178c11fc71b2f61" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:04:01.218458 containerd[1608]: time="2025-11-01T10:04:01.218401572Z" level=info msg="Container 1ba1c9de3c1fdf2942bd64a2d866d1b7fe8b6d25d688265a673da3dde3a222c4: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:04:01.225833 containerd[1608]: time="2025-11-01T10:04:01.225782641Z" level=info msg="CreateContainer within sandbox \"18df15f05c5e1feba93980a1ade4819462947cc26d75487e091d8d797daed778\" for &ContainerMetadata{Name:coredns,Attempt:0,} 
returns container id \"1ba1c9de3c1fdf2942bd64a2d866d1b7fe8b6d25d688265a673da3dde3a222c4\"" Nov 1 10:04:01.229587 containerd[1608]: time="2025-11-01T10:04:01.228416603Z" level=info msg="StartContainer for \"1ba1c9de3c1fdf2942bd64a2d866d1b7fe8b6d25d688265a673da3dde3a222c4\"" Nov 1 10:04:01.231826 containerd[1608]: time="2025-11-01T10:04:01.231781367Z" level=info msg="connecting to shim 1ba1c9de3c1fdf2942bd64a2d866d1b7fe8b6d25d688265a673da3dde3a222c4" address="unix:///run/containerd/s/702d22870cda7aa526df0babe8d409b5bf355bbc06ca46806f4231729b9f25bf" protocol=ttrpc version=3 Nov 1 10:04:01.233946 containerd[1608]: time="2025-11-01T10:04:01.233913667Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:04:01.235153 containerd[1608]: time="2025-11-01T10:04:01.235109922Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 10:04:01.235227 containerd[1608]: time="2025-11-01T10:04:01.235183951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 1 10:04:01.235384 kubelet[2779]: E1101 10:04:01.235340 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 10:04:01.235441 kubelet[2779]: E1101 10:04:01.235393 2779 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 10:04:01.235821 kubelet[2779]: E1101 10:04:01.235609 2779 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-799ff88558-vv4cn_calico-system(99cd5c6d-98ce-4f16-8916-17196a6ab807): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 10:04:01.235821 kubelet[2779]: E1101 10:04:01.235664 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-799ff88558-vv4cn" podUID="99cd5c6d-98ce-4f16-8916-17196a6ab807" Nov 1 10:04:01.235904 containerd[1608]: time="2025-11-01T10:04:01.235818672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 10:04:01.241191 systemd[1]: Started cri-containerd-8f6c4497f47d68d384ebac38eee5b2573291dd643c8ba24d8349bc4e20297c2e.scope - libcontainer container 8f6c4497f47d68d384ebac38eee5b2573291dd643c8ba24d8349bc4e20297c2e. 
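The failure pattern that starts here repeats for every ghcr.io/flatcar/calico/*:v3.30.4 image in the log: containerd's resolver gets "fetch failed after status: 404 Not Found" from ghcr.io, so the pull surfaces as NotFound rather than an auth or network error. The Go sketch below reproduces that resolve step against the registry HTTP API; the token endpoint and scope format follow the Docker Registry v2 auth flow as commonly implemented by ghcr.io, and should be treated as assumptions to verify rather than containerd's actual code path.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func resolve(repo, tag string) (int, error) {
	// Step 1: anonymous pull token (even public images need one on ghcr.io).
	tokURL := fmt.Sprintf("https://ghcr.io/token?service=ghcr.io&scope=repository:%s:pull", repo)
	resp, err := http.Get(tokURL)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		return 0, err
	}

	// Step 2: HEAD the manifest, as a registry resolver does; a missing tag
	// comes back 404, which kubelet then reports as ErrImagePull/NotFound.
	req, _ := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	res.Body.Close()
	return res.StatusCode, nil
}

func main() {
	code, err := resolve("flatcar/calico/kube-controllers", "v3.30.4")
	if err != nil {
		panic(err)
	}
	fmt.Println(code) // 404 reproduces the log; 200 would mean the tag exists
}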
Nov 1 10:04:01.246313 systemd-networkd[1500]: calic2e9d99d9c7: Link UP Nov 1 10:04:01.248025 systemd-networkd[1500]: calic2e9d99d9c7: Gained carrier Nov 1 10:04:01.258375 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:04:01.268365 containerd[1608]: 2025-11-01 10:04:00.954 [INFO][4243] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 10:04:01.268365 containerd[1608]: 2025-11-01 10:04:00.969 [INFO][4243] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--n7rkn-eth0 coredns-66bc5c9577- kube-system b60ca747-ed05-425c-b136-cbad03ffb49a 885 0 2025-11-01 10:03:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-n7rkn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic2e9d99d9c7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95" Namespace="kube-system" Pod="coredns-66bc5c9577-n7rkn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--n7rkn-" Nov 1 10:04:01.268365 containerd[1608]: 2025-11-01 10:04:00.969 [INFO][4243] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95" Namespace="kube-system" Pod="coredns-66bc5c9577-n7rkn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--n7rkn-eth0" Nov 1 10:04:01.268365 containerd[1608]: 2025-11-01 10:04:00.998 [INFO][4284] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95" HandleID="k8s-pod-network.ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95" Workload="localhost-k8s-coredns--66bc5c9577--n7rkn-eth0" Nov 1 10:04:01.268365 containerd[1608]: 2025-11-01 10:04:00.998 [INFO][4284] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95" HandleID="k8s-pod-network.ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95" Workload="localhost-k8s-coredns--66bc5c9577--n7rkn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042a460), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-n7rkn", "timestamp":"2025-11-01 10:04:00.998200722 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:04:01.268365 containerd[1608]: 2025-11-01 10:04:00.998 [INFO][4284] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:04:01.268365 containerd[1608]: 2025-11-01 10:04:01.121 [INFO][4284] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 10:04:01.268365 containerd[1608]: 2025-11-01 10:04:01.121 [INFO][4284] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:04:01.268365 containerd[1608]: 2025-11-01 10:04:01.194 [INFO][4284] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95" host="localhost" Nov 1 10:04:01.268365 containerd[1608]: 2025-11-01 10:04:01.200 [INFO][4284] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:04:01.268365 containerd[1608]: 2025-11-01 10:04:01.207 [INFO][4284] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:04:01.268365 containerd[1608]: 2025-11-01 10:04:01.211 [INFO][4284] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:04:01.268365 containerd[1608]: 2025-11-01 10:04:01.213 [INFO][4284] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:04:01.268365 containerd[1608]: 2025-11-01 10:04:01.213 [INFO][4284] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95" host="localhost" Nov 1 10:04:01.268365 containerd[1608]: 2025-11-01 10:04:01.215 [INFO][4284] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95 Nov 1 10:04:01.268365 containerd[1608]: 2025-11-01 10:04:01.219 [INFO][4284] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95" host="localhost" Nov 1 10:04:01.268365 containerd[1608]: 2025-11-01 10:04:01.225 [INFO][4284] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95" host="localhost" Nov 1 10:04:01.268365 containerd[1608]: 2025-11-01 10:04:01.225 [INFO][4284] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95" host="localhost" Nov 1 10:04:01.268365 containerd[1608]: 2025-11-01 10:04:01.226 [INFO][4284] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 10:04:01.268365 containerd[1608]: 2025-11-01 10:04:01.226 [INFO][4284] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95" HandleID="k8s-pod-network.ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95" Workload="localhost-k8s-coredns--66bc5c9577--n7rkn-eth0" Nov 1 10:04:01.268933 containerd[1608]: 2025-11-01 10:04:01.233 [INFO][4243] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95" Namespace="kube-system" Pod="coredns-66bc5c9577-n7rkn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--n7rkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--n7rkn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b60ca747-ed05-425c-b136-cbad03ffb49a", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 3, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-n7rkn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic2e9d99d9c7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:04:01.268933 containerd[1608]: 2025-11-01 10:04:01.233 [INFO][4243] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95" Namespace="kube-system" Pod="coredns-66bc5c9577-n7rkn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--n7rkn-eth0" Nov 1 10:04:01.268933 containerd[1608]: 2025-11-01 10:04:01.234 [INFO][4243] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2e9d99d9c7 ContainerID="ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95" Namespace="kube-system" Pod="coredns-66bc5c9577-n7rkn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--n7rkn-eth0" Nov 1 10:04:01.268933 containerd[1608]: 2025-11-01 10:04:01.249 
[INFO][4243] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95" Namespace="kube-system" Pod="coredns-66bc5c9577-n7rkn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--n7rkn-eth0" Nov 1 10:04:01.268933 containerd[1608]: 2025-11-01 10:04:01.250 [INFO][4243] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95" Namespace="kube-system" Pod="coredns-66bc5c9577-n7rkn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--n7rkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--n7rkn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b60ca747-ed05-425c-b136-cbad03ffb49a", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 3, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95", Pod:"coredns-66bc5c9577-n7rkn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic2e9d99d9c7", MAC:"ca:b4:d2:49:0f:60", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:04:01.268933 containerd[1608]: 2025-11-01 10:04:01.263 [INFO][4243] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95" Namespace="kube-system" Pod="coredns-66bc5c9577-n7rkn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--n7rkn-eth0" Nov 1 10:04:01.272865 systemd[1]: Started cri-containerd-1ba1c9de3c1fdf2942bd64a2d866d1b7fe8b6d25d688265a673da3dde3a222c4.scope - libcontainer container 1ba1c9de3c1fdf2942bd64a2d866d1b7fe8b6d25d688265a673da3dde3a222c4. 
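The "Setting the host side veth name to cali..." entries, followed by systemd-networkd reporting the same interface as "Link UP" and "Gained carrier", reflect the CNI plugin creating a veth pair: the host end keeps the cali* name while the peer is moved into the pod's network namespace as eth0. A minimal sketch of that step using the vishvananda/netlink package (requires root; the interface names are taken from the log purely for illustration, and the real plugin additionally moves the peer into the pod netns):

package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: "cali74aa82912a9"}, // host side, visible to systemd-networkd
		PeerName:  "tmp-pod-eth0",                             // the real plugin renames this to eth0 in the pod netns
	}
	if err := netlink.LinkAdd(veth); err != nil {
		log.Fatal(err)
	}
	// Bringing the link up is what systemd-networkd observes as "Link UP";
	// carrier appears once both ends of the pair are up.
	if err := netlink.LinkSetUp(veth); err != nil {
		log.Fatal(err)
	}
}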
Nov 1 10:04:01.296840 containerd[1608]: time="2025-11-01T10:04:01.296671492Z" level=info msg="connecting to shim ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95" address="unix:///run/containerd/s/beb10f92510d62e464de9aa14e910754aaacd8a5ebff2bae4c0461202969f3c4" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:04:01.305727 containerd[1608]: time="2025-11-01T10:04:01.305544921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-87p4w,Uid:f1319238-e7a7-4b12-ace8-ba38b42b1817,Namespace:calico-system,Attempt:0,} returns sandbox id \"8f6c4497f47d68d384ebac38eee5b2573291dd643c8ba24d8349bc4e20297c2e\"" Nov 1 10:04:01.317413 containerd[1608]: time="2025-11-01T10:04:01.317356503Z" level=info msg="StartContainer for \"1ba1c9de3c1fdf2942bd64a2d866d1b7fe8b6d25d688265a673da3dde3a222c4\" returns successfully" Nov 1 10:04:01.339981 systemd[1]: Started cri-containerd-ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95.scope - libcontainer container ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95. Nov 1 10:04:01.349917 systemd-networkd[1500]: cali7033904c3d8: Gained IPv6LL Nov 1 10:04:01.359508 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:04:01.394793 containerd[1608]: time="2025-11-01T10:04:01.394717398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-n7rkn,Uid:b60ca747-ed05-425c-b136-cbad03ffb49a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95\"" Nov 1 10:04:01.396424 kubelet[2779]: E1101 10:04:01.396218 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:01.401416 containerd[1608]: time="2025-11-01T10:04:01.401363889Z" level=info msg="CreateContainer within sandbox \"ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 10:04:01.426678 containerd[1608]: time="2025-11-01T10:04:01.426535159Z" level=info msg="Container 3f5a876903c1d5ba038b04d31670bb3c06066ccc0856519dd361bf4d90ce0d5f: CDI devices from CRI Config.CDIDevices: []" Nov 1 10:04:01.434383 containerd[1608]: time="2025-11-01T10:04:01.434336035Z" level=info msg="CreateContainer within sandbox \"ba5d1ccc1309a19e87f3c6b4547c03f17dbabc67d5f03279bf6a3f86bb28aa95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3f5a876903c1d5ba038b04d31670bb3c06066ccc0856519dd361bf4d90ce0d5f\"" Nov 1 10:04:01.435070 containerd[1608]: time="2025-11-01T10:04:01.435035328Z" level=info msg="StartContainer for \"3f5a876903c1d5ba038b04d31670bb3c06066ccc0856519dd361bf4d90ce0d5f\"" Nov 1 10:04:01.435838 containerd[1608]: time="2025-11-01T10:04:01.435812165Z" level=info msg="connecting to shim 3f5a876903c1d5ba038b04d31670bb3c06066ccc0856519dd361bf4d90ce0d5f" address="unix:///run/containerd/s/beb10f92510d62e464de9aa14e910754aaacd8a5ebff2bae4c0461202969f3c4" protocol=ttrpc version=3 Nov 1 10:04:01.466922 systemd[1]: Started cri-containerd-3f5a876903c1d5ba038b04d31670bb3c06066ccc0856519dd361bf4d90ce0d5f.scope - libcontainer container 3f5a876903c1d5ba038b04d31670bb3c06066ccc0856519dd361bf4d90ce0d5f. 
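The recurring kubelet "Nameserver limits exceeded" warnings happen because the node's resolv.conf lists more nameservers than the resolver limit of three (glibc's MAXNS): kubelet keeps the first three — here 1.1.1.1, 1.0.0.1, 8.8.8.8 — and omits the rest. A sketch of that cap follows; the parsing is illustrative, not kubelet's code, and the fourth server is a stand-in since the omitted entry does not appear in the log.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; kubelet warns and truncates beyond this

func applyLimit(resolvConf string) (kept, dropped []string) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	var servers []string
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) <= maxNameservers {
		return servers, nil
	}
	return servers[:maxNameservers], servers[maxNameservers:]
}

func main() {
	// The fourth entry is a guess at the omitted server, for illustration only.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	kept, dropped := applyLimit(conf)
	fmt.Println("applied:", strings.Join(kept, " ")) // applied: 1.1.1.1 1.0.0.1 8.8.8.8
	fmt.Println("omitted:", strings.Join(dropped, " "))
}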
Nov 1 10:04:01.503301 containerd[1608]: time="2025-11-01T10:04:01.503263516Z" level=info msg="StartContainer for \"3f5a876903c1d5ba038b04d31670bb3c06066ccc0856519dd361bf4d90ce0d5f\" returns successfully" Nov 1 10:04:01.601661 containerd[1608]: time="2025-11-01T10:04:01.601584649Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:04:01.603226 containerd[1608]: time="2025-11-01T10:04:01.603161397Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 10:04:01.603226 containerd[1608]: time="2025-11-01T10:04:01.603205751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 1 10:04:01.603457 kubelet[2779]: E1101 10:04:01.603410 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 10:04:01.603457 kubelet[2779]: E1101 10:04:01.603454 2779 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 10:04:01.603754 kubelet[2779]: E1101 10:04:01.603714 2779 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-b7474bcb8-zhrsz_calico-system(e3758e25-c85f-48dd-a940-aa84442da027): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 10:04:01.604117 kubelet[2779]: E1101 10:04:01.603777 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b7474bcb8-zhrsz" podUID="e3758e25-c85f-48dd-a940-aa84442da027" Nov 1 10:04:01.604207 containerd[1608]: time="2025-11-01T10:04:01.603815335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 10:04:01.893995 containerd[1608]: time="2025-11-01T10:04:01.893792226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f4f8b5f58-5vt8f,Uid:36a5d4ac-e857-4a98-81db-164d84811165,Namespace:calico-apiserver,Attempt:0,}" Nov 1 10:04:01.936764 containerd[1608]: time="2025-11-01T10:04:01.936564802Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:04:01.937978 containerd[1608]: time="2025-11-01T10:04:01.937928982Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 10:04:01.938157 containerd[1608]: time="2025-11-01T10:04:01.937997009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 1 10:04:01.938328 kubelet[2779]: E1101 10:04:01.938268 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 10:04:01.938739 kubelet[2779]: E1101 10:04:01.938341 2779 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 10:04:01.938739 kubelet[2779]: E1101 10:04:01.938448 2779 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-87p4w_calico-system(f1319238-e7a7-4b12-ace8-ba38b42b1817): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 10:04:01.939831 containerd[1608]: time="2025-11-01T10:04:01.939796345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 10:04:02.006017 systemd-networkd[1500]: calicfb1b7ec98c: Link UP Nov 1 10:04:02.006821 systemd-networkd[1500]: calicfb1b7ec98c: Gained carrier Nov 1 10:04:02.021018 containerd[1608]: 2025-11-01 10:04:01.925 [INFO][4543] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 10:04:02.021018 containerd[1608]: 2025-11-01 10:04:01.935 [INFO][4543] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f4f8b5f58--5vt8f-eth0 calico-apiserver-7f4f8b5f58- calico-apiserver 36a5d4ac-e857-4a98-81db-164d84811165 888 0 2025-11-01 10:03:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f4f8b5f58 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f4f8b5f58-5vt8f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicfb1b7ec98c [] [] }} ContainerID="ed87a823acaf0cea88d1494e56882c6895fc9db94fa1f782879bdb2272402109" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f8b5f58-5vt8f" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f8b5f58--5vt8f-" Nov 1 10:04:02.021018 containerd[1608]: 2025-11-01 10:04:01.935 [INFO][4543] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed87a823acaf0cea88d1494e56882c6895fc9db94fa1f782879bdb2272402109" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f8b5f58-5vt8f" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f8b5f58--5vt8f-eth0" Nov 1 10:04:02.021018 containerd[1608]: 2025-11-01 10:04:01.963 [INFO][4558] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ed87a823acaf0cea88d1494e56882c6895fc9db94fa1f782879bdb2272402109" HandleID="k8s-pod-network.ed87a823acaf0cea88d1494e56882c6895fc9db94fa1f782879bdb2272402109" Workload="localhost-k8s-calico--apiserver--7f4f8b5f58--5vt8f-eth0" Nov 1 10:04:02.021018 containerd[1608]: 2025-11-01 10:04:01.963 [INFO][4558] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ed87a823acaf0cea88d1494e56882c6895fc9db94fa1f782879bdb2272402109" HandleID="k8s-pod-network.ed87a823acaf0cea88d1494e56882c6895fc9db94fa1f782879bdb2272402109" Workload="localhost-k8s-calico--apiserver--7f4f8b5f58--5vt8f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000503b80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f4f8b5f58-5vt8f", "timestamp":"2025-11-01 10:04:01.963466979 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:04:02.021018 containerd[1608]: 2025-11-01 10:04:01.963 [INFO][4558] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:04:02.021018 containerd[1608]: 2025-11-01 10:04:01.964 [INFO][4558] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 10:04:02.021018 containerd[1608]: 2025-11-01 10:04:01.964 [INFO][4558] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:04:02.021018 containerd[1608]: 2025-11-01 10:04:01.971 [INFO][4558] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed87a823acaf0cea88d1494e56882c6895fc9db94fa1f782879bdb2272402109" host="localhost" Nov 1 10:04:02.021018 containerd[1608]: 2025-11-01 10:04:01.976 [INFO][4558] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:04:02.021018 containerd[1608]: 2025-11-01 10:04:01.982 [INFO][4558] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:04:02.021018 containerd[1608]: 2025-11-01 10:04:01.984 [INFO][4558] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:04:02.021018 containerd[1608]: 2025-11-01 10:04:01.986 [INFO][4558] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:04:02.021018 containerd[1608]: 2025-11-01 10:04:01.986 [INFO][4558] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ed87a823acaf0cea88d1494e56882c6895fc9db94fa1f782879bdb2272402109" host="localhost" Nov 1 10:04:02.021018 containerd[1608]: 2025-11-01 10:04:01.988 [INFO][4558] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ed87a823acaf0cea88d1494e56882c6895fc9db94fa1f782879bdb2272402109 Nov 1 10:04:02.021018 containerd[1608]: 2025-11-01 10:04:01.992 [INFO][4558] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ed87a823acaf0cea88d1494e56882c6895fc9db94fa1f782879bdb2272402109" host="localhost" Nov 1 10:04:02.021018 containerd[1608]: 2025-11-01 10:04:01.998 [INFO][4558] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.ed87a823acaf0cea88d1494e56882c6895fc9db94fa1f782879bdb2272402109" host="localhost" Nov 1 10:04:02.021018 containerd[1608]: 2025-11-01 10:04:01.999 [INFO][4558] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] 
handle="k8s-pod-network.ed87a823acaf0cea88d1494e56882c6895fc9db94fa1f782879bdb2272402109" host="localhost" Nov 1 10:04:02.021018 containerd[1608]: 2025-11-01 10:04:01.999 [INFO][4558] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 10:04:02.021018 containerd[1608]: 2025-11-01 10:04:01.999 [INFO][4558] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ed87a823acaf0cea88d1494e56882c6895fc9db94fa1f782879bdb2272402109" HandleID="k8s-pod-network.ed87a823acaf0cea88d1494e56882c6895fc9db94fa1f782879bdb2272402109" Workload="localhost-k8s-calico--apiserver--7f4f8b5f58--5vt8f-eth0" Nov 1 10:04:02.021649 containerd[1608]: 2025-11-01 10:04:02.002 [INFO][4543] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed87a823acaf0cea88d1494e56882c6895fc9db94fa1f782879bdb2272402109" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f8b5f58-5vt8f" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f8b5f58--5vt8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f4f8b5f58--5vt8f-eth0", GenerateName:"calico-apiserver-7f4f8b5f58-", Namespace:"calico-apiserver", SelfLink:"", UID:"36a5d4ac-e857-4a98-81db-164d84811165", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 3, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f4f8b5f58", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f4f8b5f58-5vt8f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicfb1b7ec98c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:04:02.021649 containerd[1608]: 2025-11-01 10:04:02.003 [INFO][4543] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ed87a823acaf0cea88d1494e56882c6895fc9db94fa1f782879bdb2272402109" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f8b5f58-5vt8f" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f8b5f58--5vt8f-eth0" Nov 1 10:04:02.021649 containerd[1608]: 2025-11-01 10:04:02.003 [INFO][4543] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicfb1b7ec98c ContainerID="ed87a823acaf0cea88d1494e56882c6895fc9db94fa1f782879bdb2272402109" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f8b5f58-5vt8f" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f8b5f58--5vt8f-eth0" Nov 1 10:04:02.021649 containerd[1608]: 2025-11-01 10:04:02.007 [INFO][4543] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed87a823acaf0cea88d1494e56882c6895fc9db94fa1f782879bdb2272402109" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f8b5f58-5vt8f" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f8b5f58--5vt8f-eth0" Nov 1 10:04:02.021649 containerd[1608]: 2025-11-01 10:04:02.007 [INFO][4543] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ed87a823acaf0cea88d1494e56882c6895fc9db94fa1f782879bdb2272402109" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f8b5f58-5vt8f" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f8b5f58--5vt8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f4f8b5f58--5vt8f-eth0", GenerateName:"calico-apiserver-7f4f8b5f58-", Namespace:"calico-apiserver", SelfLink:"", UID:"36a5d4ac-e857-4a98-81db-164d84811165", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 3, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f4f8b5f58", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed87a823acaf0cea88d1494e56882c6895fc9db94fa1f782879bdb2272402109", Pod:"calico-apiserver-7f4f8b5f58-5vt8f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicfb1b7ec98c", MAC:"aa:52:6e:0b:b1:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:04:02.021649 containerd[1608]: 2025-11-01 10:04:02.016 [INFO][4543] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed87a823acaf0cea88d1494e56882c6895fc9db94fa1f782879bdb2272402109" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f8b5f58-5vt8f" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f8b5f58--5vt8f-eth0" Nov 1 10:04:02.055719 containerd[1608]: time="2025-11-01T10:04:02.055661121Z" level=info msg="connecting to shim ed87a823acaf0cea88d1494e56882c6895fc9db94fa1f782879bdb2272402109" address="unix:///run/containerd/s/01650ae5325fd746cef01fa6b1c28fdb9a2f2f4738878abd14ff055e345e56eb" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:04:02.067999 kubelet[2779]: E1101 10:04:02.067967 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:02.073803 kubelet[2779]: E1101 10:04:02.073722 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:02.074834 kubelet[2779]: E1101 10:04:02.074809 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-799ff88558-vv4cn" podUID="99cd5c6d-98ce-4f16-8916-17196a6ab807" Nov 1 10:04:02.075764 kubelet[2779]: E1101 10:04:02.075706 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b7474bcb8-zhrsz" podUID="e3758e25-c85f-48dd-a940-aa84442da027" Nov 1 10:04:02.089853 kubelet[2779]: I1101 10:04:02.088504 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-n7rkn" podStartSLOduration=38.088488513 podStartE2EDuration="38.088488513s" podCreationTimestamp="2025-11-01 10:03:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:04:02.087064682 +0000 UTC m=+44.433501380" watchObservedRunningTime="2025-11-01 10:04:02.088488513 +0000 UTC m=+44.434925202" Nov 1 10:04:02.102020 systemd[1]: Started cri-containerd-ed87a823acaf0cea88d1494e56882c6895fc9db94fa1f782879bdb2272402109.scope - libcontainer container ed87a823acaf0cea88d1494e56882c6895fc9db94fa1f782879bdb2272402109. 
Nov 1 10:04:02.129493 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:04:02.129782 kubelet[2779]: I1101 10:04:02.129576 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ll4v8" podStartSLOduration=38.129555184 podStartE2EDuration="38.129555184s" podCreationTimestamp="2025-11-01 10:03:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:04:02.117135532 +0000 UTC m=+44.463572220" watchObservedRunningTime="2025-11-01 10:04:02.129555184 +0000 UTC m=+44.475991882" Nov 1 10:04:02.171144 containerd[1608]: time="2025-11-01T10:04:02.171011908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f4f8b5f58-5vt8f,Uid:36a5d4ac-e857-4a98-81db-164d84811165,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ed87a823acaf0cea88d1494e56882c6895fc9db94fa1f782879bdb2272402109\"" Nov 1 10:04:02.244946 systemd-networkd[1500]: calic8eea12937e: Gained IPv6LL Nov 1 10:04:02.291671 containerd[1608]: time="2025-11-01T10:04:02.291595532Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:04:02.293404 containerd[1608]: time="2025-11-01T10:04:02.293351887Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 10:04:02.293607 containerd[1608]: time="2025-11-01T10:04:02.293481480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 1 10:04:02.293752 kubelet[2779]: E1101 10:04:02.293677 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 10:04:02.293805 kubelet[2779]: E1101 10:04:02.293761 2779 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 10:04:02.294317 kubelet[2779]: E1101 10:04:02.293995 2779 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-87p4w_calico-system(f1319238-e7a7-4b12-ace8-ba38b42b1817): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 10:04:02.294317 kubelet[2779]: E1101 10:04:02.294065 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-87p4w" podUID="f1319238-e7a7-4b12-ace8-ba38b42b1817" Nov 1 10:04:02.294428 containerd[1608]: time="2025-11-01T10:04:02.294175302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 10:04:02.501001 systemd-networkd[1500]: cali74aa82912a9: Gained IPv6LL Nov 1 10:04:02.590855 containerd[1608]: time="2025-11-01T10:04:02.590784493Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:04:02.592773 containerd[1608]: time="2025-11-01T10:04:02.592723121Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 10:04:02.592841 containerd[1608]: time="2025-11-01T10:04:02.592810675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 1 10:04:02.593118 kubelet[2779]: E1101 10:04:02.593052 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:04:02.593171 kubelet[2779]: E1101 10:04:02.593124 2779 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:04:02.593242 kubelet[2779]: E1101 10:04:02.593220 2779 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7f4f8b5f58-5vt8f_calico-apiserver(36a5d4ac-e857-4a98-81db-164d84811165): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 10:04:02.593288 kubelet[2779]: E1101 10:04:02.593257 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f4f8b5f58-5vt8f" podUID="36a5d4ac-e857-4a98-81db-164d84811165" Nov 1 10:04:02.628952 systemd-networkd[1500]: calic2e9d99d9c7: Gained IPv6LL Nov 1 10:04:03.074954 kubelet[2779]: E1101 10:04:03.074858 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:03.075651 kubelet[2779]: E1101 10:04:03.075204 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:03.075891 kubelet[2779]: 
E1101 10:04:03.075841 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f4f8b5f58-5vt8f" podUID="36a5d4ac-e857-4a98-81db-164d84811165" Nov 1 10:04:03.076509 kubelet[2779]: E1101 10:04:03.076459 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-87p4w" podUID="f1319238-e7a7-4b12-ace8-ba38b42b1817" Nov 1 10:04:03.170536 systemd[1]: Started sshd@10-10.0.0.64:22-10.0.0.1:48710.service - OpenSSH per-connection server daemon (10.0.0.1:48710). Nov 1 10:04:03.254436 sshd[4647]: Accepted publickey for core from 10.0.0.1 port 48710 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:04:03.256191 sshd-session[4647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:04:03.261230 systemd-logind[1586]: New session 11 of user core. Nov 1 10:04:03.272815 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 1 10:04:03.358298 sshd[4654]: Connection closed by 10.0.0.1 port 48710 Nov 1 10:04:03.358556 sshd-session[4647]: pam_unix(sshd:session): session closed for user core Nov 1 10:04:03.363615 systemd[1]: sshd@10-10.0.0.64:22-10.0.0.1:48710.service: Deactivated successfully. Nov 1 10:04:03.365741 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 10:04:03.366588 systemd-logind[1586]: Session 11 logged out. Waiting for processes to exit. Nov 1 10:04:03.367800 systemd-logind[1586]: Removed session 11. 
Nov 1 10:04:03.892646 containerd[1608]: time="2025-11-01T10:04:03.892591874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-h2vs7,Uid:73e3568f-83c0-4547-b599-b88c34a1197a,Namespace:calico-system,Attempt:0,}" Nov 1 10:04:03.894242 containerd[1608]: time="2025-11-01T10:04:03.894183149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f4f8b5f58-dt86s,Uid:c31bf260-9897-44ba-bd03-511f60db4011,Namespace:calico-apiserver,Attempt:0,}" Nov 1 10:04:03.997483 systemd-networkd[1500]: calidbb616280d1: Link UP Nov 1 10:04:03.997794 systemd-networkd[1500]: calidbb616280d1: Gained carrier Nov 1 10:04:04.011569 containerd[1608]: 2025-11-01 10:04:03.923 [INFO][4673] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 10:04:04.011569 containerd[1608]: 2025-11-01 10:04:03.936 [INFO][4673] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--h2vs7-eth0 goldmane-7c778bb748- calico-system 73e3568f-83c0-4547-b599-b88c34a1197a 887 0 2025-11-01 10:03:35 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-h2vs7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calidbb616280d1 [] [] }} ContainerID="64949a830912b9f131f678803a8408d1187ec550c319d7d1b195f545b4670e90" Namespace="calico-system" Pod="goldmane-7c778bb748-h2vs7" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--h2vs7-" Nov 1 10:04:04.011569 containerd[1608]: 2025-11-01 10:04:03.936 [INFO][4673] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="64949a830912b9f131f678803a8408d1187ec550c319d7d1b195f545b4670e90" Namespace="calico-system" Pod="goldmane-7c778bb748-h2vs7" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--h2vs7-eth0" Nov 1 10:04:04.011569 containerd[1608]: 2025-11-01 10:04:03.962 [INFO][4702] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="64949a830912b9f131f678803a8408d1187ec550c319d7d1b195f545b4670e90" HandleID="k8s-pod-network.64949a830912b9f131f678803a8408d1187ec550c319d7d1b195f545b4670e90" Workload="localhost-k8s-goldmane--7c778bb748--h2vs7-eth0" Nov 1 10:04:04.011569 containerd[1608]: 2025-11-01 10:04:03.962 [INFO][4702] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="64949a830912b9f131f678803a8408d1187ec550c319d7d1b195f545b4670e90" HandleID="k8s-pod-network.64949a830912b9f131f678803a8408d1187ec550c319d7d1b195f545b4670e90" Workload="localhost-k8s-goldmane--7c778bb748--h2vs7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-h2vs7", "timestamp":"2025-11-01 10:04:03.962335104 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:04:04.011569 containerd[1608]: 2025-11-01 10:04:03.962 [INFO][4702] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:04:04.011569 containerd[1608]: 2025-11-01 10:04:03.962 [INFO][4702] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 10:04:04.011569 containerd[1608]: 2025-11-01 10:04:03.962 [INFO][4702] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:04:04.011569 containerd[1608]: 2025-11-01 10:04:03.969 [INFO][4702] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.64949a830912b9f131f678803a8408d1187ec550c319d7d1b195f545b4670e90" host="localhost" Nov 1 10:04:04.011569 containerd[1608]: 2025-11-01 10:04:03.973 [INFO][4702] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:04:04.011569 containerd[1608]: 2025-11-01 10:04:03.976 [INFO][4702] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:04:04.011569 containerd[1608]: 2025-11-01 10:04:03.978 [INFO][4702] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:04:04.011569 containerd[1608]: 2025-11-01 10:04:03.980 [INFO][4702] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:04:04.011569 containerd[1608]: 2025-11-01 10:04:03.980 [INFO][4702] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.64949a830912b9f131f678803a8408d1187ec550c319d7d1b195f545b4670e90" host="localhost" Nov 1 10:04:04.011569 containerd[1608]: 2025-11-01 10:04:03.981 [INFO][4702] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.64949a830912b9f131f678803a8408d1187ec550c319d7d1b195f545b4670e90 Nov 1 10:04:04.011569 containerd[1608]: 2025-11-01 10:04:03.985 [INFO][4702] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.64949a830912b9f131f678803a8408d1187ec550c319d7d1b195f545b4670e90" host="localhost" Nov 1 10:04:04.011569 containerd[1608]: 2025-11-01 10:04:03.990 [INFO][4702] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.64949a830912b9f131f678803a8408d1187ec550c319d7d1b195f545b4670e90" host="localhost" Nov 1 10:04:04.011569 containerd[1608]: 2025-11-01 10:04:03.990 [INFO][4702] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.64949a830912b9f131f678803a8408d1187ec550c319d7d1b195f545b4670e90" host="localhost" Nov 1 10:04:04.011569 containerd[1608]: 2025-11-01 10:04:03.990 [INFO][4702] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 10:04:04.011569 containerd[1608]: 2025-11-01 10:04:03.990 [INFO][4702] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="64949a830912b9f131f678803a8408d1187ec550c319d7d1b195f545b4670e90" HandleID="k8s-pod-network.64949a830912b9f131f678803a8408d1187ec550c319d7d1b195f545b4670e90" Workload="localhost-k8s-goldmane--7c778bb748--h2vs7-eth0" Nov 1 10:04:04.012185 containerd[1608]: 2025-11-01 10:04:03.993 [INFO][4673] cni-plugin/k8s.go 418: Populated endpoint ContainerID="64949a830912b9f131f678803a8408d1187ec550c319d7d1b195f545b4670e90" Namespace="calico-system" Pod="goldmane-7c778bb748-h2vs7" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--h2vs7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--h2vs7-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"73e3568f-83c0-4547-b599-b88c34a1197a", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-h2vs7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidbb616280d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:04:04.012185 containerd[1608]: 2025-11-01 10:04:03.994 [INFO][4673] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="64949a830912b9f131f678803a8408d1187ec550c319d7d1b195f545b4670e90" Namespace="calico-system" Pod="goldmane-7c778bb748-h2vs7" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--h2vs7-eth0" Nov 1 10:04:04.012185 containerd[1608]: 2025-11-01 10:04:03.994 [INFO][4673] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidbb616280d1 ContainerID="64949a830912b9f131f678803a8408d1187ec550c319d7d1b195f545b4670e90" Namespace="calico-system" Pod="goldmane-7c778bb748-h2vs7" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--h2vs7-eth0" Nov 1 10:04:04.012185 containerd[1608]: 2025-11-01 10:04:03.998 [INFO][4673] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="64949a830912b9f131f678803a8408d1187ec550c319d7d1b195f545b4670e90" Namespace="calico-system" Pod="goldmane-7c778bb748-h2vs7" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--h2vs7-eth0" Nov 1 10:04:04.012185 containerd[1608]: 2025-11-01 10:04:03.999 [INFO][4673] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="64949a830912b9f131f678803a8408d1187ec550c319d7d1b195f545b4670e90" Namespace="calico-system" Pod="goldmane-7c778bb748-h2vs7" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--h2vs7-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--h2vs7-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"73e3568f-83c0-4547-b599-b88c34a1197a", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 3, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"64949a830912b9f131f678803a8408d1187ec550c319d7d1b195f545b4670e90", Pod:"goldmane-7c778bb748-h2vs7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidbb616280d1", MAC:"a2:80:d0:9b:97:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:04:04.012185 containerd[1608]: 2025-11-01 10:04:04.007 [INFO][4673] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="64949a830912b9f131f678803a8408d1187ec550c319d7d1b195f545b4670e90" Namespace="calico-system" Pod="goldmane-7c778bb748-h2vs7" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--h2vs7-eth0" Nov 1 10:04:04.036854 systemd-networkd[1500]: calicfb1b7ec98c: Gained IPv6LL Nov 1 10:04:04.068587 containerd[1608]: time="2025-11-01T10:04:04.068514519Z" level=info msg="connecting to shim 64949a830912b9f131f678803a8408d1187ec550c319d7d1b195f545b4670e90" address="unix:///run/containerd/s/15a2e245151382fdabd875dac5a03d640c51456e56b07f4dfd1ad8575ae38845" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:04:04.079683 kubelet[2779]: E1101 10:04:04.079278 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:04.081366 kubelet[2779]: E1101 10:04:04.081307 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f4f8b5f58-5vt8f" podUID="36a5d4ac-e857-4a98-81db-164d84811165" Nov 1 10:04:04.126220 systemd[1]: Started cri-containerd-64949a830912b9f131f678803a8408d1187ec550c319d7d1b195f545b4670e90.scope - libcontainer container 64949a830912b9f131f678803a8408d1187ec550c319d7d1b195f545b4670e90. 
Nov 1 10:04:04.137719 systemd-networkd[1500]: califdf33ce9510: Link UP Nov 1 10:04:04.140834 systemd-networkd[1500]: califdf33ce9510: Gained carrier Nov 1 10:04:04.175673 containerd[1608]: 2025-11-01 10:04:03.928 [INFO][4684] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 10:04:04.175673 containerd[1608]: 2025-11-01 10:04:03.940 [INFO][4684] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f4f8b5f58--dt86s-eth0 calico-apiserver-7f4f8b5f58- calico-apiserver c31bf260-9897-44ba-bd03-511f60db4011 890 0 2025-11-01 10:03:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f4f8b5f58 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f4f8b5f58-dt86s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califdf33ce9510 [] [] }} ContainerID="06a8ddea6c926db16dc516de296509a444d274965b84df73113cf213b4820ff7" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f8b5f58-dt86s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f8b5f58--dt86s-" Nov 1 10:04:04.175673 containerd[1608]: 2025-11-01 10:04:03.940 [INFO][4684] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="06a8ddea6c926db16dc516de296509a444d274965b84df73113cf213b4820ff7" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f8b5f58-dt86s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f8b5f58--dt86s-eth0" Nov 1 10:04:04.175673 containerd[1608]: 2025-11-01 10:04:03.966 [INFO][4704] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="06a8ddea6c926db16dc516de296509a444d274965b84df73113cf213b4820ff7" HandleID="k8s-pod-network.06a8ddea6c926db16dc516de296509a444d274965b84df73113cf213b4820ff7" Workload="localhost-k8s-calico--apiserver--7f4f8b5f58--dt86s-eth0" Nov 1 10:04:04.175673 containerd[1608]: 2025-11-01 10:04:03.967 [INFO][4704] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="06a8ddea6c926db16dc516de296509a444d274965b84df73113cf213b4820ff7" HandleID="k8s-pod-network.06a8ddea6c926db16dc516de296509a444d274965b84df73113cf213b4820ff7" Workload="localhost-k8s-calico--apiserver--7f4f8b5f58--dt86s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000324790), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f4f8b5f58-dt86s", "timestamp":"2025-11-01 10:04:03.96697009 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 10:04:04.175673 containerd[1608]: 2025-11-01 10:04:03.967 [INFO][4704] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 10:04:04.175673 containerd[1608]: 2025-11-01 10:04:03.990 [INFO][4704] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 10:04:04.175673 containerd[1608]: 2025-11-01 10:04:03.990 [INFO][4704] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 10:04:04.175673 containerd[1608]: 2025-11-01 10:04:04.070 [INFO][4704] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.06a8ddea6c926db16dc516de296509a444d274965b84df73113cf213b4820ff7" host="localhost" Nov 1 10:04:04.175673 containerd[1608]: 2025-11-01 10:04:04.077 [INFO][4704] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 10:04:04.175673 containerd[1608]: 2025-11-01 10:04:04.085 [INFO][4704] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 10:04:04.175673 containerd[1608]: 2025-11-01 10:04:04.088 [INFO][4704] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 10:04:04.175673 containerd[1608]: 2025-11-01 10:04:04.100 [INFO][4704] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 10:04:04.175673 containerd[1608]: 2025-11-01 10:04:04.100 [INFO][4704] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.06a8ddea6c926db16dc516de296509a444d274965b84df73113cf213b4820ff7" host="localhost" Nov 1 10:04:04.175673 containerd[1608]: 2025-11-01 10:04:04.103 [INFO][4704] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.06a8ddea6c926db16dc516de296509a444d274965b84df73113cf213b4820ff7 Nov 1 10:04:04.175673 containerd[1608]: 2025-11-01 10:04:04.118 [INFO][4704] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.06a8ddea6c926db16dc516de296509a444d274965b84df73113cf213b4820ff7" host="localhost" Nov 1 10:04:04.175673 containerd[1608]: 2025-11-01 10:04:04.127 [INFO][4704] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.06a8ddea6c926db16dc516de296509a444d274965b84df73113cf213b4820ff7" host="localhost" Nov 1 10:04:04.175673 containerd[1608]: 2025-11-01 10:04:04.127 [INFO][4704] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.06a8ddea6c926db16dc516de296509a444d274965b84df73113cf213b4820ff7" host="localhost" Nov 1 10:04:04.175673 containerd[1608]: 2025-11-01 10:04:04.127 [INFO][4704] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 10:04:04.175673 containerd[1608]: 2025-11-01 10:04:04.127 [INFO][4704] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="06a8ddea6c926db16dc516de296509a444d274965b84df73113cf213b4820ff7" HandleID="k8s-pod-network.06a8ddea6c926db16dc516de296509a444d274965b84df73113cf213b4820ff7" Workload="localhost-k8s-calico--apiserver--7f4f8b5f58--dt86s-eth0" Nov 1 10:04:04.177963 containerd[1608]: 2025-11-01 10:04:04.133 [INFO][4684] cni-plugin/k8s.go 418: Populated endpoint ContainerID="06a8ddea6c926db16dc516de296509a444d274965b84df73113cf213b4820ff7" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f8b5f58-dt86s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f8b5f58--dt86s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f4f8b5f58--dt86s-eth0", GenerateName:"calico-apiserver-7f4f8b5f58-", Namespace:"calico-apiserver", SelfLink:"", UID:"c31bf260-9897-44ba-bd03-511f60db4011", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 3, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f4f8b5f58", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f4f8b5f58-dt86s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califdf33ce9510", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:04:04.177963 containerd[1608]: 2025-11-01 10:04:04.133 [INFO][4684] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="06a8ddea6c926db16dc516de296509a444d274965b84df73113cf213b4820ff7" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f8b5f58-dt86s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f8b5f58--dt86s-eth0" Nov 1 10:04:04.177963 containerd[1608]: 2025-11-01 10:04:04.133 [INFO][4684] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califdf33ce9510 ContainerID="06a8ddea6c926db16dc516de296509a444d274965b84df73113cf213b4820ff7" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f8b5f58-dt86s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f8b5f58--dt86s-eth0" Nov 1 10:04:04.177963 containerd[1608]: 2025-11-01 10:04:04.141 [INFO][4684] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="06a8ddea6c926db16dc516de296509a444d274965b84df73113cf213b4820ff7" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f8b5f58-dt86s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f8b5f58--dt86s-eth0" Nov 1 10:04:04.177963 containerd[1608]: 2025-11-01 10:04:04.141 [INFO][4684] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="06a8ddea6c926db16dc516de296509a444d274965b84df73113cf213b4820ff7" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f8b5f58-dt86s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f8b5f58--dt86s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f4f8b5f58--dt86s-eth0", GenerateName:"calico-apiserver-7f4f8b5f58-", Namespace:"calico-apiserver", SelfLink:"", UID:"c31bf260-9897-44ba-bd03-511f60db4011", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 10, 3, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f4f8b5f58", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"06a8ddea6c926db16dc516de296509a444d274965b84df73113cf213b4820ff7", Pod:"calico-apiserver-7f4f8b5f58-dt86s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califdf33ce9510", MAC:"aa:79:54:6e:9a:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 10:04:04.177963 containerd[1608]: 2025-11-01 10:04:04.162 [INFO][4684] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="06a8ddea6c926db16dc516de296509a444d274965b84df73113cf213b4820ff7" Namespace="calico-apiserver" Pod="calico-apiserver-7f4f8b5f58-dt86s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f4f8b5f58--dt86s-eth0" Nov 1 10:04:04.231782 containerd[1608]: time="2025-11-01T10:04:04.231716972Z" level=info msg="connecting to shim 06a8ddea6c926db16dc516de296509a444d274965b84df73113cf213b4820ff7" address="unix:///run/containerd/s/c19c5f5fede9de48d39dabae254c3833713877ce061bcf5d8bfbae5692b2ec4d" namespace=k8s.io protocol=ttrpc version=3 Nov 1 10:04:04.234854 kubelet[2779]: I1101 10:04:04.234820 2779 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 10:04:04.236092 kubelet[2779]: E1101 10:04:04.235772 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:04.240631 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:04:04.291103 systemd[1]: Started cri-containerd-06a8ddea6c926db16dc516de296509a444d274965b84df73113cf213b4820ff7.scope - libcontainer container 06a8ddea6c926db16dc516de296509a444d274965b84df73113cf213b4820ff7. 
Nov 1 10:04:04.297588 containerd[1608]: time="2025-11-01T10:04:04.297544591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-h2vs7,Uid:73e3568f-83c0-4547-b599-b88c34a1197a,Namespace:calico-system,Attempt:0,} returns sandbox id \"64949a830912b9f131f678803a8408d1187ec550c319d7d1b195f545b4670e90\"" Nov 1 10:04:04.300447 containerd[1608]: time="2025-11-01T10:04:04.300393016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 10:04:04.323543 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 10:04:04.380923 containerd[1608]: time="2025-11-01T10:04:04.380873104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f4f8b5f58-dt86s,Uid:c31bf260-9897-44ba-bd03-511f60db4011,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"06a8ddea6c926db16dc516de296509a444d274965b84df73113cf213b4820ff7\"" Nov 1 10:04:04.633436 containerd[1608]: time="2025-11-01T10:04:04.633371625Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:04:04.635393 containerd[1608]: time="2025-11-01T10:04:04.635244779Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 10:04:04.635393 containerd[1608]: time="2025-11-01T10:04:04.635283652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 1 10:04:04.635713 kubelet[2779]: E1101 10:04:04.635548 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 10:04:04.635793 kubelet[2779]: E1101 10:04:04.635726 2779 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 10:04:04.636710 containerd[1608]: time="2025-11-01T10:04:04.636111155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 10:04:04.636786 kubelet[2779]: E1101 10:04:04.636063 2779 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-h2vs7_calico-system(73e3568f-83c0-4547-b599-b88c34a1197a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 10:04:04.636786 kubelet[2779]: E1101 10:04:04.636207 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h2vs7" podUID="73e3568f-83c0-4547-b599-b88c34a1197a" Nov 1 10:04:04.996077 containerd[1608]: time="2025-11-01T10:04:04.996022192Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Nov 1 10:04:04.997311 containerd[1608]: time="2025-11-01T10:04:04.997267648Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 10:04:04.997363 containerd[1608]: time="2025-11-01T10:04:04.997308635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 1 10:04:04.997610 kubelet[2779]: E1101 10:04:04.997558 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:04:04.997714 kubelet[2779]: E1101 10:04:04.997624 2779 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:04:04.997773 kubelet[2779]: E1101 10:04:04.997750 2779 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7f4f8b5f58-dt86s_calico-apiserver(c31bf260-9897-44ba-bd03-511f60db4011): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 10:04:04.997816 kubelet[2779]: E1101 10:04:04.997788 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f4f8b5f58-dt86s" podUID="c31bf260-9897-44ba-bd03-511f60db4011" Nov 1 10:04:05.082575 kubelet[2779]: E1101 10:04:05.082507 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f4f8b5f58-dt86s" podUID="c31bf260-9897-44ba-bd03-511f60db4011" Nov 1 10:04:05.084583 kubelet[2779]: E1101 10:04:05.084043 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h2vs7" podUID="73e3568f-83c0-4547-b599-b88c34a1197a" Nov 1 10:04:05.188890 systemd-networkd[1500]: califdf33ce9510: Gained IPv6LL Nov 1 10:04:05.317047 systemd-networkd[1500]: calidbb616280d1: 
Gained IPv6LL Nov 1 10:04:06.095225 kubelet[2779]: E1101 10:04:06.095156 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h2vs7" podUID="73e3568f-83c0-4547-b599-b88c34a1197a" Nov 1 10:04:06.095225 kubelet[2779]: E1101 10:04:06.095196 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f4f8b5f58-dt86s" podUID="c31bf260-9897-44ba-bd03-511f60db4011" Nov 1 10:04:06.666917 kubelet[2779]: I1101 10:04:06.666851 2779 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 10:04:06.667378 kubelet[2779]: E1101 10:04:06.667318 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:07.098205 kubelet[2779]: E1101 10:04:07.098016 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:07.768071 systemd-networkd[1500]: vxlan.calico: Link UP Nov 1 10:04:07.768085 systemd-networkd[1500]: vxlan.calico: Gained carrier Nov 1 10:04:08.375814 systemd[1]: Started sshd@11-10.0.0.64:22-10.0.0.1:48726.service - OpenSSH per-connection server daemon (10.0.0.1:48726). Nov 1 10:04:08.437581 sshd[5093]: Accepted publickey for core from 10.0.0.1 port 48726 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:04:08.444962 sshd-session[5093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:04:08.449687 systemd-logind[1586]: New session 12 of user core. Nov 1 10:04:08.458870 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 10:04:08.821003 sshd[5096]: Connection closed by 10.0.0.1 port 48726 Nov 1 10:04:08.821361 sshd-session[5093]: pam_unix(sshd:session): session closed for user core Nov 1 10:04:08.827120 systemd-logind[1586]: Session 12 logged out. Waiting for processes to exit. Nov 1 10:04:08.827707 systemd[1]: sshd@11-10.0.0.64:22-10.0.0.1:48726.service: Deactivated successfully. Nov 1 10:04:08.830376 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 10:04:08.833210 systemd-logind[1586]: Removed session 12. Nov 1 10:04:09.732901 systemd-networkd[1500]: vxlan.calico: Gained IPv6LL Nov 1 10:04:13.840937 systemd[1]: Started sshd@12-10.0.0.64:22-10.0.0.1:58448.service - OpenSSH per-connection server daemon (10.0.0.1:58448). 
Nov 1 10:04:13.910257 sshd[5122]: Accepted publickey for core from 10.0.0.1 port 58448 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:04:13.912092 sshd-session[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:04:13.916844 systemd-logind[1586]: New session 13 of user core. Nov 1 10:04:13.926902 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 1 10:04:14.009844 sshd[5125]: Connection closed by 10.0.0.1 port 58448 Nov 1 10:04:14.010178 sshd-session[5122]: pam_unix(sshd:session): session closed for user core Nov 1 10:04:14.019751 systemd[1]: sshd@12-10.0.0.64:22-10.0.0.1:58448.service: Deactivated successfully. Nov 1 10:04:14.021667 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 10:04:14.022619 systemd-logind[1586]: Session 13 logged out. Waiting for processes to exit. Nov 1 10:04:14.026049 systemd[1]: Started sshd@13-10.0.0.64:22-10.0.0.1:58450.service - OpenSSH per-connection server daemon (10.0.0.1:58450). Nov 1 10:04:14.026831 systemd-logind[1586]: Removed session 13. Nov 1 10:04:14.084904 sshd[5140]: Accepted publickey for core from 10.0.0.1 port 58450 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:04:14.086660 sshd-session[5140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:04:14.092081 systemd-logind[1586]: New session 14 of user core. Nov 1 10:04:14.099869 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 1 10:04:14.223475 sshd[5143]: Connection closed by 10.0.0.1 port 58450 Nov 1 10:04:14.223940 sshd-session[5140]: pam_unix(sshd:session): session closed for user core Nov 1 10:04:14.233734 systemd[1]: sshd@13-10.0.0.64:22-10.0.0.1:58450.service: Deactivated successfully. Nov 1 10:04:14.237641 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 10:04:14.239835 systemd-logind[1586]: Session 14 logged out. Waiting for processes to exit. Nov 1 10:04:14.246187 systemd[1]: Started sshd@14-10.0.0.64:22-10.0.0.1:58458.service - OpenSSH per-connection server daemon (10.0.0.1:58458). Nov 1 10:04:14.248391 systemd-logind[1586]: Removed session 14. Nov 1 10:04:14.302316 sshd[5155]: Accepted publickey for core from 10.0.0.1 port 58458 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:04:14.304361 sshd-session[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:04:14.309800 systemd-logind[1586]: New session 15 of user core. Nov 1 10:04:14.323914 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 1 10:04:14.407018 sshd[5158]: Connection closed by 10.0.0.1 port 58458 Nov 1 10:04:14.407305 sshd-session[5155]: pam_unix(sshd:session): session closed for user core Nov 1 10:04:14.412231 systemd[1]: sshd@14-10.0.0.64:22-10.0.0.1:58458.service: Deactivated successfully. Nov 1 10:04:14.414352 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 10:04:14.415355 systemd-logind[1586]: Session 15 logged out. Waiting for processes to exit. Nov 1 10:04:14.416510 systemd-logind[1586]: Removed session 15. 
Nov 1 10:04:15.893497 containerd[1608]: time="2025-11-01T10:04:15.893341294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 10:04:16.256029 containerd[1608]: time="2025-11-01T10:04:16.255944137Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:04:16.282522 containerd[1608]: time="2025-11-01T10:04:16.282444252Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 10:04:16.282620 containerd[1608]: time="2025-11-01T10:04:16.282462267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 1 10:04:16.282847 kubelet[2779]: E1101 10:04:16.282766 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 10:04:16.282847 kubelet[2779]: E1101 10:04:16.282832 2779 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 10:04:16.283484 kubelet[2779]: E1101 10:04:16.282946 2779 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-799ff88558-vv4cn_calico-system(99cd5c6d-98ce-4f16-8916-17196a6ab807): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 10:04:16.283484 kubelet[2779]: E1101 10:04:16.282988 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-799ff88558-vv4cn" podUID="99cd5c6d-98ce-4f16-8916-17196a6ab807" Nov 1 10:04:16.890420 containerd[1608]: time="2025-11-01T10:04:16.890100521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 10:04:17.219319 containerd[1608]: time="2025-11-01T10:04:17.219273841Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:04:17.228663 containerd[1608]: time="2025-11-01T10:04:17.228602976Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 10:04:17.228846 containerd[1608]: time="2025-11-01T10:04:17.228711354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 1 10:04:17.229106 kubelet[2779]: E1101 10:04:17.229023 2779 log.go:32] "PullImage 
from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 10:04:17.229106 kubelet[2779]: E1101 10:04:17.229099 2779 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 10:04:17.229244 kubelet[2779]: E1101 10:04:17.229192 2779 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-87p4w_calico-system(f1319238-e7a7-4b12-ace8-ba38b42b1817): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 10:04:17.230580 containerd[1608]: time="2025-11-01T10:04:17.230408912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 10:04:17.616397 containerd[1608]: time="2025-11-01T10:04:17.616229209Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:04:17.617851 containerd[1608]: time="2025-11-01T10:04:17.617795365Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 10:04:17.617948 containerd[1608]: time="2025-11-01T10:04:17.617852294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 1 10:04:17.618129 kubelet[2779]: E1101 10:04:17.618074 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 10:04:17.618518 kubelet[2779]: E1101 10:04:17.618129 2779 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 10:04:17.618518 kubelet[2779]: E1101 10:04:17.618223 2779 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-87p4w_calico-system(f1319238-e7a7-4b12-ace8-ba38b42b1817): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 10:04:17.618518 kubelet[2779]: E1101 10:04:17.618267 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-87p4w" podUID="f1319238-e7a7-4b12-ace8-ba38b42b1817" Nov 1 10:04:17.894406 containerd[1608]: time="2025-11-01T10:04:17.893882888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 10:04:18.208354 containerd[1608]: time="2025-11-01T10:04:18.208180446Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:04:18.209466 containerd[1608]: time="2025-11-01T10:04:18.209392411Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 10:04:18.209520 containerd[1608]: time="2025-11-01T10:04:18.209478777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 1 10:04:18.209751 kubelet[2779]: E1101 10:04:18.209676 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 10:04:18.209802 kubelet[2779]: E1101 10:04:18.209757 2779 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 10:04:18.210401 kubelet[2779]: E1101 10:04:18.210003 2779 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-b7474bcb8-zhrsz_calico-system(e3758e25-c85f-48dd-a940-aa84442da027): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 10:04:18.210448 containerd[1608]: time="2025-11-01T10:04:18.210148491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 10:04:18.575335 containerd[1608]: time="2025-11-01T10:04:18.575252908Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:04:18.577984 containerd[1608]: time="2025-11-01T10:04:18.577880918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 10:04:18.578210 containerd[1608]: time="2025-11-01T10:04:18.577913611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 1 10:04:18.578333 kubelet[2779]: E1101 10:04:18.578279 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:04:18.578421 kubelet[2779]: E1101 10:04:18.578343 2779 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:04:18.578721 kubelet[2779]: E1101 10:04:18.578611 2779 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7f4f8b5f58-5vt8f_calico-apiserver(36a5d4ac-e857-4a98-81db-164d84811165): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 10:04:18.578721 kubelet[2779]: E1101 10:04:18.578660 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f4f8b5f58-5vt8f" podUID="36a5d4ac-e857-4a98-81db-164d84811165" Nov 1 10:04:18.579167 containerd[1608]: time="2025-11-01T10:04:18.578737751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 10:04:18.924423 containerd[1608]: time="2025-11-01T10:04:18.924256630Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:04:18.925914 containerd[1608]: time="2025-11-01T10:04:18.925851910Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 10:04:18.925962 containerd[1608]: time="2025-11-01T10:04:18.925951472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 1 10:04:18.926215 kubelet[2779]: E1101 10:04:18.926165 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 10:04:18.926590 kubelet[2779]: E1101 10:04:18.926230 2779 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 10:04:18.926590 kubelet[2779]: E1101 10:04:18.926338 2779 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-b7474bcb8-zhrsz_calico-system(e3758e25-c85f-48dd-a940-aa84442da027): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 10:04:18.926590 
kubelet[2779]: E1101 10:04:18.926385 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b7474bcb8-zhrsz" podUID="e3758e25-c85f-48dd-a940-aa84442da027" Nov 1 10:04:19.421720 systemd[1]: Started sshd@15-10.0.0.64:22-10.0.0.1:58460.service - OpenSSH per-connection server daemon (10.0.0.1:58460). Nov 1 10:04:19.486718 sshd[5186]: Accepted publickey for core from 10.0.0.1 port 58460 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:04:19.488823 sshd-session[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:04:19.493639 systemd-logind[1586]: New session 16 of user core. Nov 1 10:04:19.503834 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 1 10:04:19.586095 sshd[5189]: Connection closed by 10.0.0.1 port 58460 Nov 1 10:04:19.586445 sshd-session[5186]: pam_unix(sshd:session): session closed for user core Nov 1 10:04:19.591754 systemd[1]: sshd@15-10.0.0.64:22-10.0.0.1:58460.service: Deactivated successfully. Nov 1 10:04:19.594060 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 10:04:19.595006 systemd-logind[1586]: Session 16 logged out. Waiting for processes to exit. Nov 1 10:04:19.596366 systemd-logind[1586]: Removed session 16. 
Nov 1 10:04:19.891835 containerd[1608]: time="2025-11-01T10:04:19.891529692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 10:04:20.229476 containerd[1608]: time="2025-11-01T10:04:20.229388712Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:04:20.261240 containerd[1608]: time="2025-11-01T10:04:20.261142687Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 10:04:20.261240 containerd[1608]: time="2025-11-01T10:04:20.261213182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 1 10:04:20.261562 kubelet[2779]: E1101 10:04:20.261485 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:04:20.261562 kubelet[2779]: E1101 10:04:20.261558 2779 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:04:20.262055 kubelet[2779]: E1101 10:04:20.261661 2779 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7f4f8b5f58-dt86s_calico-apiserver(c31bf260-9897-44ba-bd03-511f60db4011): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 10:04:20.262146 kubelet[2779]: E1101 10:04:20.261718 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f4f8b5f58-dt86s" podUID="c31bf260-9897-44ba-bd03-511f60db4011" Nov 1 10:04:20.890171 containerd[1608]: time="2025-11-01T10:04:20.890092058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 10:04:21.214620 containerd[1608]: time="2025-11-01T10:04:21.214439290Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:04:21.279994 containerd[1608]: time="2025-11-01T10:04:21.279898618Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 10:04:21.280225 containerd[1608]: time="2025-11-01T10:04:21.279954344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 1 10:04:21.280376 kubelet[2779]: E1101 10:04:21.280304 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 10:04:21.280775 kubelet[2779]: E1101 10:04:21.280387 2779 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 10:04:21.280775 kubelet[2779]: E1101 10:04:21.280491 2779 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-h2vs7_calico-system(73e3568f-83c0-4547-b599-b88c34a1197a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 10:04:21.280775 kubelet[2779]: E1101 10:04:21.280536 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h2vs7" podUID="73e3568f-83c0-4547-b599-b88c34a1197a" Nov 1 10:04:24.604551 systemd[1]: Started sshd@16-10.0.0.64:22-10.0.0.1:53274.service - OpenSSH per-connection server daemon (10.0.0.1:53274). Nov 1 10:04:24.666514 sshd[5202]: Accepted publickey for core from 10.0.0.1 port 53274 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:04:24.668413 sshd-session[5202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:04:24.673365 systemd-logind[1586]: New session 17 of user core. Nov 1 10:04:24.679869 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 1 10:04:24.757099 sshd[5207]: Connection closed by 10.0.0.1 port 53274 Nov 1 10:04:24.757480 sshd-session[5202]: pam_unix(sshd:session): session closed for user core Nov 1 10:04:24.762495 systemd[1]: sshd@16-10.0.0.64:22-10.0.0.1:53274.service: Deactivated successfully. Nov 1 10:04:24.764932 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 10:04:24.766830 systemd-logind[1586]: Session 17 logged out. Waiting for processes to exit. Nov 1 10:04:24.768011 systemd-logind[1586]: Removed session 17. 
Nov 1 10:04:26.890896 kubelet[2779]: E1101 10:04:26.890753 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-799ff88558-vv4cn" podUID="99cd5c6d-98ce-4f16-8916-17196a6ab807" Nov 1 10:04:27.891919 kubelet[2779]: E1101 10:04:27.891816 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-87p4w" podUID="f1319238-e7a7-4b12-ace8-ba38b42b1817" Nov 1 10:04:29.776460 systemd[1]: Started sshd@17-10.0.0.64:22-10.0.0.1:53288.service - OpenSSH per-connection server daemon (10.0.0.1:53288). Nov 1 10:04:29.830135 sshd[5228]: Accepted publickey for core from 10.0.0.1 port 53288 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:04:29.831923 sshd-session[5228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:04:29.836420 systemd-logind[1586]: New session 18 of user core. Nov 1 10:04:29.846845 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 1 10:04:29.932864 sshd[5231]: Connection closed by 10.0.0.1 port 53288 Nov 1 10:04:29.933208 sshd-session[5228]: pam_unix(sshd:session): session closed for user core Nov 1 10:04:29.938176 systemd[1]: sshd@17-10.0.0.64:22-10.0.0.1:53288.service: Deactivated successfully. Nov 1 10:04:29.940340 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 10:04:29.941212 systemd-logind[1586]: Session 18 logged out. Waiting for processes to exit. Nov 1 10:04:29.942435 systemd-logind[1586]: Removed session 18. 
Nov 1 10:04:30.890766 kubelet[2779]: E1101 10:04:30.890684 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f4f8b5f58-dt86s" podUID="c31bf260-9897-44ba-bd03-511f60db4011" Nov 1 10:04:32.890814 kubelet[2779]: E1101 10:04:32.890739 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f4f8b5f58-5vt8f" podUID="36a5d4ac-e857-4a98-81db-164d84811165" Nov 1 10:04:32.891777 kubelet[2779]: E1101 10:04:32.891733 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b7474bcb8-zhrsz" podUID="e3758e25-c85f-48dd-a940-aa84442da027" Nov 1 10:04:34.482734 kubelet[2779]: E1101 10:04:34.482244 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:34.945466 systemd[1]: Started sshd@18-10.0.0.64:22-10.0.0.1:45224.service - OpenSSH per-connection server daemon (10.0.0.1:45224). Nov 1 10:04:34.992072 sshd[5274]: Accepted publickey for core from 10.0.0.1 port 45224 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:04:34.993493 sshd-session[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:04:34.998177 systemd-logind[1586]: New session 19 of user core. Nov 1 10:04:35.005850 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 1 10:04:35.082287 sshd[5277]: Connection closed by 10.0.0.1 port 45224 Nov 1 10:04:35.082638 sshd-session[5274]: pam_unix(sshd:session): session closed for user core Nov 1 10:04:35.096339 systemd[1]: sshd@18-10.0.0.64:22-10.0.0.1:45224.service: Deactivated successfully. Nov 1 10:04:35.098631 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 10:04:35.099671 systemd-logind[1586]: Session 19 logged out. Waiting for processes to exit. Nov 1 10:04:35.103154 systemd[1]: Started sshd@19-10.0.0.64:22-10.0.0.1:45230.service - OpenSSH per-connection server daemon (10.0.0.1:45230). 
Nov 1 10:04:35.103823 systemd-logind[1586]: Removed session 19. Nov 1 10:04:35.166999 sshd[5290]: Accepted publickey for core from 10.0.0.1 port 45230 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:04:35.169015 sshd-session[5290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:04:35.174460 systemd-logind[1586]: New session 20 of user core. Nov 1 10:04:35.180889 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 1 10:04:35.574438 sshd[5293]: Connection closed by 10.0.0.1 port 45230 Nov 1 10:04:35.574823 sshd-session[5290]: pam_unix(sshd:session): session closed for user core Nov 1 10:04:35.588515 systemd[1]: sshd@19-10.0.0.64:22-10.0.0.1:45230.service: Deactivated successfully. Nov 1 10:04:35.591061 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 10:04:35.592076 systemd-logind[1586]: Session 20 logged out. Waiting for processes to exit. Nov 1 10:04:35.595509 systemd[1]: Started sshd@20-10.0.0.64:22-10.0.0.1:45234.service - OpenSSH per-connection server daemon (10.0.0.1:45234). Nov 1 10:04:35.596304 systemd-logind[1586]: Removed session 20. Nov 1 10:04:35.681371 sshd[5304]: Accepted publickey for core from 10.0.0.1 port 45234 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:04:35.683414 sshd-session[5304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:04:35.688791 systemd-logind[1586]: New session 21 of user core. Nov 1 10:04:35.700878 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 1 10:04:35.889911 kubelet[2779]: E1101 10:04:35.889125 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:35.889911 kubelet[2779]: E1101 10:04:35.889673 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h2vs7" podUID="73e3568f-83c0-4547-b599-b88c34a1197a" Nov 1 10:04:36.307612 sshd[5309]: Connection closed by 10.0.0.1 port 45234 Nov 1 10:04:36.308194 sshd-session[5304]: pam_unix(sshd:session): session closed for user core Nov 1 10:04:36.319079 systemd[1]: sshd@20-10.0.0.64:22-10.0.0.1:45234.service: Deactivated successfully. Nov 1 10:04:36.321918 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 10:04:36.324909 systemd-logind[1586]: Session 21 logged out. Waiting for processes to exit. Nov 1 10:04:36.329502 systemd[1]: Started sshd@21-10.0.0.64:22-10.0.0.1:45238.service - OpenSSH per-connection server daemon (10.0.0.1:45238). Nov 1 10:04:36.331368 systemd-logind[1586]: Removed session 21. Nov 1 10:04:36.390954 sshd[5330]: Accepted publickey for core from 10.0.0.1 port 45238 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:04:36.392924 sshd-session[5330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:04:36.399884 systemd-logind[1586]: New session 22 of user core. Nov 1 10:04:36.408860 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 1 10:04:36.595502 sshd[5333]: Connection closed by 10.0.0.1 port 45238 Nov 1 10:04:36.596013 sshd-session[5330]: pam_unix(sshd:session): session closed for user core Nov 1 10:04:36.607841 systemd[1]: sshd@21-10.0.0.64:22-10.0.0.1:45238.service: Deactivated successfully. Nov 1 10:04:36.611290 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 10:04:36.612707 systemd-logind[1586]: Session 22 logged out. Waiting for processes to exit. Nov 1 10:04:36.615853 systemd-logind[1586]: Removed session 22. Nov 1 10:04:36.617157 systemd[1]: Started sshd@22-10.0.0.64:22-10.0.0.1:45250.service - OpenSSH per-connection server daemon (10.0.0.1:45250). Nov 1 10:04:36.671285 sshd[5345]: Accepted publickey for core from 10.0.0.1 port 45250 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:04:36.672826 sshd-session[5345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:04:36.681087 systemd-logind[1586]: New session 23 of user core. Nov 1 10:04:36.691913 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 1 10:04:36.778158 sshd[5348]: Connection closed by 10.0.0.1 port 45250 Nov 1 10:04:36.778477 sshd-session[5345]: pam_unix(sshd:session): session closed for user core Nov 1 10:04:36.784681 systemd[1]: sshd@22-10.0.0.64:22-10.0.0.1:45250.service: Deactivated successfully. Nov 1 10:04:36.786903 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 10:04:36.788351 systemd-logind[1586]: Session 23 logged out. Waiting for processes to exit. Nov 1 10:04:36.790169 systemd-logind[1586]: Removed session 23. Nov 1 10:04:38.890938 containerd[1608]: time="2025-11-01T10:04:38.890889639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 10:04:39.287410 containerd[1608]: time="2025-11-01T10:04:39.287347907Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:04:39.288756 containerd[1608]: time="2025-11-01T10:04:39.288684638Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 10:04:39.288850 containerd[1608]: time="2025-11-01T10:04:39.288805427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 1 10:04:39.289296 kubelet[2779]: E1101 10:04:39.289242 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 10:04:39.289667 kubelet[2779]: E1101 10:04:39.289309 2779 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 10:04:39.289667 kubelet[2779]: E1101 10:04:39.289419 2779 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-87p4w_calico-system(f1319238-e7a7-4b12-ace8-ba38b42b1817): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 
10:04:39.291090 containerd[1608]: time="2025-11-01T10:04:39.291060401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 10:04:39.606608 containerd[1608]: time="2025-11-01T10:04:39.606445125Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:04:39.607726 containerd[1608]: time="2025-11-01T10:04:39.607597805Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 10:04:39.607726 containerd[1608]: time="2025-11-01T10:04:39.607652448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 1 10:04:39.608171 kubelet[2779]: E1101 10:04:39.608125 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 10:04:39.608238 kubelet[2779]: E1101 10:04:39.608178 2779 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 10:04:39.608329 kubelet[2779]: E1101 10:04:39.608304 2779 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-87p4w_calico-system(f1319238-e7a7-4b12-ace8-ba38b42b1817): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 10:04:39.608387 kubelet[2779]: E1101 10:04:39.608349 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-87p4w" podUID="f1319238-e7a7-4b12-ace8-ba38b42b1817" Nov 1 10:04:39.892441 kubelet[2779]: E1101 10:04:39.891906 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:39.894504 containerd[1608]: time="2025-11-01T10:04:39.894241600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 10:04:40.199527 containerd[1608]: time="2025-11-01T10:04:40.199358476Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:04:40.201547 containerd[1608]: 
time="2025-11-01T10:04:40.201483072Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 10:04:40.201754 containerd[1608]: time="2025-11-01T10:04:40.201597820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 1 10:04:40.201961 kubelet[2779]: E1101 10:04:40.201793 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 10:04:40.201961 kubelet[2779]: E1101 10:04:40.201855 2779 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 10:04:40.202027 kubelet[2779]: E1101 10:04:40.201955 2779 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-799ff88558-vv4cn_calico-system(99cd5c6d-98ce-4f16-8916-17196a6ab807): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 10:04:40.202027 kubelet[2779]: E1101 10:04:40.201997 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-799ff88558-vv4cn" podUID="99cd5c6d-98ce-4f16-8916-17196a6ab807" Nov 1 10:04:41.790923 systemd[1]: Started sshd@23-10.0.0.64:22-10.0.0.1:45254.service - OpenSSH per-connection server daemon (10.0.0.1:45254). Nov 1 10:04:41.875380 sshd[5363]: Accepted publickey for core from 10.0.0.1 port 45254 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:04:41.877337 sshd-session[5363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:04:41.882213 systemd-logind[1586]: New session 24 of user core. Nov 1 10:04:41.892855 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 1 10:04:41.988973 sshd[5366]: Connection closed by 10.0.0.1 port 45254 Nov 1 10:04:41.989298 sshd-session[5363]: pam_unix(sshd:session): session closed for user core Nov 1 10:04:41.992868 systemd[1]: sshd@23-10.0.0.64:22-10.0.0.1:45254.service: Deactivated successfully. Nov 1 10:04:41.995321 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 10:04:41.996949 systemd-logind[1586]: Session 24 logged out. Waiting for processes to exit. Nov 1 10:04:41.998470 systemd-logind[1586]: Removed session 24. 
Nov 1 10:04:43.893884 kubelet[2779]: E1101 10:04:43.893315 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:43.898720 containerd[1608]: time="2025-11-01T10:04:43.897052770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 10:04:44.207020 containerd[1608]: time="2025-11-01T10:04:44.206856633Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:04:44.208345 containerd[1608]: time="2025-11-01T10:04:44.208286676Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 10:04:44.208501 containerd[1608]: time="2025-11-01T10:04:44.208302045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 1 10:04:44.208593 kubelet[2779]: E1101 10:04:44.208547 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 10:04:44.208652 kubelet[2779]: E1101 10:04:44.208606 2779 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 10:04:44.208761 kubelet[2779]: E1101 10:04:44.208735 2779 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-b7474bcb8-zhrsz_calico-system(e3758e25-c85f-48dd-a940-aa84442da027): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 10:04:44.210228 containerd[1608]: time="2025-11-01T10:04:44.210196810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 10:04:44.517447 containerd[1608]: time="2025-11-01T10:04:44.517371752Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:04:44.518796 containerd[1608]: time="2025-11-01T10:04:44.518721973Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 10:04:44.518994 containerd[1608]: time="2025-11-01T10:04:44.518851969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 1 10:04:44.519223 kubelet[2779]: E1101 10:04:44.519150 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 10:04:44.519223 kubelet[2779]: E1101 10:04:44.519227 2779 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 10:04:44.519425 kubelet[2779]: E1101 10:04:44.519346 2779 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-b7474bcb8-zhrsz_calico-system(e3758e25-c85f-48dd-a940-aa84442da027): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 10:04:44.519425 kubelet[2779]: E1101 10:04:44.519391 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b7474bcb8-zhrsz" podUID="e3758e25-c85f-48dd-a940-aa84442da027" Nov 1 10:04:45.894399 containerd[1608]: time="2025-11-01T10:04:45.894335892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 10:04:46.277104 containerd[1608]: time="2025-11-01T10:04:46.277032728Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:04:46.281548 containerd[1608]: time="2025-11-01T10:04:46.281498456Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 10:04:46.281618 containerd[1608]: time="2025-11-01T10:04:46.281509557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 1 10:04:46.281864 kubelet[2779]: E1101 10:04:46.281808 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:04:46.282258 kubelet[2779]: E1101 10:04:46.281867 2779 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:04:46.282258 kubelet[2779]: E1101 10:04:46.281958 2779 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7f4f8b5f58-dt86s_calico-apiserver(c31bf260-9897-44ba-bd03-511f60db4011): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
logger="UnhandledError" Nov 1 10:04:46.282258 kubelet[2779]: E1101 10:04:46.281993 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f4f8b5f58-dt86s" podUID="c31bf260-9897-44ba-bd03-511f60db4011" Nov 1 10:04:46.890428 containerd[1608]: time="2025-11-01T10:04:46.890348583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 10:04:47.005976 systemd[1]: Started sshd@24-10.0.0.64:22-10.0.0.1:41462.service - OpenSSH per-connection server daemon (10.0.0.1:41462). Nov 1 10:04:47.069702 sshd[5381]: Accepted publickey for core from 10.0.0.1 port 41462 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:04:47.071548 sshd-session[5381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:04:47.076206 systemd-logind[1586]: New session 25 of user core. Nov 1 10:04:47.082947 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 1 10:04:47.169127 sshd[5384]: Connection closed by 10.0.0.1 port 41462 Nov 1 10:04:47.171320 sshd-session[5381]: pam_unix(sshd:session): session closed for user core Nov 1 10:04:47.176195 systemd[1]: sshd@24-10.0.0.64:22-10.0.0.1:41462.service: Deactivated successfully. Nov 1 10:04:47.178377 systemd[1]: session-25.scope: Deactivated successfully. Nov 1 10:04:47.179443 systemd-logind[1586]: Session 25 logged out. Waiting for processes to exit. Nov 1 10:04:47.181341 systemd-logind[1586]: Removed session 25. Nov 1 10:04:47.187602 containerd[1608]: time="2025-11-01T10:04:47.187537461Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:04:47.188762 containerd[1608]: time="2025-11-01T10:04:47.188721756Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 10:04:47.188912 containerd[1608]: time="2025-11-01T10:04:47.188802439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 1 10:04:47.189023 kubelet[2779]: E1101 10:04:47.188974 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:04:47.189074 kubelet[2779]: E1101 10:04:47.189037 2779 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 10:04:47.189149 kubelet[2779]: E1101 10:04:47.189127 2779 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7f4f8b5f58-5vt8f_calico-apiserver(36a5d4ac-e857-4a98-81db-164d84811165): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 10:04:47.189194 kubelet[2779]: E1101 10:04:47.189167 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f4f8b5f58-5vt8f" podUID="36a5d4ac-e857-4a98-81db-164d84811165" Nov 1 10:04:47.890143 kubelet[2779]: E1101 10:04:47.890088 2779 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 10:04:47.893485 containerd[1608]: time="2025-11-01T10:04:47.893272799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 10:04:48.258234 containerd[1608]: time="2025-11-01T10:04:48.258163345Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 10:04:48.259563 containerd[1608]: time="2025-11-01T10:04:48.259528482Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 10:04:48.259657 containerd[1608]: time="2025-11-01T10:04:48.259582874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 1 10:04:48.259908 kubelet[2779]: E1101 10:04:48.259843 2779 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 10:04:48.259908 kubelet[2779]: E1101 10:04:48.259904 2779 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 10:04:48.260215 kubelet[2779]: E1101 10:04:48.259997 2779 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-h2vs7_calico-system(73e3568f-83c0-4547-b599-b88c34a1197a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 10:04:48.260215 kubelet[2779]: E1101 10:04:48.260036 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h2vs7" podUID="73e3568f-83c0-4547-b599-b88c34a1197a" Nov 1 10:04:50.891041 kubelet[2779]: E1101 10:04:50.890970 2779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-799ff88558-vv4cn" podUID="99cd5c6d-98ce-4f16-8916-17196a6ab807" Nov 1 10:04:52.189569 systemd[1]: Started sshd@25-10.0.0.64:22-10.0.0.1:41474.service - OpenSSH per-connection server daemon (10.0.0.1:41474). Nov 1 10:04:52.262713 sshd[5406]: Accepted publickey for core from 10.0.0.1 port 41474 ssh2: RSA SHA256:ka1Waf/EnFdMzWNpUvsADTzjgcbA0C+uOQYPAI4nGO0 Nov 1 10:04:52.265002 sshd-session[5406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 10:04:52.276270 systemd-logind[1586]: New session 26 of user core. Nov 1 10:04:52.281889 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 1 10:04:52.412999 sshd[5409]: Connection closed by 10.0.0.1 port 41474 Nov 1 10:04:52.413372 sshd-session[5406]: pam_unix(sshd:session): session closed for user core Nov 1 10:04:52.418976 systemd[1]: sshd@25-10.0.0.64:22-10.0.0.1:41474.service: Deactivated successfully. Nov 1 10:04:52.421365 systemd[1]: session-26.scope: Deactivated successfully. Nov 1 10:04:52.422928 systemd-logind[1586]: Session 26 logged out. Waiting for processes to exit. Nov 1 10:04:52.424167 systemd-logind[1586]: Removed session 26.