Jan 20 03:06:03.112590 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:14:52 -00 2026
Jan 20 03:06:03.112620 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 03:06:03.112636 kernel: BIOS-provided physical RAM map:
Jan 20 03:06:03.112644 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 20 03:06:03.112652 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 20 03:06:03.112661 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 20 03:06:03.112670 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 20 03:06:03.112679 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 20 03:06:03.112687 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 20 03:06:03.112695 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 20 03:06:03.112704 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 03:06:03.112715 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 20 03:06:03.112723 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 03:06:03.112732 kernel: NX (Execute Disable) protection: active
Jan 20 03:06:03.112742 kernel: APIC: Static calls initialized
Jan 20 03:06:03.112751 kernel: SMBIOS 2.8 present.
Jan 20 03:06:03.112763 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 20 03:06:03.112771 kernel: DMI: Memory slots populated: 1/1
Jan 20 03:06:03.112780 kernel: Hypervisor detected: KVM
Jan 20 03:06:03.112789 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 20 03:06:03.112798 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 03:06:03.112807 kernel: kvm-clock: using sched offset of 8882939913 cycles
Jan 20 03:06:03.112817 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 03:06:03.112826 kernel: tsc: Detected 2445.426 MHz processor
Jan 20 03:06:03.112835 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 03:06:03.112845 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 03:06:03.112857 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 20 03:06:03.112866 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 20 03:06:03.112926 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 03:06:03.112937 kernel: Using GB pages for direct mapping
Jan 20 03:06:03.112946 kernel: ACPI: Early table checksum verification disabled
Jan 20 03:06:03.112955 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 20 03:06:03.112964 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 03:06:03.112974 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 03:06:03.112983 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 03:06:03.112996 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 20 03:06:03.113006 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 03:06:03.113015 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 03:06:03.113024 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 03:06:03.113033 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 03:06:03.113047 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 20 03:06:03.113060 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 20 03:06:03.113070 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 20 03:06:03.113080 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 20 03:06:03.113090 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 20 03:06:03.113099 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 20 03:06:03.113109 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 20 03:06:03.113118 kernel: No NUMA configuration found
Jan 20 03:06:03.113128 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 20 03:06:03.113141 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 20 03:06:03.113150 kernel: Zone ranges:
Jan 20 03:06:03.113160 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 03:06:03.113170 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 20 03:06:03.113179 kernel: Normal empty
Jan 20 03:06:03.113189 kernel: Device empty
Jan 20 03:06:03.113199 kernel: Movable zone start for each node
Jan 20 03:06:03.113208 kernel: Early memory node ranges
Jan 20 03:06:03.113218 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 20 03:06:03.113227 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 20 03:06:03.113239 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 20 03:06:03.113249 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 03:06:03.113259 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 20 03:06:03.113268 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 20 03:06:03.113278 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 03:06:03.113287 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 03:06:03.113297 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 03:06:03.113307 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 03:06:03.113317 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 03:06:03.113329 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 03:06:03.113339 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 03:06:03.113349 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 03:06:03.113358 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 03:06:03.113368 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 20 03:06:03.113377 kernel: TSC deadline timer available
Jan 20 03:06:03.113387 kernel: CPU topo: Max. logical packages: 1
Jan 20 03:06:03.113397 kernel: CPU topo: Max. logical dies: 1
Jan 20 03:06:03.113406 kernel: CPU topo: Max. dies per package: 1
Jan 20 03:06:03.113418 kernel: CPU topo: Max. threads per core: 1
Jan 20 03:06:03.113428 kernel: CPU topo: Num. cores per package: 4
Jan 20 03:06:03.113437 kernel: CPU topo: Num. threads per package: 4
Jan 20 03:06:03.113446 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 20 03:06:03.113456 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 03:06:03.113465 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 20 03:06:03.113506 kernel: kvm-guest: setup PV sched yield
Jan 20 03:06:03.113516 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 20 03:06:03.113526 kernel: Booting paravirtualized kernel on KVM
Jan 20 03:06:03.113536 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 03:06:03.113549 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 20 03:06:03.113559 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 20 03:06:03.113569 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 20 03:06:03.113578 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 20 03:06:03.113588 kernel: kvm-guest: PV spinlocks enabled
Jan 20 03:06:03.113617 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 03:06:03.113628 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 03:06:03.113638 kernel: random: crng init done
Jan 20 03:06:03.113651 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 03:06:03.113661 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 03:06:03.113689 kernel: Fallback order for Node 0: 0
Jan 20 03:06:03.113700 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 20 03:06:03.113709 kernel: Policy zone: DMA32
Jan 20 03:06:03.113719 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 03:06:03.113728 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 20 03:06:03.113739 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 20 03:06:03.113766 kernel: ftrace: allocated 157 pages with 5 groups
Jan 20 03:06:03.113779 kernel: Dynamic Preempt: voluntary
Jan 20 03:06:03.113789 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 03:06:03.113800 kernel: rcu: RCU event tracing is enabled.
Jan 20 03:06:03.113810 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 20 03:06:03.113820 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 03:06:03.113831 kernel: Rude variant of Tasks RCU enabled.
Jan 20 03:06:03.113841 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 03:06:03.113851 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 03:06:03.113861 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 20 03:06:03.113870 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 03:06:03.113951 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 03:06:03.113963 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 03:06:03.113973 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 20 03:06:03.113984 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 03:06:03.114003 kernel: Console: colour VGA+ 80x25
Jan 20 03:06:03.114016 kernel: printk: legacy console [ttyS0] enabled
Jan 20 03:06:03.114027 kernel: ACPI: Core revision 20240827
Jan 20 03:06:03.114038 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 20 03:06:03.114048 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 03:06:03.114058 kernel: x2apic enabled
Jan 20 03:06:03.114069 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 03:06:03.114085 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 20 03:06:03.114118 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 20 03:06:03.114130 kernel: kvm-guest: setup PV IPIs
Jan 20 03:06:03.114161 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 20 03:06:03.114172 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 03:06:03.114188 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 20 03:06:03.114198 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 03:06:03.114209 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 03:06:03.114220 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 03:06:03.114231 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 03:06:03.114242 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 03:06:03.114253 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 03:06:03.114264 kernel: Speculative Store Bypass: Vulnerable
Jan 20 03:06:03.114275 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 03:06:03.114291 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 03:06:03.114302 kernel: active return thunk: srso_alias_return_thunk
Jan 20 03:06:03.114313 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 03:06:03.114324 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 20 03:06:03.114335 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 20 03:06:03.114347 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 03:06:03.114358 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 03:06:03.114369 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 03:06:03.114383 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 03:06:03.114395 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 03:06:03.114406 kernel: Freeing SMP alternatives memory: 32K
Jan 20 03:06:03.114417 kernel: pid_max: default: 32768 minimum: 301
Jan 20 03:06:03.114428 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 20 03:06:03.114439 kernel: landlock: Up and running.
Jan 20 03:06:03.114450 kernel: SELinux: Initializing.
Jan 20 03:06:03.114461 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 03:06:03.114472 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 03:06:03.114531 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 20 03:06:03.114543 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 20 03:06:03.114554 kernel: signal: max sigframe size: 1776
Jan 20 03:06:03.114565 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 03:06:03.114577 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 03:06:03.114588 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 20 03:06:03.114599 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 20 03:06:03.114611 kernel: smp: Bringing up secondary CPUs ...
Jan 20 03:06:03.114623 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 03:06:03.114638 kernel: .... node #0, CPUs: #1 #2 #3
Jan 20 03:06:03.114718 kernel: smp: Brought up 1 node, 4 CPUs
Jan 20 03:06:03.114733 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 20 03:06:03.114746 kernel: Memory: 2420720K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46204K init, 2556K bss, 145096K reserved, 0K cma-reserved)
Jan 20 03:06:03.114758 kernel: devtmpfs: initialized
Jan 20 03:06:03.114769 kernel: x86/mm: Memory block size: 128MB
Jan 20 03:06:03.114780 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 03:06:03.114792 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 20 03:06:03.114803 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 03:06:03.114818 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 03:06:03.114829 kernel: audit: initializing netlink subsys (disabled)
Jan 20 03:06:03.114840 kernel: audit: type=2000 audit(1768878358.399:1): state=initialized audit_enabled=0 res=1
Jan 20 03:06:03.114851 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 03:06:03.114862 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 03:06:03.114923 kernel: cpuidle: using governor menu
Jan 20 03:06:03.114936 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 03:06:03.114948 kernel: dca service started, version 1.12.1
Jan 20 03:06:03.114959 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 20 03:06:03.114975 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 20 03:06:03.114986 kernel: PCI: Using configuration type 1 for base access
Jan 20 03:06:03.114997 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 03:06:03.115008 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 03:06:03.115018 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 03:06:03.115029 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 03:06:03.115041 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 03:06:03.115052 kernel: ACPI: Added _OSI(Module Device)
Jan 20 03:06:03.115063 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 03:06:03.115077 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 03:06:03.115089 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 03:06:03.115099 kernel: ACPI: Interpreter enabled
Jan 20 03:06:03.115111 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 20 03:06:03.115122 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 03:06:03.115133 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 03:06:03.115144 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 03:06:03.115156 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 03:06:03.115167 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 03:06:03.115411 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 03:06:03.115631 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 20 03:06:03.115822 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 20 03:06:03.115838 kernel: PCI host bridge to bus 0000:00
Jan 20 03:06:03.116052 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 03:06:03.116194 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 03:06:03.116336 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 03:06:03.116470 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 20 03:06:03.116655 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 20 03:06:03.116792 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 20 03:06:03.117012 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 03:06:03.117262 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 20 03:06:03.117433 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 20 03:06:03.117650 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 20 03:06:03.117809 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 20 03:06:03.118025 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 20 03:06:03.118177 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 03:06:03.118336 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 03:06:03.118527 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 20 03:06:03.118686 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 20 03:06:03.118834 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 20 03:06:03.119047 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 20 03:06:03.119200 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 20 03:06:03.119349 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 20 03:06:03.119541 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 20 03:06:03.119708 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 03:06:03.119863 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 20 03:06:03.120072 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 20 03:06:03.120220 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 20 03:06:03.120368 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 20 03:06:03.120575 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 20 03:06:03.120728 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 20 03:06:03.120947 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 20 03:06:03.121127 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 20 03:06:03.121296 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 20 03:06:03.121468 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 20 03:06:03.121678 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 20 03:06:03.121693 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 03:06:03.121705 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 03:06:03.121716 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 03:06:03.121731 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 03:06:03.121742 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 20 03:06:03.121753 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 20 03:06:03.121764 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 20 03:06:03.121775 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 20 03:06:03.121785 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 20 03:06:03.121796 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 20 03:06:03.121806 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 20 03:06:03.121817 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 20 03:06:03.121832 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 20 03:06:03.121868 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 20 03:06:03.122076 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 20 03:06:03.122091 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 20 03:06:03.122102 kernel: iommu: Default domain type: Translated
Jan 20 03:06:03.122114 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 03:06:03.122125 kernel: PCI: Using ACPI for IRQ routing
Jan 20 03:06:03.122137 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 03:06:03.122149 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 20 03:06:03.122165 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 20 03:06:03.122342 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 20 03:06:03.122563 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 20 03:06:03.122731 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 03:06:03.122747 kernel: vgaarb: loaded
Jan 20 03:06:03.122759 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 20 03:06:03.122770 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 20 03:06:03.122782 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 03:06:03.122793 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 03:06:03.122808 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 03:06:03.122819 kernel: pnp: PnP ACPI init
Jan 20 03:06:03.123062 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 20 03:06:03.123082 kernel: pnp: PnP ACPI: found 6 devices
Jan 20 03:06:03.123094 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 03:06:03.123105 kernel: NET: Registered PF_INET protocol family
Jan 20 03:06:03.123117 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 03:06:03.123128 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 03:06:03.123143 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 03:06:03.123155 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 03:06:03.123167 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 03:06:03.123178 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 03:06:03.123189 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 03:06:03.123201 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 03:06:03.123212 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 03:06:03.123223 kernel: NET: Registered PF_XDP protocol family
Jan 20 03:06:03.123384 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 20 03:06:03.123586 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 20 03:06:03.123740 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 03:06:03.123950 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 20 03:06:03.124157 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 20 03:06:03.124309 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 20 03:06:03.124324 kernel: PCI: CLS 0 bytes, default 64
Jan 20 03:06:03.124337 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 03:06:03.124348 kernel: Initialise system trusted keyrings
Jan 20 03:06:03.124363 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 03:06:03.124375 kernel: Key type asymmetric registered
Jan 20 03:06:03.124386 kernel: Asymmetric key parser 'x509' registered
Jan 20 03:06:03.124397 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 20 03:06:03.124408 kernel: io scheduler mq-deadline registered
Jan 20 03:06:03.124419 kernel: io scheduler kyber registered
Jan 20 03:06:03.124430 kernel: io scheduler bfq registered
Jan 20 03:06:03.124442 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 20 03:06:03.124454 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 20 03:06:03.124469 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 20 03:06:03.124527 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 20 03:06:03.124539 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 03:06:03.124551 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 03:06:03.124562 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 03:06:03.124573 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 03:06:03.124584 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 03:06:03.124761 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 03:06:03.124783 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 20 03:06:03.125012 kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 03:06:03.125172 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T03:06:02 UTC (1768878362)
Jan 20 03:06:03.125326 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 20 03:06:03.125341 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 03:06:03.125353 kernel: NET: Registered PF_INET6 protocol family
Jan 20 03:06:03.125364 kernel: Segment Routing with IPv6
Jan 20 03:06:03.125376 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 03:06:03.125387 kernel: NET: Registered PF_PACKET protocol family
Jan 20 03:06:03.125402 kernel: Key type dns_resolver registered
Jan 20 03:06:03.125413 kernel: IPI shorthand broadcast: enabled
Jan 20 03:06:03.125424 kernel: sched_clock: Marking stable (3406022713, 712339067)->(4374582142, -256220362)
Jan 20 03:06:03.125435 kernel: registered taskstats version 1
Jan 20 03:06:03.125446 kernel: Loading compiled-in X.509 certificates
Jan 20 03:06:03.125458 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 5eaf2083485884e476a8ac33c4b07b82eff139e9'
Jan 20 03:06:03.125469 kernel: Demotion targets for Node 0: null
Jan 20 03:06:03.125522 kernel: Key type .fscrypt registered
Jan 20 03:06:03.125533 kernel: Key type fscrypt-provisioning registered
Jan 20 03:06:03.125548 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 03:06:03.125560 kernel: ima: Allocated hash algorithm: sha1
Jan 20 03:06:03.125571 kernel: ima: No architecture policies found
Jan 20 03:06:03.125582 kernel: clk: Disabling unused clocks
Jan 20 03:06:03.125593 kernel: Warning: unable to open an initial console.
Jan 20 03:06:03.125605 kernel: Freeing unused kernel image (initmem) memory: 46204K
Jan 20 03:06:03.125616 kernel: Write protecting the kernel read-only data: 40960k
Jan 20 03:06:03.125627 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 20 03:06:03.125641 kernel: Run /init as init process
Jan 20 03:06:03.125652 kernel: with arguments:
Jan 20 03:06:03.125664 kernel: /init
Jan 20 03:06:03.125674 kernel: with environment:
Jan 20 03:06:03.125686 kernel: HOME=/
Jan 20 03:06:03.125697 kernel: TERM=linux
Jan 20 03:06:03.125710 systemd[1]: Successfully made /usr/ read-only.
Jan 20 03:06:03.125725 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 20 03:06:03.125741 systemd[1]: Detected virtualization kvm.
Jan 20 03:06:03.125753 systemd[1]: Detected architecture x86-64.
Jan 20 03:06:03.125764 systemd[1]: Running in initrd.
Jan 20 03:06:03.125776 systemd[1]: No hostname configured, using default hostname.
Jan 20 03:06:03.125789 systemd[1]: Hostname set to .
Jan 20 03:06:03.125800 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 03:06:03.125812 systemd[1]: Queued start job for default target initrd.target.
Jan 20 03:06:03.125824 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 03:06:03.125851 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 03:06:03.125867 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 20 03:06:03.125943 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 03:06:03.125957 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 20 03:06:03.125971 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 20 03:06:03.125988 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 20 03:06:03.126001 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 20 03:06:03.126013 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 03:06:03.126025 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 03:06:03.126037 systemd[1]: Reached target paths.target - Path Units.
Jan 20 03:06:03.126050 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 03:06:03.126062 systemd[1]: Reached target swap.target - Swaps.
Jan 20 03:06:03.126075 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 03:06:03.126089 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 03:06:03.126102 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 03:06:03.126114 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 20 03:06:03.126127 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 20 03:06:03.126139 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 03:06:03.126151 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 03:06:03.126164 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 03:06:03.126176 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 03:06:03.126188 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 20 03:06:03.126204 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 03:06:03.126217 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 20 03:06:03.126229 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 20 03:06:03.126242 systemd[1]: Starting systemd-fsck-usr.service...
Jan 20 03:06:03.126254 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 03:06:03.126267 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 03:06:03.126279 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 03:06:03.126292 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 20 03:06:03.126310 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 03:06:03.126355 systemd-journald[203]: Collecting audit messages is disabled.
Jan 20 03:06:03.126391 systemd[1]: Finished systemd-fsck-usr.service.
Jan 20 03:06:03.126404 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 03:06:03.126417 systemd-journald[203]: Journal started
Jan 20 03:06:03.126447 systemd-journald[203]: Runtime Journal (/run/log/journal/b6b3b7eca09149c2b1af8557eb05b708) is 6M, max 48.3M, 42.2M free.
Jan 20 03:06:03.114210 systemd-modules-load[204]: Inserted module 'overlay'
Jan 20 03:06:03.249120 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 20 03:06:03.249148 kernel: Bridge firewalling registered
Jan 20 03:06:03.148415 systemd-modules-load[204]: Inserted module 'br_netfilter'
Jan 20 03:06:03.260591 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 03:06:03.263111 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 03:06:03.268778 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 03:06:03.277204 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 03:06:03.289530 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 03:06:03.295976 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 03:06:03.324100 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 03:06:03.325181 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 03:06:03.349781 systemd-tmpfiles[226]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 20 03:06:03.352603 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 03:06:03.360621 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 03:06:03.362966 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 03:06:03.370329 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 03:06:03.375936 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 03:06:03.388288 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 20 03:06:03.432532 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 03:06:03.434620 systemd-resolved[239]: Positive Trust Anchors:
Jan 20 03:06:03.434632 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 03:06:03.434657 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 03:06:03.437973 systemd-resolved[239]: Defaulting to hostname 'linux'.
Jan 20 03:06:03.439385 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 03:06:03.448435 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 03:06:03.613952 kernel: SCSI subsystem initialized
Jan 20 03:06:03.625944 kernel: Loading iSCSI transport class v2.0-870.
Jan 20 03:06:03.639968 kernel: iscsi: registered transport (tcp)
Jan 20 03:06:03.668759 kernel: iscsi: registered transport (qla4xxx)
Jan 20 03:06:03.668866 kernel: QLogic iSCSI HBA Driver
Jan 20 03:06:03.695997 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 20 03:06:03.746239 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 03:06:03.753394 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 20 03:06:03.813080 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 20 03:06:03.822220 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 20 03:06:03.902943 kernel: raid6: avx2x4 gen() 31935 MB/s
Jan 20 03:06:03.920950 kernel: raid6: avx2x2 gen() 30128 MB/s
Jan 20 03:06:03.940140 kernel: raid6: avx2x1 gen() 21692 MB/s
Jan 20 03:06:03.940242 kernel: raid6: using algorithm avx2x4 gen() 31935 MB/s
Jan 20 03:06:03.960127 kernel: raid6: .... xor() 4462 MB/s, rmw enabled
Jan 20 03:06:03.960230 kernel: raid6: using avx2x2 recovery algorithm
Jan 20 03:06:03.980967 kernel: xor: automatically using best checksumming function avx
Jan 20 03:06:04.147992 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 20 03:06:04.158395 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 03:06:04.168245 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 03:06:04.214336 systemd-udevd[453]: Using default interface naming scheme 'v255'.
Jan 20 03:06:04.220458 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 03:06:04.221551 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 20 03:06:04.269102 dracut-pre-trigger[457]: rd.md=0: removing MD RAID activation
Jan 20 03:06:04.312437 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 03:06:04.319025 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 03:06:04.409775 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 03:06:04.422044 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 20 03:06:04.461931 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 20 03:06:04.471602 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 20 03:06:04.480985 kernel: cryptd: max_cpu_qlen set to 1000
Jan 20 03:06:04.481040 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 20 03:06:04.495825 kernel: GPT:9289727 != 19775487
Jan 20 03:06:04.495971 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 20 03:06:04.496045 kernel: GPT:9289727 != 19775487
Jan 20 03:06:04.496060 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 20 03:06:04.496076 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 03:06:04.509061 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 20 03:06:04.512985 kernel: libata version 3.00 loaded.
Jan 20 03:06:04.521241 kernel: AES CTR mode by8 optimization enabled
Jan 20 03:06:04.521295 kernel: ahci 0000:00:1f.2: version 3.0
Jan 20 03:06:04.521596 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 20 03:06:04.534064 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 20 03:06:04.534369 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 20 03:06:04.534638 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 20 03:06:04.542587 kernel: scsi host0: ahci
Jan 20 03:06:04.545960 kernel: scsi host1: ahci
Jan 20 03:06:04.549049 kernel: scsi host2: ahci
Jan 20 03:06:04.553660 kernel: scsi host3: ahci
Jan 20 03:06:04.554166 kernel: scsi host4: ahci
Jan 20 03:06:04.560168 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 03:06:04.590009 kernel: scsi host5: ahci
Jan 20 03:06:04.590226 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Jan 20 03:06:04.590240 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Jan 20 03:06:04.590250 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Jan 20 03:06:04.590264 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Jan 20 03:06:04.590273 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Jan 20 03:06:04.590283 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Jan 20 03:06:04.560335 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 03:06:04.602081 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 03:06:04.607709 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 03:06:04.615760 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 20 03:06:04.627489 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 20 03:06:04.641147 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 20 03:06:04.659363 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 03:06:04.677911 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 20 03:06:04.798984 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 20 03:06:04.809746 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 03:06:04.821250 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 20 03:06:04.848433 disk-uuid[622]: Primary Header is updated.
Jan 20 03:06:04.848433 disk-uuid[622]: Secondary Entries is updated.
Jan 20 03:06:04.848433 disk-uuid[622]: Secondary Header is updated.
Jan 20 03:06:04.860342 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 03:06:04.862968 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 03:06:04.897993 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 20 03:06:04.898063 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 20 03:06:04.902215 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 20 03:06:04.904927 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 20 03:06:04.916604 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 20 03:06:04.916657 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 20 03:06:04.924945 kernel: ata3.00: LPM support broken, forcing max_power
Jan 20 03:06:04.924976 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 20 03:06:04.924994 kernel: ata3.00: applying bridge limits
Jan 20 03:06:04.927938 kernel: ata3.00: LPM support broken, forcing max_power
Jan 20 03:06:04.927964 kernel: ata3.00: configured for UDMA/100
Jan 20 03:06:04.939045 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 20 03:06:05.004560 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 20 03:06:05.005001 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 20 03:06:05.019944 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 20 03:06:05.429432 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 20 03:06:05.434084 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 03:06:05.445061 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 03:06:05.445201 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 03:06:05.456817 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 20 03:06:05.498539 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 03:06:05.864020 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 03:06:05.864315 disk-uuid[623]: The operation has completed successfully.
Jan 20 03:06:05.901988 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 20 03:06:05.902138 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 20 03:06:05.941445 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 20 03:06:05.979669 sh[651]: Success
Jan 20 03:06:06.006692 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 20 03:06:06.006760 kernel: device-mapper: uevent: version 1.0.3
Jan 20 03:06:06.007011 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 20 03:06:06.025973 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 20 03:06:06.071367 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 20 03:06:06.077759 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 20 03:06:06.096064 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 20 03:06:06.110800 kernel: BTRFS: device fsid 1cad4abe-82cb-4052-9906-9dfb1f3e3340 devid 1 transid 44 /dev/mapper/usr (253:0) scanned by mount (663)
Jan 20 03:06:06.118645 kernel: BTRFS info (device dm-0): first mount of filesystem 1cad4abe-82cb-4052-9906-9dfb1f3e3340
Jan 20 03:06:06.118699 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 20 03:06:06.134041 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 20 03:06:06.134120 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 20 03:06:06.136000 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 20 03:06:06.140044 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 20 03:06:06.146004 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 20 03:06:06.147312 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 20 03:06:06.173855 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 20 03:06:06.201968 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (692)
Jan 20 03:06:06.209060 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2
Jan 20 03:06:06.209100 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 03:06:06.216802 kernel: BTRFS info (device vda6): turning on async discard
Jan 20 03:06:06.216926 kernel: BTRFS info (device vda6): enabling free space tree
Jan 20 03:06:06.226008 kernel: BTRFS info (device vda6): last unmount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2
Jan 20 03:06:06.228617 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 20 03:06:06.230260 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 20 03:06:06.334126 ignition[745]: Ignition 2.22.0
Jan 20 03:06:06.334149 ignition[745]: Stage: fetch-offline
Jan 20 03:06:06.334202 ignition[745]: no configs at "/usr/lib/ignition/base.d"
Jan 20 03:06:06.334218 ignition[745]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 03:06:06.334329 ignition[745]: parsed url from cmdline: ""
Jan 20 03:06:06.334335 ignition[745]: no config URL provided
Jan 20 03:06:06.334343 ignition[745]: reading system config file "/usr/lib/ignition/user.ign"
Jan 20 03:06:06.334359 ignition[745]: no config at "/usr/lib/ignition/user.ign"
Jan 20 03:06:06.356278 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 03:06:06.334389 ignition[745]: op(1): [started] loading QEMU firmware config module
Jan 20 03:06:06.334398 ignition[745]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 20 03:06:06.344475 ignition[745]: op(1): [finished] loading QEMU firmware config module
Jan 20 03:06:06.379998 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 03:06:06.431798 systemd-networkd[840]: lo: Link UP
Jan 20 03:06:06.431828 systemd-networkd[840]: lo: Gained carrier
Jan 20 03:06:06.433504 systemd-networkd[840]: Enumeration completed
Jan 20 03:06:06.434002 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 03:06:06.435255 systemd-networkd[840]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 03:06:06.435263 systemd-networkd[840]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 03:06:06.436726 systemd-networkd[840]: eth0: Link UP
Jan 20 03:06:06.437482 systemd-networkd[840]: eth0: Gained carrier
Jan 20 03:06:06.437496 systemd-networkd[840]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 03:06:06.445314 systemd[1]: Reached target network.target - Network.
Jan 20 03:06:06.478312 systemd-networkd[840]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 20 03:06:06.577017 ignition[745]: parsing config with SHA512: 4b4100683ff0ff539a145984b956d723a96b74b97385e7905bfb706b14483aab1b6c63d41f74c9ad7f9295821f0c85ae9ff10f99a7a8ca256cb7386810ab97f3
Jan 20 03:06:06.584722 unknown[745]: fetched base config from "system"
Jan 20 03:06:06.585326 unknown[745]: fetched user config from "qemu"
Jan 20 03:06:06.585835 ignition[745]: fetch-offline: fetch-offline passed
Jan 20 03:06:06.585965 ignition[745]: Ignition finished successfully
Jan 20 03:06:06.594039 systemd-resolved[239]: Detected conflict on linux IN A 10.0.0.6
Jan 20 03:06:06.594047 systemd-resolved[239]: Hostname conflict, changing published hostname from 'linux' to 'linux6'.
Jan 20 03:06:06.611155 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 03:06:06.611828 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 20 03:06:06.612947 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 20 03:06:06.673343 ignition[846]: Ignition 2.22.0
Jan 20 03:06:06.673384 ignition[846]: Stage: kargs
Jan 20 03:06:06.673620 ignition[846]: no configs at "/usr/lib/ignition/base.d"
Jan 20 03:06:06.673638 ignition[846]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 03:06:06.674718 ignition[846]: kargs: kargs passed
Jan 20 03:06:06.674769 ignition[846]: Ignition finished successfully
Jan 20 03:06:06.689272 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 20 03:06:06.701176 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 20 03:06:06.751984 ignition[854]: Ignition 2.22.0
Jan 20 03:06:06.752014 ignition[854]: Stage: disks
Jan 20 03:06:06.752162 ignition[854]: no configs at "/usr/lib/ignition/base.d"
Jan 20 03:06:06.752172 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 03:06:06.752872 ignition[854]: disks: disks passed
Jan 20 03:06:06.752974 ignition[854]: Ignition finished successfully
Jan 20 03:06:06.766122 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 20 03:06:06.772159 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 20 03:06:06.775547 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 20 03:06:06.786330 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 03:06:06.786428 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 03:06:06.797153 systemd[1]: Reached target basic.target - Basic System.
Jan 20 03:06:06.805029 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 20 03:06:06.855790 systemd-fsck[865]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jan 20 03:06:06.864774 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 20 03:06:06.872179 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 20 03:06:07.029960 kernel: EXT4-fs (vda9): mounted filesystem d87587c2-84ee-4a64-a55e-c6773c94f548 r/w with ordered data mode. Quota mode: none.
Jan 20 03:06:07.030723 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 20 03:06:07.031624 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 20 03:06:07.034051 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 03:06:07.035748 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 20 03:06:07.036970 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 20 03:06:07.037023 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 20 03:06:07.037053 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 03:06:07.082750 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (873)
Jan 20 03:06:07.082790 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2
Jan 20 03:06:07.082810 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 03:06:07.064566 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 20 03:06:07.092031 kernel: BTRFS info (device vda6): turning on async discard
Jan 20 03:06:07.092063 kernel: BTRFS info (device vda6): enabling free space tree
Jan 20 03:06:07.091765 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 20 03:06:07.097961 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 03:06:07.148591 initrd-setup-root[897]: cut: /sysroot/etc/passwd: No such file or directory
Jan 20 03:06:07.158277 initrd-setup-root[904]: cut: /sysroot/etc/group: No such file or directory
Jan 20 03:06:07.168652 initrd-setup-root[911]: cut: /sysroot/etc/shadow: No such file or directory
Jan 20 03:06:07.178682 initrd-setup-root[918]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 20 03:06:07.310562 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 20 03:06:07.316475 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 20 03:06:07.332270 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 20 03:06:07.341610 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 20 03:06:07.346502 kernel: BTRFS info (device vda6): last unmount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2
Jan 20 03:06:07.365860 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 20 03:06:07.381369 ignition[986]: INFO : Ignition 2.22.0
Jan 20 03:06:07.381369 ignition[986]: INFO : Stage: mount
Jan 20 03:06:07.390851 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 03:06:07.390851 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 03:06:07.390851 ignition[986]: INFO : mount: mount passed
Jan 20 03:06:07.390851 ignition[986]: INFO : Ignition finished successfully
Jan 20 03:06:07.384054 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 20 03:06:07.386768 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 20 03:06:07.419015 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 03:06:07.446950 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (999) Jan 20 03:06:07.453010 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 03:06:07.453041 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 03:06:07.460390 kernel: BTRFS info (device vda6): turning on async discard Jan 20 03:06:07.460429 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 03:06:07.462581 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 03:06:07.510598 ignition[1016]: INFO : Ignition 2.22.0 Jan 20 03:06:07.510598 ignition[1016]: INFO : Stage: files Jan 20 03:06:07.515184 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 03:06:07.515184 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 03:06:07.515184 ignition[1016]: DEBUG : files: compiled without relabeling support, skipping Jan 20 03:06:07.515184 ignition[1016]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 03:06:07.515184 ignition[1016]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 03:06:07.533648 ignition[1016]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 03:06:07.533648 ignition[1016]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 03:06:07.533648 ignition[1016]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 03:06:07.533648 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 03:06:07.533648 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 20 03:06:07.519561 unknown[1016]: wrote ssh authorized keys file for user: core Jan 20 03:06:07.574071 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 03:06:07.577169 systemd-networkd[840]: eth0: Gained IPv6LL Jan 20 03:06:07.780660 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 03:06:07.785795 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 03:06:07.785795 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 03:06:07.785795 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 03:06:07.785795 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 03:06:07.806060 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 03:06:07.806060 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 03:06:07.806060 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 03:06:07.806060 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jan 20 03:06:07.806060 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 03:06:07.806060 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 03:06:07.806060 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 20 03:06:07.843129 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 20 03:06:07.843129 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 20 03:06:07.843129 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 20 03:06:08.024671 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 03:06:08.567943 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 20 03:06:08.574578 ignition[1016]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 03:06:08.574578 ignition[1016]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 03:06:08.574578 ignition[1016]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 03:06:08.574578 ignition[1016]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 03:06:08.574578 ignition[1016]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 20 03:06:08.574578 ignition[1016]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 03:06:08.574578 ignition[1016]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 03:06:08.574578 ignition[1016]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 20 03:06:08.574578 ignition[1016]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 20 03:06:08.634170 ignition[1016]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 03:06:08.634170 ignition[1016]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 03:06:08.634170 ignition[1016]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 20 03:06:08.634170 ignition[1016]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 20 03:06:08.634170 ignition[1016]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 03:06:08.634170 ignition[1016]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 03:06:08.634170 
ignition[1016]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 03:06:08.634170 ignition[1016]: INFO : files: files passed Jan 20 03:06:08.634170 ignition[1016]: INFO : Ignition finished successfully Jan 20 03:06:08.608415 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 03:06:08.621370 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 03:06:08.655298 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 03:06:08.661467 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 03:06:08.697919 initrd-setup-root-after-ignition[1044]: grep: /sysroot/oem/oem-release: No such file or directory Jan 20 03:06:08.661680 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 03:06:08.704145 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 03:06:08.704145 initrd-setup-root-after-ignition[1046]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 03:06:08.677963 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 03:06:08.715525 initrd-setup-root-after-ignition[1050]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 03:06:08.683613 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 03:06:08.690481 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 03:06:08.785331 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 03:06:08.785532 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 03:06:08.788773 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 03:06:08.797731 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 03:06:08.803031 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 03:06:08.805866 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 03:06:08.848059 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 03:06:08.850106 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 03:06:08.893577 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 03:06:08.894013 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 03:06:08.901601 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 03:06:08.909115 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 03:06:08.909267 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 03:06:08.922035 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 03:06:08.929069 systemd[1]: Stopped target basic.target - Basic System. Jan 20 03:06:08.932530 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 03:06:08.938297 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 03:06:08.945251 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 03:06:08.952753 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
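
The files stage above replayed a user-supplied config: ssh keys for "core", HTTPS-fetched files, a sysext symlink, unit installs, and presets. Schematically, an Ignition v3 config driving those operations looks like the sketch below; field names follow the Ignition v3 spec as best recalled, the ssh key is a placeholder, and unit contents are elided, so treat it as illustrative rather than authoritative:

    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {"users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (example key)"]},
        ]},
        "storage": {
            "files": [{
                "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"},
            }],
            "links": [{
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw",
            }],
        },
        "systemd": {"units": [
            {"name": "prepare-helm.service", "enabled": True, "contents": "[Unit]\n# elided"},
            {"name": "coreos-metadata.service", "enabled": False},
        ]},
    }

    print(json.dumps(config, indent=2))
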
Jan 20 03:06:08.963471 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 03:06:08.972813 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 03:06:08.973144 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 03:06:08.984454 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 03:06:08.984755 systemd[1]: Stopped target swap.target - Swaps. Jan 20 03:06:08.993994 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 03:06:08.994225 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 03:06:09.002453 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 03:06:09.002781 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 03:06:09.008465 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 03:06:09.018473 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 03:06:09.022777 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 03:06:09.023090 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 03:06:09.028388 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 03:06:09.028622 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 03:06:09.041988 systemd[1]: Stopped target paths.target - Path Units. Jan 20 03:06:09.047477 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 03:06:09.050023 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 03:06:09.052680 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 03:06:09.061587 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 03:06:09.063976 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 03:06:09.064133 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 03:06:09.068302 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 03:06:09.068404 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 03:06:09.072789 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 03:06:09.073016 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 03:06:09.080971 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 03:06:09.081154 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 03:06:09.084239 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 03:06:09.096776 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 03:06:09.103260 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 03:06:09.103442 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 03:06:09.111023 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 03:06:09.113624 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 03:06:09.129521 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 03:06:09.129723 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 20 03:06:09.144978 ignition[1072]: INFO : Ignition 2.22.0 Jan 20 03:06:09.144978 ignition[1072]: INFO : Stage: umount Jan 20 03:06:09.144978 ignition[1072]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 03:06:09.144978 ignition[1072]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 03:06:09.144978 ignition[1072]: INFO : umount: umount passed Jan 20 03:06:09.144978 ignition[1072]: INFO : Ignition finished successfully Jan 20 03:06:09.135093 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 03:06:09.135216 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 03:06:09.140379 systemd[1]: Stopped target network.target - Network. Jan 20 03:06:09.144869 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 03:06:09.144972 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 03:06:09.147708 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 03:06:09.147774 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 03:06:09.154214 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 03:06:09.154272 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 03:06:09.159938 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 03:06:09.159991 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 03:06:09.166061 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 03:06:09.171796 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 03:06:09.177645 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 03:06:09.191785 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 03:06:09.191982 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 03:06:09.203491 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 20 03:06:09.203826 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 03:06:09.204170 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 03:06:09.215072 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 20 03:06:09.217179 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 20 03:06:09.219678 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 03:06:09.219754 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 03:06:09.238918 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 03:06:09.245258 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 03:06:09.245340 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 03:06:09.245586 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 03:06:09.245638 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 03:06:09.256509 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 03:06:09.256603 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 03:06:09.258830 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 03:06:09.258927 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 03:06:09.274348 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 20 03:06:09.283496 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 20 03:06:09.283631 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 20 03:06:09.284219 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 03:06:09.284380 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 03:06:09.295226 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 03:06:09.295305 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 03:06:09.316022 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 03:06:09.331191 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 03:06:09.341936 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 03:06:09.342037 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 03:06:09.346247 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 03:06:09.346306 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 03:06:09.353865 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 03:06:09.354020 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 03:06:09.371484 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 03:06:09.371662 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 03:06:09.385506 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 03:06:09.385688 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 03:06:09.401263 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 03:06:09.404613 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 03:06:09.404721 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 03:06:09.422852 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 03:06:09.423007 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 03:06:09.432525 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 20 03:06:09.432660 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 03:06:09.443174 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 03:06:09.443247 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 03:06:09.446470 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 03:06:09.446536 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 03:06:09.464239 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 20 03:06:09.464306 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jan 20 03:06:09.464350 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 20 03:06:09.464418 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 20 03:06:09.464975 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 03:06:09.465106 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Jan 20 03:06:09.485229 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 03:06:09.489180 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 03:06:09.497445 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 03:06:09.505539 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 03:06:09.561752 systemd[1]: Switching root. Jan 20 03:06:09.592738 systemd-journald[203]: Journal stopped Jan 20 03:06:10.995635 systemd-journald[203]: Received SIGTERM from PID 1 (systemd). Jan 20 03:06:10.995718 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 03:06:10.995743 kernel: SELinux: policy capability open_perms=1 Jan 20 03:06:10.995755 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 03:06:10.995766 kernel: SELinux: policy capability always_check_network=0 Jan 20 03:06:10.995780 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 03:06:10.995795 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 03:06:10.995805 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 03:06:10.995815 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 03:06:10.995825 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 03:06:10.995845 kernel: audit: type=1403 audit(1768878369.836:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 20 03:06:10.995865 systemd[1]: Successfully loaded SELinux policy in 81.854ms. Jan 20 03:06:10.995962 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.945ms. Jan 20 03:06:10.995984 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 03:06:10.996004 systemd[1]: Detected virtualization kvm. Jan 20 03:06:10.996018 systemd[1]: Detected architecture x86-64. Jan 20 03:06:10.996033 systemd[1]: Detected first boot. Jan 20 03:06:10.996049 systemd[1]: Initializing machine ID from VM UUID. Jan 20 03:06:10.996071 zram_generator::config[1118]: No configuration found. Jan 20 03:06:10.996089 kernel: Guest personality initialized and is inactive Jan 20 03:06:10.996103 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 20 03:06:10.996119 kernel: Initialized host personality Jan 20 03:06:10.996135 kernel: NET: Registered PF_VSOCK protocol family Jan 20 03:06:10.996147 systemd[1]: Populated /etc with preset unit settings. Jan 20 03:06:10.996159 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 20 03:06:10.996171 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 03:06:10.996182 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 03:06:10.996196 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 03:06:10.996212 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 03:06:10.996223 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 03:06:10.996234 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 03:06:10.996245 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
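
The systemd banner above packs compile-time options into a single +/- prefixed feature list (+PAM +AUDIT +SELINUX -APPARMOR ...). Parsing that convention is a one-liner; this snippet is only an illustration, not part of systemd:

    def parse_features(banner: str) -> dict[str, bool]:
        # "+PAM -APPARMOR ..." -> {"PAM": True, "APPARMOR": False, ...}
        return {tok[1:]: tok[0] == "+" for tok in banner.split() if tok[0] in "+-"}

    flags = parse_features("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT")
    assert flags["SELINUX"] and not flags["APPARMOR"]
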
Jan 20 03:06:10.996255 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 03:06:10.996266 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 03:06:10.996280 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 03:06:10.996293 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 03:06:10.996304 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 03:06:10.996315 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 03:06:10.996325 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 03:06:10.996336 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 03:06:10.996347 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 03:06:10.996358 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 03:06:10.996369 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 03:06:10.996382 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 03:06:10.996393 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 03:06:10.996405 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 03:06:10.996415 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 03:06:10.996430 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 03:06:10.996450 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 03:06:10.996470 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 03:06:10.996486 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 03:06:10.996501 systemd[1]: Reached target slices.target - Slice Units. Jan 20 03:06:10.996523 systemd[1]: Reached target swap.target - Swaps. Jan 20 03:06:10.996541 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 03:06:10.996555 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 03:06:10.996617 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 03:06:10.996634 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 03:06:10.996649 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 03:06:10.996666 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 03:06:10.996684 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 03:06:10.996703 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 03:06:10.996718 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 03:06:10.996729 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 03:06:10.996740 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 03:06:10.996751 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 03:06:10.996762 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Jan 20 03:06:10.996773 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 03:06:10.996784 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 03:06:10.996795 systemd[1]: Reached target machines.target - Containers. Jan 20 03:06:10.996806 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 03:06:10.996843 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 03:06:10.996854 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 03:06:10.996865 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 03:06:10.998362 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 03:06:10.998396 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 03:06:10.998409 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 03:06:10.998466 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 03:06:10.998504 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 03:06:10.998531 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 03:06:10.998547 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 03:06:10.998625 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 03:06:10.998667 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 03:06:10.998717 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 03:06:10.998742 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 03:06:10.998777 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 03:06:10.998801 kernel: fuse: init (API version 7.41) Jan 20 03:06:10.998848 kernel: loop: module loaded Jan 20 03:06:10.998872 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 03:06:10.998986 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 03:06:10.999021 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 03:06:10.999120 systemd-journald[1187]: Collecting audit messages is disabled. Jan 20 03:06:10.999157 systemd-journald[1187]: Journal started Jan 20 03:06:10.999191 systemd-journald[1187]: Runtime Journal (/run/log/journal/b6b3b7eca09149c2b1af8557eb05b708) is 6M, max 48.3M, 42.2M free. Jan 20 03:06:11.009660 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 03:06:11.009699 kernel: ACPI: bus type drm_connector registered Jan 20 03:06:10.513611 systemd[1]: Queued start job for default target multi-user.target. Jan 20 03:06:10.537330 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 03:06:10.538025 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 03:06:11.013973 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jan 20 03:06:11.020821 systemd[1]: verity-setup.service: Deactivated successfully. Jan 20 03:06:11.020940 systemd[1]: Stopped verity-setup.service. Jan 20 03:06:11.028977 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 03:06:11.033969 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 03:06:11.038376 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 03:06:11.042396 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 03:06:11.046193 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 03:06:11.049515 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 03:06:11.053399 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 03:06:11.057483 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 03:06:11.061314 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 03:06:11.065726 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 03:06:11.070324 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 03:06:11.070708 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 03:06:11.075049 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 03:06:11.075304 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 03:06:11.079036 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 03:06:11.079286 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 03:06:11.082432 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 03:06:11.082804 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 03:06:11.086450 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 03:06:11.086814 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 03:06:11.090083 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 03:06:11.090397 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 03:06:11.093815 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 03:06:11.097407 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 03:06:11.101409 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 03:06:11.105143 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 20 03:06:11.122335 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 03:06:11.126978 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 03:06:11.131266 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 03:06:11.134432 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 03:06:11.134479 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 03:06:11.136097 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 20 03:06:11.154261 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jan 20 03:06:11.157523 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 03:06:11.159403 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 03:06:11.164249 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 03:06:11.167346 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 03:06:11.169220 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 03:06:11.172410 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 03:06:11.174535 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 03:06:11.181076 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 03:06:11.186197 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 03:06:11.189343 systemd-journald[1187]: Time spent on flushing to /var/log/journal/b6b3b7eca09149c2b1af8557eb05b708 is 19.176ms for 979 entries. Jan 20 03:06:11.189343 systemd-journald[1187]: System Journal (/var/log/journal/b6b3b7eca09149c2b1af8557eb05b708) is 8M, max 195.6M, 187.6M free. Jan 20 03:06:11.232198 systemd-journald[1187]: Received client request to flush runtime journal. Jan 20 03:06:11.232262 kernel: loop0: detected capacity change from 0 to 128560 Jan 20 03:06:11.199138 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 03:06:11.203752 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 03:06:11.207841 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 03:06:11.213764 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 03:06:11.221795 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 03:06:11.229300 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 20 03:06:11.238859 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 03:06:11.245621 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 03:06:11.261978 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 03:06:11.266448 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 20 03:06:11.267979 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Jan 20 03:06:11.267999 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Jan 20 03:06:11.273444 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 03:06:11.282207 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 03:06:11.290944 kernel: loop1: detected capacity change from 0 to 219144 Jan 20 03:06:11.329986 kernel: loop2: detected capacity change from 0 to 110984 Jan 20 03:06:11.338755 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 03:06:11.343814 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 03:06:11.376379 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. 
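
journald reports 19.176 ms spent flushing 979 entries to the persistent journal, i.e. roughly 19.6 µs per entry. As a quick check:

    ms_total, entries = 19.176, 979
    print(f"{ms_total / entries * 1000:.2f} us per flushed entry")  # ~19.59
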
Jan 20 03:06:11.376427 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Jan 20 03:06:11.377958 kernel: loop3: detected capacity change from 0 to 128560 Jan 20 03:06:11.383370 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 03:06:11.400997 kernel: loop4: detected capacity change from 0 to 219144 Jan 20 03:06:11.419936 kernel: loop5: detected capacity change from 0 to 110984 Jan 20 03:06:11.434834 (sd-merge)[1263]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 20 03:06:11.435673 (sd-merge)[1263]: Merged extensions into '/usr'. Jan 20 03:06:11.441284 systemd[1]: Reload requested from client PID 1237 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 03:06:11.441440 systemd[1]: Reloading... Jan 20 03:06:11.518996 zram_generator::config[1290]: No configuration found. Jan 20 03:06:11.634521 ldconfig[1232]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 03:06:11.733494 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 03:06:11.734075 systemd[1]: Reloading finished in 291 ms. Jan 20 03:06:11.765819 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 03:06:11.769354 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 03:06:11.772967 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 03:06:11.804541 systemd[1]: Starting ensure-sysext.service... Jan 20 03:06:11.807782 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 03:06:11.813853 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 03:06:11.840980 systemd[1]: Reload requested from client PID 1328 ('systemctl') (unit ensure-sysext.service)... Jan 20 03:06:11.841002 systemd[1]: Reloading... Jan 20 03:06:11.845488 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 03:06:11.846088 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 03:06:11.846543 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 03:06:11.847043 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 03:06:11.848508 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 03:06:11.849093 systemd-tmpfiles[1329]: ACLs are not supported, ignoring. Jan 20 03:06:11.849189 systemd-tmpfiles[1329]: ACLs are not supported, ignoring. Jan 20 03:06:11.854810 systemd-tmpfiles[1329]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 03:06:11.854846 systemd-tmpfiles[1329]: Skipping /boot Jan 20 03:06:11.862185 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Jan 20 03:06:11.871766 systemd-tmpfiles[1329]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 03:06:11.871806 systemd-tmpfiles[1329]: Skipping /boot Jan 20 03:06:11.914010 zram_generator::config[1357]: No configuration found. 
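
The (sd-merge) lines show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' images onto /usr, after which PID 1 reloads its unit set. A sketch of just the discovery half, assuming the standard sysext search directories; the actual merge is an overlayfs mount performed by systemd and is not reproduced here:

    from pathlib import Path

    SEARCH_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

    def candidate_extensions() -> list[str]:
        # List the *.raw images and directory trees sysext would consider.
        found: list[str] = []
        for d in SEARCH_DIRS:
            p = Path(d)
            if p.is_dir():
                found.extend(sorted(e.name for e in p.iterdir()))
        return found

    print(candidate_extensions())  # e.g. ['kubernetes.raw'], per the symlink written earlier
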
Jan 20 03:06:12.066978 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 03:06:12.090936 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 20 03:06:12.098977 kernel: ACPI: button: Power Button [PWRF] Jan 20 03:06:12.116941 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 20 03:06:12.121368 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 20 03:06:12.191865 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 03:06:12.196158 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 20 03:06:12.196222 systemd[1]: Reloading finished in 354 ms. Jan 20 03:06:12.206375 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 03:06:12.218836 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 03:06:12.290154 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 03:06:12.293385 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 03:06:12.299034 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 03:06:12.302524 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 03:06:12.306232 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 03:06:12.318355 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 03:06:12.323334 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 03:06:12.327762 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 03:06:12.331441 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 03:06:12.332991 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 03:06:12.336531 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 03:06:12.351270 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 03:06:12.359287 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 03:06:12.368761 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 03:06:12.374663 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 03:06:12.381259 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 03:06:12.381402 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 03:06:12.385358 systemd[1]: Finished ensure-sysext.service. Jan 20 03:06:12.387251 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 20 03:06:12.395573 kernel: kvm_amd: TSC scaling supported Jan 20 03:06:12.395692 kernel: kvm_amd: Nested Virtualization enabled Jan 20 03:06:12.395730 kernel: kvm_amd: Nested Paging enabled Jan 20 03:06:12.399065 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 20 03:06:12.399103 kernel: kvm_amd: PMU virtualization is disabled Jan 20 03:06:12.400489 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 03:06:12.401801 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 03:06:12.402227 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 03:06:12.403560 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 03:06:12.404249 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 03:06:12.411211 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 03:06:12.411518 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 03:06:12.421460 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 03:06:12.433445 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 03:06:12.461731 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 03:06:12.462238 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 03:06:12.470142 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 03:06:12.473810 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 03:06:12.478653 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 03:06:12.480623 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 03:06:12.481218 augenrules[1490]: No rules Jan 20 03:06:12.481504 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 03:06:12.482240 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 03:06:12.482518 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 03:06:12.491240 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 03:06:12.513976 kernel: EDAC MC: Ver: 3.0.0 Jan 20 03:06:12.517241 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 03:06:12.550129 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 03:06:12.635024 systemd-resolved[1462]: Positive Trust Anchors: Jan 20 03:06:12.635053 systemd-resolved[1462]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 03:06:12.635079 systemd-resolved[1462]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 03:06:12.637478 systemd-networkd[1460]: lo: Link UP Jan 20 03:06:12.637823 systemd-networkd[1460]: lo: Gained carrier Jan 20 03:06:12.639003 systemd-resolved[1462]: Defaulting to hostname 'linux'. Jan 20 03:06:12.640261 systemd-networkd[1460]: Enumeration completed Jan 20 03:06:12.641300 systemd-networkd[1460]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 03:06:12.641404 systemd-networkd[1460]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 03:06:12.642264 systemd-networkd[1460]: eth0: Link UP Jan 20 03:06:12.642789 systemd-networkd[1460]: eth0: Gained carrier Jan 20 03:06:12.642957 systemd-networkd[1460]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 03:06:12.665948 systemd-networkd[1460]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 03:06:12.666692 systemd-timesyncd[1486]: Network configuration changed, trying to establish connection. Jan 20 03:06:12.668223 systemd-timesyncd[1486]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 20 03:06:12.668296 systemd-timesyncd[1486]: Initial clock synchronization to Tue 2026-01-20 03:06:12.730215 UTC. Jan 20 03:06:12.673021 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 03:06:12.678323 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 03:06:12.683662 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 20 03:06:12.690165 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 03:06:12.696966 systemd[1]: Reached target network.target - Network. Jan 20 03:06:12.701369 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 03:06:12.707261 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 03:06:12.712314 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 03:06:12.718193 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 03:06:12.724161 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 20 03:06:12.729510 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 03:06:12.735351 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 03:06:12.735434 systemd[1]: Reached target paths.target - Path Units. Jan 20 03:06:12.740112 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 03:06:12.745178 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
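
The positive trust anchor logged above is the DNS root's DS record: key tag 20326 (the 2017 root KSK), algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256), followed by a 32-byte digest. Splitting it into those fields, purely as an illustration of the record layout:

    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, _, _, key_tag, algorithm, digest_type, digest = ds.split()
    assert (key_tag, algorithm, digest_type) == ("20326", "8", "2")
    print(f"owner={owner} key_tag={key_tag} alg={algorithm} digest_type={digest_type}")
    print(f"digest ({len(digest) // 2} bytes): {digest}")
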
Jan 20 03:06:12.750523 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 03:06:12.756426 systemd[1]: Reached target timers.target - Timer Units. Jan 20 03:06:12.761990 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 03:06:12.769781 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 03:06:12.777563 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 20 03:06:12.783743 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 20 03:06:12.789788 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 20 03:06:12.798842 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 03:06:12.803828 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 20 03:06:12.811029 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 03:06:12.818198 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 03:06:12.824442 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 03:06:12.830341 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 03:06:12.834983 systemd[1]: Reached target basic.target - Basic System. Jan 20 03:06:12.839080 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 03:06:12.839115 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 03:06:12.840707 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 03:06:12.846807 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 03:06:12.853643 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 03:06:12.865206 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 03:06:12.870757 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 03:06:12.874733 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 03:06:12.877155 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 20 03:06:12.882662 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 03:06:12.892017 jq[1520]: false Jan 20 03:06:12.892305 extend-filesystems[1521]: Found /dev/vda6 Jan 20 03:06:12.888343 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 03:06:12.905403 extend-filesystems[1521]: Found /dev/vda9 Jan 20 03:06:12.905403 extend-filesystems[1521]: Checking size of /dev/vda9 Jan 20 03:06:12.898042 oslogin_cache_refresh[1522]: Refreshing passwd entry cache Jan 20 03:06:12.890005 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 03:06:12.917538 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Refreshing passwd entry cache Jan 20 03:06:12.917538 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Failure getting users, quitting Jan 20 03:06:12.917538 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jan 20 03:06:12.917538 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Refreshing group entry cache Jan 20 03:06:12.916746 oslogin_cache_refresh[1522]: Failure getting users, quitting Jan 20 03:06:12.902957 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 03:06:12.916774 oslogin_cache_refresh[1522]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 03:06:12.916848 oslogin_cache_refresh[1522]: Refreshing group entry cache Jan 20 03:06:12.924404 extend-filesystems[1521]: Resized partition /dev/vda9 Jan 20 03:06:12.926156 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 03:06:12.929932 extend-filesystems[1543]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 03:06:12.931340 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 03:06:12.932233 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 03:06:12.933293 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 03:06:12.935389 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Failure getting groups, quitting Jan 20 03:06:12.935389 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 03:06:12.933785 oslogin_cache_refresh[1522]: Failure getting groups, quitting Jan 20 03:06:12.933803 oslogin_cache_refresh[1522]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 03:06:12.939986 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 20 03:06:12.939814 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 03:06:12.948464 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 20 03:06:12.956429 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 03:06:12.962070 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 03:06:12.962672 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 03:06:12.963396 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 20 03:06:12.964019 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 20 03:06:12.969773 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 03:06:12.971326 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 03:06:12.977576 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 03:06:12.978136 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 20 03:06:12.988142 update_engine[1544]: I20260120 03:06:12.988009 1544 main.cc:92] Flatcar Update Engine starting Jan 20 03:06:12.997499 jq[1546]: true Jan 20 03:06:13.037254 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 20 03:06:13.017430 (ntainerd)[1558]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 20 03:06:13.041569 extend-filesystems[1543]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 03:06:13.041569 extend-filesystems[1543]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 03:06:13.041569 extend-filesystems[1543]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 20 03:06:13.053269 extend-filesystems[1521]: Resized filesystem in /dev/vda9 Jan 20 03:06:13.049256 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 03:06:13.049587 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 03:06:13.055967 jq[1559]: true Jan 20 03:06:13.064979 tar[1550]: linux-amd64/LICENSE Jan 20 03:06:13.064979 tar[1550]: linux-amd64/helm Jan 20 03:06:13.067973 dbus-daemon[1518]: [system] SELinux support is enabled Jan 20 03:06:13.068827 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 03:06:13.072201 update_engine[1544]: I20260120 03:06:13.072108 1544 update_check_scheduler.cc:74] Next update check in 11m4s Jan 20 03:06:13.076781 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 03:06:13.076830 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 03:06:13.080650 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 03:06:13.080688 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 03:06:13.081010 systemd-logind[1542]: Watching system buttons on /dev/input/event2 (Power Button) Jan 20 03:06:13.081040 systemd-logind[1542]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 03:06:13.081362 systemd-logind[1542]: New seat seat0. Jan 20 03:06:13.084272 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 03:06:13.090789 systemd[1]: Started update-engine.service - Update Engine. Jan 20 03:06:13.097144 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 03:06:13.142405 bash[1584]: Updated "/home/core/.ssh/authorized_keys" Jan 20 03:06:13.144201 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 03:06:13.149863 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
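extend-filesystems grew the root filesystem on /dev/vda9 from 553472 to 1864699 4k blocks while it was mounted; ext4 supports this online grow, and resize2fs with no size argument expands the filesystem to fill its partition. A hedged sketch of driving the same operation from Go (device path taken from the log; error handling simplified):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// With no explicit size argument, resize2fs grows the filesystem
	// to fill the underlying partition; ext4 allows this while mounted.
	out, err := exec.Command("resize2fs", "/dev/vda9").CombinedOutput()
	if err != nil {
		log.Fatalf("resize2fs failed: %v\n%s", err, out)
	}
	log.Printf("resize2fs output:\n%s", out)
}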
Jan 20 03:06:13.154293 locksmithd[1574]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 03:06:13.235438 containerd[1558]: time="2026-01-20T03:06:13Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 03:06:13.238985 containerd[1558]: time="2026-01-20T03:06:13.237171458Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 20 03:06:13.254815 containerd[1558]: time="2026-01-20T03:06:13.254751953Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.157µs" Jan 20 03:06:13.255419 containerd[1558]: time="2026-01-20T03:06:13.255382535Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 03:06:13.255647 containerd[1558]: time="2026-01-20T03:06:13.255618650Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 03:06:13.256031 containerd[1558]: time="2026-01-20T03:06:13.256001213Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 03:06:13.257044 containerd[1558]: time="2026-01-20T03:06:13.256985964Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 03:06:13.257090 containerd[1558]: time="2026-01-20T03:06:13.257058274Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 03:06:13.257208 containerd[1558]: time="2026-01-20T03:06:13.257154362Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 03:06:13.257208 containerd[1558]: time="2026-01-20T03:06:13.257196404Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 03:06:13.257666 containerd[1558]: time="2026-01-20T03:06:13.257612457Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 03:06:13.257666 containerd[1558]: time="2026-01-20T03:06:13.257657718Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 03:06:13.257729 containerd[1558]: time="2026-01-20T03:06:13.257677175Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 03:06:13.257729 containerd[1558]: time="2026-01-20T03:06:13.257689099Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 03:06:13.257867 containerd[1558]: time="2026-01-20T03:06:13.257799362Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 03:06:13.258310 containerd[1558]: time="2026-01-20T03:06:13.258186035Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 03:06:13.258310 containerd[1558]: time="2026-01-20T03:06:13.258250632Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 03:06:13.258310 containerd[1558]: time="2026-01-20T03:06:13.258264393Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 03:06:13.258310 containerd[1558]: time="2026-01-20T03:06:13.258304507Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 03:06:13.258725 containerd[1558]: time="2026-01-20T03:06:13.258694829Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 03:06:13.258989 containerd[1558]: time="2026-01-20T03:06:13.258968311Z" level=info msg="metadata content store policy set" policy=shared Jan 20 03:06:13.266186 containerd[1558]: time="2026-01-20T03:06:13.265930554Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 03:06:13.266186 containerd[1558]: time="2026-01-20T03:06:13.266013799Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 03:06:13.266186 containerd[1558]: time="2026-01-20T03:06:13.266030106Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 03:06:13.266186 containerd[1558]: time="2026-01-20T03:06:13.266041060Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 03:06:13.266186 containerd[1558]: time="2026-01-20T03:06:13.266053772Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 03:06:13.266186 containerd[1558]: time="2026-01-20T03:06:13.266064080Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 03:06:13.266186 containerd[1558]: time="2026-01-20T03:06:13.266086180Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 03:06:13.266186 containerd[1558]: time="2026-01-20T03:06:13.266098933Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 03:06:13.266186 containerd[1558]: time="2026-01-20T03:06:13.266109736Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 03:06:13.266186 containerd[1558]: time="2026-01-20T03:06:13.266118328Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 03:06:13.266186 containerd[1558]: time="2026-01-20T03:06:13.266126628Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 03:06:13.266186 containerd[1558]: time="2026-01-20T03:06:13.266138299Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 20 03:06:13.266966 containerd[1558]: time="2026-01-20T03:06:13.266693522Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 03:06:13.266966 containerd[1558]: time="2026-01-20T03:06:13.266735270Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 03:06:13.266966 containerd[1558]: time="2026-01-20T03:06:13.266764005Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 
03:06:13.266966 containerd[1558]: time="2026-01-20T03:06:13.266786540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 03:06:13.266966 containerd[1558]: time="2026-01-20T03:06:13.266801150Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 03:06:13.266966 containerd[1558]: time="2026-01-20T03:06:13.266816739Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 03:06:13.266966 containerd[1558]: time="2026-01-20T03:06:13.266833105Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 03:06:13.266966 containerd[1558]: time="2026-01-20T03:06:13.266846543Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 03:06:13.266966 containerd[1558]: time="2026-01-20T03:06:13.266859386Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 03:06:13.266966 containerd[1558]: time="2026-01-20T03:06:13.266872420Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 03:06:13.267248 containerd[1558]: time="2026-01-20T03:06:13.267230792Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 03:06:13.267374 containerd[1558]: time="2026-01-20T03:06:13.267354837Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 03:06:13.267454 containerd[1558]: time="2026-01-20T03:06:13.267438184Z" level=info msg="Start snapshots syncer" Jan 20 03:06:13.267778 containerd[1558]: time="2026-01-20T03:06:13.267754837Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 03:06:13.268262 containerd[1558]: time="2026-01-20T03:06:13.268211833Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 03:06:13.268484 containerd[1558]: time="2026-01-20T03:06:13.268461547Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 03:06:13.270365 containerd[1558]: time="2026-01-20T03:06:13.270339603Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 03:06:13.270726 containerd[1558]: time="2026-01-20T03:06:13.270703580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 03:06:13.270837 containerd[1558]: time="2026-01-20T03:06:13.270813893Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 03:06:13.270970 containerd[1558]: time="2026-01-20T03:06:13.270951902Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 03:06:13.271045 containerd[1558]: time="2026-01-20T03:06:13.271026847Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 03:06:13.271132 containerd[1558]: time="2026-01-20T03:06:13.271112828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 03:06:13.271202 containerd[1558]: time="2026-01-20T03:06:13.271185381Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 03:06:13.271268 containerd[1558]: time="2026-01-20T03:06:13.271252674Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 03:06:13.271387 containerd[1558]: time="2026-01-20T03:06:13.271366784Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 03:06:13.271463 containerd[1558]: 
time="2026-01-20T03:06:13.271444758Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 03:06:13.271531 containerd[1558]: time="2026-01-20T03:06:13.271514777Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 03:06:13.271665 containerd[1558]: time="2026-01-20T03:06:13.271643689Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 03:06:13.271870 containerd[1558]: time="2026-01-20T03:06:13.271846748Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 03:06:13.272013 containerd[1558]: time="2026-01-20T03:06:13.271992661Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 03:06:13.272086 containerd[1558]: time="2026-01-20T03:06:13.272066951Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 03:06:13.272169 containerd[1558]: time="2026-01-20T03:06:13.272149923Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 03:06:13.272255 containerd[1558]: time="2026-01-20T03:06:13.272236086Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 03:06:13.272334 containerd[1558]: time="2026-01-20T03:06:13.272316797Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 03:06:13.272418 containerd[1558]: time="2026-01-20T03:06:13.272401648Z" level=info msg="runtime interface created" Jan 20 03:06:13.272474 containerd[1558]: time="2026-01-20T03:06:13.272460449Z" level=info msg="created NRI interface" Jan 20 03:06:13.272541 containerd[1558]: time="2026-01-20T03:06:13.272523178Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 03:06:13.272630 containerd[1558]: time="2026-01-20T03:06:13.272610350Z" level=info msg="Connect containerd service" Jan 20 03:06:13.272763 containerd[1558]: time="2026-01-20T03:06:13.272744976Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 03:06:13.274417 containerd[1558]: time="2026-01-20T03:06:13.274387958Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 03:06:13.357132 sshd_keygen[1547]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 03:06:13.365815 containerd[1558]: time="2026-01-20T03:06:13.365779351Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 03:06:13.366223 containerd[1558]: time="2026-01-20T03:06:13.366204179Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 20 03:06:13.366790 containerd[1558]: time="2026-01-20T03:06:13.365961941Z" level=info msg="Start subscribing containerd event" Jan 20 03:06:13.367568 containerd[1558]: time="2026-01-20T03:06:13.367498941Z" level=info msg="Start recovering state" Jan 20 03:06:13.367754 containerd[1558]: time="2026-01-20T03:06:13.367691924Z" level=info msg="Start event monitor" Jan 20 03:06:13.367754 containerd[1558]: time="2026-01-20T03:06:13.367736924Z" level=info msg="Start cni network conf syncer for default" Jan 20 03:06:13.367754 containerd[1558]: time="2026-01-20T03:06:13.367748120Z" level=info msg="Start streaming server" Jan 20 03:06:13.367847 containerd[1558]: time="2026-01-20T03:06:13.367772454Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 03:06:13.367847 containerd[1558]: time="2026-01-20T03:06:13.367783438Z" level=info msg="runtime interface starting up..." Jan 20 03:06:13.367847 containerd[1558]: time="2026-01-20T03:06:13.367792373Z" level=info msg="starting plugins..." Jan 20 03:06:13.367847 containerd[1558]: time="2026-01-20T03:06:13.367817614Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 03:06:13.368138 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 03:06:13.372679 containerd[1558]: time="2026-01-20T03:06:13.369050842Z" level=info msg="containerd successfully booted in 0.134222s" Jan 20 03:06:13.394057 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 03:06:13.399834 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 03:06:13.427611 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 03:06:13.428067 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 03:06:13.434326 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 03:06:13.440215 tar[1550]: linux-amd64/README.md Jan 20 03:06:13.457837 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 03:06:13.462767 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 03:06:13.469461 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 03:06:13.474268 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 03:06:13.474607 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 03:06:13.787276 systemd-networkd[1460]: eth0: Gained IPv6LL Jan 20 03:06:13.801082 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 03:06:13.807404 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 03:06:13.814037 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 20 03:06:13.820552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 03:06:13.837463 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 03:06:13.874875 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 03:06:13.886853 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 03:06:13.887360 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 03:06:13.892647 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 03:06:14.739452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 03:06:14.743677 systemd[1]: Reached target multi-user.target - Multi-User System. 
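The CNI error logged during containerd startup ("no network config found in /etc/cni/net.d") is expected at this stage: the CRI plugin scans that directory for *.conf/*.conflist files and none are installed until a network plugin is deployed later by Kubernetes tooling. A stdlib-only sketch of the same condition check (containerd itself uses the libcni loader; this is just an illustration):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// containerd's CRI plugin looks for network configs here.
	dir := "/etc/cni/net.d"
	var found []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		m, _ := filepath.Glob(filepath.Join(dir, pattern))
		found = append(found, m...)
	}
	if len(found) == 0 {
		fmt.Fprintln(os.Stderr, "cni config load failed: no network config found in", dir)
		os.Exit(1)
	}
	fmt.Println("cni configs:", found)
}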
Jan 20 03:06:14.747181 systemd[1]: Startup finished in 3.490s (kernel) + 7.139s (initrd) + 4.987s (userspace) = 15.617s. Jan 20 03:06:14.838442 (kubelet)[1653]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 03:06:15.273640 kubelet[1653]: E0120 03:06:15.273541 1653 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 03:06:15.277388 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 03:06:15.277724 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 03:06:15.278382 systemd[1]: kubelet.service: Consumed 988ms CPU time, 257M memory peak. Jan 20 03:06:16.673813 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 03:06:16.675267 systemd[1]: Started sshd@0-10.0.0.6:22-10.0.0.1:38946.service - OpenSSH per-connection server daemon (10.0.0.1:38946). Jan 20 03:06:16.759130 sshd[1666]: Accepted publickey for core from 10.0.0.1 port 38946 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:06:16.761337 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:06:16.768510 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 03:06:16.769720 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 03:06:16.776874 systemd-logind[1542]: New session 1 of user core. Jan 20 03:06:16.792174 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 03:06:16.796139 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 03:06:16.814374 (systemd)[1671]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 03:06:16.818471 systemd-logind[1542]: New session c1 of user core. Jan 20 03:06:16.971351 systemd[1671]: Queued start job for default target default.target. Jan 20 03:06:16.992455 systemd[1671]: Created slice app.slice - User Application Slice. Jan 20 03:06:16.992514 systemd[1671]: Reached target paths.target - Paths. Jan 20 03:06:16.992605 systemd[1671]: Reached target timers.target - Timers. Jan 20 03:06:16.994273 systemd[1671]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 03:06:17.006537 systemd[1671]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 03:06:17.006700 systemd[1671]: Reached target sockets.target - Sockets. Jan 20 03:06:17.006781 systemd[1671]: Reached target basic.target - Basic System. Jan 20 03:06:17.006829 systemd[1671]: Reached target default.target - Main User Target. Jan 20 03:06:17.006920 systemd[1671]: Startup finished in 177ms. Jan 20 03:06:17.006998 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 03:06:17.008701 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 03:06:17.073805 systemd[1]: Started sshd@1-10.0.0.6:22-10.0.0.1:38954.service - OpenSSH per-connection server daemon (10.0.0.1:38954). 
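The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style node that file is written by kubeadm init/join, so until then the unit fails and systemd keeps rescheduling it (the same failure recurs below). A tiny sketch of the pre-flight condition the error corresponds to, as an illustration rather than the kubelet's actual code:

package main

import (
	"errors"
	"io/fs"
	"log"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := os.Stat(path); errors.Is(err, fs.ErrNotExist) {
		// kubeadm init/join writes this file; until then the kubelet
		// fails exactly as in the journal entries above.
		log.Fatalf("failed to load kubelet config file, path: %s, error: %v", path, err)
	}
	log.Printf("%s present; kubelet could proceed", path)
}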
Jan 20 03:06:17.116911 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 38954 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:06:17.118365 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:06:17.123831 systemd-logind[1542]: New session 2 of user core. Jan 20 03:06:17.131066 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 20 03:06:17.187188 sshd[1685]: Connection closed by 10.0.0.1 port 38954 Jan 20 03:06:17.187694 sshd-session[1682]: pam_unix(sshd:session): session closed for user core Jan 20 03:06:17.201289 systemd[1]: sshd@1-10.0.0.6:22-10.0.0.1:38954.service: Deactivated successfully. Jan 20 03:06:17.204097 systemd[1]: session-2.scope: Deactivated successfully. Jan 20 03:06:17.205416 systemd-logind[1542]: Session 2 logged out. Waiting for processes to exit. Jan 20 03:06:17.209498 systemd[1]: Started sshd@2-10.0.0.6:22-10.0.0.1:38964.service - OpenSSH per-connection server daemon (10.0.0.1:38964). Jan 20 03:06:17.210810 systemd-logind[1542]: Removed session 2. Jan 20 03:06:17.278535 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 38964 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:06:17.280473 sshd-session[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:06:17.286751 systemd-logind[1542]: New session 3 of user core. Jan 20 03:06:17.302147 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 03:06:17.353270 sshd[1694]: Connection closed by 10.0.0.1 port 38964 Jan 20 03:06:17.353953 sshd-session[1691]: pam_unix(sshd:session): session closed for user core Jan 20 03:06:17.371546 systemd[1]: sshd@2-10.0.0.6:22-10.0.0.1:38964.service: Deactivated successfully. Jan 20 03:06:17.373514 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 03:06:17.374679 systemd-logind[1542]: Session 3 logged out. Waiting for processes to exit. Jan 20 03:06:17.378338 systemd[1]: Started sshd@3-10.0.0.6:22-10.0.0.1:38974.service - OpenSSH per-connection server daemon (10.0.0.1:38974). Jan 20 03:06:17.379178 systemd-logind[1542]: Removed session 3. Jan 20 03:06:17.446370 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 38974 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:06:17.447867 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:06:17.453702 systemd-logind[1542]: New session 4 of user core. Jan 20 03:06:17.464062 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 03:06:17.521819 sshd[1703]: Connection closed by 10.0.0.1 port 38974 Jan 20 03:06:17.522218 sshd-session[1700]: pam_unix(sshd:session): session closed for user core Jan 20 03:06:17.541412 systemd[1]: sshd@3-10.0.0.6:22-10.0.0.1:38974.service: Deactivated successfully. Jan 20 03:06:17.544298 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 03:06:17.545685 systemd-logind[1542]: Session 4 logged out. Waiting for processes to exit. Jan 20 03:06:17.548523 systemd[1]: Started sshd@4-10.0.0.6:22-10.0.0.1:38982.service - OpenSSH per-connection server daemon (10.0.0.1:38982). Jan 20 03:06:17.550055 systemd-logind[1542]: Removed session 4. 
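Each "Accepted publickey" line above logs the key's fingerprint (SHA256:rs3S8...), which is the unpadded base64 SHA-256 of the wire-format public key. With golang.org/x/crypto/ssh the same string can be computed from the authorized_keys entry that Ignition installed earlier; a sketch:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Read the authorized_keys file updated earlier in the boot
	// (path taken from the update-ssh-keys log line above).
	data, err := os.ReadFile("/home/core/.ssh/authorized_keys")
	if err != nil {
		log.Fatal(err)
	}
	pub, _, _, _, err := ssh.ParseAuthorizedKey(data)
	if err != nil {
		log.Fatal(err)
	}
	// FingerprintSHA256 yields the same "SHA256:..." form sshd logs.
	fmt.Println(ssh.FingerprintSHA256(pub))
}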
Jan 20 03:06:17.605579 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 38982 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:06:17.607644 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:06:17.614126 systemd-logind[1542]: New session 5 of user core. Jan 20 03:06:17.625115 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 03:06:17.690070 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 03:06:17.690612 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 03:06:17.715157 sudo[1713]: pam_unix(sudo:session): session closed for user root Jan 20 03:06:17.717198 sshd[1712]: Connection closed by 10.0.0.1 port 38982 Jan 20 03:06:17.717719 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Jan 20 03:06:17.727365 systemd[1]: sshd@4-10.0.0.6:22-10.0.0.1:38982.service: Deactivated successfully. Jan 20 03:06:17.729126 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 03:06:17.729994 systemd-logind[1542]: Session 5 logged out. Waiting for processes to exit. Jan 20 03:06:17.732407 systemd[1]: Started sshd@5-10.0.0.6:22-10.0.0.1:38984.service - OpenSSH per-connection server daemon (10.0.0.1:38984). Jan 20 03:06:17.733810 systemd-logind[1542]: Removed session 5. Jan 20 03:06:17.807115 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 38984 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:06:17.809009 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:06:17.814494 systemd-logind[1542]: New session 6 of user core. Jan 20 03:06:17.824112 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 03:06:17.882990 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 03:06:17.883352 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 03:06:17.892346 sudo[1724]: pam_unix(sudo:session): session closed for user root Jan 20 03:06:17.900024 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 20 03:06:17.900508 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 03:06:17.912400 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 03:06:17.967121 augenrules[1746]: No rules Jan 20 03:06:17.968744 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 03:06:17.969124 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 03:06:17.970459 sudo[1723]: pam_unix(sudo:session): session closed for user root Jan 20 03:06:17.972454 sshd[1722]: Connection closed by 10.0.0.1 port 38984 Jan 20 03:06:17.972849 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Jan 20 03:06:17.991254 systemd[1]: sshd@5-10.0.0.6:22-10.0.0.1:38984.service: Deactivated successfully. Jan 20 03:06:17.993050 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 03:06:17.994089 systemd-logind[1542]: Session 6 logged out. Waiting for processes to exit. Jan 20 03:06:17.996223 systemd[1]: Started sshd@6-10.0.0.6:22-10.0.0.1:38992.service - OpenSSH per-connection server daemon (10.0.0.1:38992). Jan 20 03:06:17.997137 systemd-logind[1542]: Removed session 6. 
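The augenrules "No rules" message above follows directly from the sudo rm of 80-selinux.rules and 99-default.rules: augenrules assembles the audit rule set by merging the *.rules fragments under /etc/audit/rules.d, and with the directory emptied, audit-rules.service loads an empty set. A small sketch of that directory scan (illustrative only, not augenrules itself):

package main

import (
	"fmt"
	"path/filepath"
	"sort"
)

func main() {
	// augenrules merges /etc/audit/rules.d/*.rules in sorted order;
	// after the two fragments were removed above, this glob is empty.
	rules, _ := filepath.Glob("/etc/audit/rules.d/*.rules")
	sort.Strings(rules)
	if len(rules) == 0 {
		fmt.Println("augenrules: No rules")
		return
	}
	fmt.Println("rule fragments:", rules)
}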
Jan 20 03:06:18.057425 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 38992 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:06:18.058859 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:06:18.064204 systemd-logind[1542]: New session 7 of user core. Jan 20 03:06:18.076101 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 03:06:18.132178 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 03:06:18.132523 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 03:06:18.473358 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 03:06:18.495309 (dockerd)[1780]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 03:06:18.752716 dockerd[1780]: time="2026-01-20T03:06:18.752504354Z" level=info msg="Starting up" Jan 20 03:06:18.753922 dockerd[1780]: time="2026-01-20T03:06:18.753835455Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 03:06:18.769724 dockerd[1780]: time="2026-01-20T03:06:18.769652994Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 03:06:18.975649 dockerd[1780]: time="2026-01-20T03:06:18.975581946Z" level=info msg="Loading containers: start." Jan 20 03:06:18.987965 kernel: Initializing XFRM netlink socket Jan 20 03:06:19.332167 systemd-networkd[1460]: docker0: Link UP Jan 20 03:06:19.339722 dockerd[1780]: time="2026-01-20T03:06:19.339642132Z" level=info msg="Loading containers: done." Jan 20 03:06:19.356541 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck960437586-merged.mount: Deactivated successfully. Jan 20 03:06:19.357672 dockerd[1780]: time="2026-01-20T03:06:19.357579141Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 03:06:19.357758 dockerd[1780]: time="2026-01-20T03:06:19.357697602Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 03:06:19.357849 dockerd[1780]: time="2026-01-20T03:06:19.357805779Z" level=info msg="Initializing buildkit" Jan 20 03:06:19.396417 dockerd[1780]: time="2026-01-20T03:06:19.396275747Z" level=info msg="Completed buildkit initialization" Jan 20 03:06:19.407992 dockerd[1780]: time="2026-01-20T03:06:19.407898329Z" level=info msg="Daemon has completed initialization" Jan 20 03:06:19.408208 dockerd[1780]: time="2026-01-20T03:06:19.408123560Z" level=info msg="API listen on /run/docker.sock" Jan 20 03:06:19.408282 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 03:06:20.167198 containerd[1558]: time="2026-01-20T03:06:20.167118283Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 20 03:06:20.657814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3236248371.mount: Deactivated successfully. 
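Once dockerd logs "API listen on /run/docker.sock", the daemon is usable over that Unix socket. A hedged sketch of checking it from Go with the official client library (API version negotiation avoids pinning a specific API level):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Ping hits the /_ping endpoint on /var/run/docker.sock by default.
	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("docker API version:", ping.APIVersion)
}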
Jan 20 03:06:21.557932 containerd[1558]: time="2026-01-20T03:06:21.557834658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:21.558479 containerd[1558]: time="2026-01-20T03:06:21.558445969Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073" Jan 20 03:06:21.559538 containerd[1558]: time="2026-01-20T03:06:21.559463751Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:21.562277 containerd[1558]: time="2026-01-20T03:06:21.562217233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:21.563147 containerd[1558]: time="2026-01-20T03:06:21.563097773Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 1.39593785s" Jan 20 03:06:21.563147 containerd[1558]: time="2026-01-20T03:06:21.563140094Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 20 03:06:21.563970 containerd[1558]: time="2026-01-20T03:06:21.563866654Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 20 03:06:22.481749 containerd[1558]: time="2026-01-20T03:06:22.481663614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:22.482575 containerd[1558]: time="2026-01-20T03:06:22.482504635Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440" Jan 20 03:06:22.483804 containerd[1558]: time="2026-01-20T03:06:22.483709131Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:22.486104 containerd[1558]: time="2026-01-20T03:06:22.486045358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:22.486962 containerd[1558]: time="2026-01-20T03:06:22.486848313Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 922.914868ms" Jan 20 03:06:22.487054 containerd[1558]: time="2026-01-20T03:06:22.486963756Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 20 03:06:22.487611 
containerd[1558]: time="2026-01-20T03:06:22.487499078Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 20 03:06:23.282315 containerd[1558]: time="2026-01-20T03:06:23.282198558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:23.283249 containerd[1558]: time="2026-01-20T03:06:23.283177577Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927" Jan 20 03:06:23.284748 containerd[1558]: time="2026-01-20T03:06:23.284693688Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:23.288654 containerd[1558]: time="2026-01-20T03:06:23.288591131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:23.289801 containerd[1558]: time="2026-01-20T03:06:23.289725421Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 802.175109ms" Jan 20 03:06:23.289801 containerd[1558]: time="2026-01-20T03:06:23.289751553Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 20 03:06:23.291316 containerd[1558]: time="2026-01-20T03:06:23.291226237Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 20 03:06:24.289934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3814249986.mount: Deactivated successfully. 
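The PullImage sequence running here is the CRI image service fetching the control-plane images into the "k8s.io" namespace registered at containerd startup. The same pull can be reproduced against the socket with the containerd Go client; a minimal sketch using the classic 1.x import path (this host runs containerd v2.0.7, whose client moved under github.com/containerd/containerd/v2/client, so treat the path as an assumption):

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace seen above.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "registry.k8s.io/kube-scheduler:v1.34.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", img.Name())
}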
Jan 20 03:06:24.565868 containerd[1558]: time="2026-01-20T03:06:24.565728553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:24.566956 containerd[1558]: time="2026-01-20T03:06:24.566840836Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Jan 20 03:06:24.568272 containerd[1558]: time="2026-01-20T03:06:24.568192013Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:24.570308 containerd[1558]: time="2026-01-20T03:06:24.570224162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:24.570830 containerd[1558]: time="2026-01-20T03:06:24.570771889Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.27947818s" Jan 20 03:06:24.570830 containerd[1558]: time="2026-01-20T03:06:24.570820656Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 20 03:06:24.571758 containerd[1558]: time="2026-01-20T03:06:24.571709378Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 20 03:06:25.003397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3953502197.mount: Deactivated successfully. Jan 20 03:06:25.528411 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 03:06:25.531325 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 03:06:25.759451 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 03:06:25.776432 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 03:06:25.832456 kubelet[2136]: E0120 03:06:25.832275 2136 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 03:06:25.837798 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 03:06:25.838113 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 03:06:25.838931 systemd[1]: kubelet.service: Consumed 238ms CPU time, 110.5M memory peak. 
Jan 20 03:06:26.075543 containerd[1558]: time="2026-01-20T03:06:26.075426944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:26.076478 containerd[1558]: time="2026-01-20T03:06:26.076394638Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Jan 20 03:06:26.078580 containerd[1558]: time="2026-01-20T03:06:26.078501612Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:26.082704 containerd[1558]: time="2026-01-20T03:06:26.082558357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:26.084407 containerd[1558]: time="2026-01-20T03:06:26.084304421Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.512547635s" Jan 20 03:06:26.084407 containerd[1558]: time="2026-01-20T03:06:26.084372661Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 20 03:06:26.085217 containerd[1558]: time="2026-01-20T03:06:26.085109691Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 20 03:06:26.699858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2958030502.mount: Deactivated successfully. 
Jan 20 03:06:26.709770 containerd[1558]: time="2026-01-20T03:06:26.709710873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:26.711048 containerd[1558]: time="2026-01-20T03:06:26.710862900Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Jan 20 03:06:26.712811 containerd[1558]: time="2026-01-20T03:06:26.712747486Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:26.716374 containerd[1558]: time="2026-01-20T03:06:26.716291065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:26.717196 containerd[1558]: time="2026-01-20T03:06:26.717106999Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 631.954379ms" Jan 20 03:06:26.717196 containerd[1558]: time="2026-01-20T03:06:26.717154822Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 20 03:06:26.717927 containerd[1558]: time="2026-01-20T03:06:26.717827196Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 20 03:06:27.155658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1811080478.mount: Deactivated successfully. Jan 20 03:06:29.021980 containerd[1558]: time="2026-01-20T03:06:29.021863042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:29.023037 containerd[1558]: time="2026-01-20T03:06:29.022963654Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Jan 20 03:06:29.025256 containerd[1558]: time="2026-01-20T03:06:29.025142660Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:29.028035 containerd[1558]: time="2026-01-20T03:06:29.027969539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:29.030156 containerd[1558]: time="2026-01-20T03:06:29.030113167Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.312233285s" Jan 20 03:06:29.030156 containerd[1558]: time="2026-01-20T03:06:29.030155405Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 20 03:06:32.561352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
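The etcd image is by far the largest pull in this sequence: 74311308 bytes in 2.312233285s, roughly 32 MB/s, against ~630 ms for the 320 KB pause image, so the pull times above are dominated by transfer size rather than per-image overhead. The arithmetic, as a quick check:

package main

import "fmt"

func main() {
	// Figures taken from the "Pulled image" log lines above.
	const bytes = 74311308      // registry.k8s.io/etcd:3.6.4-0
	const seconds = 2.312233285 // reported pull duration
	fmt.Printf("%.1f MB/s\n", bytes/seconds/1e6) // ≈ 32.1 MB/s
}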
Jan 20 03:06:32.561629 systemd[1]: kubelet.service: Consumed 238ms CPU time, 110.5M memory peak. Jan 20 03:06:32.569513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 03:06:32.604799 systemd[1]: Reload requested from client PID 2231 ('systemctl') (unit session-7.scope)... Jan 20 03:06:32.604828 systemd[1]: Reloading... Jan 20 03:06:32.690039 zram_generator::config[2273]: No configuration found. Jan 20 03:06:32.941268 systemd[1]: Reloading finished in 335 ms. Jan 20 03:06:33.026602 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 03:06:33.026728 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 03:06:33.027176 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 03:06:33.027318 systemd[1]: kubelet.service: Consumed 158ms CPU time, 98.1M memory peak. Jan 20 03:06:33.029605 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 03:06:33.227500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 03:06:33.243758 (kubelet)[2321]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 03:06:33.306203 kubelet[2321]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 03:06:33.306203 kubelet[2321]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 03:06:33.306617 kubelet[2321]: I0120 03:06:33.306232 2321 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 03:06:34.160925 kubelet[2321]: I0120 03:06:34.160826 2321 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 20 03:06:34.160925 kubelet[2321]: I0120 03:06:34.160868 2321 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 03:06:34.162372 kubelet[2321]: I0120 03:06:34.162320 2321 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 20 03:06:34.162372 kubelet[2321]: I0120 03:06:34.162354 2321 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 03:06:34.162619 kubelet[2321]: I0120 03:06:34.162559 2321 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 03:06:34.168932 kubelet[2321]: E0120 03:06:34.168762 2321 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 03:06:34.170357 kubelet[2321]: I0120 03:06:34.170228 2321 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 03:06:34.173777 kubelet[2321]: I0120 03:06:34.173742 2321 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 03:06:34.183317 kubelet[2321]: I0120 03:06:34.183262 2321 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 20 03:06:34.184991 kubelet[2321]: I0120 03:06:34.184855 2321 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 03:06:34.185213 kubelet[2321]: I0120 03:06:34.184985 2321 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 03:06:34.185213 kubelet[2321]: I0120 03:06:34.185203 2321 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 03:06:34.185352 kubelet[2321]: I0120 03:06:34.185217 2321 container_manager_linux.go:306] "Creating device plugin manager" Jan 20 03:06:34.185352 kubelet[2321]: I0120 03:06:34.185346 2321 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 20 03:06:34.188238 kubelet[2321]: I0120 03:06:34.188173 2321 state_mem.go:36] "Initialized new in-memory state store" Jan 20 03:06:34.188504 kubelet[2321]: I0120 03:06:34.188441 2321 kubelet.go:475] "Attempting to sync node with API server" Jan 20 03:06:34.188504 kubelet[2321]: I0120 03:06:34.188476 2321 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 03:06:34.188592 kubelet[2321]: I0120 03:06:34.188508 2321 kubelet.go:387] "Adding apiserver pod source" Jan 20 03:06:34.188592 kubelet[2321]: I0120 03:06:34.188545 2321 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 03:06:34.189420 kubelet[2321]: E0120 03:06:34.189303 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 03:06:34.189589 kubelet[2321]: E0120 03:06:34.189546 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: 
connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 03:06:34.192112 kubelet[2321]: I0120 03:06:34.192068 2321 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 20 03:06:34.192659 kubelet[2321]: I0120 03:06:34.192621 2321 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 03:06:34.192725 kubelet[2321]: I0120 03:06:34.192665 2321 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 20 03:06:34.192725 kubelet[2321]: W0120 03:06:34.192711 2321 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 03:06:34.196803 kubelet[2321]: I0120 03:06:34.196753 2321 server.go:1262] "Started kubelet" Jan 20 03:06:34.197691 kubelet[2321]: I0120 03:06:34.197401 2321 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 03:06:34.197691 kubelet[2321]: I0120 03:06:34.197448 2321 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 20 03:06:34.199627 kubelet[2321]: I0120 03:06:34.199580 2321 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 03:06:34.200277 kubelet[2321]: I0120 03:06:34.200226 2321 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 03:06:34.200349 kubelet[2321]: I0120 03:06:34.200337 2321 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 03:06:34.203630 kubelet[2321]: E0120 03:06:34.201387 2321 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c518710f56817 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 03:06:34.196715543 +0000 UTC m=+0.947127031,LastTimestamp:2026-01-20 03:06:34.196715543 +0000 UTC m=+0.947127031,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 03:06:34.203986 kubelet[2321]: I0120 03:06:34.203942 2321 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 03:06:34.205858 kubelet[2321]: E0120 03:06:34.205818 2321 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 03:06:34.205858 kubelet[2321]: I0120 03:06:34.205861 2321 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 20 03:06:34.206764 kubelet[2321]: I0120 03:06:34.206718 2321 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 20 03:06:34.206814 kubelet[2321]: I0120 03:06:34.206770 2321 reconciler.go:29] "Reconciler: start to sync state" Jan 20 03:06:34.207270 kubelet[2321]: E0120 03:06:34.207237 2321 reflector.go:205] "Failed to watch" err="failed to list 
*v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 03:06:34.207454 kubelet[2321]: E0120 03:06:34.207386 2321 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 03:06:34.207698 kubelet[2321]: E0120 03:06:34.207507 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="200ms" Jan 20 03:06:34.208814 kubelet[2321]: I0120 03:06:34.208761 2321 server.go:310] "Adding debug handlers to kubelet server" Jan 20 03:06:34.210920 kubelet[2321]: I0120 03:06:34.209262 2321 factory.go:223] Registration of the systemd container factory successfully Jan 20 03:06:34.210920 kubelet[2321]: I0120 03:06:34.209385 2321 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 03:06:34.211749 kubelet[2321]: I0120 03:06:34.211699 2321 factory.go:223] Registration of the containerd container factory successfully Jan 20 03:06:34.226858 kubelet[2321]: I0120 03:06:34.226807 2321 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 03:06:34.226858 kubelet[2321]: I0120 03:06:34.226837 2321 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 03:06:34.226858 kubelet[2321]: I0120 03:06:34.226852 2321 state_mem.go:36] "Initialized new in-memory state store" Jan 20 03:06:34.229615 kubelet[2321]: I0120 03:06:34.229598 2321 policy_none.go:49] "None policy: Start" Jan 20 03:06:34.229765 kubelet[2321]: I0120 03:06:34.229703 2321 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 20 03:06:34.229765 kubelet[2321]: I0120 03:06:34.229747 2321 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 20 03:06:34.231397 kubelet[2321]: I0120 03:06:34.231371 2321 policy_none.go:47] "Start" Jan 20 03:06:34.235020 kubelet[2321]: I0120 03:06:34.234928 2321 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 20 03:06:34.236757 kubelet[2321]: I0120 03:06:34.236734 2321 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 20 03:06:34.237932 kubelet[2321]: I0120 03:06:34.237308 2321 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 20 03:06:34.237932 kubelet[2321]: I0120 03:06:34.237455 2321 kubelet.go:2427] "Starting kubelet main sync loop" Jan 20 03:06:34.237932 kubelet[2321]: E0120 03:06:34.237517 2321 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 03:06:34.238164 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
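[Editor's note] Every client-go reflector and the lease controller above fails the same way: a TCP dial to 10.0.0.6:6443 is refused, because the apiserver being dialed is itself one of the static pods this kubelet has not started yet. A minimal Go sketch of that kind of reachability probe, stdlib only; the address and the 200ms interval come from the log, everything else is illustrative:

// probe_apiserver.go — illustrative sketch, not kubelet code.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const addr = "10.0.0.6:6443"        // apiserver endpoint seen in the log
	interval := 200 * time.Millisecond  // retry interval seen in the log

	for attempt := 1; attempt <= 5; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver reachable")
			return
		}
		fmt.Printf("attempt %d: dial %s failed (%v), retrying in %s\n", attempt, addr, err, interval)
		time.Sleep(interval)
	}
	fmt.Println("giving up; the kubelet keeps running static pods regardless")
}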
Jan 20 03:06:34.240481 kubelet[2321]: E0120 03:06:34.240285 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 03:06:34.250706 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 03:06:34.255672 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 03:06:34.266642 kubelet[2321]: E0120 03:06:34.266571 2321 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 03:06:34.266959 kubelet[2321]: I0120 03:06:34.266920 2321 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 03:06:34.267322 kubelet[2321]: I0120 03:06:34.267214 2321 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 03:06:34.267528 kubelet[2321]: I0120 03:06:34.267512 2321 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 03:06:34.268944 kubelet[2321]: E0120 03:06:34.268927 2321 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 03:06:34.269090 kubelet[2321]: E0120 03:06:34.269074 2321 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 03:06:34.352573 systemd[1]: Created slice kubepods-burstable-pod86ea56c35db882aab0834698278015b2.slice - libcontainer container kubepods-burstable-pod86ea56c35db882aab0834698278015b2.slice. Jan 20 03:06:34.369369 kubelet[2321]: I0120 03:06:34.369266 2321 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 03:06:34.370115 kubelet[2321]: E0120 03:06:34.369478 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 03:06:34.370115 kubelet[2321]: E0120 03:06:34.369667 2321 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Jan 20 03:06:34.374402 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Jan 20 03:06:34.376749 kubelet[2321]: E0120 03:06:34.376642 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 03:06:34.388859 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. 
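[Editor's note] The kubepods-burstable-pod<UID>.slice names above follow a fixed convention: pods are grouped under kubepods.slice by QoS class, and the pod UID is embedded with its dashes mapped to underscores so it survives as a systemd unit name. A small sketch of that naming, illustrating the convention visible in the log rather than the kubelet's actual code:

package main

import (
	"fmt"
	"strings"
)

// podSlice builds the systemd slice name for a pod, given its QoS class
// ("burstable", "besteffort", or "" for guaranteed) and its UID.
func podSlice(qos, uid string) string {
	escaped := strings.ReplaceAll(uid, "-", "_") // "-" is reserved for unit hierarchy
	if qos == "" {
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	// Matches kubepods-burstable-pod86ea56c35db882aab0834698278015b2.slice above.
	fmt.Println(podSlice("burstable", "86ea56c35db882aab0834698278015b2"))
	// Matches the kube-proxy slice created later in this log.
	fmt.Println(podSlice("besteffort", "e9140020-9dbf-474e-b7df-b00e7461d7f3"))
}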
Jan 20 03:06:34.391641 kubelet[2321]: E0120 03:06:34.391569 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 03:06:34.408287 kubelet[2321]: I0120 03:06:34.408213 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:34.408287 kubelet[2321]: I0120 03:06:34.408272 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:34.408287 kubelet[2321]: I0120 03:06:34.408293 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 20 03:06:34.408603 kubelet[2321]: I0120 03:06:34.408306 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/86ea56c35db882aab0834698278015b2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"86ea56c35db882aab0834698278015b2\") " pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:34.408603 kubelet[2321]: I0120 03:06:34.408350 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/86ea56c35db882aab0834698278015b2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"86ea56c35db882aab0834698278015b2\") " pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:34.408603 kubelet[2321]: I0120 03:06:34.408365 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/86ea56c35db882aab0834698278015b2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"86ea56c35db882aab0834698278015b2\") " pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:34.408603 kubelet[2321]: I0120 03:06:34.408381 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:34.408603 kubelet[2321]: I0120 03:06:34.408394 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:34.408790 kubelet[2321]: I0120 03:06:34.408407 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:34.409208 kubelet[2321]: E0120 03:06:34.409128 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="400ms" Jan 20 03:06:34.572518 kubelet[2321]: I0120 03:06:34.572325 2321 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 03:06:34.572827 kubelet[2321]: E0120 03:06:34.572784 2321 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Jan 20 03:06:34.673220 kubelet[2321]: E0120 03:06:34.673105 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:34.674283 containerd[1558]: time="2026-01-20T03:06:34.674090372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:86ea56c35db882aab0834698278015b2,Namespace:kube-system,Attempt:0,}" Jan 20 03:06:34.679824 kubelet[2321]: E0120 03:06:34.679794 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:34.680348 containerd[1558]: time="2026-01-20T03:06:34.680274538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 20 03:06:34.695634 kubelet[2321]: E0120 03:06:34.695598 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:34.696125 containerd[1558]: time="2026-01-20T03:06:34.696065210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 20 03:06:34.810838 kubelet[2321]: E0120 03:06:34.810768 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="800ms" Jan 20 03:06:34.974374 kubelet[2321]: I0120 03:06:34.974173 2321 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 03:06:34.974573 kubelet[2321]: E0120 03:06:34.974521 2321 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Jan 20 03:06:35.063176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount984804847.mount: Deactivated successfully. 
Jan 20 03:06:35.070568 containerd[1558]: time="2026-01-20T03:06:35.070513150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 03:06:35.074232 containerd[1558]: time="2026-01-20T03:06:35.074175609Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 20 03:06:35.077301 containerd[1558]: time="2026-01-20T03:06:35.077214989Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 03:06:35.079802 containerd[1558]: time="2026-01-20T03:06:35.079660734Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 03:06:35.081274 containerd[1558]: time="2026-01-20T03:06:35.081212718Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 03:06:35.082354 containerd[1558]: time="2026-01-20T03:06:35.082310229Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 03:06:35.083360 containerd[1558]: time="2026-01-20T03:06:35.083279152Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 03:06:35.084826 containerd[1558]: time="2026-01-20T03:06:35.084786794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 03:06:35.086770 containerd[1558]: time="2026-01-20T03:06:35.086678120Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 405.13483ms" Jan 20 03:06:35.087246 containerd[1558]: time="2026-01-20T03:06:35.087198478Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 411.262365ms" Jan 20 03:06:35.093676 containerd[1558]: time="2026-01-20T03:06:35.093620669Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 396.140936ms" Jan 20 03:06:35.101483 kubelet[2321]: E0120 03:06:35.101412 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 03:06:35.115760 kubelet[2321]: E0120 03:06:35.115692 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 03:06:35.129173 containerd[1558]: time="2026-01-20T03:06:35.129048196Z" level=info msg="connecting to shim ceaf515bff3d3f3a00c09bc514d6ec7829e8e8939be9c1fe0b0a086a53a1d774" address="unix:///run/containerd/s/69dd045827642038970a8f38c223d02412d759b62bccdf34231b979c4faeeabf" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:06:35.130071 containerd[1558]: time="2026-01-20T03:06:35.129959331Z" level=info msg="connecting to shim 0fa304ebf05dde6631c7533a8c36af7952021cd4408d2f60d0c497b2ee230df9" address="unix:///run/containerd/s/026787a685cfdeb9390e56ba3a7d86cebdc656c82dcb649ce141a7dd1a64008c" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:06:35.131642 containerd[1558]: time="2026-01-20T03:06:35.131547456Z" level=info msg="connecting to shim 68cb2f5ad69e1af0d5316c9ffb68275c774dd454a8c19e7effcede58f21b4c8a" address="unix:///run/containerd/s/276b2614c6872aa2065126b1e1aff35888f6469846be5ddf3bee4c8acb5ad494" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:06:35.168116 systemd[1]: Started cri-containerd-0fa304ebf05dde6631c7533a8c36af7952021cd4408d2f60d0c497b2ee230df9.scope - libcontainer container 0fa304ebf05dde6631c7533a8c36af7952021cd4408d2f60d0c497b2ee230df9. Jan 20 03:06:35.169674 systemd[1]: Started cri-containerd-68cb2f5ad69e1af0d5316c9ffb68275c774dd454a8c19e7effcede58f21b4c8a.scope - libcontainer container 68cb2f5ad69e1af0d5316c9ffb68275c774dd454a8c19e7effcede58f21b4c8a. Jan 20 03:06:35.171173 systemd[1]: Started cri-containerd-ceaf515bff3d3f3a00c09bc514d6ec7829e8e8939be9c1fe0b0a086a53a1d774.scope - libcontainer container ceaf515bff3d3f3a00c09bc514d6ec7829e8e8939be9c1fe0b0a086a53a1d774. 
Jan 20 03:06:35.233382 containerd[1558]: time="2026-01-20T03:06:35.232370631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"0fa304ebf05dde6631c7533a8c36af7952021cd4408d2f60d0c497b2ee230df9\"" Jan 20 03:06:35.234484 kubelet[2321]: E0120 03:06:35.234417 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:35.242165 containerd[1558]: time="2026-01-20T03:06:35.242113441Z" level=info msg="CreateContainer within sandbox \"0fa304ebf05dde6631c7533a8c36af7952021cd4408d2f60d0c497b2ee230df9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 03:06:35.253542 containerd[1558]: time="2026-01-20T03:06:35.252421318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ceaf515bff3d3f3a00c09bc514d6ec7829e8e8939be9c1fe0b0a086a53a1d774\"" Jan 20 03:06:35.254179 kubelet[2321]: E0120 03:06:35.254107 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:35.264477 containerd[1558]: time="2026-01-20T03:06:35.264401315Z" level=info msg="Container 691ae6f261a4253f4059b76893e9f4f588efea1995388b6625490411c9180de0: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:06:35.266355 containerd[1558]: time="2026-01-20T03:06:35.266213974Z" level=info msg="CreateContainer within sandbox \"ceaf515bff3d3f3a00c09bc514d6ec7829e8e8939be9c1fe0b0a086a53a1d774\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 03:06:35.266355 containerd[1558]: time="2026-01-20T03:06:35.266271798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:86ea56c35db882aab0834698278015b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"68cb2f5ad69e1af0d5316c9ffb68275c774dd454a8c19e7effcede58f21b4c8a\"" Jan 20 03:06:35.267187 kubelet[2321]: E0120 03:06:35.267146 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:35.273160 containerd[1558]: time="2026-01-20T03:06:35.273097559Z" level=info msg="CreateContainer within sandbox \"68cb2f5ad69e1af0d5316c9ffb68275c774dd454a8c19e7effcede58f21b4c8a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 03:06:35.277959 containerd[1558]: time="2026-01-20T03:06:35.277847856Z" level=info msg="CreateContainer within sandbox \"0fa304ebf05dde6631c7533a8c36af7952021cd4408d2f60d0c497b2ee230df9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"691ae6f261a4253f4059b76893e9f4f588efea1995388b6625490411c9180de0\"" Jan 20 03:06:35.278639 containerd[1558]: time="2026-01-20T03:06:35.278615957Z" level=info msg="StartContainer for \"691ae6f261a4253f4059b76893e9f4f588efea1995388b6625490411c9180de0\"" Jan 20 03:06:35.280262 containerd[1558]: time="2026-01-20T03:06:35.280162157Z" level=info msg="connecting to shim 691ae6f261a4253f4059b76893e9f4f588efea1995388b6625490411c9180de0" address="unix:///run/containerd/s/026787a685cfdeb9390e56ba3a7d86cebdc656c82dcb649ce141a7dd1a64008c" protocol=ttrpc version=3 Jan 20 03:06:35.283949 
containerd[1558]: time="2026-01-20T03:06:35.283919256Z" level=info msg="Container 9d5297f7e0c499d0ba3bf1823d0d0a9a8c8ef7bc624838d0425abd77be9479fb: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:06:35.287858 containerd[1558]: time="2026-01-20T03:06:35.287810262Z" level=info msg="Container 151fc6ca5d13c7b8e39732bd056adaa2e199a519919d8781264260ab1ede6d0b: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:06:35.299372 containerd[1558]: time="2026-01-20T03:06:35.299326025Z" level=info msg="CreateContainer within sandbox \"68cb2f5ad69e1af0d5316c9ffb68275c774dd454a8c19e7effcede58f21b4c8a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"151fc6ca5d13c7b8e39732bd056adaa2e199a519919d8781264260ab1ede6d0b\"" Jan 20 03:06:35.301373 containerd[1558]: time="2026-01-20T03:06:35.301248702Z" level=info msg="CreateContainer within sandbox \"ceaf515bff3d3f3a00c09bc514d6ec7829e8e8939be9c1fe0b0a086a53a1d774\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9d5297f7e0c499d0ba3bf1823d0d0a9a8c8ef7bc624838d0425abd77be9479fb\"" Jan 20 03:06:35.302463 containerd[1558]: time="2026-01-20T03:06:35.302106395Z" level=info msg="StartContainer for \"9d5297f7e0c499d0ba3bf1823d0d0a9a8c8ef7bc624838d0425abd77be9479fb\"" Jan 20 03:06:35.302593 containerd[1558]: time="2026-01-20T03:06:35.302193985Z" level=info msg="StartContainer for \"151fc6ca5d13c7b8e39732bd056adaa2e199a519919d8781264260ab1ede6d0b\"" Jan 20 03:06:35.303827 containerd[1558]: time="2026-01-20T03:06:35.303729070Z" level=info msg="connecting to shim 151fc6ca5d13c7b8e39732bd056adaa2e199a519919d8781264260ab1ede6d0b" address="unix:///run/containerd/s/276b2614c6872aa2065126b1e1aff35888f6469846be5ddf3bee4c8acb5ad494" protocol=ttrpc version=3 Jan 20 03:06:35.304208 containerd[1558]: time="2026-01-20T03:06:35.304127456Z" level=info msg="connecting to shim 9d5297f7e0c499d0ba3bf1823d0d0a9a8c8ef7bc624838d0425abd77be9479fb" address="unix:///run/containerd/s/69dd045827642038970a8f38c223d02412d759b62bccdf34231b979c4faeeabf" protocol=ttrpc version=3 Jan 20 03:06:35.305083 systemd[1]: Started cri-containerd-691ae6f261a4253f4059b76893e9f4f588efea1995388b6625490411c9180de0.scope - libcontainer container 691ae6f261a4253f4059b76893e9f4f588efea1995388b6625490411c9180de0. Jan 20 03:06:35.339112 systemd[1]: Started cri-containerd-9d5297f7e0c499d0ba3bf1823d0d0a9a8c8ef7bc624838d0425abd77be9479fb.scope - libcontainer container 9d5297f7e0c499d0ba3bf1823d0d0a9a8c8ef7bc624838d0425abd77be9479fb. Jan 20 03:06:35.361290 systemd[1]: Started cri-containerd-151fc6ca5d13c7b8e39732bd056adaa2e199a519919d8781264260ab1ede6d0b.scope - libcontainer container 151fc6ca5d13c7b8e39732bd056adaa2e199a519919d8781264260ab1ede6d0b. 
Jan 20 03:06:35.410599 containerd[1558]: time="2026-01-20T03:06:35.410482586Z" level=info msg="StartContainer for \"691ae6f261a4253f4059b76893e9f4f588efea1995388b6625490411c9180de0\" returns successfully" Jan 20 03:06:35.439393 kubelet[2321]: E0120 03:06:35.438788 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 03:06:35.444037 containerd[1558]: time="2026-01-20T03:06:35.443858630Z" level=info msg="StartContainer for \"9d5297f7e0c499d0ba3bf1823d0d0a9a8c8ef7bc624838d0425abd77be9479fb\" returns successfully" Jan 20 03:06:35.447758 containerd[1558]: time="2026-01-20T03:06:35.447721626Z" level=info msg="StartContainer for \"151fc6ca5d13c7b8e39732bd056adaa2e199a519919d8781264260ab1ede6d0b\" returns successfully" Jan 20 03:06:35.781214 kubelet[2321]: I0120 03:06:35.781169 2321 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 03:06:36.260031 kubelet[2321]: E0120 03:06:36.259936 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 03:06:36.260202 kubelet[2321]: E0120 03:06:36.260120 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:36.271022 kubelet[2321]: E0120 03:06:36.270972 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 03:06:36.271718 kubelet[2321]: E0120 03:06:36.271405 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:36.271853 kubelet[2321]: E0120 03:06:36.271651 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 03:06:36.272096 kubelet[2321]: E0120 03:06:36.272040 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:37.136743 kubelet[2321]: E0120 03:06:37.136655 2321 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 20 03:06:37.233273 kubelet[2321]: I0120 03:06:37.233149 2321 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 03:06:37.233273 kubelet[2321]: E0120 03:06:37.233196 2321 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 20 03:06:37.246774 kubelet[2321]: E0120 03:06:37.246717 2321 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 03:06:37.271568 kubelet[2321]: E0120 03:06:37.271474 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 03:06:37.271684 kubelet[2321]: E0120 03:06:37.271604 2321 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:37.271684 kubelet[2321]: E0120 03:06:37.271664 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 03:06:37.271850 kubelet[2321]: E0120 03:06:37.271784 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:37.347826 kubelet[2321]: E0120 03:06:37.347678 2321 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 03:06:37.448960 kubelet[2321]: E0120 03:06:37.448772 2321 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 03:06:37.549019 kubelet[2321]: E0120 03:06:37.548951 2321 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 03:06:37.649154 kubelet[2321]: E0120 03:06:37.649057 2321 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 03:06:37.808053 kubelet[2321]: I0120 03:06:37.807725 2321 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:37.814023 kubelet[2321]: E0120 03:06:37.813970 2321 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:37.814023 kubelet[2321]: I0120 03:06:37.814002 2321 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:37.815527 kubelet[2321]: E0120 03:06:37.815463 2321 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:37.815527 kubelet[2321]: I0120 03:06:37.815502 2321 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 03:06:37.816828 kubelet[2321]: E0120 03:06:37.816752 2321 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 20 03:06:38.190839 kubelet[2321]: I0120 03:06:38.190698 2321 apiserver.go:52] "Watching apiserver" Jan 20 03:06:38.207099 kubelet[2321]: I0120 03:06:38.206983 2321 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 20 03:06:38.271807 kubelet[2321]: I0120 03:06:38.271753 2321 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 03:06:38.276585 kubelet[2321]: E0120 03:06:38.276481 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:38.709035 kubelet[2321]: I0120 03:06:38.708963 2321 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:38.715972 kubelet[2321]: E0120 03:06:38.715862 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:39.275063 kubelet[2321]: E0120 03:06:39.274968 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:39.275566 kubelet[2321]: E0120 03:06:39.275317 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:39.695245 systemd[1]: Reload requested from client PID 2613 ('systemctl') (unit session-7.scope)... Jan 20 03:06:39.695279 systemd[1]: Reloading... Jan 20 03:06:39.793027 zram_generator::config[2656]: No configuration found. Jan 20 03:06:39.891092 kubelet[2321]: I0120 03:06:39.891039 2321 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:39.900046 kubelet[2321]: E0120 03:06:39.899871 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:40.036408 systemd[1]: Reloading finished in 340 ms. Jan 20 03:06:40.068707 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 03:06:40.090961 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 03:06:40.091421 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 03:06:40.091520 systemd[1]: kubelet.service: Consumed 1.406s CPU time, 125.1M memory peak. Jan 20 03:06:40.094455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 03:06:40.347388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 03:06:40.363867 (kubelet)[2701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 03:06:40.414169 kubelet[2701]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 03:06:40.414169 kubelet[2701]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 03:06:40.414621 kubelet[2701]: I0120 03:06:40.414262 2701 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 03:06:40.424707 kubelet[2701]: I0120 03:06:40.424641 2701 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 20 03:06:40.424707 kubelet[2701]: I0120 03:06:40.424686 2701 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 03:06:40.424707 kubelet[2701]: I0120 03:06:40.424717 2701 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 20 03:06:40.424840 kubelet[2701]: I0120 03:06:40.424723 2701 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
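[Editor's note] The watchdog_linux.go lines above report that no systemd watchdog is configured for kubelet.service. For Type=notify units with WatchdogSec= set, systemd exports WATCHDOG_USEC (and WATCHDOG_PID) into the service's environment; a sketch of that detection, using the conventional rule of pinging at half the interval (the PID check is omitted for brevity):

package main

import (
	"fmt"
	"os"
	"strconv"
	"time"
)

// watchdogInterval reports the systemd watchdog interval, if one is set.
func watchdogInterval() (time.Duration, bool) {
	usec, err := strconv.ParseInt(os.Getenv("WATCHDOG_USEC"), 10, 64)
	if err != nil || usec <= 0 {
		return 0, false // not enabled, or the interval is invalid, as in the log
	}
	return time.Duration(usec) * time.Microsecond, true
}

func main() {
	if iv, ok := watchdogInterval(); ok {
		fmt.Println("watchdog enabled, ping every", iv/2)
	} else {
		fmt.Println("systemd watchdog is not enabled")
	}
}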
Jan 20 03:06:40.425106 kubelet[2701]: I0120 03:06:40.425040 2701 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 03:06:40.428565 kubelet[2701]: I0120 03:06:40.427835 2701 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 20 03:06:40.430633 kubelet[2701]: I0120 03:06:40.430617 2701 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 03:06:40.435464 kubelet[2701]: I0120 03:06:40.435429 2701 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 03:06:40.444256 kubelet[2701]: I0120 03:06:40.444194 2701 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 20 03:06:40.444661 kubelet[2701]: I0120 03:06:40.444542 2701 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 03:06:40.444815 kubelet[2701]: I0120 03:06:40.444585 2701 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 03:06:40.444815 kubelet[2701]: I0120 03:06:40.444811 2701 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 03:06:40.445132 kubelet[2701]: I0120 03:06:40.444825 2701 container_manager_linux.go:306] "Creating device plugin manager" Jan 20 03:06:40.445132 kubelet[2701]: I0120 03:06:40.444861 2701 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 20 03:06:40.445895 kubelet[2701]: I0120 03:06:40.445784 2701 state_mem.go:36] "Initialized new in-memory state store" Jan 20 03:06:40.446182 kubelet[2701]: I0120 03:06:40.446114 2701 kubelet.go:475] "Attempting to sync node with API server" Jan 20 03:06:40.446182 kubelet[2701]: I0120 03:06:40.446176 2701 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 03:06:40.446277 kubelet[2701]: I0120 03:06:40.446204 2701 kubelet.go:387] "Adding apiserver pod source" Jan 
20 03:06:40.446277 kubelet[2701]: I0120 03:06:40.446228 2701 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 03:06:40.449285 kubelet[2701]: I0120 03:06:40.449150 2701 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 20 03:06:40.450231 kubelet[2701]: I0120 03:06:40.450188 2701 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 03:06:40.450231 kubelet[2701]: I0120 03:06:40.450246 2701 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 20 03:06:40.456495 kubelet[2701]: I0120 03:06:40.456402 2701 server.go:1262] "Started kubelet" Jan 20 03:06:40.459756 kubelet[2701]: I0120 03:06:40.457069 2701 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 03:06:40.463972 kubelet[2701]: I0120 03:06:40.463308 2701 server.go:310] "Adding debug handlers to kubelet server" Jan 20 03:06:40.463972 kubelet[2701]: I0120 03:06:40.463536 2701 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 03:06:40.464183 kubelet[2701]: I0120 03:06:40.457610 2701 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 03:06:40.464248 kubelet[2701]: I0120 03:06:40.464214 2701 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 20 03:06:40.464555 kubelet[2701]: I0120 03:06:40.464480 2701 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 03:06:40.464714 kubelet[2701]: I0120 03:06:40.464663 2701 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 03:06:40.465138 kubelet[2701]: I0120 03:06:40.465008 2701 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 20 03:06:40.466355 kubelet[2701]: I0120 03:06:40.465401 2701 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 20 03:06:40.466355 kubelet[2701]: I0120 03:06:40.465591 2701 reconciler.go:29] "Reconciler: start to sync state" Jan 20 03:06:40.472315 kubelet[2701]: E0120 03:06:40.471652 2701 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 03:06:40.475204 kubelet[2701]: I0120 03:06:40.475184 2701 factory.go:223] Registration of the systemd container factory successfully Jan 20 03:06:40.475390 kubelet[2701]: I0120 03:06:40.475373 2701 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 03:06:40.475718 kubelet[2701]: E0120 03:06:40.475565 2701 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 03:06:40.477939 kubelet[2701]: I0120 03:06:40.477864 2701 factory.go:223] Registration of the containerd container factory successfully Jan 20 03:06:40.505014 kubelet[2701]: I0120 03:06:40.504946 2701 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 20 03:06:40.509306 kubelet[2701]: I0120 03:06:40.509231 2701 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 20 03:06:40.509306 kubelet[2701]: I0120 03:06:40.509295 2701 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 20 03:06:40.509630 kubelet[2701]: I0120 03:06:40.509573 2701 kubelet.go:2427] "Starting kubelet main sync loop" Jan 20 03:06:40.509764 kubelet[2701]: E0120 03:06:40.509666 2701 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 03:06:40.553088 kubelet[2701]: I0120 03:06:40.552945 2701 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 03:06:40.553088 kubelet[2701]: I0120 03:06:40.553054 2701 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 03:06:40.553088 kubelet[2701]: I0120 03:06:40.553078 2701 state_mem.go:36] "Initialized new in-memory state store" Jan 20 03:06:40.553341 kubelet[2701]: I0120 03:06:40.553321 2701 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 03:06:40.553374 kubelet[2701]: I0120 03:06:40.553337 2701 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 03:06:40.553374 kubelet[2701]: I0120 03:06:40.553365 2701 policy_none.go:49] "None policy: Start" Jan 20 03:06:40.553430 kubelet[2701]: I0120 03:06:40.553379 2701 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 20 03:06:40.553430 kubelet[2701]: I0120 03:06:40.553396 2701 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 20 03:06:40.553593 kubelet[2701]: I0120 03:06:40.553525 2701 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 20 03:06:40.553593 kubelet[2701]: I0120 03:06:40.553570 2701 policy_none.go:47] "Start" Jan 20 03:06:40.561527 kubelet[2701]: E0120 03:06:40.561451 2701 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 03:06:40.561831 kubelet[2701]: I0120 03:06:40.561762 2701 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 03:06:40.561831 kubelet[2701]: I0120 03:06:40.561807 2701 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 03:06:40.562849 kubelet[2701]: I0120 03:06:40.562795 2701 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 03:06:40.564824 kubelet[2701]: E0120 03:06:40.564710 2701 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 03:06:40.611014 kubelet[2701]: I0120 03:06:40.610762 2701 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 03:06:40.611266 kubelet[2701]: I0120 03:06:40.611095 2701 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:40.611916 kubelet[2701]: I0120 03:06:40.611820 2701 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:40.619635 kubelet[2701]: E0120 03:06:40.619525 2701 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 20 03:06:40.620652 kubelet[2701]: E0120 03:06:40.620520 2701 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:40.620652 kubelet[2701]: E0120 03:06:40.620641 2701 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:40.668231 kubelet[2701]: I0120 03:06:40.668190 2701 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 03:06:40.677221 kubelet[2701]: I0120 03:06:40.677105 2701 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 20 03:06:40.677373 kubelet[2701]: I0120 03:06:40.677228 2701 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 03:06:40.766589 kubelet[2701]: I0120 03:06:40.766423 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 20 03:06:40.766589 kubelet[2701]: I0120 03:06:40.766504 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/86ea56c35db882aab0834698278015b2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"86ea56c35db882aab0834698278015b2\") " pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:40.766589 kubelet[2701]: I0120 03:06:40.766532 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:40.766589 kubelet[2701]: I0120 03:06:40.766579 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:40.766589 kubelet[2701]: I0120 03:06:40.766622 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:40.766965 kubelet[2701]: I0120 03:06:40.766655 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:40.766965 kubelet[2701]: I0120 03:06:40.766693 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/86ea56c35db882aab0834698278015b2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"86ea56c35db882aab0834698278015b2\") " pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:40.766965 kubelet[2701]: I0120 03:06:40.766721 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/86ea56c35db882aab0834698278015b2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"86ea56c35db882aab0834698278015b2\") " pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:40.766965 kubelet[2701]: I0120 03:06:40.766746 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:40.921298 kubelet[2701]: E0120 03:06:40.920741 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:40.921298 kubelet[2701]: E0120 03:06:40.920779 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:40.921298 kubelet[2701]: E0120 03:06:40.921015 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:41.448088 kubelet[2701]: I0120 03:06:41.448018 2701 apiserver.go:52] "Watching apiserver" Jan 20 03:06:41.465569 kubelet[2701]: I0120 03:06:41.465514 2701 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 20 03:06:41.536678 kubelet[2701]: I0120 03:06:41.536582 2701 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 03:06:41.536832 kubelet[2701]: E0120 03:06:41.536729 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:41.536984 kubelet[2701]: I0120 03:06:41.536935 2701 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:41.546352 kubelet[2701]: E0120 03:06:41.546211 2701 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:41.546655 kubelet[2701]: E0120 03:06:41.546573 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:41.547153 kubelet[2701]: E0120 03:06:41.547070 2701 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 20 03:06:41.547488 kubelet[2701]: E0120 03:06:41.547421 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:41.573283 kubelet[2701]: I0120 03:06:41.573158 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.573141006 podStartE2EDuration="2.573141006s" podCreationTimestamp="2026-01-20 03:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:06:41.563763017 +0000 UTC m=+1.194555560" watchObservedRunningTime="2026-01-20 03:06:41.573141006 +0000 UTC m=+1.203933539" Jan 20 03:06:41.582111 kubelet[2701]: I0120 03:06:41.582056 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.582044409 podStartE2EDuration="3.582044409s" podCreationTimestamp="2026-01-20 03:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:06:41.581809186 +0000 UTC m=+1.212601720" watchObservedRunningTime="2026-01-20 03:06:41.582044409 +0000 UTC m=+1.212836942" Jan 20 03:06:41.582306 kubelet[2701]: I0120 03:06:41.582156 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.582153522 podStartE2EDuration="3.582153522s" podCreationTimestamp="2026-01-20 03:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:06:41.573521486 +0000 UTC m=+1.204314019" watchObservedRunningTime="2026-01-20 03:06:41.582153522 +0000 UTC m=+1.212946055" Jan 20 03:06:42.539231 kubelet[2701]: E0120 03:06:42.539162 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:42.539683 kubelet[2701]: E0120 03:06:42.539308 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:43.541118 kubelet[2701]: E0120 03:06:43.541058 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:43.541693 kubelet[2701]: E0120 03:06:43.541318 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:45.319433 kubelet[2701]: I0120 03:06:45.319391 2701 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 03:06:45.320211 containerd[1558]: time="2026-01-20T03:06:45.319838180Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
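[Editor's note] The recurring "Nameserver limits exceeded" events trace back to the classic three-nameserver cap on resolv.conf: the kubelet applies the first three entries (1.1.1.1 1.0.0.1 8.8.8.8 in this log) and warns about the rest. A sketch of that trimming; the fourth nameserver in the sample is made up:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // the resolver's long-standing limit

func trimNameservers(resolvConf string) (kept, dropped []string) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 2 && fields[0] == "nameserver" {
			if len(kept) < maxNameservers {
				kept = append(kept, fields[1])
			} else {
				dropped = append(dropped, fields[1])
			}
		}
	}
	return
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	kept, dropped := trimNameservers(conf)
	fmt.Println("applied:", strings.Join(kept, " ")) // matches the log's applied line
	fmt.Println("omitted:", strings.Join(dropped, " "))
}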
Jan 20 03:06:45.320519 kubelet[2701]: I0120 03:06:45.320249 2701 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 03:06:45.999254 systemd[1]: Created slice kubepods-besteffort-pode9140020_9dbf_474e_b7df_b00e7461d7f3.slice - libcontainer container kubepods-besteffort-pode9140020_9dbf_474e_b7df_b00e7461d7f3.slice. Jan 20 03:06:46.005767 kubelet[2701]: I0120 03:06:46.005677 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9140020-9dbf-474e-b7df-b00e7461d7f3-lib-modules\") pod \"kube-proxy-kbvwt\" (UID: \"e9140020-9dbf-474e-b7df-b00e7461d7f3\") " pod="kube-system/kube-proxy-kbvwt" Jan 20 03:06:46.005997 kubelet[2701]: I0120 03:06:46.005796 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtc6h\" (UniqueName: \"kubernetes.io/projected/e9140020-9dbf-474e-b7df-b00e7461d7f3-kube-api-access-xtc6h\") pod \"kube-proxy-kbvwt\" (UID: \"e9140020-9dbf-474e-b7df-b00e7461d7f3\") " pod="kube-system/kube-proxy-kbvwt" Jan 20 03:06:46.005997 kubelet[2701]: I0120 03:06:46.005818 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e9140020-9dbf-474e-b7df-b00e7461d7f3-kube-proxy\") pod \"kube-proxy-kbvwt\" (UID: \"e9140020-9dbf-474e-b7df-b00e7461d7f3\") " pod="kube-system/kube-proxy-kbvwt" Jan 20 03:06:46.005997 kubelet[2701]: I0120 03:06:46.005831 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9140020-9dbf-474e-b7df-b00e7461d7f3-xtables-lock\") pod \"kube-proxy-kbvwt\" (UID: \"e9140020-9dbf-474e-b7df-b00e7461d7f3\") " pod="kube-system/kube-proxy-kbvwt" Jan 20 03:06:46.114083 kubelet[2701]: E0120 03:06:46.114026 2701 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 20 03:06:46.114083 kubelet[2701]: E0120 03:06:46.114076 2701 projected.go:196] Error preparing data for projected volume kube-api-access-xtc6h for pod kube-system/kube-proxy-kbvwt: configmap "kube-root-ca.crt" not found Jan 20 03:06:46.114281 kubelet[2701]: E0120 03:06:46.114155 2701 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9140020-9dbf-474e-b7df-b00e7461d7f3-kube-api-access-xtc6h podName:e9140020-9dbf-474e-b7df-b00e7461d7f3 nodeName:}" failed. No retries permitted until 2026-01-20 03:06:46.614130715 +0000 UTC m=+6.244923249 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xtc6h" (UniqueName: "kubernetes.io/projected/e9140020-9dbf-474e-b7df-b00e7461d7f3-kube-api-access-xtc6h") pod "kube-proxy-kbvwt" (UID: "e9140020-9dbf-474e-b7df-b00e7461d7f3") : configmap "kube-root-ca.crt" not found Jan 20 03:06:46.518766 systemd[1]: Created slice kubepods-besteffort-podce20bbc0_39e0_45ab_9f4f_c65d863a7518.slice - libcontainer container kubepods-besteffort-podce20bbc0_39e0_45ab_9f4f_c65d863a7518.slice. 
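[Editor's note] With the node registered, the kubelet pushes pod CIDR 192.168.0.0/24 down to the runtime, as the two entries above show. A trivial sketch checking whether a candidate pod IP falls inside that range; the sample address is hypothetical:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	prefix := netip.MustParsePrefix("192.168.0.0/24") // pod CIDR from the log
	podIP := netip.MustParseAddr("192.168.0.12")      // hypothetical pod IP

	fmt.Printf("%s in %s: %v\n", podIP, prefix, prefix.Contains(podIP))
	fmt.Println("usable pod addresses in a /24: 254")
}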
Jan 20 03:06:46.611626 kubelet[2701]: I0120 03:06:46.611521 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ce20bbc0-39e0-45ab-9f4f-c65d863a7518-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-68c4l\" (UID: \"ce20bbc0-39e0-45ab-9f4f-c65d863a7518\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-68c4l" Jan 20 03:06:46.611626 kubelet[2701]: I0120 03:06:46.611567 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gp2t\" (UniqueName: \"kubernetes.io/projected/ce20bbc0-39e0-45ab-9f4f-c65d863a7518-kube-api-access-4gp2t\") pod \"tigera-operator-65cdcdfd6d-68c4l\" (UID: \"ce20bbc0-39e0-45ab-9f4f-c65d863a7518\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-68c4l" Jan 20 03:06:46.827067 containerd[1558]: time="2026-01-20T03:06:46.826812790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-68c4l,Uid:ce20bbc0-39e0-45ab-9f4f-c65d863a7518,Namespace:tigera-operator,Attempt:0,}" Jan 20 03:06:46.852135 containerd[1558]: time="2026-01-20T03:06:46.852005416Z" level=info msg="connecting to shim 29f912782d57c4f2ea13b71a8aadcc1093d5344b41016d9cfb60d671374d4582" address="unix:///run/containerd/s/32548f358906a69b0c0575be27e033e6e19ab3572401198a3b9028899bb1773a" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:06:46.899255 systemd[1]: Started cri-containerd-29f912782d57c4f2ea13b71a8aadcc1093d5344b41016d9cfb60d671374d4582.scope - libcontainer container 29f912782d57c4f2ea13b71a8aadcc1093d5344b41016d9cfb60d671374d4582. Jan 20 03:06:46.912721 kubelet[2701]: E0120 03:06:46.912653 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:46.914352 containerd[1558]: time="2026-01-20T03:06:46.914273214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kbvwt,Uid:e9140020-9dbf-474e-b7df-b00e7461d7f3,Namespace:kube-system,Attempt:0,}" Jan 20 03:06:46.949211 containerd[1558]: time="2026-01-20T03:06:46.949100964Z" level=info msg="connecting to shim 11bc5ee3520ed282c730ffffed342645b1e662c24f2975fd207edaf5d9d49313" address="unix:///run/containerd/s/8b90bddc9351e66af433a5d53594b797ae6ed03b47633bf1adf500202cf6ddca" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:06:46.981591 containerd[1558]: time="2026-01-20T03:06:46.981506465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-68c4l,Uid:ce20bbc0-39e0-45ab-9f4f-c65d863a7518,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"29f912782d57c4f2ea13b71a8aadcc1093d5344b41016d9cfb60d671374d4582\"" Jan 20 03:06:46.985742 containerd[1558]: time="2026-01-20T03:06:46.985069728Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 20 03:06:46.988271 systemd[1]: Started cri-containerd-11bc5ee3520ed282c730ffffed342645b1e662c24f2975fd207edaf5d9d49313.scope - libcontainer container 11bc5ee3520ed282c730ffffed342645b1e662c24f2975fd207edaf5d9d49313. 
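The PullImage line just above is the kubelet asking containerd's CRI image service to fetch the tigera-operator image. A sketch of the equivalent call using the published k8s.io/cri-api types; the containerd socket path is an assumption here, and error handling is trimmed:

    // Sketch of the CRI image-service call behind the PullImage line.
    // Not kubelet code; socket path assumed.
    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        pb "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        img := pb.NewImageServiceClient(conn)
        resp, err := img.PullImage(context.Background(), &pb.PullImageRequest{
            Image: &pb.ImageSpec{Image: "quay.io/tigera/operator:v1.38.7"},
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(resp.ImageRef) // digest-pinned reference once the pull finishes
    }

The pull completes about five and a half seconds later (03:06:52 below), where containerd hands back the digest-pinned reference.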
Jan 20 03:06:47.024403 containerd[1558]: time="2026-01-20T03:06:47.024347109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kbvwt,Uid:e9140020-9dbf-474e-b7df-b00e7461d7f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"11bc5ee3520ed282c730ffffed342645b1e662c24f2975fd207edaf5d9d49313\"" Jan 20 03:06:47.025704 kubelet[2701]: E0120 03:06:47.025410 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:47.031661 containerd[1558]: time="2026-01-20T03:06:47.031612157Z" level=info msg="CreateContainer within sandbox \"11bc5ee3520ed282c730ffffed342645b1e662c24f2975fd207edaf5d9d49313\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 03:06:47.045443 containerd[1558]: time="2026-01-20T03:06:47.045366540Z" level=info msg="Container a1986ce0cb53c9d1c522c791e7f0c8648a7f77452bbc13b0a95331d645504200: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:06:47.054154 containerd[1558]: time="2026-01-20T03:06:47.054052059Z" level=info msg="CreateContainer within sandbox \"11bc5ee3520ed282c730ffffed342645b1e662c24f2975fd207edaf5d9d49313\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a1986ce0cb53c9d1c522c791e7f0c8648a7f77452bbc13b0a95331d645504200\"" Jan 20 03:06:47.055587 containerd[1558]: time="2026-01-20T03:06:47.055479581Z" level=info msg="StartContainer for \"a1986ce0cb53c9d1c522c791e7f0c8648a7f77452bbc13b0a95331d645504200\"" Jan 20 03:06:47.058042 containerd[1558]: time="2026-01-20T03:06:47.057941487Z" level=info msg="connecting to shim a1986ce0cb53c9d1c522c791e7f0c8648a7f77452bbc13b0a95331d645504200" address="unix:///run/containerd/s/8b90bddc9351e66af433a5d53594b797ae6ed03b47633bf1adf500202cf6ddca" protocol=ttrpc version=3 Jan 20 03:06:47.080174 systemd[1]: Started cri-containerd-a1986ce0cb53c9d1c522c791e7f0c8648a7f77452bbc13b0a95331d645504200.scope - libcontainer container a1986ce0cb53c9d1c522c791e7f0c8648a7f77452bbc13b0a95331d645504200. 
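Meanwhile the kube-proxy records trace the full CRI runtime sequence: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, and StartContainer launches it, each step preceded by a "connecting to shim" line because containerd drives the per-sandbox shim over ttrpc. A sketch of the same sequence against the CRI runtime service; the pod metadata is copied from the records above, while the kube-proxy image tag and the socket path are placeholders:

    // Sketch of the RunPodSandbox -> CreateContainer -> StartContainer
    // flow, using published k8s.io/cri-api types. Not kubelet code.
    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        pb "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := pb.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // RunPodSandbox: containerd answered with sandbox id 11bc5ee3... above.
        sandboxCfg := &pb.PodSandboxConfig{Metadata: &pb.PodSandboxMetadata{
            Name:      "kube-proxy-kbvwt",
            Namespace: "kube-system",
            Uid:       "e9140020-9dbf-474e-b7df-b00e7461d7f3",
        }}
        sb, err := rt.RunPodSandbox(ctx, &pb.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }

        // CreateContainer inside the sandbox, then StartContainer, matching
        // the "returns container id" and "returns successfully" records.
        c, err := rt.CreateContainer(ctx, &pb.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &pb.ContainerConfig{
                Metadata: &pb.ContainerMetadata{Name: "kube-proxy"},
                Image:    &pb.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.34.1"}, // placeholder tag; not shown in the log
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            log.Fatal(err)
        }
        if _, err := rt.StartContainer(ctx, &pb.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
            log.Fatal(err)
        }
    }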
Jan 20 03:06:47.183248 containerd[1558]: time="2026-01-20T03:06:47.183096397Z" level=info msg="StartContainer for \"a1986ce0cb53c9d1c522c791e7f0c8648a7f77452bbc13b0a95331d645504200\" returns successfully" Jan 20 03:06:47.502262 kubelet[2701]: E0120 03:06:47.502141 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:47.558371 kubelet[2701]: E0120 03:06:47.557686 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:47.558965 kubelet[2701]: E0120 03:06:47.558539 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:47.574455 kubelet[2701]: I0120 03:06:47.574316 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kbvwt" podStartSLOduration=2.574228326 podStartE2EDuration="2.574228326s" podCreationTimestamp="2026-01-20 03:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:06:47.573598826 +0000 UTC m=+7.204391359" watchObservedRunningTime="2026-01-20 03:06:47.574228326 +0000 UTC m=+7.205020899" Jan 20 03:06:48.530814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2268928903.mount: Deactivated successfully. Jan 20 03:06:48.613474 kubelet[2701]: E0120 03:06:48.613312 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:49.561840 kubelet[2701]: E0120 03:06:49.561750 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:50.564582 kubelet[2701]: E0120 03:06:50.564435 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:52.260320 kubelet[2701]: E0120 03:06:52.260203 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:52.455999 containerd[1558]: time="2026-01-20T03:06:52.455847394Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:52.457233 containerd[1558]: time="2026-01-20T03:06:52.457166504Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 20 03:06:52.458773 containerd[1558]: time="2026-01-20T03:06:52.458626644Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:52.461139 containerd[1558]: time="2026-01-20T03:06:52.461091930Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:52.461860 containerd[1558]: time="2026-01-20T03:06:52.461803791Z" 
level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 5.47661234s" Jan 20 03:06:52.461860 containerd[1558]: time="2026-01-20T03:06:52.461855378Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 20 03:06:52.469136 containerd[1558]: time="2026-01-20T03:06:52.469093296Z" level=info msg="CreateContainer within sandbox \"29f912782d57c4f2ea13b71a8aadcc1093d5344b41016d9cfb60d671374d4582\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 20 03:06:52.479739 containerd[1558]: time="2026-01-20T03:06:52.479651873Z" level=info msg="Container dd35f94747e4e2155e5a94884ada8342067807f1291497e12ddd5a92976f7bc2: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:06:52.489390 containerd[1558]: time="2026-01-20T03:06:52.489334371Z" level=info msg="CreateContainer within sandbox \"29f912782d57c4f2ea13b71a8aadcc1093d5344b41016d9cfb60d671374d4582\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dd35f94747e4e2155e5a94884ada8342067807f1291497e12ddd5a92976f7bc2\"" Jan 20 03:06:52.490151 containerd[1558]: time="2026-01-20T03:06:52.490122840Z" level=info msg="StartContainer for \"dd35f94747e4e2155e5a94884ada8342067807f1291497e12ddd5a92976f7bc2\"" Jan 20 03:06:52.491022 containerd[1558]: time="2026-01-20T03:06:52.490963317Z" level=info msg="connecting to shim dd35f94747e4e2155e5a94884ada8342067807f1291497e12ddd5a92976f7bc2" address="unix:///run/containerd/s/32548f358906a69b0c0575be27e033e6e19ab3572401198a3b9028899bb1773a" protocol=ttrpc version=3 Jan 20 03:06:52.553284 systemd[1]: Started cri-containerd-dd35f94747e4e2155e5a94884ada8342067807f1291497e12ddd5a92976f7bc2.scope - libcontainer container dd35f94747e4e2155e5a94884ada8342067807f1291497e12ddd5a92976f7bc2. Jan 20 03:06:52.602251 containerd[1558]: time="2026-01-20T03:06:52.602153721Z" level=info msg="StartContainer for \"dd35f94747e4e2155e5a94884ada8342067807f1291497e12ddd5a92976f7bc2\" returns successfully" Jan 20 03:06:54.870243 systemd[1]: cri-containerd-dd35f94747e4e2155e5a94884ada8342067807f1291497e12ddd5a92976f7bc2.scope: Deactivated successfully. Jan 20 03:06:54.877829 containerd[1558]: time="2026-01-20T03:06:54.877780432Z" level=info msg="received container exit event container_id:\"dd35f94747e4e2155e5a94884ada8342067807f1291497e12ddd5a92976f7bc2\" id:\"dd35f94747e4e2155e5a94884ada8342067807f1291497e12ddd5a92976f7bc2\" pid:3044 exit_status:1 exited_at:{seconds:1768878414 nanos:877175477}" Jan 20 03:06:54.913617 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd35f94747e4e2155e5a94884ada8342067807f1291497e12ddd5a92976f7bc2-rootfs.mount: Deactivated successfully. 
Jan 20 03:06:55.585289 kubelet[2701]: I0120 03:06:55.585256 2701 scope.go:117] "RemoveContainer" containerID="dd35f94747e4e2155e5a94884ada8342067807f1291497e12ddd5a92976f7bc2" Jan 20 03:06:55.602932 containerd[1558]: time="2026-01-20T03:06:55.601030824Z" level=info msg="CreateContainer within sandbox \"29f912782d57c4f2ea13b71a8aadcc1093d5344b41016d9cfb60d671374d4582\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 20 03:06:55.613963 containerd[1558]: time="2026-01-20T03:06:55.613827911Z" level=info msg="Container b0b42a23e6fbac67fb59d87ace85d6ed8be2f9f25cfd689a2f8962e5ec5c54f8: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:06:55.627106 containerd[1558]: time="2026-01-20T03:06:55.627004159Z" level=info msg="CreateContainer within sandbox \"29f912782d57c4f2ea13b71a8aadcc1093d5344b41016d9cfb60d671374d4582\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b0b42a23e6fbac67fb59d87ace85d6ed8be2f9f25cfd689a2f8962e5ec5c54f8\"" Jan 20 03:06:55.628203 containerd[1558]: time="2026-01-20T03:06:55.628160702Z" level=info msg="StartContainer for \"b0b42a23e6fbac67fb59d87ace85d6ed8be2f9f25cfd689a2f8962e5ec5c54f8\"" Jan 20 03:06:55.629361 containerd[1558]: time="2026-01-20T03:06:55.629319267Z" level=info msg="connecting to shim b0b42a23e6fbac67fb59d87ace85d6ed8be2f9f25cfd689a2f8962e5ec5c54f8" address="unix:///run/containerd/s/32548f358906a69b0c0575be27e033e6e19ab3572401198a3b9028899bb1773a" protocol=ttrpc version=3 Jan 20 03:06:55.663144 systemd[1]: Started cri-containerd-b0b42a23e6fbac67fb59d87ace85d6ed8be2f9f25cfd689a2f8962e5ec5c54f8.scope - libcontainer container b0b42a23e6fbac67fb59d87ace85d6ed8be2f9f25cfd689a2f8962e5ec5c54f8. Jan 20 03:06:55.715003 containerd[1558]: time="2026-01-20T03:06:55.714944151Z" level=info msg="StartContainer for \"b0b42a23e6fbac67fb59d87ace85d6ed8be2f9f25cfd689a2f8962e5ec5c54f8\" returns successfully" Jan 20 03:06:56.599501 kubelet[2701]: I0120 03:06:56.599383 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-68c4l" podStartSLOduration=5.121019465 podStartE2EDuration="10.599363421s" podCreationTimestamp="2026-01-20 03:06:46 +0000 UTC" firstStartedPulling="2026-01-20 03:06:46.98440868 +0000 UTC m=+6.615201214" lastFinishedPulling="2026-01-20 03:06:52.462752637 +0000 UTC m=+12.093545170" observedRunningTime="2026-01-20 03:06:53.585964376 +0000 UTC m=+13.216756919" watchObservedRunningTime="2026-01-20 03:06:56.599363421 +0000 UTC m=+16.230155954" Jan 20 03:06:58.132563 update_engine[1544]: I20260120 03:06:58.132401 1544 update_attempter.cc:509] Updating boot flags... Jan 20 03:06:58.577742 sudo[1760]: pam_unix(sudo:session): session closed for user root Jan 20 03:06:58.579592 sshd[1759]: Connection closed by 10.0.0.1 port 38992 Jan 20 03:06:58.580076 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Jan 20 03:06:58.584432 systemd[1]: sshd@6-10.0.0.6:22-10.0.0.1:38992.service: Deactivated successfully. Jan 20 03:06:58.586985 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 03:06:58.587265 systemd[1]: session-7.scope: Consumed 6.085s CPU time, 227.7M memory peak. Jan 20 03:06:58.589637 systemd-logind[1542]: Session 7 logged out. Waiting for processes to exit. Jan 20 03:06:58.591295 systemd-logind[1542]: Removed session 7. 
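Decoding the tigera-operator startup record above: podStartE2EDuration runs from pod creation (03:06:46) to the watch observing the pod running (03:06:56.599), while podStartSLOduration appears to be that figure minus the image-pull window. That is my reading of the numbers, not of the tracker's source; the m=+ offsets are monotonic seconds since kubelet start, which makes the check line up exactly:

    // Checking podStartSLOduration against the other fields in the record.
    package main

    import "fmt"

    func main() {
        // m=+ offsets from the record, i.e. monotonic seconds since kubelet start.
        pull := 12.093545170 - 6.615201214 // lastFinishedPulling - firstStartedPulling
        slo := 10.599363421 - pull         // podStartE2EDuration minus the pull window
        fmt.Printf("pull %.9fs slo %.9fs\n", pull, slo) // ≈ 5.478343956s and 5.121019465s, as logged
    }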
Jan 20 03:07:03.309484 systemd[1]: Created slice kubepods-besteffort-pod268b3af1_bc5a_4b70_80dc_496ed3e92990.slice - libcontainer container kubepods-besteffort-pod268b3af1_bc5a_4b70_80dc_496ed3e92990.slice. Jan 20 03:07:03.333211 kubelet[2701]: I0120 03:07:03.332986 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmv42\" (UniqueName: \"kubernetes.io/projected/268b3af1-bc5a-4b70-80dc-496ed3e92990-kube-api-access-cmv42\") pod \"calico-typha-659d88cc6b-xtpjf\" (UID: \"268b3af1-bc5a-4b70-80dc-496ed3e92990\") " pod="calico-system/calico-typha-659d88cc6b-xtpjf" Jan 20 03:07:03.333211 kubelet[2701]: I0120 03:07:03.333050 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/268b3af1-bc5a-4b70-80dc-496ed3e92990-tigera-ca-bundle\") pod \"calico-typha-659d88cc6b-xtpjf\" (UID: \"268b3af1-bc5a-4b70-80dc-496ed3e92990\") " pod="calico-system/calico-typha-659d88cc6b-xtpjf" Jan 20 03:07:03.333211 kubelet[2701]: I0120 03:07:03.333067 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/268b3af1-bc5a-4b70-80dc-496ed3e92990-typha-certs\") pod \"calico-typha-659d88cc6b-xtpjf\" (UID: \"268b3af1-bc5a-4b70-80dc-496ed3e92990\") " pod="calico-system/calico-typha-659d88cc6b-xtpjf" Jan 20 03:07:03.517760 systemd[1]: Created slice kubepods-besteffort-pod545da2aa_0b27_49bc_8d20_f3906cea48ac.slice - libcontainer container kubepods-besteffort-pod545da2aa_0b27_49bc_8d20_f3906cea48ac.slice. Jan 20 03:07:03.534731 kubelet[2701]: I0120 03:07:03.534608 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/545da2aa-0b27-49bc-8d20-f3906cea48ac-tigera-ca-bundle\") pod \"calico-node-lc75q\" (UID: \"545da2aa-0b27-49bc-8d20-f3906cea48ac\") " pod="calico-system/calico-node-lc75q" Jan 20 03:07:03.534731 kubelet[2701]: I0120 03:07:03.534716 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/545da2aa-0b27-49bc-8d20-f3906cea48ac-var-lib-calico\") pod \"calico-node-lc75q\" (UID: \"545da2aa-0b27-49bc-8d20-f3906cea48ac\") " pod="calico-system/calico-node-lc75q" Jan 20 03:07:03.534731 kubelet[2701]: I0120 03:07:03.534742 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/545da2aa-0b27-49bc-8d20-f3906cea48ac-xtables-lock\") pod \"calico-node-lc75q\" (UID: \"545da2aa-0b27-49bc-8d20-f3906cea48ac\") " pod="calico-system/calico-node-lc75q" Jan 20 03:07:03.535090 kubelet[2701]: I0120 03:07:03.534767 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/545da2aa-0b27-49bc-8d20-f3906cea48ac-node-certs\") pod \"calico-node-lc75q\" (UID: \"545da2aa-0b27-49bc-8d20-f3906cea48ac\") " pod="calico-system/calico-node-lc75q" Jan 20 03:07:03.535090 kubelet[2701]: I0120 03:07:03.534953 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/545da2aa-0b27-49bc-8d20-f3906cea48ac-lib-modules\") pod \"calico-node-lc75q\" (UID: \"545da2aa-0b27-49bc-8d20-f3906cea48ac\") " 
pod="calico-system/calico-node-lc75q" Jan 20 03:07:03.535090 kubelet[2701]: I0120 03:07:03.534978 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/545da2aa-0b27-49bc-8d20-f3906cea48ac-cni-bin-dir\") pod \"calico-node-lc75q\" (UID: \"545da2aa-0b27-49bc-8d20-f3906cea48ac\") " pod="calico-system/calico-node-lc75q" Jan 20 03:07:03.535090 kubelet[2701]: I0120 03:07:03.534998 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/545da2aa-0b27-49bc-8d20-f3906cea48ac-cni-net-dir\") pod \"calico-node-lc75q\" (UID: \"545da2aa-0b27-49bc-8d20-f3906cea48ac\") " pod="calico-system/calico-node-lc75q" Jan 20 03:07:03.535090 kubelet[2701]: I0120 03:07:03.535019 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/545da2aa-0b27-49bc-8d20-f3906cea48ac-flexvol-driver-host\") pod \"calico-node-lc75q\" (UID: \"545da2aa-0b27-49bc-8d20-f3906cea48ac\") " pod="calico-system/calico-node-lc75q" Jan 20 03:07:03.535291 kubelet[2701]: I0120 03:07:03.535057 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/545da2aa-0b27-49bc-8d20-f3906cea48ac-var-run-calico\") pod \"calico-node-lc75q\" (UID: \"545da2aa-0b27-49bc-8d20-f3906cea48ac\") " pod="calico-system/calico-node-lc75q" Jan 20 03:07:03.535463 kubelet[2701]: I0120 03:07:03.535355 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/545da2aa-0b27-49bc-8d20-f3906cea48ac-policysync\") pod \"calico-node-lc75q\" (UID: \"545da2aa-0b27-49bc-8d20-f3906cea48ac\") " pod="calico-system/calico-node-lc75q" Jan 20 03:07:03.535463 kubelet[2701]: I0120 03:07:03.535408 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stx7q\" (UniqueName: \"kubernetes.io/projected/545da2aa-0b27-49bc-8d20-f3906cea48ac-kube-api-access-stx7q\") pod \"calico-node-lc75q\" (UID: \"545da2aa-0b27-49bc-8d20-f3906cea48ac\") " pod="calico-system/calico-node-lc75q" Jan 20 03:07:03.535463 kubelet[2701]: I0120 03:07:03.535441 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/545da2aa-0b27-49bc-8d20-f3906cea48ac-cni-log-dir\") pod \"calico-node-lc75q\" (UID: \"545da2aa-0b27-49bc-8d20-f3906cea48ac\") " pod="calico-system/calico-node-lc75q" Jan 20 03:07:03.619407 kubelet[2701]: E0120 03:07:03.619208 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:03.619940 containerd[1558]: time="2026-01-20T03:07:03.619753515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-659d88cc6b-xtpjf,Uid:268b3af1-bc5a-4b70-80dc-496ed3e92990,Namespace:calico-system,Attempt:0,}" Jan 20 03:07:03.662160 kubelet[2701]: E0120 03:07:03.662034 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.662160 kubelet[2701]: W0120 03:07:03.662061 2701 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.662160 kubelet[2701]: E0120 03:07:03.662086 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.668915 containerd[1558]: time="2026-01-20T03:07:03.668755443Z" level=info msg="connecting to shim 9523f11e019000379fa4d45ef46cc2a77be6ee694a8a2037b059c912812d8782" address="unix:///run/containerd/s/ccad0f6d44bf3a387d470503048b0ffb933a02ac724bd1ec03dcef99142503d2" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:07:03.707969 systemd[1]: Started cri-containerd-9523f11e019000379fa4d45ef46cc2a77be6ee694a8a2037b059c912812d8782.scope - libcontainer container 9523f11e019000379fa4d45ef46cc2a77be6ee694a8a2037b059c912812d8782. Jan 20 03:07:03.799700 kubelet[2701]: E0120 03:07:03.799196 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgb8x" podUID="bb663801-c52b-48d5-9ddb-4fcd0f5aab67" Jan 20 03:07:03.818215 containerd[1558]: time="2026-01-20T03:07:03.818134750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-659d88cc6b-xtpjf,Uid:268b3af1-bc5a-4b70-80dc-496ed3e92990,Namespace:calico-system,Attempt:0,} returns sandbox id \"9523f11e019000379fa4d45ef46cc2a77be6ee694a8a2037b059c912812d8782\"" Jan 20 03:07:03.823167 kubelet[2701]: E0120 03:07:03.823113 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:03.824911 kubelet[2701]: E0120 03:07:03.824843 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:03.825914 kubelet[2701]: E0120 03:07:03.825802 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.825914 kubelet[2701]: W0120 03:07:03.825820 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.825914 kubelet[2701]: E0120 03:07:03.825841 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:07:03.826441 containerd[1558]: time="2026-01-20T03:07:03.826373440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lc75q,Uid:545da2aa-0b27-49bc-8d20-f3906cea48ac,Namespace:calico-system,Attempt:0,}" Jan 20 03:07:03.828417 kubelet[2701]: E0120 03:07:03.828093 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.828417 kubelet[2701]: W0120 03:07:03.828406 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.828930 kubelet[2701]: E0120 03:07:03.828426 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.832259 containerd[1558]: time="2026-01-20T03:07:03.832184370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 20 03:07:03.834474 kubelet[2701]: E0120 03:07:03.834402 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.835229 kubelet[2701]: W0120 03:07:03.834439 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.836572 kubelet[2701]: E0120 03:07:03.835241 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.838371 kubelet[2701]: E0120 03:07:03.838273 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.840029 kubelet[2701]: W0120 03:07:03.839949 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.840029 kubelet[2701]: E0120 03:07:03.839994 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.843632 kubelet[2701]: E0120 03:07:03.843610 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.843632 kubelet[2701]: W0120 03:07:03.843629 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.843961 kubelet[2701]: E0120 03:07:03.843783 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:07:03.845220 kubelet[2701]: E0120 03:07:03.845071 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.845220 kubelet[2701]: W0120 03:07:03.845089 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.845220 kubelet[2701]: E0120 03:07:03.845111 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.848391 kubelet[2701]: E0120 03:07:03.848336 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.848606 kubelet[2701]: W0120 03:07:03.848591 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.848912 kubelet[2701]: E0120 03:07:03.848770 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.849346 kubelet[2701]: E0120 03:07:03.849332 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.849463 kubelet[2701]: W0120 03:07:03.849450 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.849567 kubelet[2701]: E0120 03:07:03.849556 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.850266 kubelet[2701]: E0120 03:07:03.850201 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.850266 kubelet[2701]: W0120 03:07:03.850212 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.850266 kubelet[2701]: E0120 03:07:03.850224 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.851081 kubelet[2701]: E0120 03:07:03.851069 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.851182 kubelet[2701]: W0120 03:07:03.851131 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.851182 kubelet[2701]: E0120 03:07:03.851144 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:07:03.853513 kubelet[2701]: E0120 03:07:03.853371 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.853513 kubelet[2701]: W0120 03:07:03.853386 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.853513 kubelet[2701]: E0120 03:07:03.853398 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.854246 kubelet[2701]: E0120 03:07:03.854203 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.854310 kubelet[2701]: W0120 03:07:03.854294 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.854408 kubelet[2701]: E0120 03:07:03.854363 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.856578 kubelet[2701]: E0120 03:07:03.856564 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.856693 kubelet[2701]: W0120 03:07:03.856636 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.856693 kubelet[2701]: E0120 03:07:03.856652 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.857258 kubelet[2701]: E0120 03:07:03.857225 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.857258 kubelet[2701]: W0120 03:07:03.857237 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.857258 kubelet[2701]: E0120 03:07:03.857247 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.857912 kubelet[2701]: E0120 03:07:03.857856 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.857912 kubelet[2701]: W0120 03:07:03.857867 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.858025 kubelet[2701]: E0120 03:07:03.858012 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:07:03.858440 kubelet[2701]: I0120 03:07:03.858153 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb663801-c52b-48d5-9ddb-4fcd0f5aab67-kubelet-dir\") pod \"csi-node-driver-hgb8x\" (UID: \"bb663801-c52b-48d5-9ddb-4fcd0f5aab67\") " pod="calico-system/csi-node-driver-hgb8x" Jan 20 03:07:03.859647 kubelet[2701]: E0120 03:07:03.859635 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.859811 kubelet[2701]: W0120 03:07:03.859759 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.859811 kubelet[2701]: E0120 03:07:03.859774 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.860973 kubelet[2701]: E0120 03:07:03.860958 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.861583 kubelet[2701]: W0120 03:07:03.861278 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.861583 kubelet[2701]: E0120 03:07:03.861311 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.863391 kubelet[2701]: E0120 03:07:03.863127 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.863533 kubelet[2701]: W0120 03:07:03.863463 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.863533 kubelet[2701]: E0120 03:07:03.863479 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.864096 kubelet[2701]: I0120 03:07:03.864080 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bb663801-c52b-48d5-9ddb-4fcd0f5aab67-socket-dir\") pod \"csi-node-driver-hgb8x\" (UID: \"bb663801-c52b-48d5-9ddb-4fcd0f5aab67\") " pod="calico-system/csi-node-driver-hgb8x" Jan 20 03:07:03.865920 kubelet[2701]: E0120 03:07:03.864608 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.866191 kubelet[2701]: W0120 03:07:03.865972 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.866191 kubelet[2701]: E0120 03:07:03.865990 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:07:03.868915 kubelet[2701]: E0120 03:07:03.868723 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.868915 kubelet[2701]: W0120 03:07:03.868741 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.868915 kubelet[2701]: E0120 03:07:03.868754 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.869181 kubelet[2701]: E0120 03:07:03.869168 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.869269 kubelet[2701]: W0120 03:07:03.869252 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.869404 kubelet[2701]: E0120 03:07:03.869391 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.870080 kubelet[2701]: E0120 03:07:03.869851 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.870295 kubelet[2701]: W0120 03:07:03.870170 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.870714 kubelet[2701]: E0120 03:07:03.870350 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.870714 kubelet[2701]: I0120 03:07:03.870623 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bb663801-c52b-48d5-9ddb-4fcd0f5aab67-registration-dir\") pod \"csi-node-driver-hgb8x\" (UID: \"bb663801-c52b-48d5-9ddb-4fcd0f5aab67\") " pod="calico-system/csi-node-driver-hgb8x" Jan 20 03:07:03.870784 kubelet[2701]: E0120 03:07:03.870772 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.870805 kubelet[2701]: W0120 03:07:03.870785 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.870805 kubelet[2701]: E0120 03:07:03.870799 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:07:03.871291 kubelet[2701]: E0120 03:07:03.871154 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.871291 kubelet[2701]: W0120 03:07:03.871172 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.871291 kubelet[2701]: E0120 03:07:03.871184 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.871581 kubelet[2701]: E0120 03:07:03.871557 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.871581 kubelet[2701]: W0120 03:07:03.871566 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.871581 kubelet[2701]: E0120 03:07:03.871575 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.872352 kubelet[2701]: E0120 03:07:03.871971 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.872352 kubelet[2701]: W0120 03:07:03.871986 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.872352 kubelet[2701]: E0120 03:07:03.871999 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.873311 kubelet[2701]: E0120 03:07:03.872389 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.873311 kubelet[2701]: W0120 03:07:03.872398 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.873311 kubelet[2701]: E0120 03:07:03.872407 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.873311 kubelet[2701]: E0120 03:07:03.872922 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.873311 kubelet[2701]: W0120 03:07:03.872930 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.873311 kubelet[2701]: E0120 03:07:03.872938 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:07:03.873856 kubelet[2701]: E0120 03:07:03.873322 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.873856 kubelet[2701]: W0120 03:07:03.873330 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.873856 kubelet[2701]: E0120 03:07:03.873341 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.887239 containerd[1558]: time="2026-01-20T03:07:03.887155295Z" level=info msg="connecting to shim 2d4d969fbe12fb371041880c067d058c402d0eceb2bc47210d6fcd8156e24496" address="unix:///run/containerd/s/4436940a750d87026a9791a990a81d157280f01c199e8baba15ec99454a3fb16" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:07:03.939200 systemd[1]: Started cri-containerd-2d4d969fbe12fb371041880c067d058c402d0eceb2bc47210d6fcd8156e24496.scope - libcontainer container 2d4d969fbe12fb371041880c067d058c402d0eceb2bc47210d6fcd8156e24496. Jan 20 03:07:03.974870 kubelet[2701]: E0120 03:07:03.974768 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.975199 kubelet[2701]: W0120 03:07:03.975103 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.975249 kubelet[2701]: E0120 03:07:03.975218 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.977696 kubelet[2701]: E0120 03:07:03.977554 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.977696 kubelet[2701]: W0120 03:07:03.977592 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.977696 kubelet[2701]: E0120 03:07:03.977612 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.979371 kubelet[2701]: E0120 03:07:03.979333 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.979371 kubelet[2701]: W0120 03:07:03.979351 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.979371 kubelet[2701]: E0120 03:07:03.979370 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:07:03.979825 kubelet[2701]: E0120 03:07:03.979777 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.979825 kubelet[2701]: W0120 03:07:03.979790 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.979825 kubelet[2701]: E0120 03:07:03.979804 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.980152 kubelet[2701]: I0120 03:07:03.979847 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk7nr\" (UniqueName: \"kubernetes.io/projected/bb663801-c52b-48d5-9ddb-4fcd0f5aab67-kube-api-access-hk7nr\") pod \"csi-node-driver-hgb8x\" (UID: \"bb663801-c52b-48d5-9ddb-4fcd0f5aab67\") " pod="calico-system/csi-node-driver-hgb8x" Jan 20 03:07:03.980402 kubelet[2701]: E0120 03:07:03.980383 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.980531 kubelet[2701]: W0120 03:07:03.980400 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.980531 kubelet[2701]: E0120 03:07:03.980415 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.981706 kubelet[2701]: E0120 03:07:03.981575 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.981706 kubelet[2701]: W0120 03:07:03.981644 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.981706 kubelet[2701]: E0120 03:07:03.981694 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:03.982213 kubelet[2701]: E0120 03:07:03.982191 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.982213 kubelet[2701]: W0120 03:07:03.982207 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.982461 kubelet[2701]: E0120 03:07:03.982219 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:07:03.982461 kubelet[2701]: I0120 03:07:03.982393 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bb663801-c52b-48d5-9ddb-4fcd0f5aab67-varrun\") pod \"csi-node-driver-hgb8x\" (UID: \"bb663801-c52b-48d5-9ddb-4fcd0f5aab67\") " pod="calico-system/csi-node-driver-hgb8x" Jan 20 03:07:03.983313 kubelet[2701]: E0120 03:07:03.983279 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:03.983313 kubelet[2701]: W0120 03:07:03.983309 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:03.983313 kubelet[2701]: E0120 03:07:03.983323 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
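
The kubelet messages above (repeated on every plugin probe in this window; duplicates collapsed here) are one failure reported three ways: the probe execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary does not exist yet, so the captured output is empty, and unmarshalling empty output as JSON in Go fails with exactly "unexpected end of JSON input". A minimal sketch of both halves of that contract; the DriverStatus type here is an illustration of the documented FlexVolume response shape, not kubelet's own type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus mirrors the shape of a FlexVolume driver response: every
// call, including "init", is expected to print a JSON object with at
// least a "status" field.
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// An absent driver produces no output; unmarshalling the empty
	// output reproduces the exact error string in the log above.
	var ds DriverStatus
	err := json.Unmarshal([]byte(""), &ds)
	fmt.Println(err) // unexpected end of JSON input

	// What a present driver would print for "init" (attach unsupported):
	out, _ := json.Marshal(DriverStatus{
		Status:       "Success",
		Capabilities: map[string]bool{"attach": false},
	})
	fmt.Println(string(out)) // {"status":"Success","capabilities":{"attach":false}}
}
```

The errors recur on every probe until something installs the driver binary; the pod2daemon-flexvol image pulled later in this log appears to exist for exactly that purpose.
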
Jan 20 03:07:04.009500 containerd[1558]: time="2026-01-20T03:07:04.009381118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lc75q,Uid:545da2aa-0b27-49bc-8d20-f3906cea48ac,Namespace:calico-system,Attempt:0,} returns sandbox id \"2d4d969fbe12fb371041880c067d058c402d0eceb2bc47210d6fcd8156e24496\"" Jan 20 03:07:04.010797 kubelet[2701]: E0120 03:07:04.010726 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:04.093334 kubelet[2701]: E0120 03:07:04.093288 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:04.093491 kubelet[2701]: W0120 03:07:04.093397 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:04.093491 kubelet[2701]: E0120 03:07:04.093424 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:04.109240 kubelet[2701]: E0120 03:07:04.109121 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:04.109240 kubelet[2701]: W0120 03:07:04.109170 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:04.109240 kubelet[2701]: E0120 03:07:04.109194 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:04.463084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4088429359.mount: Deactivated successfully. Jan 20 03:07:05.217466 containerd[1558]: time="2026-01-20T03:07:05.217384084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:07:05.218849 containerd[1558]: time="2026-01-20T03:07:05.218803351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Jan 20 03:07:05.220189 containerd[1558]: time="2026-01-20T03:07:05.220142312Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:07:05.222865 containerd[1558]: time="2026-01-20T03:07:05.222781079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:07:05.223591 containerd[1558]: time="2026-01-20T03:07:05.223533854Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.39130519s" Jan 20 03:07:05.223591 containerd[1558]: time="2026-01-20T03:07:05.223570743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 20 03:07:05.229842 containerd[1558]: time="2026-01-20T03:07:05.229774342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 20 03:07:05.251949 containerd[1558]: time="2026-01-20T03:07:05.251607706Z" level=info msg="CreateContainer within sandbox \"9523f11e019000379fa4d45ef46cc2a77be6ee694a8a2037b059c912812d8782\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
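
The dns.go record above fires because the node's resolv.conf carries more nameservers than the kubelet will propagate into pods; the "applied nameserver line" shows the three survivors (1.1.1.1 1.0.0.1 8.8.8.8). A sketch of that trimming, under the assumption that the limit is three and the earliest entries win; trimNameservers and the fourth entry are hypothetical, not kubelet code:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// trimNameservers keeps the first max "nameserver" entries of a
// resolv.conf body, mirroring the cap the kubelet warning describes.
func trimNameservers(resolvConf string, max int) []string {
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > max {
		servers = servers[:max] // the omitted tail triggers the log line
	}
	return servers
}

func main() {
	// Hypothetical resolv.conf with one entry too many.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	fmt.Println(trimNameservers(conf, 3)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```
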
Jan 20 03:07:05.265976 containerd[1558]: time="2026-01-20T03:07:05.265865706Z" level=info msg="Container 2df364b9f6b3fe42e269e455fb4fd83b05698e7a5ad972a6e69711aa4988364d: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:07:05.274860 containerd[1558]: time="2026-01-20T03:07:05.274763312Z" level=info msg="CreateContainer within sandbox \"9523f11e019000379fa4d45ef46cc2a77be6ee694a8a2037b059c912812d8782\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2df364b9f6b3fe42e269e455fb4fd83b05698e7a5ad972a6e69711aa4988364d\"" Jan 20 03:07:05.275915 containerd[1558]: time="2026-01-20T03:07:05.275805498Z" level=info msg="StartContainer for \"2df364b9f6b3fe42e269e455fb4fd83b05698e7a5ad972a6e69711aa4988364d\"" Jan 20 03:07:05.277943 containerd[1558]: time="2026-01-20T03:07:05.277860651Z" level=info msg="connecting to shim 2df364b9f6b3fe42e269e455fb4fd83b05698e7a5ad972a6e69711aa4988364d" address="unix:///run/containerd/s/ccad0f6d44bf3a387d470503048b0ffb933a02ac724bd1ec03dcef99142503d2" protocol=ttrpc version=3 Jan 20 03:07:05.303231 systemd[1]: Started cri-containerd-2df364b9f6b3fe42e269e455fb4fd83b05698e7a5ad972a6e69711aa4988364d.scope - libcontainer container 2df364b9f6b3fe42e269e455fb4fd83b05698e7a5ad972a6e69711aa4988364d. Jan 20 03:07:05.371165 containerd[1558]: time="2026-01-20T03:07:05.371107628Z" level=info msg="StartContainer for \"2df364b9f6b3fe42e269e455fb4fd83b05698e7a5ad972a6e69711aa4988364d\" returns successfully" Jan 20 03:07:05.513176 kubelet[2701]: E0120 03:07:05.512403 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgb8x" podUID="bb663801-c52b-48d5-9ddb-4fcd0f5aab67" Jan 20 03:07:05.629381 kubelet[2701]: E0120 03:07:05.629270 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:05.686316 kubelet[2701]: E0120 03:07:05.686231 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:05.686316 kubelet[2701]: W0120 03:07:05.686271 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:05.686316 kubelet[2701]: E0120 03:07:05.686301 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:05.707293 kubelet[2701]: E0120 03:07:05.707205 2701 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:07:05.707293 kubelet[2701]: W0120 03:07:05.707230 2701 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:07:05.707293 kubelet[2701]: E0120 03:07:05.707243 2701 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:07:06.179386 containerd[1558]: time="2026-01-20T03:07:06.179276114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:07:06.180410 containerd[1558]: time="2026-01-20T03:07:06.180304462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 20 03:07:06.181832 containerd[1558]: time="2026-01-20T03:07:06.181720121Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:07:06.184280 containerd[1558]: time="2026-01-20T03:07:06.184188068Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:07:06.184673 containerd[1558]: time="2026-01-20T03:07:06.184606103Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 954.803799ms" Jan 20 03:07:06.184673 containerd[1558]: time="2026-01-20T03:07:06.184644095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 20 03:07:06.189717 containerd[1558]: time="2026-01-20T03:07:06.189551560Z" level=info msg="CreateContainer within sandbox \"2d4d969fbe12fb371041880c067d058c402d0eceb2bc47210d6fcd8156e24496\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 20 03:07:06.201130 containerd[1558]: time="2026-01-20T03:07:06.201060586Z" level=info msg="Container 77ed48f9382fe2990e83ac0194c3b4dca2934d3534a3e74f4c4d37e4b407936d: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:07:06.212784 containerd[1558]: time="2026-01-20T03:07:06.212594095Z" level=info msg="CreateContainer within sandbox \"2d4d969fbe12fb371041880c067d058c402d0eceb2bc47210d6fcd8156e24496\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"77ed48f9382fe2990e83ac0194c3b4dca2934d3534a3e74f4c4d37e4b407936d\"" Jan 20 03:07:06.213611 containerd[1558]: time="2026-01-20T03:07:06.213531199Z" level=info msg="StartContainer for \"77ed48f9382fe2990e83ac0194c3b4dca2934d3534a3e74f4c4d37e4b407936d\"" Jan 20 03:07:06.216978 containerd[1558]: time="2026-01-20T03:07:06.216504049Z" level=info msg="connecting to shim 77ed48f9382fe2990e83ac0194c3b4dca2934d3534a3e74f4c4d37e4b407936d" address="unix:///run/containerd/s/4436940a750d87026a9791a990a81d157280f01c199e8baba15ec99454a3fb16" protocol=ttrpc version=3
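
The flexvol-driver container created above comes from the pod2daemon-flexvol image pulled just before it; its apparent job is to drop Calico's uds binary into the kubelet's FlexVolume plugin directory, the same path the probe errors earlier in this log complain about. A hedged sketch of that install step; the source path and the Go rewrite are purely illustrative, the real container is not this code:

```go
package main

import (
	"io"
	"log"
	"os"
	"path/filepath"
)

// installDriver copies a driver binary into the kubelet FlexVolume plugin
// tree; once <pluginDir>/nodeagent~uds/uds exists and is executable, the
// "executable file not found" probe errors above stop.
func installDriver(src, pluginDir string) error {
	dst := filepath.Join(pluginDir, "nodeagent~uds", "uds")
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// "/usr/local/bin/flexvol" is a hypothetical source location.
	if err := installDriver("/usr/local/bin/flexvol", "/opt/libexec/kubernetes/kubelet-plugins/volume/exec"); err != nil {
		log.Fatal(err)
	}
}
```
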
Jan 20 03:07:06.245169 systemd[1]: Started cri-containerd-77ed48f9382fe2990e83ac0194c3b4dca2934d3534a3e74f4c4d37e4b407936d.scope - libcontainer container 77ed48f9382fe2990e83ac0194c3b4dca2934d3534a3e74f4c4d37e4b407936d. Jan 20 03:07:06.346303 containerd[1558]: time="2026-01-20T03:07:06.346210606Z" level=info msg="StartContainer for \"77ed48f9382fe2990e83ac0194c3b4dca2934d3534a3e74f4c4d37e4b407936d\" returns successfully" Jan 20 03:07:06.361210 systemd[1]: cri-containerd-77ed48f9382fe2990e83ac0194c3b4dca2934d3534a3e74f4c4d37e4b407936d.scope: Deactivated successfully. Jan 20 03:07:06.364502 containerd[1558]: time="2026-01-20T03:07:06.364463377Z" level=info msg="received container exit event container_id:\"77ed48f9382fe2990e83ac0194c3b4dca2934d3534a3e74f4c4d37e4b407936d\" id:\"77ed48f9382fe2990e83ac0194c3b4dca2934d3534a3e74f4c4d37e4b407936d\" pid:3466 exited_at:{seconds:1768878426 nanos:364100463}" Jan 20 03:07:06.398345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77ed48f9382fe2990e83ac0194c3b4dca2934d3534a3e74f4c4d37e4b407936d-rootfs.mount: Deactivated successfully. Jan 20 03:07:06.630987 kubelet[2701]: I0120 03:07:06.630917 2701 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 03:07:06.631668 kubelet[2701]: E0120 03:07:06.631469 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:06.632680 kubelet[2701]: E0120 03:07:06.632199 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:06.633187 containerd[1558]: time="2026-01-20T03:07:06.633144329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 03:07:06.649869 kubelet[2701]: I0120 03:07:06.649791 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-659d88cc6b-xtpjf" podStartSLOduration=2.248493932 podStartE2EDuration="3.649018542s" podCreationTimestamp="2026-01-20 03:07:03 +0000 UTC" firstStartedPulling="2026-01-20 03:07:03.829017696 +0000 UTC m=+23.459810239" lastFinishedPulling="2026-01-20 03:07:05.229542316 +0000 UTC m=+24.860334849" observedRunningTime="2026-01-20 03:07:05.642298325 +0000 UTC m=+25.273090858" watchObservedRunningTime="2026-01-20 03:07:06.649018542 +0000 UTC m=+26.279811075" Jan 20 03:07:07.510233 kubelet[2701]: E0120 03:07:07.510128 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgb8x" podUID="bb663801-c52b-48d5-9ddb-4fcd0f5aab67" Jan 20 03:07:08.928932 containerd[1558]: time="2026-01-20T03:07:08.928691168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:07:08.932644 containerd[1558]: time="2026-01-20T03:07:08.932332462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 20 03:07:08.935401 containerd[1558]: time="2026-01-20T03:07:08.935292184Z" level=info
msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:07:08.939107 containerd[1558]: time="2026-01-20T03:07:08.938981682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:07:08.939965 containerd[1558]: time="2026-01-20T03:07:08.939841376Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.306647785s" Jan 20 03:07:08.939965 containerd[1558]: time="2026-01-20T03:07:08.939942967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 20 03:07:08.952905 containerd[1558]: time="2026-01-20T03:07:08.952792364Z" level=info msg="CreateContainer within sandbox \"2d4d969fbe12fb371041880c067d058c402d0eceb2bc47210d6fcd8156e24496\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 03:07:08.978983 containerd[1558]: time="2026-01-20T03:07:08.978475959Z" level=info msg="Container a770ada491a3df7dfaa361bb1f44cc10cd2eeff2c1d2d651a231f72bf3513e3a: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:07:08.983518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2666272744.mount: Deactivated successfully. Jan 20 03:07:09.004571 containerd[1558]: time="2026-01-20T03:07:09.004453012Z" level=info msg="CreateContainer within sandbox \"2d4d969fbe12fb371041880c067d058c402d0eceb2bc47210d6fcd8156e24496\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a770ada491a3df7dfaa361bb1f44cc10cd2eeff2c1d2d651a231f72bf3513e3a\"" Jan 20 03:07:09.007190 containerd[1558]: time="2026-01-20T03:07:09.006991613Z" level=info msg="StartContainer for \"a770ada491a3df7dfaa361bb1f44cc10cd2eeff2c1d2d651a231f72bf3513e3a\"" Jan 20 03:07:09.012675 containerd[1558]: time="2026-01-20T03:07:09.012606058Z" level=info msg="connecting to shim a770ada491a3df7dfaa361bb1f44cc10cd2eeff2c1d2d651a231f72bf3513e3a" address="unix:///run/containerd/s/4436940a750d87026a9791a990a81d157280f01c199e8baba15ec99454a3fb16" protocol=ttrpc version=3 Jan 20 03:07:09.044203 systemd[1]: Started cri-containerd-a770ada491a3df7dfaa361bb1f44cc10cd2eeff2c1d2d651a231f72bf3513e3a.scope - libcontainer container a770ada491a3df7dfaa361bb1f44cc10cd2eeff2c1d2d651a231f72bf3513e3a. 
Jan 20 03:07:09.192369 containerd[1558]: time="2026-01-20T03:07:09.192166968Z" level=info msg="StartContainer for \"a770ada491a3df7dfaa361bb1f44cc10cd2eeff2c1d2d651a231f72bf3513e3a\" returns successfully" Jan 20 03:07:09.510588 kubelet[2701]: E0120 03:07:09.510225 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgb8x" podUID="bb663801-c52b-48d5-9ddb-4fcd0f5aab67" Jan 20 03:07:09.651574 kubelet[2701]: E0120 03:07:09.651540 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:09.990613 systemd[1]: cri-containerd-a770ada491a3df7dfaa361bb1f44cc10cd2eeff2c1d2d651a231f72bf3513e3a.scope: Deactivated successfully. Jan 20 03:07:09.991183 systemd[1]: cri-containerd-a770ada491a3df7dfaa361bb1f44cc10cd2eeff2c1d2d651a231f72bf3513e3a.scope: Consumed 776ms CPU time, 178.7M memory peak, 3.7M read from disk, 171.3M written to disk. Jan 20 03:07:09.994360 containerd[1558]: time="2026-01-20T03:07:09.994311066Z" level=info msg="received container exit event container_id:\"a770ada491a3df7dfaa361bb1f44cc10cd2eeff2c1d2d651a231f72bf3513e3a\" id:\"a770ada491a3df7dfaa361bb1f44cc10cd2eeff2c1d2d651a231f72bf3513e3a\" pid:3525 exited_at:{seconds:1768878429 nanos:993728076}" Jan 20 03:07:10.004387 kubelet[2701]: I0120 03:07:10.004352 2701 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 20 03:07:10.045511 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a770ada491a3df7dfaa361bb1f44cc10cd2eeff2c1d2d651a231f72bf3513e3a-rootfs.mount: Deactivated successfully. Jan 20 03:07:10.095319 systemd[1]: Created slice kubepods-burstable-pod44bc219a_0768_4e39_9391_3f5a2822a096.slice - libcontainer container kubepods-burstable-pod44bc219a_0768_4e39_9391_3f5a2822a096.slice. Jan 20 03:07:10.114361 systemd[1]: Created slice kubepods-besteffort-poda91d0df2_2cb2_4cc3_b13c_61dfadc11b46.slice - libcontainer container kubepods-besteffort-poda91d0df2_2cb2_4cc3_b13c_61dfadc11b46.slice. Jan 20 03:07:10.128789 systemd[1]: Created slice kubepods-besteffort-podd400130d_fd02_4b87_8160_4ba74bd8b376.slice - libcontainer container kubepods-besteffort-podd400130d_fd02_4b87_8160_4ba74bd8b376.slice. Jan 20 03:07:10.145034 systemd[1]: Created slice kubepods-besteffort-pod25480ded_99d4_43f9_a73a_0b4e4143afb7.slice - libcontainer container kubepods-besteffort-pod25480ded_99d4_43f9_a73a_0b4e4143afb7.slice. 
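
The Created slice records above encode each pod's QoS class and UID straight into the systemd unit name: kubepods-<qos>-pod<uid>.slice, with the UID's dashes escaped to underscores (burstable for the coredns pods, besteffort for the rest). A small sketch that reproduces the observed naming; the pattern is derived from these log lines, not from kubelet source:

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName rebuilds the systemd slice names seen in the log: a per-QoS
// parent plus the pod UID with dashes escaped to underscores.
func sliceName(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		strings.ToLower(qos), strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(sliceName("burstable", "44bc219a-0768-4e39-9391-3f5a2822a096"))
	// kubepods-burstable-pod44bc219a_0768_4e39_9391_3f5a2822a096.slice
}
```
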
Jan 20 03:07:10.148661 kubelet[2701]: I0120 03:07:10.147503 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldd2w\" (UniqueName: \"kubernetes.io/projected/68dce8b3-c3fe-40f2-a705-b41b9b2da4a7-kube-api-access-ldd2w\") pod \"calico-apiserver-6c8d49f8f6-869f9\" (UID: \"68dce8b3-c3fe-40f2-a705-b41b9b2da4a7\") " pod="calico-apiserver/calico-apiserver-6c8d49f8f6-869f9" Jan 20 03:07:10.148661 kubelet[2701]: I0120 03:07:10.147553 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qkft\" (UniqueName: \"kubernetes.io/projected/a91d0df2-2cb2-4cc3-b13c-61dfadc11b46-kube-api-access-6qkft\") pod \"calico-kube-controllers-8485fc5f84-9b6gt\" (UID: \"a91d0df2-2cb2-4cc3-b13c-61dfadc11b46\") " pod="calico-system/calico-kube-controllers-8485fc5f84-9b6gt" Jan 20 03:07:10.148661 kubelet[2701]: I0120 03:07:10.147589 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d400130d-fd02-4b87-8160-4ba74bd8b376-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-8qts6\" (UID: \"d400130d-fd02-4b87-8160-4ba74bd8b376\") " pod="calico-system/goldmane-7c778bb748-8qts6" Jan 20 03:07:10.148661 kubelet[2701]: I0120 03:07:10.147616 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/68dce8b3-c3fe-40f2-a705-b41b9b2da4a7-calico-apiserver-certs\") pod \"calico-apiserver-6c8d49f8f6-869f9\" (UID: \"68dce8b3-c3fe-40f2-a705-b41b9b2da4a7\") " pod="calico-apiserver/calico-apiserver-6c8d49f8f6-869f9" Jan 20 03:07:10.148661 kubelet[2701]: I0120 03:07:10.147639 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d400130d-fd02-4b87-8160-4ba74bd8b376-config\") pod \"goldmane-7c778bb748-8qts6\" (UID: \"d400130d-fd02-4b87-8160-4ba74bd8b376\") " pod="calico-system/goldmane-7c778bb748-8qts6" Jan 20 03:07:10.149087 kubelet[2701]: I0120 03:07:10.147662 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/25480ded-99d4-43f9-a73a-0b4e4143afb7-calico-apiserver-certs\") pod \"calico-apiserver-6d65d8c755-qpx77\" (UID: \"25480ded-99d4-43f9-a73a-0b4e4143afb7\") " pod="calico-apiserver/calico-apiserver-6d65d8c755-qpx77" Jan 20 03:07:10.149087 kubelet[2701]: I0120 03:07:10.147691 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twhgd\" (UniqueName: \"kubernetes.io/projected/25480ded-99d4-43f9-a73a-0b4e4143afb7-kube-api-access-twhgd\") pod \"calico-apiserver-6d65d8c755-qpx77\" (UID: \"25480ded-99d4-43f9-a73a-0b4e4143afb7\") " pod="calico-apiserver/calico-apiserver-6d65d8c755-qpx77" Jan 20 03:07:10.149087 kubelet[2701]: I0120 03:07:10.147715 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxnzg\" (UniqueName: \"kubernetes.io/projected/894676b0-e620-4fb7-9c25-410e3b409527-kube-api-access-nxnzg\") pod \"whisker-b6db67984-xkqlq\" (UID: \"894676b0-e620-4fb7-9c25-410e3b409527\") " pod="calico-system/whisker-b6db67984-xkqlq" Jan 20 03:07:10.149087 kubelet[2701]: I0120 03:07:10.147739 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-dqc42\" (UniqueName: \"kubernetes.io/projected/44bc219a-0768-4e39-9391-3f5a2822a096-kube-api-access-dqc42\") pod \"coredns-66bc5c9577-mb2dm\" (UID: \"44bc219a-0768-4e39-9391-3f5a2822a096\") " pod="kube-system/coredns-66bc5c9577-mb2dm" Jan 20 03:07:10.149087 kubelet[2701]: I0120 03:07:10.147763 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d400130d-fd02-4b87-8160-4ba74bd8b376-goldmane-key-pair\") pod \"goldmane-7c778bb748-8qts6\" (UID: \"d400130d-fd02-4b87-8160-4ba74bd8b376\") " pod="calico-system/goldmane-7c778bb748-8qts6" Jan 20 03:07:10.149259 kubelet[2701]: I0120 03:07:10.147803 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8hnn\" (UniqueName: \"kubernetes.io/projected/d400130d-fd02-4b87-8160-4ba74bd8b376-kube-api-access-w8hnn\") pod \"goldmane-7c778bb748-8qts6\" (UID: \"d400130d-fd02-4b87-8160-4ba74bd8b376\") " pod="calico-system/goldmane-7c778bb748-8qts6" Jan 20 03:07:10.149259 kubelet[2701]: I0120 03:07:10.147996 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2q84\" (UniqueName: \"kubernetes.io/projected/4a4f82c3-d2e5-4cf9-8df0-d5821866627e-kube-api-access-v2q84\") pod \"coredns-66bc5c9577-mrxcm\" (UID: \"4a4f82c3-d2e5-4cf9-8df0-d5821866627e\") " pod="kube-system/coredns-66bc5c9577-mrxcm" Jan 20 03:07:10.149259 kubelet[2701]: I0120 03:07:10.148046 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a4f82c3-d2e5-4cf9-8df0-d5821866627e-config-volume\") pod \"coredns-66bc5c9577-mrxcm\" (UID: \"4a4f82c3-d2e5-4cf9-8df0-d5821866627e\") " pod="kube-system/coredns-66bc5c9577-mrxcm" Jan 20 03:07:10.149259 kubelet[2701]: I0120 03:07:10.148091 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/894676b0-e620-4fb7-9c25-410e3b409527-whisker-backend-key-pair\") pod \"whisker-b6db67984-xkqlq\" (UID: \"894676b0-e620-4fb7-9c25-410e3b409527\") " pod="calico-system/whisker-b6db67984-xkqlq" Jan 20 03:07:10.149259 kubelet[2701]: I0120 03:07:10.148120 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/894676b0-e620-4fb7-9c25-410e3b409527-whisker-ca-bundle\") pod \"whisker-b6db67984-xkqlq\" (UID: \"894676b0-e620-4fb7-9c25-410e3b409527\") " pod="calico-system/whisker-b6db67984-xkqlq" Jan 20 03:07:10.149443 kubelet[2701]: I0120 03:07:10.148143 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44bc219a-0768-4e39-9391-3f5a2822a096-config-volume\") pod \"coredns-66bc5c9577-mb2dm\" (UID: \"44bc219a-0768-4e39-9391-3f5a2822a096\") " pod="kube-system/coredns-66bc5c9577-mb2dm" Jan 20 03:07:10.149443 kubelet[2701]: I0120 03:07:10.148175 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9eb8a6d4-b655-41d5-bb6a-48e492c0056f-calico-apiserver-certs\") pod \"calico-apiserver-6c8d49f8f6-9f5w7\" (UID: \"9eb8a6d4-b655-41d5-bb6a-48e492c0056f\") " pod="calico-apiserver/calico-apiserver-6c8d49f8f6-9f5w7" Jan 20 
03:07:10.149443 kubelet[2701]: I0120 03:07:10.148198 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdhht\" (UniqueName: \"kubernetes.io/projected/9eb8a6d4-b655-41d5-bb6a-48e492c0056f-kube-api-access-tdhht\") pod \"calico-apiserver-6c8d49f8f6-9f5w7\" (UID: \"9eb8a6d4-b655-41d5-bb6a-48e492c0056f\") " pod="calico-apiserver/calico-apiserver-6c8d49f8f6-9f5w7" Jan 20 03:07:10.149443 kubelet[2701]: I0120 03:07:10.148229 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a91d0df2-2cb2-4cc3-b13c-61dfadc11b46-tigera-ca-bundle\") pod \"calico-kube-controllers-8485fc5f84-9b6gt\" (UID: \"a91d0df2-2cb2-4cc3-b13c-61dfadc11b46\") " pod="calico-system/calico-kube-controllers-8485fc5f84-9b6gt" Jan 20 03:07:10.157032 systemd[1]: Created slice kubepods-besteffort-pod68dce8b3_c3fe_40f2_a705_b41b9b2da4a7.slice - libcontainer container kubepods-besteffort-pod68dce8b3_c3fe_40f2_a705_b41b9b2da4a7.slice. Jan 20 03:07:10.169613 systemd[1]: Created slice kubepods-besteffort-pod894676b0_e620_4fb7_9c25_410e3b409527.slice - libcontainer container kubepods-besteffort-pod894676b0_e620_4fb7_9c25_410e3b409527.slice. Jan 20 03:07:10.184136 systemd[1]: Created slice kubepods-burstable-pod4a4f82c3_d2e5_4cf9_8df0_d5821866627e.slice - libcontainer container kubepods-burstable-pod4a4f82c3_d2e5_4cf9_8df0_d5821866627e.slice. Jan 20 03:07:10.191180 systemd[1]: Created slice kubepods-besteffort-pod9eb8a6d4_b655_41d5_bb6a_48e492c0056f.slice - libcontainer container kubepods-besteffort-pod9eb8a6d4_b655_41d5_bb6a_48e492c0056f.slice. Jan 20 03:07:10.414996 kubelet[2701]: E0120 03:07:10.414710 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:10.417692 containerd[1558]: time="2026-01-20T03:07:10.417246126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mb2dm,Uid:44bc219a-0768-4e39-9391-3f5a2822a096,Namespace:kube-system,Attempt:0,}" Jan 20 03:07:10.434597 containerd[1558]: time="2026-01-20T03:07:10.434497777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8485fc5f84-9b6gt,Uid:a91d0df2-2cb2-4cc3-b13c-61dfadc11b46,Namespace:calico-system,Attempt:0,}" Jan 20 03:07:10.447432 containerd[1558]: time="2026-01-20T03:07:10.447343330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-8qts6,Uid:d400130d-fd02-4b87-8160-4ba74bd8b376,Namespace:calico-system,Attempt:0,}" Jan 20 03:07:10.464991 containerd[1558]: time="2026-01-20T03:07:10.464404717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d65d8c755-qpx77,Uid:25480ded-99d4-43f9-a73a-0b4e4143afb7,Namespace:calico-apiserver,Attempt:0,}" Jan 20 03:07:10.471816 containerd[1558]: time="2026-01-20T03:07:10.471731099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8d49f8f6-869f9,Uid:68dce8b3-c3fe-40f2-a705-b41b9b2da4a7,Namespace:calico-apiserver,Attempt:0,}" Jan 20 03:07:10.483942 containerd[1558]: time="2026-01-20T03:07:10.483815841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b6db67984-xkqlq,Uid:894676b0-e620-4fb7-9c25-410e3b409527,Namespace:calico-system,Attempt:0,}" Jan 20 03:07:10.494958 kubelet[2701]: E0120 03:07:10.494913 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:10.500007 containerd[1558]: time="2026-01-20T03:07:10.498723661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mrxcm,Uid:4a4f82c3-d2e5-4cf9-8df0-d5821866627e,Namespace:kube-system,Attempt:0,}" Jan 20 03:07:10.501181 containerd[1558]: time="2026-01-20T03:07:10.500810009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8d49f8f6-9f5w7,Uid:9eb8a6d4-b655-41d5-bb6a-48e492c0056f,Namespace:calico-apiserver,Attempt:0,}" Jan 20 03:07:10.667951 containerd[1558]: time="2026-01-20T03:07:10.667600340Z" level=error msg="Failed to destroy network for sandbox \"a15e34063cecffc076f67a75c915738d0d2d6f05561ca7f40edff453b3cf5d00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:10.671378 kubelet[2701]: E0120 03:07:10.671325 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:10.687118 containerd[1558]: time="2026-01-20T03:07:10.687075645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 03:07:10.723934 containerd[1558]: time="2026-01-20T03:07:10.723698516Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8485fc5f84-9b6gt,Uid:a91d0df2-2cb2-4cc3-b13c-61dfadc11b46,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a15e34063cecffc076f67a75c915738d0d2d6f05561ca7f40edff453b3cf5d00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:10.727808 containerd[1558]: time="2026-01-20T03:07:10.727746158Z" level=error msg="Failed to destroy network for sandbox \"699244895f33adb9ee43200e02829f7e5393532bb1848af65161be6850de8c77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:10.734935 kubelet[2701]: E0120 03:07:10.732775 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a15e34063cecffc076f67a75c915738d0d2d6f05561ca7f40edff453b3cf5d00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:10.735225 kubelet[2701]: E0120 03:07:10.734693 2701 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a15e34063cecffc076f67a75c915738d0d2d6f05561ca7f40edff453b3cf5d00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8485fc5f84-9b6gt" Jan 20 03:07:10.737224 kubelet[2701]: E0120 03:07:10.736442 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"699244895f33adb9ee43200e02829f7e5393532bb1848af65161be6850de8c77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:10.737224 kubelet[2701]: E0120 03:07:10.736490 2701 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"699244895f33adb9ee43200e02829f7e5393532bb1848af65161be6850de8c77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c8d49f8f6-9f5w7" Jan 20 03:07:10.737224 kubelet[2701]: E0120 03:07:10.736513 2701 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"699244895f33adb9ee43200e02829f7e5393532bb1848af65161be6850de8c77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c8d49f8f6-9f5w7" Jan 20 03:07:10.737342 containerd[1558]: time="2026-01-20T03:07:10.735753090Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8d49f8f6-9f5w7,Uid:9eb8a6d4-b655-41d5-bb6a-48e492c0056f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"699244895f33adb9ee43200e02829f7e5393532bb1848af65161be6850de8c77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:10.737474 kubelet[2701]: E0120 03:07:10.736615 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c8d49f8f6-9f5w7_calico-apiserver(9eb8a6d4-b655-41d5-bb6a-48e492c0056f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c8d49f8f6-9f5w7_calico-apiserver(9eb8a6d4-b655-41d5-bb6a-48e492c0056f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"699244895f33adb9ee43200e02829f7e5393532bb1848af65161be6850de8c77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c8d49f8f6-9f5w7" podUID="9eb8a6d4-b655-41d5-bb6a-48e492c0056f" Jan 20 03:07:10.738957 kubelet[2701]: E0120 03:07:10.735150 2701 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a15e34063cecffc076f67a75c915738d0d2d6f05561ca7f40edff453b3cf5d00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8485fc5f84-9b6gt" Jan 20 03:07:10.738957 kubelet[2701]: E0120 03:07:10.738368 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8485fc5f84-9b6gt_calico-system(a91d0df2-2cb2-4cc3-b13c-61dfadc11b46)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-8485fc5f84-9b6gt_calico-system(a91d0df2-2cb2-4cc3-b13c-61dfadc11b46)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a15e34063cecffc076f67a75c915738d0d2d6f05561ca7f40edff453b3cf5d00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8485fc5f84-9b6gt" podUID="a91d0df2-2cb2-4cc3-b13c-61dfadc11b46" Jan 20 03:07:10.741589 containerd[1558]: time="2026-01-20T03:07:10.740834130Z" level=error msg="Failed to destroy network for sandbox \"dc248007abb658e54027e61f54ec883083fca101f39f24fe9f30125d513bf3f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:10.744537 containerd[1558]: time="2026-01-20T03:07:10.740834183Z" level=error msg="Failed to destroy network for sandbox \"ac02482963263412f687d5ac64f72cb2c4d9b573e9f498076874650de39778a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:10.746025 containerd[1558]: time="2026-01-20T03:07:10.740973365Z" level=error msg="Failed to destroy network for sandbox \"749a485d904651353b581cb672063f4a9ef771d73f50512d84e7460333fa6d30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:10.753554 containerd[1558]: time="2026-01-20T03:07:10.753462284Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b6db67984-xkqlq,Uid:894676b0-e620-4fb7-9c25-410e3b409527,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc248007abb658e54027e61f54ec883083fca101f39f24fe9f30125d513bf3f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:10.755238 containerd[1558]: time="2026-01-20T03:07:10.754598100Z" level=error msg="Failed to destroy network for sandbox \"06997dcae798a4f73911c3d6b0628365d6ef3ee70ef618ec04149d16750e141d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:10.759265 kubelet[2701]: E0120 03:07:10.758650 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc248007abb658e54027e61f54ec883083fca101f39f24fe9f30125d513bf3f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:10.759265 kubelet[2701]: E0120 03:07:10.758794 2701 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc248007abb658e54027e61f54ec883083fca101f39f24fe9f30125d513bf3f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/whisker-b6db67984-xkqlq" Jan 20 03:07:10.759265 kubelet[2701]: E0120 03:07:10.758825 2701 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc248007abb658e54027e61f54ec883083fca101f39f24fe9f30125d513bf3f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b6db67984-xkqlq" Jan 20 03:07:10.760275 containerd[1558]: time="2026-01-20T03:07:10.758786690Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-8qts6,Uid:d400130d-fd02-4b87-8160-4ba74bd8b376,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac02482963263412f687d5ac64f72cb2c4d9b573e9f498076874650de39778a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:10.760390 kubelet[2701]: E0120 03:07:10.758980 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b6db67984-xkqlq_calico-system(894676b0-e620-4fb7-9c25-410e3b409527)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b6db67984-xkqlq_calico-system(894676b0-e620-4fb7-9c25-410e3b409527)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc248007abb658e54027e61f54ec883083fca101f39f24fe9f30125d513bf3f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b6db67984-xkqlq" podUID="894676b0-e620-4fb7-9c25-410e3b409527" Jan 20 03:07:10.760390 kubelet[2701]: E0120 03:07:10.759141 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac02482963263412f687d5ac64f72cb2c4d9b573e9f498076874650de39778a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:10.760390 kubelet[2701]: E0120 03:07:10.759176 2701 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac02482963263412f687d5ac64f72cb2c4d9b573e9f498076874650de39778a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-8qts6" Jan 20 03:07:10.761544 kubelet[2701]: E0120 03:07:10.759199 2701 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac02482963263412f687d5ac64f72cb2c4d9b573e9f498076874650de39778a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-8qts6" Jan 20 03:07:10.761544 kubelet[2701]: E0120 03:07:10.759255 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"goldmane-7c778bb748-8qts6_calico-system(d400130d-fd02-4b87-8160-4ba74bd8b376)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-8qts6_calico-system(d400130d-fd02-4b87-8160-4ba74bd8b376)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac02482963263412f687d5ac64f72cb2c4d9b573e9f498076874650de39778a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-8qts6" podUID="d400130d-fd02-4b87-8160-4ba74bd8b376" Jan 20 03:07:10.767533 containerd[1558]: time="2026-01-20T03:07:10.767333346Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8d49f8f6-869f9,Uid:68dce8b3-c3fe-40f2-a705-b41b9b2da4a7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"749a485d904651353b581cb672063f4a9ef771d73f50512d84e7460333fa6d30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:10.769725 kubelet[2701]: E0120 03:07:10.768816 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"749a485d904651353b581cb672063f4a9ef771d73f50512d84e7460333fa6d30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:10.770448 kubelet[2701]: E0120 03:07:10.770324 2701 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"749a485d904651353b581cb672063f4a9ef771d73f50512d84e7460333fa6d30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c8d49f8f6-869f9" Jan 20 03:07:10.770571 kubelet[2701]: E0120 03:07:10.770543 2701 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"749a485d904651353b581cb672063f4a9ef771d73f50512d84e7460333fa6d30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c8d49f8f6-869f9" Jan 20 03:07:10.771083 kubelet[2701]: E0120 03:07:10.770939 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c8d49f8f6-869f9_calico-apiserver(68dce8b3-c3fe-40f2-a705-b41b9b2da4a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c8d49f8f6-869f9_calico-apiserver(68dce8b3-c3fe-40f2-a705-b41b9b2da4a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"749a485d904651353b581cb672063f4a9ef771d73f50512d84e7460333fa6d30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c8d49f8f6-869f9" podUID="68dce8b3-c3fe-40f2-a705-b41b9b2da4a7" Jan 20 03:07:10.771468 
containerd[1558]: time="2026-01-20T03:07:10.771384126Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mb2dm,Uid:44bc219a-0768-4e39-9391-3f5a2822a096,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"06997dcae798a4f73911c3d6b0628365d6ef3ee70ef618ec04149d16750e141d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:10.772226 kubelet[2701]: E0120 03:07:10.772193 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06997dcae798a4f73911c3d6b0628365d6ef3ee70ef618ec04149d16750e141d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:10.772801 kubelet[2701]: E0120 03:07:10.772631 2701 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06997dcae798a4f73911c3d6b0628365d6ef3ee70ef618ec04149d16750e141d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-mb2dm" Jan 20 03:07:10.772801 kubelet[2701]: E0120 03:07:10.772731 2701 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06997dcae798a4f73911c3d6b0628365d6ef3ee70ef618ec04149d16750e141d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-mb2dm" Jan 20 03:07:10.774078 kubelet[2701]: E0120 03:07:10.773484 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-mb2dm_kube-system(44bc219a-0768-4e39-9391-3f5a2822a096)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-mb2dm_kube-system(44bc219a-0768-4e39-9391-3f5a2822a096)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06997dcae798a4f73911c3d6b0628365d6ef3ee70ef618ec04149d16750e141d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-mb2dm" podUID="44bc219a-0768-4e39-9391-3f5a2822a096" Jan 20 03:07:10.779437 containerd[1558]: time="2026-01-20T03:07:10.779269815Z" level=error msg="Failed to destroy network for sandbox \"cb98d1abd408437948c9033fded6e27cf2edfb0e733ce47f4c8a15e35ce2a1ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:10.786541 containerd[1558]: time="2026-01-20T03:07:10.785829945Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mrxcm,Uid:4a4f82c3-d2e5-4cf9-8df0-d5821866627e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"cb98d1abd408437948c9033fded6e27cf2edfb0e733ce47f4c8a15e35ce2a1ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:10.786811 kubelet[2701]: E0120 03:07:10.786748 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb98d1abd408437948c9033fded6e27cf2edfb0e733ce47f4c8a15e35ce2a1ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:10.787114 kubelet[2701]: E0120 03:07:10.786828 2701 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb98d1abd408437948c9033fded6e27cf2edfb0e733ce47f4c8a15e35ce2a1ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-mrxcm" Jan 20 03:07:10.787372 kubelet[2701]: E0120 03:07:10.787121 2701 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb98d1abd408437948c9033fded6e27cf2edfb0e733ce47f4c8a15e35ce2a1ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-mrxcm" Jan 20 03:07:10.787372 kubelet[2701]: E0120 03:07:10.787270 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-mrxcm_kube-system(4a4f82c3-d2e5-4cf9-8df0-d5821866627e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-mrxcm_kube-system(4a4f82c3-d2e5-4cf9-8df0-d5821866627e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb98d1abd408437948c9033fded6e27cf2edfb0e733ce47f4c8a15e35ce2a1ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-mrxcm" podUID="4a4f82c3-d2e5-4cf9-8df0-d5821866627e" Jan 20 03:07:10.806161 containerd[1558]: time="2026-01-20T03:07:10.806067077Z" level=error msg="Failed to destroy network for sandbox \"f6ddbb7f1e38bd6a42243928f63768e7ccd31fe7f52d7f8dab1e953d8666eb11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:10.808709 containerd[1558]: time="2026-01-20T03:07:10.808566788Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d65d8c755-qpx77,Uid:25480ded-99d4-43f9-a73a-0b4e4143afb7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6ddbb7f1e38bd6a42243928f63768e7ccd31fe7f52d7f8dab1e953d8666eb11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:10.809161 kubelet[2701]: E0120 03:07:10.809088 2701 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6ddbb7f1e38bd6a42243928f63768e7ccd31fe7f52d7f8dab1e953d8666eb11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:10.809249 kubelet[2701]: E0120 03:07:10.809164 2701 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6ddbb7f1e38bd6a42243928f63768e7ccd31fe7f52d7f8dab1e953d8666eb11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d65d8c755-qpx77" Jan 20 03:07:10.809249 kubelet[2701]: E0120 03:07:10.809191 2701 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6ddbb7f1e38bd6a42243928f63768e7ccd31fe7f52d7f8dab1e953d8666eb11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d65d8c755-qpx77" Jan 20 03:07:10.809328 kubelet[2701]: E0120 03:07:10.809297 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d65d8c755-qpx77_calico-apiserver(25480ded-99d4-43f9-a73a-0b4e4143afb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d65d8c755-qpx77_calico-apiserver(25480ded-99d4-43f9-a73a-0b4e4143afb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6ddbb7f1e38bd6a42243928f63768e7ccd31fe7f52d7f8dab1e953d8666eb11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d65d8c755-qpx77" podUID="25480ded-99d4-43f9-a73a-0b4e4143afb7" Jan 20 03:07:11.219939 kubelet[2701]: I0120 03:07:11.219797 2701 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 03:07:11.220656 kubelet[2701]: E0120 03:07:11.220541 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:11.518587 systemd[1]: Created slice kubepods-besteffort-podbb663801_c52b_48d5_9ddb_4fcd0f5aab67.slice - libcontainer container kubepods-besteffort-podbb663801_c52b_48d5_9ddb_4fcd0f5aab67.slice. Jan 20 03:07:11.527519 containerd[1558]: time="2026-01-20T03:07:11.527480488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgb8x,Uid:bb663801-c52b-48d5-9ddb-4fcd0f5aab67,Namespace:calico-system,Attempt:0,}" Jan 20 03:07:11.597310 containerd[1558]: time="2026-01-20T03:07:11.597247291Z" level=error msg="Failed to destroy network for sandbox \"f1b9914ee7241db403d1d9894afac80a813134631215f62b6c8fec6e2320c1b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:11.601508 systemd[1]: run-netns-cni\x2d3162f8dc\x2da9c8\x2d8180\x2dd23b\x2d13000e1f91ea.mount: Deactivated successfully. 
Jan 20 03:07:11.604529 containerd[1558]: time="2026-01-20T03:07:11.604378773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgb8x,Uid:bb663801-c52b-48d5-9ddb-4fcd0f5aab67,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b9914ee7241db403d1d9894afac80a813134631215f62b6c8fec6e2320c1b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:11.605179 kubelet[2701]: E0120 03:07:11.605140 2701 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b9914ee7241db403d1d9894afac80a813134631215f62b6c8fec6e2320c1b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:07:11.609942 kubelet[2701]: E0120 03:07:11.609572 2701 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b9914ee7241db403d1d9894afac80a813134631215f62b6c8fec6e2320c1b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hgb8x" Jan 20 03:07:11.609942 kubelet[2701]: E0120 03:07:11.609630 2701 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b9914ee7241db403d1d9894afac80a813134631215f62b6c8fec6e2320c1b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hgb8x" Jan 20 03:07:11.609942 kubelet[2701]: E0120 03:07:11.609703 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hgb8x_calico-system(bb663801-c52b-48d5-9ddb-4fcd0f5aab67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hgb8x_calico-system(bb663801-c52b-48d5-9ddb-4fcd0f5aab67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1b9914ee7241db403d1d9894afac80a813134631215f62b6c8fec6e2320c1b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hgb8x" podUID="bb663801-c52b-48d5-9ddb-4fcd0f5aab67" Jan 20 03:07:11.682771 kubelet[2701]: E0120 03:07:11.682657 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:16.926563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount254728487.mount: Deactivated successfully. 
Jan 20 03:07:17.167867 containerd[1558]: time="2026-01-20T03:07:17.167741372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:07:17.169017 containerd[1558]: time="2026-01-20T03:07:17.168949561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 20 03:07:17.170555 containerd[1558]: time="2026-01-20T03:07:17.170482883Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:07:17.173425 containerd[1558]: time="2026-01-20T03:07:17.173373243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:07:17.174253 containerd[1558]: time="2026-01-20T03:07:17.174178433Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.486882571s" Jan 20 03:07:17.174253 containerd[1558]: time="2026-01-20T03:07:17.174240159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 20 03:07:17.207070 containerd[1558]: time="2026-01-20T03:07:17.206952033Z" level=info msg="CreateContainer within sandbox \"2d4d969fbe12fb371041880c067d058c402d0eceb2bc47210d6fcd8156e24496\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 03:07:17.218195 containerd[1558]: time="2026-01-20T03:07:17.218134856Z" level=info msg="Container fb3da5a8d25646dad57260a1a4395f98bae6c3a31032ff463ae6a63cb75615f8: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:07:17.236650 containerd[1558]: time="2026-01-20T03:07:17.236577014Z" level=info msg="CreateContainer within sandbox \"2d4d969fbe12fb371041880c067d058c402d0eceb2bc47210d6fcd8156e24496\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fb3da5a8d25646dad57260a1a4395f98bae6c3a31032ff463ae6a63cb75615f8\"" Jan 20 03:07:17.237348 containerd[1558]: time="2026-01-20T03:07:17.237309329Z" level=info msg="StartContainer for \"fb3da5a8d25646dad57260a1a4395f98bae6c3a31032ff463ae6a63cb75615f8\"" Jan 20 03:07:17.239812 containerd[1558]: time="2026-01-20T03:07:17.239756836Z" level=info msg="connecting to shim fb3da5a8d25646dad57260a1a4395f98bae6c3a31032ff463ae6a63cb75615f8" address="unix:///run/containerd/s/4436940a750d87026a9791a990a81d157280f01c199e8baba15ec99454a3fb16" protocol=ttrpc version=3 Jan 20 03:07:17.266146 systemd[1]: Started cri-containerd-fb3da5a8d25646dad57260a1a4395f98bae6c3a31032ff463ae6a63cb75615f8.scope - libcontainer container fb3da5a8d25646dad57260a1a4395f98bae6c3a31032ff463ae6a63cb75615f8. Jan 20 03:07:17.360806 containerd[1558]: time="2026-01-20T03:07:17.360761530Z" level=info msg="StartContainer for \"fb3da5a8d25646dad57260a1a4395f98bae6c3a31032ff463ae6a63cb75615f8\" returns successfully" Jan 20 03:07:17.453145 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 20 03:07:17.454053 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Jan 20 03:07:17.617498 kubelet[2701]: I0120 03:07:17.617313 2701 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxnzg\" (UniqueName: \"kubernetes.io/projected/894676b0-e620-4fb7-9c25-410e3b409527-kube-api-access-nxnzg\") pod \"894676b0-e620-4fb7-9c25-410e3b409527\" (UID: \"894676b0-e620-4fb7-9c25-410e3b409527\") " Jan 20 03:07:17.617498 kubelet[2701]: I0120 03:07:17.617369 2701 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/894676b0-e620-4fb7-9c25-410e3b409527-whisker-backend-key-pair\") pod \"894676b0-e620-4fb7-9c25-410e3b409527\" (UID: \"894676b0-e620-4fb7-9c25-410e3b409527\") " Jan 20 03:07:17.617498 kubelet[2701]: I0120 03:07:17.617387 2701 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/894676b0-e620-4fb7-9c25-410e3b409527-whisker-ca-bundle\") pod \"894676b0-e620-4fb7-9c25-410e3b409527\" (UID: \"894676b0-e620-4fb7-9c25-410e3b409527\") " Jan 20 03:07:17.618224 kubelet[2701]: I0120 03:07:17.617786 2701 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/894676b0-e620-4fb7-9c25-410e3b409527-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "894676b0-e620-4fb7-9c25-410e3b409527" (UID: "894676b0-e620-4fb7-9c25-410e3b409527"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 03:07:17.622152 kubelet[2701]: I0120 03:07:17.622116 2701 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/894676b0-e620-4fb7-9c25-410e3b409527-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "894676b0-e620-4fb7-9c25-410e3b409527" (UID: "894676b0-e620-4fb7-9c25-410e3b409527"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 03:07:17.622437 kubelet[2701]: I0120 03:07:17.622369 2701 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/894676b0-e620-4fb7-9c25-410e3b409527-kube-api-access-nxnzg" (OuterVolumeSpecName: "kube-api-access-nxnzg") pod "894676b0-e620-4fb7-9c25-410e3b409527" (UID: "894676b0-e620-4fb7-9c25-410e3b409527"). InnerVolumeSpecName "kube-api-access-nxnzg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 03:07:17.708345 kubelet[2701]: E0120 03:07:17.708210 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:17.718656 kubelet[2701]: I0120 03:07:17.717984 2701 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nxnzg\" (UniqueName: \"kubernetes.io/projected/894676b0-e620-4fb7-9c25-410e3b409527-kube-api-access-nxnzg\") on node \"localhost\" DevicePath \"\"" Jan 20 03:07:17.718656 kubelet[2701]: I0120 03:07:17.718049 2701 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/894676b0-e620-4fb7-9c25-410e3b409527-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 20 03:07:17.718656 kubelet[2701]: I0120 03:07:17.718063 2701 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/894676b0-e620-4fb7-9c25-410e3b409527-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 20 03:07:17.721475 systemd[1]: Removed slice kubepods-besteffort-pod894676b0_e620_4fb7_9c25_410e3b409527.slice - libcontainer container kubepods-besteffort-pod894676b0_e620_4fb7_9c25_410e3b409527.slice. Jan 20 03:07:17.735770 kubelet[2701]: I0120 03:07:17.733597 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lc75q" podStartSLOduration=1.570506714 podStartE2EDuration="14.733585364s" podCreationTimestamp="2026-01-20 03:07:03 +0000 UTC" firstStartedPulling="2026-01-20 03:07:04.012125466 +0000 UTC m=+23.642917999" lastFinishedPulling="2026-01-20 03:07:17.175204116 +0000 UTC m=+36.805996649" observedRunningTime="2026-01-20 03:07:17.732728725 +0000 UTC m=+37.363521258" watchObservedRunningTime="2026-01-20 03:07:17.733585364 +0000 UTC m=+37.364377897" Jan 20 03:07:17.820431 systemd[1]: Created slice kubepods-besteffort-pod8c9999c8_e94a_48c3_bb30_f3c2a906e952.slice - libcontainer container kubepods-besteffort-pod8c9999c8_e94a_48c3_bb30_f3c2a906e952.slice. 
Jan 20 03:07:17.919669 kubelet[2701]: I0120 03:07:17.919487 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8c9999c8-e94a-48c3-bb30-f3c2a906e952-whisker-backend-key-pair\") pod \"whisker-6f5d484d94-62ssc\" (UID: \"8c9999c8-e94a-48c3-bb30-f3c2a906e952\") " pod="calico-system/whisker-6f5d484d94-62ssc" Jan 20 03:07:17.919669 kubelet[2701]: I0120 03:07:17.919538 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdlhq\" (UniqueName: \"kubernetes.io/projected/8c9999c8-e94a-48c3-bb30-f3c2a906e952-kube-api-access-jdlhq\") pod \"whisker-6f5d484d94-62ssc\" (UID: \"8c9999c8-e94a-48c3-bb30-f3c2a906e952\") " pod="calico-system/whisker-6f5d484d94-62ssc" Jan 20 03:07:17.919669 kubelet[2701]: I0120 03:07:17.919556 2701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c9999c8-e94a-48c3-bb30-f3c2a906e952-whisker-ca-bundle\") pod \"whisker-6f5d484d94-62ssc\" (UID: \"8c9999c8-e94a-48c3-bb30-f3c2a906e952\") " pod="calico-system/whisker-6f5d484d94-62ssc" Jan 20 03:07:17.926363 systemd[1]: var-lib-kubelet-pods-894676b0\x2de620\x2d4fb7\x2d9c25\x2d410e3b409527-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnxnzg.mount: Deactivated successfully. Jan 20 03:07:17.926492 systemd[1]: var-lib-kubelet-pods-894676b0\x2de620\x2d4fb7\x2d9c25\x2d410e3b409527-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 20 03:07:18.130791 containerd[1558]: time="2026-01-20T03:07:18.130729388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f5d484d94-62ssc,Uid:8c9999c8-e94a-48c3-bb30-f3c2a906e952,Namespace:calico-system,Attempt:0,}" Jan 20 03:07:18.324454 systemd-networkd[1460]: cali7311ca8f4c1: Link UP Jan 20 03:07:18.324706 systemd-networkd[1460]: cali7311ca8f4c1: Gained carrier Jan 20 03:07:18.341439 containerd[1558]: 2026-01-20 03:07:18.157 [INFO][3976] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 03:07:18.341439 containerd[1558]: 2026-01-20 03:07:18.182 [INFO][3976] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6f5d484d94--62ssc-eth0 whisker-6f5d484d94- calico-system 8c9999c8-e94a-48c3-bb30-f3c2a906e952 934 0 2026-01-20 03:07:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6f5d484d94 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6f5d484d94-62ssc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7311ca8f4c1 [] [] }} ContainerID="3b02ed710a28b5cd6b58a7f46853b266ee738c85da70fcfce833640772c6faf1" Namespace="calico-system" Pod="whisker-6f5d484d94-62ssc" WorkloadEndpoint="localhost-k8s-whisker--6f5d484d94--62ssc-" Jan 20 03:07:18.341439 containerd[1558]: 2026-01-20 03:07:18.183 [INFO][3976] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b02ed710a28b5cd6b58a7f46853b266ee738c85da70fcfce833640772c6faf1" Namespace="calico-system" Pod="whisker-6f5d484d94-62ssc" WorkloadEndpoint="localhost-k8s-whisker--6f5d484d94--62ssc-eth0" Jan 20 03:07:18.341439 containerd[1558]: 2026-01-20 03:07:18.272 [INFO][3993] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="3b02ed710a28b5cd6b58a7f46853b266ee738c85da70fcfce833640772c6faf1" HandleID="k8s-pod-network.3b02ed710a28b5cd6b58a7f46853b266ee738c85da70fcfce833640772c6faf1" Workload="localhost-k8s-whisker--6f5d484d94--62ssc-eth0" Jan 20 03:07:18.342293 containerd[1558]: 2026-01-20 03:07:18.273 [INFO][3993] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3b02ed710a28b5cd6b58a7f46853b266ee738c85da70fcfce833640772c6faf1" HandleID="k8s-pod-network.3b02ed710a28b5cd6b58a7f46853b266ee738c85da70fcfce833640772c6faf1" Workload="localhost-k8s-whisker--6f5d484d94--62ssc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004e75b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6f5d484d94-62ssc", "timestamp":"2026-01-20 03:07:18.272597266 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 03:07:18.342293 containerd[1558]: 2026-01-20 03:07:18.273 [INFO][3993] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 03:07:18.342293 containerd[1558]: 2026-01-20 03:07:18.274 [INFO][3993] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 03:07:18.342293 containerd[1558]: 2026-01-20 03:07:18.274 [INFO][3993] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 03:07:18.342293 containerd[1558]: 2026-01-20 03:07:18.283 [INFO][3993] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3b02ed710a28b5cd6b58a7f46853b266ee738c85da70fcfce833640772c6faf1" host="localhost" Jan 20 03:07:18.342293 containerd[1558]: 2026-01-20 03:07:18.291 [INFO][3993] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 03:07:18.342293 containerd[1558]: 2026-01-20 03:07:18.296 [INFO][3993] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 03:07:18.342293 containerd[1558]: 2026-01-20 03:07:18.298 [INFO][3993] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 03:07:18.342293 containerd[1558]: 2026-01-20 03:07:18.301 [INFO][3993] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 03:07:18.342293 containerd[1558]: 2026-01-20 03:07:18.301 [INFO][3993] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3b02ed710a28b5cd6b58a7f46853b266ee738c85da70fcfce833640772c6faf1" host="localhost" Jan 20 03:07:18.342664 containerd[1558]: 2026-01-20 03:07:18.303 [INFO][3993] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3b02ed710a28b5cd6b58a7f46853b266ee738c85da70fcfce833640772c6faf1 Jan 20 03:07:18.342664 containerd[1558]: 2026-01-20 03:07:18.307 [INFO][3993] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3b02ed710a28b5cd6b58a7f46853b266ee738c85da70fcfce833640772c6faf1" host="localhost" Jan 20 03:07:18.342664 containerd[1558]: 2026-01-20 03:07:18.311 [INFO][3993] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.3b02ed710a28b5cd6b58a7f46853b266ee738c85da70fcfce833640772c6faf1" host="localhost" Jan 20 03:07:18.342664 containerd[1558]: 2026-01-20 03:07:18.311 [INFO][3993] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] 
handle="k8s-pod-network.3b02ed710a28b5cd6b58a7f46853b266ee738c85da70fcfce833640772c6faf1" host="localhost" Jan 20 03:07:18.342664 containerd[1558]: 2026-01-20 03:07:18.311 [INFO][3993] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 03:07:18.342664 containerd[1558]: 2026-01-20 03:07:18.311 [INFO][3993] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3b02ed710a28b5cd6b58a7f46853b266ee738c85da70fcfce833640772c6faf1" HandleID="k8s-pod-network.3b02ed710a28b5cd6b58a7f46853b266ee738c85da70fcfce833640772c6faf1" Workload="localhost-k8s-whisker--6f5d484d94--62ssc-eth0" Jan 20 03:07:18.343070 containerd[1558]: 2026-01-20 03:07:18.315 [INFO][3976] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b02ed710a28b5cd6b58a7f46853b266ee738c85da70fcfce833640772c6faf1" Namespace="calico-system" Pod="whisker-6f5d484d94-62ssc" WorkloadEndpoint="localhost-k8s-whisker--6f5d484d94--62ssc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6f5d484d94--62ssc-eth0", GenerateName:"whisker-6f5d484d94-", Namespace:"calico-system", SelfLink:"", UID:"8c9999c8-e94a-48c3-bb30-f3c2a906e952", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 7, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f5d484d94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6f5d484d94-62ssc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7311ca8f4c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:07:18.343070 containerd[1558]: 2026-01-20 03:07:18.315 [INFO][3976] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3b02ed710a28b5cd6b58a7f46853b266ee738c85da70fcfce833640772c6faf1" Namespace="calico-system" Pod="whisker-6f5d484d94-62ssc" WorkloadEndpoint="localhost-k8s-whisker--6f5d484d94--62ssc-eth0" Jan 20 03:07:18.343222 containerd[1558]: 2026-01-20 03:07:18.315 [INFO][3976] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7311ca8f4c1 ContainerID="3b02ed710a28b5cd6b58a7f46853b266ee738c85da70fcfce833640772c6faf1" Namespace="calico-system" Pod="whisker-6f5d484d94-62ssc" WorkloadEndpoint="localhost-k8s-whisker--6f5d484d94--62ssc-eth0" Jan 20 03:07:18.343222 containerd[1558]: 2026-01-20 03:07:18.325 [INFO][3976] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b02ed710a28b5cd6b58a7f46853b266ee738c85da70fcfce833640772c6faf1" Namespace="calico-system" Pod="whisker-6f5d484d94-62ssc" WorkloadEndpoint="localhost-k8s-whisker--6f5d484d94--62ssc-eth0" Jan 20 03:07:18.343303 containerd[1558]: 2026-01-20 03:07:18.325 [INFO][3976] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="3b02ed710a28b5cd6b58a7f46853b266ee738c85da70fcfce833640772c6faf1" Namespace="calico-system" Pod="whisker-6f5d484d94-62ssc" WorkloadEndpoint="localhost-k8s-whisker--6f5d484d94--62ssc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6f5d484d94--62ssc-eth0", GenerateName:"whisker-6f5d484d94-", Namespace:"calico-system", SelfLink:"", UID:"8c9999c8-e94a-48c3-bb30-f3c2a906e952", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 7, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f5d484d94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3b02ed710a28b5cd6b58a7f46853b266ee738c85da70fcfce833640772c6faf1", Pod:"whisker-6f5d484d94-62ssc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7311ca8f4c1", MAC:"c2:e2:21:b3:dc:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:07:18.343420 containerd[1558]: 2026-01-20 03:07:18.337 [INFO][3976] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b02ed710a28b5cd6b58a7f46853b266ee738c85da70fcfce833640772c6faf1" Namespace="calico-system" Pod="whisker-6f5d484d94-62ssc" WorkloadEndpoint="localhost-k8s-whisker--6f5d484d94--62ssc-eth0" Jan 20 03:07:18.447253 containerd[1558]: time="2026-01-20T03:07:18.447191351Z" level=info msg="connecting to shim 3b02ed710a28b5cd6b58a7f46853b266ee738c85da70fcfce833640772c6faf1" address="unix:///run/containerd/s/4d8346c4c5eb14a87ce7174ab970e3c048a1ca9dcf51e6379424595aa47e3bfb" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:07:18.484228 systemd[1]: Started cri-containerd-3b02ed710a28b5cd6b58a7f46853b266ee738c85da70fcfce833640772c6faf1.scope - libcontainer container 3b02ed710a28b5cd6b58a7f46853b266ee738c85da70fcfce833640772c6faf1. 
Jan 20 03:07:18.499301 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 03:07:18.513480 kubelet[2701]: I0120 03:07:18.513444 2701 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="894676b0-e620-4fb7-9c25-410e3b409527" path="/var/lib/kubelet/pods/894676b0-e620-4fb7-9c25-410e3b409527/volumes" Jan 20 03:07:18.539300 containerd[1558]: time="2026-01-20T03:07:18.539196512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f5d484d94-62ssc,Uid:8c9999c8-e94a-48c3-bb30-f3c2a906e952,Namespace:calico-system,Attempt:0,} returns sandbox id \"3b02ed710a28b5cd6b58a7f46853b266ee738c85da70fcfce833640772c6faf1\"" Jan 20 03:07:18.541347 containerd[1558]: time="2026-01-20T03:07:18.541319780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 03:07:18.617702 containerd[1558]: time="2026-01-20T03:07:18.617478420Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:07:18.619639 containerd[1558]: time="2026-01-20T03:07:18.619575902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 03:07:18.629558 containerd[1558]: time="2026-01-20T03:07:18.629435761Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 03:07:18.629870 kubelet[2701]: E0120 03:07:18.629797 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 03:07:18.630442 kubelet[2701]: E0120 03:07:18.629862 2701 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 03:07:18.630442 kubelet[2701]: E0120 03:07:18.630066 2701 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6f5d484d94-62ssc_calico-system(8c9999c8-e94a-48c3-bb30-f3c2a906e952): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 03:07:18.631535 containerd[1558]: time="2026-01-20T03:07:18.631496845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 03:07:18.688137 containerd[1558]: time="2026-01-20T03:07:18.687971044Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:07:18.689914 containerd[1558]: time="2026-01-20T03:07:18.689821455Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 03:07:18.690057 containerd[1558]: time="2026-01-20T03:07:18.689867079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 03:07:18.690304 kubelet[2701]: E0120 03:07:18.690227 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 03:07:18.690304 kubelet[2701]: E0120 03:07:18.690294 2701 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 03:07:18.690416 kubelet[2701]: E0120 03:07:18.690382 2701 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6f5d484d94-62ssc_calico-system(8c9999c8-e94a-48c3-bb30-f3c2a906e952): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 03:07:18.690541 kubelet[2701]: E0120 03:07:18.690439 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f5d484d94-62ssc" podUID="8c9999c8-e94a-48c3-bb30-f3c2a906e952" Jan 20 03:07:18.708612 kubelet[2701]: E0120 03:07:18.708417 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:18.709558 kubelet[2701]: E0120 03:07:18.709514 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f5d484d94-62ssc" podUID="8c9999c8-e94a-48c3-bb30-f3c2a906e952" Jan 20 03:07:19.449986 systemd-networkd[1460]: vxlan.calico: Link UP Jan 20 03:07:19.450001 systemd-networkd[1460]: vxlan.calico: Gained carrier Jan 20 03:07:19.713447 kubelet[2701]: E0120 03:07:19.713245 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f5d484d94-62ssc" podUID="8c9999c8-e94a-48c3-bb30-f3c2a906e952" Jan 20 03:07:20.089223 systemd-networkd[1460]: cali7311ca8f4c1: Gained IPv6LL Jan 20 03:07:21.177351 systemd-networkd[1460]: vxlan.calico: Gained IPv6LL Jan 20 03:07:22.545203 containerd[1558]: time="2026-01-20T03:07:22.545032723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-8qts6,Uid:d400130d-fd02-4b87-8160-4ba74bd8b376,Namespace:calico-system,Attempt:0,}" Jan 20 03:07:22.547432 containerd[1558]: time="2026-01-20T03:07:22.547404104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8d49f8f6-869f9,Uid:68dce8b3-c3fe-40f2-a705-b41b9b2da4a7,Namespace:calico-apiserver,Attempt:0,}" Jan 20 03:07:22.683817 systemd-networkd[1460]: cali9ca8c4a2512: Link UP Jan 20 03:07:22.684689 systemd-networkd[1460]: cali9ca8c4a2512: Gained carrier Jan 20 03:07:22.702019 containerd[1558]: 2026-01-20 03:07:22.594 [INFO][4285] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--8qts6-eth0 goldmane-7c778bb748- calico-system d400130d-fd02-4b87-8160-4ba74bd8b376 857 0 2026-01-20 03:07:01 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-8qts6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9ca8c4a2512 [] [] }} ContainerID="32403b3d67eaebc54b11cc64fb32b46d4e0137b553a490c77b8f614ffeecd8ef" Namespace="calico-system" Pod="goldmane-7c778bb748-8qts6" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8qts6-" Jan 20 03:07:22.702019 containerd[1558]: 2026-01-20 03:07:22.595 [INFO][4285] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="32403b3d67eaebc54b11cc64fb32b46d4e0137b553a490c77b8f614ffeecd8ef" Namespace="calico-system" Pod="goldmane-7c778bb748-8qts6" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8qts6-eth0" Jan 20 03:07:22.702019 
containerd[1558]: 2026-01-20 03:07:22.629 [INFO][4316] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32403b3d67eaebc54b11cc64fb32b46d4e0137b553a490c77b8f614ffeecd8ef" HandleID="k8s-pod-network.32403b3d67eaebc54b11cc64fb32b46d4e0137b553a490c77b8f614ffeecd8ef" Workload="localhost-k8s-goldmane--7c778bb748--8qts6-eth0" Jan 20 03:07:22.702359 containerd[1558]: 2026-01-20 03:07:22.630 [INFO][4316] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="32403b3d67eaebc54b11cc64fb32b46d4e0137b553a490c77b8f614ffeecd8ef" HandleID="k8s-pod-network.32403b3d67eaebc54b11cc64fb32b46d4e0137b553a490c77b8f614ffeecd8ef" Workload="localhost-k8s-goldmane--7c778bb748--8qts6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a8510), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-8qts6", "timestamp":"2026-01-20 03:07:22.629068664 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 03:07:22.702359 containerd[1558]: 2026-01-20 03:07:22.630 [INFO][4316] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 03:07:22.702359 containerd[1558]: 2026-01-20 03:07:22.630 [INFO][4316] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 03:07:22.702359 containerd[1558]: 2026-01-20 03:07:22.630 [INFO][4316] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 03:07:22.702359 containerd[1558]: 2026-01-20 03:07:22.637 [INFO][4316] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.32403b3d67eaebc54b11cc64fb32b46d4e0137b553a490c77b8f614ffeecd8ef" host="localhost" Jan 20 03:07:22.702359 containerd[1558]: 2026-01-20 03:07:22.645 [INFO][4316] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 03:07:22.702359 containerd[1558]: 2026-01-20 03:07:22.652 [INFO][4316] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 03:07:22.702359 containerd[1558]: 2026-01-20 03:07:22.654 [INFO][4316] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 03:07:22.702359 containerd[1558]: 2026-01-20 03:07:22.658 [INFO][4316] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 03:07:22.702359 containerd[1558]: 2026-01-20 03:07:22.658 [INFO][4316] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.32403b3d67eaebc54b11cc64fb32b46d4e0137b553a490c77b8f614ffeecd8ef" host="localhost" Jan 20 03:07:22.702761 containerd[1558]: 2026-01-20 03:07:22.660 [INFO][4316] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.32403b3d67eaebc54b11cc64fb32b46d4e0137b553a490c77b8f614ffeecd8ef Jan 20 03:07:22.702761 containerd[1558]: 2026-01-20 03:07:22.666 [INFO][4316] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.32403b3d67eaebc54b11cc64fb32b46d4e0137b553a490c77b8f614ffeecd8ef" host="localhost" Jan 20 03:07:22.702761 containerd[1558]: 2026-01-20 03:07:22.674 [INFO][4316] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.32403b3d67eaebc54b11cc64fb32b46d4e0137b553a490c77b8f614ffeecd8ef" host="localhost" Jan 20 03:07:22.702761 containerd[1558]: 2026-01-20 
03:07:22.675 [INFO][4316] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.32403b3d67eaebc54b11cc64fb32b46d4e0137b553a490c77b8f614ffeecd8ef" host="localhost" Jan 20 03:07:22.702761 containerd[1558]: 2026-01-20 03:07:22.675 [INFO][4316] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 03:07:22.702761 containerd[1558]: 2026-01-20 03:07:22.675 [INFO][4316] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="32403b3d67eaebc54b11cc64fb32b46d4e0137b553a490c77b8f614ffeecd8ef" HandleID="k8s-pod-network.32403b3d67eaebc54b11cc64fb32b46d4e0137b553a490c77b8f614ffeecd8ef" Workload="localhost-k8s-goldmane--7c778bb748--8qts6-eth0" Jan 20 03:07:22.703041 containerd[1558]: 2026-01-20 03:07:22.678 [INFO][4285] cni-plugin/k8s.go 418: Populated endpoint ContainerID="32403b3d67eaebc54b11cc64fb32b46d4e0137b553a490c77b8f614ffeecd8ef" Namespace="calico-system" Pod="goldmane-7c778bb748-8qts6" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8qts6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--8qts6-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"d400130d-fd02-4b87-8160-4ba74bd8b376", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 7, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-8qts6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9ca8c4a2512", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:07:22.703041 containerd[1558]: 2026-01-20 03:07:22.679 [INFO][4285] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="32403b3d67eaebc54b11cc64fb32b46d4e0137b553a490c77b8f614ffeecd8ef" Namespace="calico-system" Pod="goldmane-7c778bb748-8qts6" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8qts6-eth0" Jan 20 03:07:22.703268 containerd[1558]: 2026-01-20 03:07:22.679 [INFO][4285] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ca8c4a2512 ContainerID="32403b3d67eaebc54b11cc64fb32b46d4e0137b553a490c77b8f614ffeecd8ef" Namespace="calico-system" Pod="goldmane-7c778bb748-8qts6" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8qts6-eth0" Jan 20 03:07:22.703268 containerd[1558]: 2026-01-20 03:07:22.685 [INFO][4285] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32403b3d67eaebc54b11cc64fb32b46d4e0137b553a490c77b8f614ffeecd8ef" Namespace="calico-system" Pod="goldmane-7c778bb748-8qts6" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8qts6-eth0" Jan 20 03:07:22.703352 containerd[1558]: 
2026-01-20 03:07:22.685 [INFO][4285] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="32403b3d67eaebc54b11cc64fb32b46d4e0137b553a490c77b8f614ffeecd8ef" Namespace="calico-system" Pod="goldmane-7c778bb748-8qts6" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8qts6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--8qts6-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"d400130d-fd02-4b87-8160-4ba74bd8b376", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 7, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"32403b3d67eaebc54b11cc64fb32b46d4e0137b553a490c77b8f614ffeecd8ef", Pod:"goldmane-7c778bb748-8qts6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9ca8c4a2512", MAC:"fa:a8:4e:90:c7:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:07:22.703457 containerd[1558]: 2026-01-20 03:07:22.696 [INFO][4285] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="32403b3d67eaebc54b11cc64fb32b46d4e0137b553a490c77b8f614ffeecd8ef" Namespace="calico-system" Pod="goldmane-7c778bb748-8qts6" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8qts6-eth0" Jan 20 03:07:22.735989 containerd[1558]: time="2026-01-20T03:07:22.735275318Z" level=info msg="connecting to shim 32403b3d67eaebc54b11cc64fb32b46d4e0137b553a490c77b8f614ffeecd8ef" address="unix:///run/containerd/s/0faa6366f524bbc3d52a67b79121dd7854f38add109a24d5f4512c2fc5c5a3c9" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:07:22.770199 systemd[1]: Started cri-containerd-32403b3d67eaebc54b11cc64fb32b46d4e0137b553a490c77b8f614ffeecd8ef.scope - libcontainer container 32403b3d67eaebc54b11cc64fb32b46d4e0137b553a490c77b8f614ffeecd8ef. 
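The PullImage failures a few entries back ("fetch failed after status: 404 Not Found" for ghcr.io/flatcar/calico/whisker and whisker-backend) can be reproduced outside kubelet with the standard Docker Registry v2 flow. A sketch under the assumption that ghcr.io hands out anonymous pull tokens for the repository (the case for public packages); repository and tag are taken from the failed entries:

#!/usr/bin/env python3
# Reproduce the 404 the log shows, by asking the registry for the manifest
# directly: fetch an anonymous bearer token, then HEAD the manifest URL.
import json
import urllib.error
import urllib.request

REPO = "flatcar/calico/whisker"   # from the failed PullImage entry
TAG = "v3.30.4"

def manifest_status(repo: str, tag: str) -> int:
    token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/vnd.oci.image.index.v1+json"},
        method="HEAD",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code   # 404 here corresponds to the "not found" in the log

print(REPO, TAG, "->", manifest_status(REPO, TAG))
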
Jan 20 03:07:22.789690 systemd-networkd[1460]: calidf630306ea5: Link UP Jan 20 03:07:22.791319 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 03:07:22.791492 systemd-networkd[1460]: calidf630306ea5: Gained carrier Jan 20 03:07:22.818169 containerd[1558]: 2026-01-20 03:07:22.595 [INFO][4287] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c8d49f8f6--869f9-eth0 calico-apiserver-6c8d49f8f6- calico-apiserver 68dce8b3-c3fe-40f2-a705-b41b9b2da4a7 855 0 2026-01-20 03:06:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c8d49f8f6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c8d49f8f6-869f9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidf630306ea5 [] [] }} ContainerID="2313a74de4244fc8391536b606708b64b2e074001b275e4471dfab4fc83acb17" Namespace="calico-apiserver" Pod="calico-apiserver-6c8d49f8f6-869f9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8d49f8f6--869f9-" Jan 20 03:07:22.818169 containerd[1558]: 2026-01-20 03:07:22.596 [INFO][4287] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2313a74de4244fc8391536b606708b64b2e074001b275e4471dfab4fc83acb17" Namespace="calico-apiserver" Pod="calico-apiserver-6c8d49f8f6-869f9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8d49f8f6--869f9-eth0" Jan 20 03:07:22.818169 containerd[1558]: 2026-01-20 03:07:22.652 [INFO][4322] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2313a74de4244fc8391536b606708b64b2e074001b275e4471dfab4fc83acb17" HandleID="k8s-pod-network.2313a74de4244fc8391536b606708b64b2e074001b275e4471dfab4fc83acb17" Workload="localhost-k8s-calico--apiserver--6c8d49f8f6--869f9-eth0" Jan 20 03:07:22.818369 containerd[1558]: 2026-01-20 03:07:22.652 [INFO][4322] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2313a74de4244fc8391536b606708b64b2e074001b275e4471dfab4fc83acb17" HandleID="k8s-pod-network.2313a74de4244fc8391536b606708b64b2e074001b275e4471dfab4fc83acb17" Workload="localhost-k8s-calico--apiserver--6c8d49f8f6--869f9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00012d650), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c8d49f8f6-869f9", "timestamp":"2026-01-20 03:07:22.652007646 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 03:07:22.818369 containerd[1558]: 2026-01-20 03:07:22.652 [INFO][4322] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 03:07:22.818369 containerd[1558]: 2026-01-20 03:07:22.675 [INFO][4322] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
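Several units interleave in this capture (containerd, kubelet, systemd-networkd, systemd-resolved), which makes grepping for one of them tedious. A small editorial parser for the journal-style line shape used here; the pattern is an assumption matching "Mon DD HH:MM:SS.micros unit[pid]: message" as seen above, and deliberately ignores pid-less forms such as "kernel:" lines:

#!/usr/bin/env python3
# Split a journal-style line into timestamp, unit, pid, and message.
import re

LINE = re.compile(
    r"^(?P<ts>\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<unit>[\w.-]+)\[(?P<pid>\d+)\]: (?P<msg>.*)$"
)

sample = ("Jan 20 03:07:19.449986 systemd-networkd[1460]: "
          "vxlan.calico: Link UP")
m = LINE.match(sample)
if m:
    print(m.group("unit"), "->", m.group("msg"))
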
Jan 20 03:07:22.818369 containerd[1558]: 2026-01-20 03:07:22.676 [INFO][4322] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 03:07:22.818369 containerd[1558]: 2026-01-20 03:07:22.741 [INFO][4322] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2313a74de4244fc8391536b606708b64b2e074001b275e4471dfab4fc83acb17" host="localhost" Jan 20 03:07:22.818369 containerd[1558]: 2026-01-20 03:07:22.747 [INFO][4322] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 03:07:22.818369 containerd[1558]: 2026-01-20 03:07:22.754 [INFO][4322] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 03:07:22.818369 containerd[1558]: 2026-01-20 03:07:22.758 [INFO][4322] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 03:07:22.818369 containerd[1558]: 2026-01-20 03:07:22.761 [INFO][4322] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 03:07:22.818369 containerd[1558]: 2026-01-20 03:07:22.761 [INFO][4322] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2313a74de4244fc8391536b606708b64b2e074001b275e4471dfab4fc83acb17" host="localhost" Jan 20 03:07:22.819823 containerd[1558]: 2026-01-20 03:07:22.763 [INFO][4322] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2313a74de4244fc8391536b606708b64b2e074001b275e4471dfab4fc83acb17 Jan 20 03:07:22.819823 containerd[1558]: 2026-01-20 03:07:22.771 [INFO][4322] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2313a74de4244fc8391536b606708b64b2e074001b275e4471dfab4fc83acb17" host="localhost" Jan 20 03:07:22.819823 containerd[1558]: 2026-01-20 03:07:22.779 [INFO][4322] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.2313a74de4244fc8391536b606708b64b2e074001b275e4471dfab4fc83acb17" host="localhost" Jan 20 03:07:22.819823 containerd[1558]: 2026-01-20 03:07:22.779 [INFO][4322] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2313a74de4244fc8391536b606708b64b2e074001b275e4471dfab4fc83acb17" host="localhost" Jan 20 03:07:22.819823 containerd[1558]: 2026-01-20 03:07:22.779 [INFO][4322] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
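All of the IPAM entries above work inside a single affinity block, 192.168.88.128/26, handing out .129, .130, and .131 in turn. The arithmetic, spelled out with Python's ipaddress module as an editorial aside (the claimed addresses are the ones the log shows; whether Calico would also use the network and broadcast addresses of the block is not assumed here):

#!/usr/bin/env python3
# Block math for the /26 the Calico IPAM entries keep referring to.
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")
claimed = [ipaddress.ip_address(a) for a in
           ("192.168.88.129", "192.168.88.130", "192.168.88.131")]

print(block, "spans", block.num_addresses, "addresses")        # 64
print("all claimed IPs fall in the block:",
      all(ip in block for ip in claimed))                      # True
free = [ip for ip in block.hosts() if ip not in claimed]
print("next host candidates:", [str(ip) for ip in free[:2]])   # .132, .133

The first two free candidates, .132 and .133, are exactly the addresses the later IPAM entries in this capture go on to assign.
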
Jan 20 03:07:22.819823 containerd[1558]: 2026-01-20 03:07:22.779 [INFO][4322] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2313a74de4244fc8391536b606708b64b2e074001b275e4471dfab4fc83acb17" HandleID="k8s-pod-network.2313a74de4244fc8391536b606708b64b2e074001b275e4471dfab4fc83acb17" Workload="localhost-k8s-calico--apiserver--6c8d49f8f6--869f9-eth0" Jan 20 03:07:22.821030 containerd[1558]: 2026-01-20 03:07:22.784 [INFO][4287] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2313a74de4244fc8391536b606708b64b2e074001b275e4471dfab4fc83acb17" Namespace="calico-apiserver" Pod="calico-apiserver-6c8d49f8f6-869f9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8d49f8f6--869f9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c8d49f8f6--869f9-eth0", GenerateName:"calico-apiserver-6c8d49f8f6-", Namespace:"calico-apiserver", SelfLink:"", UID:"68dce8b3-c3fe-40f2-a705-b41b9b2da4a7", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c8d49f8f6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c8d49f8f6-869f9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidf630306ea5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:07:22.821167 containerd[1558]: 2026-01-20 03:07:22.784 [INFO][4287] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2313a74de4244fc8391536b606708b64b2e074001b275e4471dfab4fc83acb17" Namespace="calico-apiserver" Pod="calico-apiserver-6c8d49f8f6-869f9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8d49f8f6--869f9-eth0" Jan 20 03:07:22.821167 containerd[1558]: 2026-01-20 03:07:22.784 [INFO][4287] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf630306ea5 ContainerID="2313a74de4244fc8391536b606708b64b2e074001b275e4471dfab4fc83acb17" Namespace="calico-apiserver" Pod="calico-apiserver-6c8d49f8f6-869f9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8d49f8f6--869f9-eth0" Jan 20 03:07:22.821167 containerd[1558]: 2026-01-20 03:07:22.792 [INFO][4287] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2313a74de4244fc8391536b606708b64b2e074001b275e4471dfab4fc83acb17" Namespace="calico-apiserver" Pod="calico-apiserver-6c8d49f8f6-869f9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8d49f8f6--869f9-eth0" Jan 20 03:07:22.822019 containerd[1558]: 2026-01-20 03:07:22.795 [INFO][4287] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2313a74de4244fc8391536b606708b64b2e074001b275e4471dfab4fc83acb17" Namespace="calico-apiserver" Pod="calico-apiserver-6c8d49f8f6-869f9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8d49f8f6--869f9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c8d49f8f6--869f9-eth0", GenerateName:"calico-apiserver-6c8d49f8f6-", Namespace:"calico-apiserver", SelfLink:"", UID:"68dce8b3-c3fe-40f2-a705-b41b9b2da4a7", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c8d49f8f6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2313a74de4244fc8391536b606708b64b2e074001b275e4471dfab4fc83acb17", Pod:"calico-apiserver-6c8d49f8f6-869f9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidf630306ea5", MAC:"3e:3e:87:d1:84:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:07:22.822153 containerd[1558]: 2026-01-20 03:07:22.810 [INFO][4287] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2313a74de4244fc8391536b606708b64b2e074001b275e4471dfab4fc83acb17" Namespace="calico-apiserver" Pod="calico-apiserver-6c8d49f8f6-869f9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8d49f8f6--869f9-eth0" Jan 20 03:07:22.841558 containerd[1558]: time="2026-01-20T03:07:22.841489209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-8qts6,Uid:d400130d-fd02-4b87-8160-4ba74bd8b376,Namespace:calico-system,Attempt:0,} returns sandbox id \"32403b3d67eaebc54b11cc64fb32b46d4e0137b553a490c77b8f614ffeecd8ef\"" Jan 20 03:07:22.844033 containerd[1558]: time="2026-01-20T03:07:22.844001343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 03:07:22.855794 containerd[1558]: time="2026-01-20T03:07:22.855729358Z" level=info msg="connecting to shim 2313a74de4244fc8391536b606708b64b2e074001b275e4471dfab4fc83acb17" address="unix:///run/containerd/s/de6af0da52e323e4cc096290e5a23c8ff869ba21f25b9b2e018bddc6f5f6551f" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:07:22.890076 systemd[1]: Started cri-containerd-2313a74de4244fc8391536b606708b64b2e074001b275e4471dfab4fc83acb17.scope - libcontainer container 2313a74de4244fc8391536b606708b64b2e074001b275e4471dfab4fc83acb17. 
Jan 20 03:07:22.909236 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 03:07:22.911502 containerd[1558]: time="2026-01-20T03:07:22.911397420Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:07:22.912825 containerd[1558]: time="2026-01-20T03:07:22.912699969Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 03:07:22.912825 containerd[1558]: time="2026-01-20T03:07:22.912774590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 03:07:22.913638 kubelet[2701]: E0120 03:07:22.913458 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 03:07:22.915383 kubelet[2701]: E0120 03:07:22.914144 2701 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 03:07:22.915383 kubelet[2701]: E0120 03:07:22.914272 2701 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-8qts6_calico-system(d400130d-fd02-4b87-8160-4ba74bd8b376): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 03:07:22.915383 kubelet[2701]: E0120 03:07:22.914303 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8qts6" podUID="d400130d-fd02-4b87-8160-4ba74bd8b376" Jan 20 03:07:22.957981 containerd[1558]: time="2026-01-20T03:07:22.957931155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8d49f8f6-869f9,Uid:68dce8b3-c3fe-40f2-a705-b41b9b2da4a7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2313a74de4244fc8391536b606708b64b2e074001b275e4471dfab4fc83acb17\"" Jan 20 03:07:22.960987 containerd[1558]: time="2026-01-20T03:07:22.960953920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 03:07:23.017255 containerd[1558]: time="2026-01-20T03:07:23.017163145Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:07:23.018717 containerd[1558]: time="2026-01-20T03:07:23.018649307Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 03:07:23.018811 containerd[1558]: time="2026-01-20T03:07:23.018737522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 03:07:23.018956 kubelet[2701]: E0120 03:07:23.018913 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:07:23.018998 kubelet[2701]: E0120 03:07:23.018964 2701 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:07:23.019086 kubelet[2701]: E0120 03:07:23.019055 2701 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6c8d49f8f6-869f9_calico-apiserver(68dce8b3-c3fe-40f2-a705-b41b9b2da4a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 03:07:23.019141 kubelet[2701]: E0120 03:07:23.019119 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8d49f8f6-869f9" podUID="68dce8b3-c3fe-40f2-a705-b41b9b2da4a7" Jan 20 03:07:23.513661 kubelet[2701]: E0120 03:07:23.513543 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:23.514583 containerd[1558]: time="2026-01-20T03:07:23.514025820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mrxcm,Uid:4a4f82c3-d2e5-4cf9-8df0-d5821866627e,Namespace:kube-system,Attempt:0,}" Jan 20 03:07:23.650231 systemd-networkd[1460]: cali022f3415891: Link UP Jan 20 03:07:23.650575 systemd-networkd[1460]: cali022f3415891: Gained carrier Jan 20 03:07:23.668399 containerd[1558]: 2026-01-20 03:07:23.565 [INFO][4447] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--mrxcm-eth0 coredns-66bc5c9577- kube-system 4a4f82c3-d2e5-4cf9-8df0-d5821866627e 856 0 2026-01-20 03:06:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-mrxcm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] 
cali022f3415891 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786" Namespace="kube-system" Pod="coredns-66bc5c9577-mrxcm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mrxcm-" Jan 20 03:07:23.668399 containerd[1558]: 2026-01-20 03:07:23.565 [INFO][4447] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786" Namespace="kube-system" Pod="coredns-66bc5c9577-mrxcm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mrxcm-eth0" Jan 20 03:07:23.668399 containerd[1558]: 2026-01-20 03:07:23.597 [INFO][4460] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786" HandleID="k8s-pod-network.c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786" Workload="localhost-k8s-coredns--66bc5c9577--mrxcm-eth0" Jan 20 03:07:23.669187 containerd[1558]: 2026-01-20 03:07:23.597 [INFO][4460] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786" HandleID="k8s-pod-network.c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786" Workload="localhost-k8s-coredns--66bc5c9577--mrxcm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b8da0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-mrxcm", "timestamp":"2026-01-20 03:07:23.597426495 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 03:07:23.669187 containerd[1558]: 2026-01-20 03:07:23.597 [INFO][4460] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 03:07:23.669187 containerd[1558]: 2026-01-20 03:07:23.597 [INFO][4460] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
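The recurring kubelet "Nameserver limits exceeded" entries in this capture reflect glibc's three-nameserver cap (MAXNS): when more servers are configured, kubelet trims the pod's resolv.conf to the first three, which is the "applied nameserver line" the log prints. A quick editorial check of the host's own resolv.conf, assuming the standard file location:

#!/usr/bin/env python3
# Count configured nameservers and show the three glibc (and kubelet) keep.
from pathlib import Path

MAXNS = 3  # glibc resolver cap; kubelet trims pod resolv.conf to this many

text = Path("/etc/resolv.conf").read_text()
servers = [parts[1] for line in text.splitlines()
           if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1]
print("configured:", servers)
print("applied   :", servers[:MAXNS])  # matches the "applied nameserver line"
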
Jan 20 03:07:23.669187 containerd[1558]: 2026-01-20 03:07:23.597 [INFO][4460] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 03:07:23.669187 containerd[1558]: 2026-01-20 03:07:23.604 [INFO][4460] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786" host="localhost" Jan 20 03:07:23.669187 containerd[1558]: 2026-01-20 03:07:23.614 [INFO][4460] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 03:07:23.669187 containerd[1558]: 2026-01-20 03:07:23.619 [INFO][4460] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 03:07:23.669187 containerd[1558]: 2026-01-20 03:07:23.622 [INFO][4460] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 03:07:23.669187 containerd[1558]: 2026-01-20 03:07:23.625 [INFO][4460] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 03:07:23.669187 containerd[1558]: 2026-01-20 03:07:23.625 [INFO][4460] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786" host="localhost" Jan 20 03:07:23.669738 containerd[1558]: 2026-01-20 03:07:23.627 [INFO][4460] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786 Jan 20 03:07:23.669738 containerd[1558]: 2026-01-20 03:07:23.633 [INFO][4460] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786" host="localhost" Jan 20 03:07:23.669738 containerd[1558]: 2026-01-20 03:07:23.641 [INFO][4460] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786" host="localhost" Jan 20 03:07:23.669738 containerd[1558]: 2026-01-20 03:07:23.641 [INFO][4460] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786" host="localhost" Jan 20 03:07:23.669738 containerd[1558]: 2026-01-20 03:07:23.642 [INFO][4460] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
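The endpoint dump that follows prints each v3.WorkloadEndpointPort in Go's %#v notation, so the ports appear in hex (Port:0x35 and friends) even though the earlier plugin entry listed them in decimal. Decoded, they are the usual CoreDNS port layout; a couple of lines of arithmetic as an editorial aside:

#!/usr/bin/env python3
# Hex ports from the v3.WorkloadEndpointPort dump below, decoded to decimal.
ports = {"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23C1,
         "liveness-probe": 0x1F90, "readiness-probe": 0x1FF5}
for name, port in ports.items():
    print(f"{name:16} {port}")   # 53, 53, 9153, 8080, 8181
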
Jan 20 03:07:23.669738 containerd[1558]: 2026-01-20 03:07:23.642 [INFO][4460] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786" HandleID="k8s-pod-network.c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786" Workload="localhost-k8s-coredns--66bc5c9577--mrxcm-eth0" Jan 20 03:07:23.669916 containerd[1558]: 2026-01-20 03:07:23.646 [INFO][4447] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786" Namespace="kube-system" Pod="coredns-66bc5c9577-mrxcm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mrxcm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--mrxcm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4a4f82c3-d2e5-4cf9-8df0-d5821866627e", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 6, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-mrxcm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali022f3415891", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:07:23.669916 containerd[1558]: 2026-01-20 03:07:23.646 [INFO][4447] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786" Namespace="kube-system" Pod="coredns-66bc5c9577-mrxcm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mrxcm-eth0" Jan 20 03:07:23.669916 containerd[1558]: 2026-01-20 03:07:23.646 [INFO][4447] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali022f3415891 ContainerID="c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786" Namespace="kube-system" Pod="coredns-66bc5c9577-mrxcm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mrxcm-eth0" Jan 20 03:07:23.669916 containerd[1558]: 2026-01-20 03:07:23.649 
[INFO][4447] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786" Namespace="kube-system" Pod="coredns-66bc5c9577-mrxcm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mrxcm-eth0" Jan 20 03:07:23.669916 containerd[1558]: 2026-01-20 03:07:23.649 [INFO][4447] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786" Namespace="kube-system" Pod="coredns-66bc5c9577-mrxcm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mrxcm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--mrxcm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4a4f82c3-d2e5-4cf9-8df0-d5821866627e", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 6, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786", Pod:"coredns-66bc5c9577-mrxcm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali022f3415891", MAC:"96:f4:2f:b7:59:69", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:07:23.669916 containerd[1558]: 2026-01-20 03:07:23.662 [INFO][4447] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786" Namespace="kube-system" Pod="coredns-66bc5c9577-mrxcm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mrxcm-eth0" Jan 20 03:07:23.703291 containerd[1558]: time="2026-01-20T03:07:23.703228383Z" level=info msg="connecting to shim c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786" address="unix:///run/containerd/s/0d835d9ea6a20848d3d2b1b79e2a5f65850cf2284d734b37696f85c67ae832f8" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:07:23.726928 kubelet[2701]: E0120 03:07:23.726679 2701 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8d49f8f6-869f9" podUID="68dce8b3-c3fe-40f2-a705-b41b9b2da4a7" Jan 20 03:07:23.734679 kubelet[2701]: E0120 03:07:23.734618 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8qts6" podUID="d400130d-fd02-4b87-8160-4ba74bd8b376" Jan 20 03:07:23.746833 systemd[1]: Started cri-containerd-c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786.scope - libcontainer container c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786. Jan 20 03:07:23.777436 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 03:07:23.822282 containerd[1558]: time="2026-01-20T03:07:23.822198043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mrxcm,Uid:4a4f82c3-d2e5-4cf9-8df0-d5821866627e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786\"" Jan 20 03:07:23.823043 kubelet[2701]: E0120 03:07:23.822988 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:23.829754 containerd[1558]: time="2026-01-20T03:07:23.829212537Z" level=info msg="CreateContainer within sandbox \"c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 03:07:23.846477 containerd[1558]: time="2026-01-20T03:07:23.846388918Z" level=info msg="Container 5633ded6db6b7d4957de6f320917411af30a05bc035a4f08141fb591194111af: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:07:23.847032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount550261717.mount: Deactivated successfully. 
Jan 20 03:07:23.856797 containerd[1558]: time="2026-01-20T03:07:23.856706709Z" level=info msg="CreateContainer within sandbox \"c4cb89dce52f2338064ebfa360e5c87c3091510599b139378ea6bd8c3a372786\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5633ded6db6b7d4957de6f320917411af30a05bc035a4f08141fb591194111af\"" Jan 20 03:07:23.857822 containerd[1558]: time="2026-01-20T03:07:23.857750512Z" level=info msg="StartContainer for \"5633ded6db6b7d4957de6f320917411af30a05bc035a4f08141fb591194111af\"" Jan 20 03:07:23.858789 containerd[1558]: time="2026-01-20T03:07:23.858738136Z" level=info msg="connecting to shim 5633ded6db6b7d4957de6f320917411af30a05bc035a4f08141fb591194111af" address="unix:///run/containerd/s/0d835d9ea6a20848d3d2b1b79e2a5f65850cf2284d734b37696f85c67ae832f8" protocol=ttrpc version=3 Jan 20 03:07:23.883189 systemd[1]: Started cri-containerd-5633ded6db6b7d4957de6f320917411af30a05bc035a4f08141fb591194111af.scope - libcontainer container 5633ded6db6b7d4957de6f320917411af30a05bc035a4f08141fb591194111af. Jan 20 03:07:23.922051 containerd[1558]: time="2026-01-20T03:07:23.921968250Z" level=info msg="StartContainer for \"5633ded6db6b7d4957de6f320917411af30a05bc035a4f08141fb591194111af\" returns successfully" Jan 20 03:07:24.441243 systemd-networkd[1460]: cali9ca8c4a2512: Gained IPv6LL Jan 20 03:07:24.505238 systemd-networkd[1460]: calidf630306ea5: Gained IPv6LL Jan 20 03:07:24.527320 containerd[1558]: time="2026-01-20T03:07:24.527202981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgb8x,Uid:bb663801-c52b-48d5-9ddb-4fcd0f5aab67,Namespace:calico-system,Attempt:0,}" Jan 20 03:07:24.529579 containerd[1558]: time="2026-01-20T03:07:24.529530916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8d49f8f6-9f5w7,Uid:9eb8a6d4-b655-41d5-bb6a-48e492c0056f,Namespace:calico-apiserver,Attempt:0,}" Jan 20 03:07:24.661160 systemd-networkd[1460]: calidc2458a606e: Link UP Jan 20 03:07:24.664743 systemd-networkd[1460]: calidc2458a606e: Gained carrier Jan 20 03:07:24.684218 containerd[1558]: 2026-01-20 03:07:24.577 [INFO][4559] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--hgb8x-eth0 csi-node-driver- calico-system bb663801-c52b-48d5-9ddb-4fcd0f5aab67 747 0 2026-01-20 03:07:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-hgb8x eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidc2458a606e [] [] }} ContainerID="4d12445c9a6172d1966816e94925942f841e92f53b7694f110a7231cd93a53ba" Namespace="calico-system" Pod="csi-node-driver-hgb8x" WorkloadEndpoint="localhost-k8s-csi--node--driver--hgb8x-" Jan 20 03:07:24.684218 containerd[1558]: 2026-01-20 03:07:24.577 [INFO][4559] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d12445c9a6172d1966816e94925942f841e92f53b7694f110a7231cd93a53ba" Namespace="calico-system" Pod="csi-node-driver-hgb8x" WorkloadEndpoint="localhost-k8s-csi--node--driver--hgb8x-eth0" Jan 20 03:07:24.684218 containerd[1558]: 2026-01-20 03:07:24.614 [INFO][4590] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="4d12445c9a6172d1966816e94925942f841e92f53b7694f110a7231cd93a53ba" HandleID="k8s-pod-network.4d12445c9a6172d1966816e94925942f841e92f53b7694f110a7231cd93a53ba" Workload="localhost-k8s-csi--node--driver--hgb8x-eth0" Jan 20 03:07:24.684218 containerd[1558]: 2026-01-20 03:07:24.614 [INFO][4590] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4d12445c9a6172d1966816e94925942f841e92f53b7694f110a7231cd93a53ba" HandleID="k8s-pod-network.4d12445c9a6172d1966816e94925942f841e92f53b7694f110a7231cd93a53ba" Workload="localhost-k8s-csi--node--driver--hgb8x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000525a20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-hgb8x", "timestamp":"2026-01-20 03:07:24.614216187 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 03:07:24.684218 containerd[1558]: 2026-01-20 03:07:24.614 [INFO][4590] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 03:07:24.684218 containerd[1558]: 2026-01-20 03:07:24.614 [INFO][4590] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 03:07:24.684218 containerd[1558]: 2026-01-20 03:07:24.614 [INFO][4590] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 03:07:24.684218 containerd[1558]: 2026-01-20 03:07:24.622 [INFO][4590] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4d12445c9a6172d1966816e94925942f841e92f53b7694f110a7231cd93a53ba" host="localhost" Jan 20 03:07:24.684218 containerd[1558]: 2026-01-20 03:07:24.628 [INFO][4590] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 03:07:24.684218 containerd[1558]: 2026-01-20 03:07:24.633 [INFO][4590] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 03:07:24.684218 containerd[1558]: 2026-01-20 03:07:24.636 [INFO][4590] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 03:07:24.684218 containerd[1558]: 2026-01-20 03:07:24.640 [INFO][4590] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 03:07:24.684218 containerd[1558]: 2026-01-20 03:07:24.640 [INFO][4590] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4d12445c9a6172d1966816e94925942f841e92f53b7694f110a7231cd93a53ba" host="localhost" Jan 20 03:07:24.684218 containerd[1558]: 2026-01-20 03:07:24.642 [INFO][4590] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4d12445c9a6172d1966816e94925942f841e92f53b7694f110a7231cd93a53ba Jan 20 03:07:24.684218 containerd[1558]: 2026-01-20 03:07:24.647 [INFO][4590] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4d12445c9a6172d1966816e94925942f841e92f53b7694f110a7231cd93a53ba" host="localhost" Jan 20 03:07:24.684218 containerd[1558]: 2026-01-20 03:07:24.654 [INFO][4590] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.4d12445c9a6172d1966816e94925942f841e92f53b7694f110a7231cd93a53ba" host="localhost" Jan 20 03:07:24.684218 containerd[1558]: 2026-01-20 03:07:24.654 [INFO][4590] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] 
handle="k8s-pod-network.4d12445c9a6172d1966816e94925942f841e92f53b7694f110a7231cd93a53ba" host="localhost" Jan 20 03:07:24.684218 containerd[1558]: 2026-01-20 03:07:24.654 [INFO][4590] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 03:07:24.684218 containerd[1558]: 2026-01-20 03:07:24.654 [INFO][4590] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="4d12445c9a6172d1966816e94925942f841e92f53b7694f110a7231cd93a53ba" HandleID="k8s-pod-network.4d12445c9a6172d1966816e94925942f841e92f53b7694f110a7231cd93a53ba" Workload="localhost-k8s-csi--node--driver--hgb8x-eth0" Jan 20 03:07:24.685380 containerd[1558]: 2026-01-20 03:07:24.656 [INFO][4559] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d12445c9a6172d1966816e94925942f841e92f53b7694f110a7231cd93a53ba" Namespace="calico-system" Pod="csi-node-driver-hgb8x" WorkloadEndpoint="localhost-k8s-csi--node--driver--hgb8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hgb8x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bb663801-c52b-48d5-9ddb-4fcd0f5aab67", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 7, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-hgb8x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidc2458a606e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:07:24.685380 containerd[1558]: 2026-01-20 03:07:24.656 [INFO][4559] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="4d12445c9a6172d1966816e94925942f841e92f53b7694f110a7231cd93a53ba" Namespace="calico-system" Pod="csi-node-driver-hgb8x" WorkloadEndpoint="localhost-k8s-csi--node--driver--hgb8x-eth0" Jan 20 03:07:24.685380 containerd[1558]: 2026-01-20 03:07:24.656 [INFO][4559] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc2458a606e ContainerID="4d12445c9a6172d1966816e94925942f841e92f53b7694f110a7231cd93a53ba" Namespace="calico-system" Pod="csi-node-driver-hgb8x" WorkloadEndpoint="localhost-k8s-csi--node--driver--hgb8x-eth0" Jan 20 03:07:24.685380 containerd[1558]: 2026-01-20 03:07:24.667 [INFO][4559] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d12445c9a6172d1966816e94925942f841e92f53b7694f110a7231cd93a53ba" Namespace="calico-system" Pod="csi-node-driver-hgb8x" WorkloadEndpoint="localhost-k8s-csi--node--driver--hgb8x-eth0" Jan 20 03:07:24.685380 containerd[1558]: 2026-01-20 03:07:24.668 [INFO][4559] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d12445c9a6172d1966816e94925942f841e92f53b7694f110a7231cd93a53ba" Namespace="calico-system" Pod="csi-node-driver-hgb8x" WorkloadEndpoint="localhost-k8s-csi--node--driver--hgb8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hgb8x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bb663801-c52b-48d5-9ddb-4fcd0f5aab67", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 7, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d12445c9a6172d1966816e94925942f841e92f53b7694f110a7231cd93a53ba", Pod:"csi-node-driver-hgb8x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidc2458a606e", MAC:"5e:76:50:c6:54:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:07:24.685380 containerd[1558]: 2026-01-20 03:07:24.681 [INFO][4559] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d12445c9a6172d1966816e94925942f841e92f53b7694f110a7231cd93a53ba" Namespace="calico-system" Pod="csi-node-driver-hgb8x" WorkloadEndpoint="localhost-k8s-csi--node--driver--hgb8x-eth0" Jan 20 03:07:24.697186 systemd-networkd[1460]: cali022f3415891: Gained IPv6LL Jan 20 03:07:24.711522 containerd[1558]: time="2026-01-20T03:07:24.711468522Z" level=info msg="connecting to shim 4d12445c9a6172d1966816e94925942f841e92f53b7694f110a7231cd93a53ba" address="unix:///run/containerd/s/3cdcd0d4ddf3da0883d7626b20046589ec5cf7c9dd4e4a465d255e995d345622" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:07:24.751222 kubelet[2701]: E0120 03:07:24.751164 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:24.760400 kubelet[2701]: E0120 03:07:24.759996 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8qts6" podUID="d400130d-fd02-4b87-8160-4ba74bd8b376" Jan 20 03:07:24.760400 kubelet[2701]: E0120 03:07:24.760157 2701 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8d49f8f6-869f9" podUID="68dce8b3-c3fe-40f2-a705-b41b9b2da4a7" Jan 20 03:07:24.766196 systemd[1]: Started cri-containerd-4d12445c9a6172d1966816e94925942f841e92f53b7694f110a7231cd93a53ba.scope - libcontainer container 4d12445c9a6172d1966816e94925942f841e92f53b7694f110a7231cd93a53ba. Jan 20 03:07:24.796783 systemd-networkd[1460]: cali8ccaa5fd021: Link UP Jan 20 03:07:24.797554 systemd-networkd[1460]: cali8ccaa5fd021: Gained carrier Jan 20 03:07:24.809060 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 03:07:24.816256 kubelet[2701]: I0120 03:07:24.816115 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mrxcm" podStartSLOduration=38.816098112 podStartE2EDuration="38.816098112s" podCreationTimestamp="2026-01-20 03:06:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:07:24.800332155 +0000 UTC m=+44.431124688" watchObservedRunningTime="2026-01-20 03:07:24.816098112 +0000 UTC m=+44.446890646" Jan 20 03:07:24.830513 containerd[1558]: 2026-01-20 03:07:24.579 [INFO][4562] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c8d49f8f6--9f5w7-eth0 calico-apiserver-6c8d49f8f6- calico-apiserver 9eb8a6d4-b655-41d5-bb6a-48e492c0056f 858 0 2026-01-20 03:06:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c8d49f8f6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c8d49f8f6-9f5w7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8ccaa5fd021 [] [] }} ContainerID="ddb4779ed899d7ea6d2e5f4b5b000f773afad51340bccd575f86edd7897148e5" Namespace="calico-apiserver" Pod="calico-apiserver-6c8d49f8f6-9f5w7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8d49f8f6--9f5w7-" Jan 20 03:07:24.830513 containerd[1558]: 2026-01-20 03:07:24.579 [INFO][4562] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ddb4779ed899d7ea6d2e5f4b5b000f773afad51340bccd575f86edd7897148e5" Namespace="calico-apiserver" Pod="calico-apiserver-6c8d49f8f6-9f5w7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8d49f8f6--9f5w7-eth0" Jan 20 03:07:24.830513 containerd[1558]: 2026-01-20 03:07:24.614 [INFO][4592] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ddb4779ed899d7ea6d2e5f4b5b000f773afad51340bccd575f86edd7897148e5" HandleID="k8s-pod-network.ddb4779ed899d7ea6d2e5f4b5b000f773afad51340bccd575f86edd7897148e5" Workload="localhost-k8s-calico--apiserver--6c8d49f8f6--9f5w7-eth0" Jan 20 03:07:24.830513 containerd[1558]: 2026-01-20 03:07:24.614 [INFO][4592] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ddb4779ed899d7ea6d2e5f4b5b000f773afad51340bccd575f86edd7897148e5" 
HandleID="k8s-pod-network.ddb4779ed899d7ea6d2e5f4b5b000f773afad51340bccd575f86edd7897148e5" Workload="localhost-k8s-calico--apiserver--6c8d49f8f6--9f5w7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b1b20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c8d49f8f6-9f5w7", "timestamp":"2026-01-20 03:07:24.614182678 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 03:07:24.830513 containerd[1558]: 2026-01-20 03:07:24.614 [INFO][4592] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 03:07:24.830513 containerd[1558]: 2026-01-20 03:07:24.654 [INFO][4592] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 03:07:24.830513 containerd[1558]: 2026-01-20 03:07:24.654 [INFO][4592] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 03:07:24.830513 containerd[1558]: 2026-01-20 03:07:24.726 [INFO][4592] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ddb4779ed899d7ea6d2e5f4b5b000f773afad51340bccd575f86edd7897148e5" host="localhost" Jan 20 03:07:24.830513 containerd[1558]: 2026-01-20 03:07:24.734 [INFO][4592] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 03:07:24.830513 containerd[1558]: 2026-01-20 03:07:24.742 [INFO][4592] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 03:07:24.830513 containerd[1558]: 2026-01-20 03:07:24.746 [INFO][4592] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 03:07:24.830513 containerd[1558]: 2026-01-20 03:07:24.753 [INFO][4592] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 03:07:24.830513 containerd[1558]: 2026-01-20 03:07:24.753 [INFO][4592] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ddb4779ed899d7ea6d2e5f4b5b000f773afad51340bccd575f86edd7897148e5" host="localhost" Jan 20 03:07:24.830513 containerd[1558]: 2026-01-20 03:07:24.761 [INFO][4592] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ddb4779ed899d7ea6d2e5f4b5b000f773afad51340bccd575f86edd7897148e5 Jan 20 03:07:24.830513 containerd[1558]: 2026-01-20 03:07:24.772 [INFO][4592] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ddb4779ed899d7ea6d2e5f4b5b000f773afad51340bccd575f86edd7897148e5" host="localhost" Jan 20 03:07:24.830513 containerd[1558]: 2026-01-20 03:07:24.782 [INFO][4592] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.ddb4779ed899d7ea6d2e5f4b5b000f773afad51340bccd575f86edd7897148e5" host="localhost" Jan 20 03:07:24.830513 containerd[1558]: 2026-01-20 03:07:24.782 [INFO][4592] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ddb4779ed899d7ea6d2e5f4b5b000f773afad51340bccd575f86edd7897148e5" host="localhost" Jan 20 03:07:24.830513 containerd[1558]: 2026-01-20 03:07:24.782 [INFO][4592] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 03:07:24.830513 containerd[1558]: 2026-01-20 03:07:24.783 [INFO][4592] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ddb4779ed899d7ea6d2e5f4b5b000f773afad51340bccd575f86edd7897148e5" HandleID="k8s-pod-network.ddb4779ed899d7ea6d2e5f4b5b000f773afad51340bccd575f86edd7897148e5" Workload="localhost-k8s-calico--apiserver--6c8d49f8f6--9f5w7-eth0"
Jan 20 03:07:24.831083 containerd[1558]: 2026-01-20 03:07:24.787 [INFO][4562] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ddb4779ed899d7ea6d2e5f4b5b000f773afad51340bccd575f86edd7897148e5" Namespace="calico-apiserver" Pod="calico-apiserver-6c8d49f8f6-9f5w7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8d49f8f6--9f5w7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c8d49f8f6--9f5w7-eth0", GenerateName:"calico-apiserver-6c8d49f8f6-", Namespace:"calico-apiserver", SelfLink:"", UID:"9eb8a6d4-b655-41d5-bb6a-48e492c0056f", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c8d49f8f6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c8d49f8f6-9f5w7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8ccaa5fd021", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 20 03:07:24.831083 containerd[1558]: 2026-01-20 03:07:24.788 [INFO][4562] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ddb4779ed899d7ea6d2e5f4b5b000f773afad51340bccd575f86edd7897148e5" Namespace="calico-apiserver" Pod="calico-apiserver-6c8d49f8f6-9f5w7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8d49f8f6--9f5w7-eth0"
Jan 20 03:07:24.831083 containerd[1558]: 2026-01-20 03:07:24.788 [INFO][4562] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ccaa5fd021 ContainerID="ddb4779ed899d7ea6d2e5f4b5b000f773afad51340bccd575f86edd7897148e5" Namespace="calico-apiserver" Pod="calico-apiserver-6c8d49f8f6-9f5w7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8d49f8f6--9f5w7-eth0"
Jan 20 03:07:24.831083 containerd[1558]: 2026-01-20 03:07:24.800 [INFO][4562] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ddb4779ed899d7ea6d2e5f4b5b000f773afad51340bccd575f86edd7897148e5" Namespace="calico-apiserver" Pod="calico-apiserver-6c8d49f8f6-9f5w7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8d49f8f6--9f5w7-eth0"
Jan 20 03:07:24.831083 containerd[1558]: 2026-01-20 03:07:24.801 [INFO][4562] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ddb4779ed899d7ea6d2e5f4b5b000f773afad51340bccd575f86edd7897148e5" Namespace="calico-apiserver" Pod="calico-apiserver-6c8d49f8f6-9f5w7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8d49f8f6--9f5w7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c8d49f8f6--9f5w7-eth0", GenerateName:"calico-apiserver-6c8d49f8f6-", Namespace:"calico-apiserver", SelfLink:"", UID:"9eb8a6d4-b655-41d5-bb6a-48e492c0056f", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 6, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c8d49f8f6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ddb4779ed899d7ea6d2e5f4b5b000f773afad51340bccd575f86edd7897148e5", Pod:"calico-apiserver-6c8d49f8f6-9f5w7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8ccaa5fd021", MAC:"02:c8:10:ff:42:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 20 03:07:24.831083 containerd[1558]: 2026-01-20 03:07:24.825 [INFO][4562] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ddb4779ed899d7ea6d2e5f4b5b000f773afad51340bccd575f86edd7897148e5" Namespace="calico-apiserver" Pod="calico-apiserver-6c8d49f8f6-9f5w7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c8d49f8f6--9f5w7-eth0"
Jan 20 03:07:24.840119 containerd[1558]: time="2026-01-20T03:07:24.839975206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgb8x,Uid:bb663801-c52b-48d5-9ddb-4fcd0f5aab67,Namespace:calico-system,Attempt:0,} returns sandbox id \"4d12445c9a6172d1966816e94925942f841e92f53b7694f110a7231cd93a53ba\""
Jan 20 03:07:24.843061 containerd[1558]: time="2026-01-20T03:07:24.843031322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 20 03:07:24.895239 containerd[1558]: time="2026-01-20T03:07:24.895180735Z" level=info msg="connecting to shim ddb4779ed899d7ea6d2e5f4b5b000f773afad51340bccd575f86edd7897148e5" address="unix:///run/containerd/s/92240e010a1cc041c7c68062e3c7bdaebc1ee97ef4bfada8da4fdfc45ee67879" namespace=k8s.io protocol=ttrpc version=3
Jan 20 03:07:24.926433 containerd[1558]: time="2026-01-20T03:07:24.926284849Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 03:07:24.927838 containerd[1558]: time="2026-01-20T03:07:24.927792750Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 20 03:07:24.927982 containerd[1558]: time="2026-01-20T03:07:24.927939521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 20 03:07:24.928146 kubelet[2701]: E0120 03:07:24.928069 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 20 03:07:24.928202 kubelet[2701]: E0120 03:07:24.928159 2701 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 20 03:07:24.928280 kubelet[2701]: E0120 03:07:24.928244 2701 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-hgb8x_calico-system(bb663801-c52b-48d5-9ddb-4fcd0f5aab67): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 20 03:07:24.929852 containerd[1558]: time="2026-01-20T03:07:24.929672637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 20 03:07:24.930223 systemd[1]: Started cri-containerd-ddb4779ed899d7ea6d2e5f4b5b000f773afad51340bccd575f86edd7897148e5.scope - libcontainer container ddb4779ed899d7ea6d2e5f4b5b000f773afad51340bccd575f86edd7897148e5.
Jan 20 03:07:24.949906 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 03:07:24.987542 containerd[1558]: time="2026-01-20T03:07:24.987494072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8d49f8f6-9f5w7,Uid:9eb8a6d4-b655-41d5-bb6a-48e492c0056f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ddb4779ed899d7ea6d2e5f4b5b000f773afad51340bccd575f86edd7897148e5\"" Jan 20 03:07:25.002871 containerd[1558]: time="2026-01-20T03:07:25.002819332Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:07:25.004237 containerd[1558]: time="2026-01-20T03:07:25.004147323Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 03:07:25.004237 containerd[1558]: time="2026-01-20T03:07:25.004182513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 03:07:25.004401 kubelet[2701]: E0120 03:07:25.004338 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 03:07:25.004452 kubelet[2701]: E0120 03:07:25.004398 2701 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 03:07:25.004577 kubelet[2701]: E0120 03:07:25.004543 2701 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-hgb8x_calico-system(bb663801-c52b-48d5-9ddb-4fcd0f5aab67): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 03:07:25.004653 kubelet[2701]: E0120 03:07:25.004580 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-hgb8x" podUID="bb663801-c52b-48d5-9ddb-4fcd0f5aab67" Jan 20 03:07:25.004909 containerd[1558]: time="2026-01-20T03:07:25.004835223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 03:07:25.074373 containerd[1558]: time="2026-01-20T03:07:25.074272972Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:07:25.075764 containerd[1558]: time="2026-01-20T03:07:25.075682706Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 03:07:25.075952 containerd[1558]: time="2026-01-20T03:07:25.075774747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 03:07:25.076165 kubelet[2701]: E0120 03:07:25.076021 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:07:25.076241 kubelet[2701]: E0120 03:07:25.076111 2701 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:07:25.076367 kubelet[2701]: E0120 03:07:25.076286 2701 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6c8d49f8f6-9f5w7_calico-apiserver(9eb8a6d4-b655-41d5-bb6a-48e492c0056f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 03:07:25.076432 kubelet[2701]: E0120 03:07:25.076359 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8d49f8f6-9f5w7" podUID="9eb8a6d4-b655-41d5-bb6a-48e492c0056f" Jan 20 03:07:25.514409 containerd[1558]: time="2026-01-20T03:07:25.514187797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d65d8c755-qpx77,Uid:25480ded-99d4-43f9-a73a-0b4e4143afb7,Namespace:calico-apiserver,Attempt:0,}" Jan 20 03:07:25.517246 containerd[1558]: time="2026-01-20T03:07:25.517091569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8485fc5f84-9b6gt,Uid:a91d0df2-2cb2-4cc3-b13c-61dfadc11b46,Namespace:calico-system,Attempt:0,}" Jan 20 03:07:25.640622 systemd-networkd[1460]: calic66b6370906: Link UP Jan 20 03:07:25.641807 systemd-networkd[1460]: calic66b6370906: Gained carrier Jan 20 
03:07:25.656848 containerd[1558]: 2026-01-20 03:07:25.559 [INFO][4734] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--8485fc5f84--9b6gt-eth0 calico-kube-controllers-8485fc5f84- calico-system a91d0df2-2cb2-4cc3-b13c-61dfadc11b46 851 0 2026-01-20 03:07:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8485fc5f84 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-8485fc5f84-9b6gt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic66b6370906 [] [] }} ContainerID="970de14924b2de3c8216a9b1a10869954e6e29e03280bf542466039a9b4966f0" Namespace="calico-system" Pod="calico-kube-controllers-8485fc5f84-9b6gt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8485fc5f84--9b6gt-" Jan 20 03:07:25.656848 containerd[1558]: 2026-01-20 03:07:25.559 [INFO][4734] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="970de14924b2de3c8216a9b1a10869954e6e29e03280bf542466039a9b4966f0" Namespace="calico-system" Pod="calico-kube-controllers-8485fc5f84-9b6gt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8485fc5f84--9b6gt-eth0" Jan 20 03:07:25.656848 containerd[1558]: 2026-01-20 03:07:25.592 [INFO][4753] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="970de14924b2de3c8216a9b1a10869954e6e29e03280bf542466039a9b4966f0" HandleID="k8s-pod-network.970de14924b2de3c8216a9b1a10869954e6e29e03280bf542466039a9b4966f0" Workload="localhost-k8s-calico--kube--controllers--8485fc5f84--9b6gt-eth0" Jan 20 03:07:25.656848 containerd[1558]: 2026-01-20 03:07:25.592 [INFO][4753] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="970de14924b2de3c8216a9b1a10869954e6e29e03280bf542466039a9b4966f0" HandleID="k8s-pod-network.970de14924b2de3c8216a9b1a10869954e6e29e03280bf542466039a9b4966f0" Workload="localhost-k8s-calico--kube--controllers--8485fc5f84--9b6gt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cc870), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-8485fc5f84-9b6gt", "timestamp":"2026-01-20 03:07:25.59268156 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 03:07:25.656848 containerd[1558]: 2026-01-20 03:07:25.592 [INFO][4753] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 03:07:25.656848 containerd[1558]: 2026-01-20 03:07:25.593 [INFO][4753] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 03:07:25.656848 containerd[1558]: 2026-01-20 03:07:25.593 [INFO][4753] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 20 03:07:25.656848 containerd[1558]: 2026-01-20 03:07:25.601 [INFO][4753] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.970de14924b2de3c8216a9b1a10869954e6e29e03280bf542466039a9b4966f0" host="localhost"
Jan 20 03:07:25.656848 containerd[1558]: 2026-01-20 03:07:25.609 [INFO][4753] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jan 20 03:07:25.656848 containerd[1558]: 2026-01-20 03:07:25.614 [INFO][4753] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 20 03:07:25.656848 containerd[1558]: 2026-01-20 03:07:25.617 [INFO][4753] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 20 03:07:25.656848 containerd[1558]: 2026-01-20 03:07:25.619 [INFO][4753] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 20 03:07:25.656848 containerd[1558]: 2026-01-20 03:07:25.619 [INFO][4753] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.970de14924b2de3c8216a9b1a10869954e6e29e03280bf542466039a9b4966f0" host="localhost"
Jan 20 03:07:25.656848 containerd[1558]: 2026-01-20 03:07:25.621 [INFO][4753] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.970de14924b2de3c8216a9b1a10869954e6e29e03280bf542466039a9b4966f0
Jan 20 03:07:25.656848 containerd[1558]: 2026-01-20 03:07:25.627 [INFO][4753] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.970de14924b2de3c8216a9b1a10869954e6e29e03280bf542466039a9b4966f0" host="localhost"
Jan 20 03:07:25.656848 containerd[1558]: 2026-01-20 03:07:25.635 [INFO][4753] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.970de14924b2de3c8216a9b1a10869954e6e29e03280bf542466039a9b4966f0" host="localhost"
Jan 20 03:07:25.656848 containerd[1558]: 2026-01-20 03:07:25.635 [INFO][4753] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.970de14924b2de3c8216a9b1a10869954e6e29e03280bf542466039a9b4966f0" host="localhost"
Jan 20 03:07:25.656848 containerd[1558]: 2026-01-20 03:07:25.635 [INFO][4753] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 20 03:07:25.656848 containerd[1558]: 2026-01-20 03:07:25.635 [INFO][4753] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="970de14924b2de3c8216a9b1a10869954e6e29e03280bf542466039a9b4966f0" HandleID="k8s-pod-network.970de14924b2de3c8216a9b1a10869954e6e29e03280bf542466039a9b4966f0" Workload="localhost-k8s-calico--kube--controllers--8485fc5f84--9b6gt-eth0"
Jan 20 03:07:25.657478 containerd[1558]: 2026-01-20 03:07:25.637 [INFO][4734] cni-plugin/k8s.go 418: Populated endpoint ContainerID="970de14924b2de3c8216a9b1a10869954e6e29e03280bf542466039a9b4966f0" Namespace="calico-system" Pod="calico-kube-controllers-8485fc5f84-9b6gt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8485fc5f84--9b6gt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8485fc5f84--9b6gt-eth0", GenerateName:"calico-kube-controllers-8485fc5f84-", Namespace:"calico-system", SelfLink:"", UID:"a91d0df2-2cb2-4cc3-b13c-61dfadc11b46", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 7, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8485fc5f84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-8485fc5f84-9b6gt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic66b6370906", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 20 03:07:25.657478 containerd[1558]: 2026-01-20 03:07:25.638 [INFO][4734] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="970de14924b2de3c8216a9b1a10869954e6e29e03280bf542466039a9b4966f0" Namespace="calico-system" Pod="calico-kube-controllers-8485fc5f84-9b6gt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8485fc5f84--9b6gt-eth0"
Jan 20 03:07:25.657478 containerd[1558]: 2026-01-20 03:07:25.638 [INFO][4734] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic66b6370906 ContainerID="970de14924b2de3c8216a9b1a10869954e6e29e03280bf542466039a9b4966f0" Namespace="calico-system" Pod="calico-kube-controllers-8485fc5f84-9b6gt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8485fc5f84--9b6gt-eth0"
Jan 20 03:07:25.657478 containerd[1558]: 2026-01-20 03:07:25.642 [INFO][4734] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="970de14924b2de3c8216a9b1a10869954e6e29e03280bf542466039a9b4966f0" Namespace="calico-system" Pod="calico-kube-controllers-8485fc5f84-9b6gt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8485fc5f84--9b6gt-eth0"
Jan 20 03:07:25.657478 containerd[1558]: 2026-01-20 03:07:25.643 [INFO][4734] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="970de14924b2de3c8216a9b1a10869954e6e29e03280bf542466039a9b4966f0" Namespace="calico-system" Pod="calico-kube-controllers-8485fc5f84-9b6gt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8485fc5f84--9b6gt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8485fc5f84--9b6gt-eth0", GenerateName:"calico-kube-controllers-8485fc5f84-", Namespace:"calico-system", SelfLink:"", UID:"a91d0df2-2cb2-4cc3-b13c-61dfadc11b46", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 7, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8485fc5f84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"970de14924b2de3c8216a9b1a10869954e6e29e03280bf542466039a9b4966f0", Pod:"calico-kube-controllers-8485fc5f84-9b6gt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic66b6370906", MAC:"5e:35:dc:dc:37:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 20 03:07:25.657478 containerd[1558]: 2026-01-20 03:07:25.653 [INFO][4734] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="970de14924b2de3c8216a9b1a10869954e6e29e03280bf542466039a9b4966f0" Namespace="calico-system" Pod="calico-kube-controllers-8485fc5f84-9b6gt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8485fc5f84--9b6gt-eth0"
Jan 20 03:07:25.683671 containerd[1558]: time="2026-01-20T03:07:25.683632056Z" level=info msg="connecting to shim 970de14924b2de3c8216a9b1a10869954e6e29e03280bf542466039a9b4966f0" address="unix:///run/containerd/s/8a1bb1eaae62c0f20af89f9804270fdc650b9b9d05b096cffec0aa6d53224ba6" namespace=k8s.io protocol=ttrpc version=3
Jan 20 03:07:25.719210 systemd[1]: Started cri-containerd-970de14924b2de3c8216a9b1a10869954e6e29e03280bf542466039a9b4966f0.scope - libcontainer container 970de14924b2de3c8216a9b1a10869954e6e29e03280bf542466039a9b4966f0.
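The .133, .134, and .135 assignments above all come out of the same 192.168.88.128/26 affinity block, handed out sequentially. A /26 spans 2^(32-26) = 64 addresses (192.168.88.128 through .191), which bounds how many pod IPs this node can allocate before Calico must claim a new block. A quick check with Go's net/netip (standalone arithmetic, not tied to any Calico code):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26")
        size := 1 << (32 - block.Bits()) // 64 addresses in a /26

        // Last address in the block: walk forward from the network address.
        last := block.Addr()
        for i := 0; i < size-1; i++ {
            last = last.Next()
        }
        fmt.Printf("block %s: %d addrs, %s-%s\n",
            block, size, block.Addr(), last)

        // The pod IPs assigned in the log all fall inside this block.
        for _, s := range []string{"192.168.88.133", "192.168.88.135"} {
            fmt.Println(s, "in block:", block.Contains(netip.MustParseAddr(s)))
        }
    }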
Jan 20 03:07:25.751052 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 03:07:25.757755 kubelet[2701]: E0120 03:07:25.757483 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8d49f8f6-9f5w7" podUID="9eb8a6d4-b655-41d5-bb6a-48e492c0056f" Jan 20 03:07:25.762365 kubelet[2701]: E0120 03:07:25.762264 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:25.763834 kubelet[2701]: E0120 03:07:25.763692 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgb8x" podUID="bb663801-c52b-48d5-9ddb-4fcd0f5aab67" Jan 20 03:07:25.768370 systemd-networkd[1460]: calic509eb81d28: Link UP Jan 20 03:07:25.768760 systemd-networkd[1460]: calic509eb81d28: Gained carrier Jan 20 03:07:25.798373 containerd[1558]: 2026-01-20 03:07:25.560 [INFO][4722] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d65d8c755--qpx77-eth0 calico-apiserver-6d65d8c755- calico-apiserver 25480ded-99d4-43f9-a73a-0b4e4143afb7 853 0 2026-01-20 03:07:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d65d8c755 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d65d8c755-qpx77 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic509eb81d28 [] [] }} ContainerID="3abfb1aaadb801134f9d65f2bbd80c01625d6bbb117a5980cf86f66ce1a09707" Namespace="calico-apiserver" Pod="calico-apiserver-6d65d8c755-qpx77" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d65d8c755--qpx77-" Jan 20 03:07:25.798373 containerd[1558]: 2026-01-20 03:07:25.561 [INFO][4722] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3abfb1aaadb801134f9d65f2bbd80c01625d6bbb117a5980cf86f66ce1a09707" Namespace="calico-apiserver" Pod="calico-apiserver-6d65d8c755-qpx77" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6d65d8c755--qpx77-eth0" Jan 20 03:07:25.798373 containerd[1558]: 2026-01-20 03:07:25.596 [INFO][4759] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3abfb1aaadb801134f9d65f2bbd80c01625d6bbb117a5980cf86f66ce1a09707" HandleID="k8s-pod-network.3abfb1aaadb801134f9d65f2bbd80c01625d6bbb117a5980cf86f66ce1a09707" Workload="localhost-k8s-calico--apiserver--6d65d8c755--qpx77-eth0" Jan 20 03:07:25.798373 containerd[1558]: 2026-01-20 03:07:25.600 [INFO][4759] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3abfb1aaadb801134f9d65f2bbd80c01625d6bbb117a5980cf86f66ce1a09707" HandleID="k8s-pod-network.3abfb1aaadb801134f9d65f2bbd80c01625d6bbb117a5980cf86f66ce1a09707" Workload="localhost-k8s-calico--apiserver--6d65d8c755--qpx77-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a4580), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d65d8c755-qpx77", "timestamp":"2026-01-20 03:07:25.596614826 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 03:07:25.798373 containerd[1558]: 2026-01-20 03:07:25.601 [INFO][4759] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 03:07:25.798373 containerd[1558]: 2026-01-20 03:07:25.635 [INFO][4759] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 03:07:25.798373 containerd[1558]: 2026-01-20 03:07:25.635 [INFO][4759] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 03:07:25.798373 containerd[1558]: 2026-01-20 03:07:25.704 [INFO][4759] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3abfb1aaadb801134f9d65f2bbd80c01625d6bbb117a5980cf86f66ce1a09707" host="localhost" Jan 20 03:07:25.798373 containerd[1558]: 2026-01-20 03:07:25.712 [INFO][4759] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 03:07:25.798373 containerd[1558]: 2026-01-20 03:07:25.721 [INFO][4759] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 03:07:25.798373 containerd[1558]: 2026-01-20 03:07:25.725 [INFO][4759] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 03:07:25.798373 containerd[1558]: 2026-01-20 03:07:25.728 [INFO][4759] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 03:07:25.798373 containerd[1558]: 2026-01-20 03:07:25.728 [INFO][4759] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3abfb1aaadb801134f9d65f2bbd80c01625d6bbb117a5980cf86f66ce1a09707" host="localhost" Jan 20 03:07:25.798373 containerd[1558]: 2026-01-20 03:07:25.730 [INFO][4759] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3abfb1aaadb801134f9d65f2bbd80c01625d6bbb117a5980cf86f66ce1a09707 Jan 20 03:07:25.798373 containerd[1558]: 2026-01-20 03:07:25.738 [INFO][4759] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3abfb1aaadb801134f9d65f2bbd80c01625d6bbb117a5980cf86f66ce1a09707" host="localhost" Jan 20 03:07:25.798373 containerd[1558]: 2026-01-20 03:07:25.747 [INFO][4759] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.3abfb1aaadb801134f9d65f2bbd80c01625d6bbb117a5980cf86f66ce1a09707" host="localhost" Jan 20 03:07:25.798373 containerd[1558]: 2026-01-20 03:07:25.747 [INFO][4759] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.3abfb1aaadb801134f9d65f2bbd80c01625d6bbb117a5980cf86f66ce1a09707" host="localhost" Jan 20 03:07:25.798373 containerd[1558]: 2026-01-20 03:07:25.747 [INFO][4759] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 03:07:25.798373 containerd[1558]: 2026-01-20 03:07:25.747 [INFO][4759] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="3abfb1aaadb801134f9d65f2bbd80c01625d6bbb117a5980cf86f66ce1a09707" HandleID="k8s-pod-network.3abfb1aaadb801134f9d65f2bbd80c01625d6bbb117a5980cf86f66ce1a09707" Workload="localhost-k8s-calico--apiserver--6d65d8c755--qpx77-eth0" Jan 20 03:07:25.799739 containerd[1558]: 2026-01-20 03:07:25.755 [INFO][4722] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3abfb1aaadb801134f9d65f2bbd80c01625d6bbb117a5980cf86f66ce1a09707" Namespace="calico-apiserver" Pod="calico-apiserver-6d65d8c755-qpx77" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d65d8c755--qpx77-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d65d8c755--qpx77-eth0", GenerateName:"calico-apiserver-6d65d8c755-", Namespace:"calico-apiserver", SelfLink:"", UID:"25480ded-99d4-43f9-a73a-0b4e4143afb7", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 7, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d65d8c755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d65d8c755-qpx77", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic509eb81d28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:07:25.799739 containerd[1558]: 2026-01-20 03:07:25.755 [INFO][4722] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="3abfb1aaadb801134f9d65f2bbd80c01625d6bbb117a5980cf86f66ce1a09707" Namespace="calico-apiserver" Pod="calico-apiserver-6d65d8c755-qpx77" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d65d8c755--qpx77-eth0" Jan 20 03:07:25.799739 containerd[1558]: 2026-01-20 03:07:25.755 [INFO][4722] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic509eb81d28 ContainerID="3abfb1aaadb801134f9d65f2bbd80c01625d6bbb117a5980cf86f66ce1a09707" Namespace="calico-apiserver" Pod="calico-apiserver-6d65d8c755-qpx77" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d65d8c755--qpx77-eth0" Jan 20 03:07:25.799739 containerd[1558]: 2026-01-20 03:07:25.770 
[INFO][4722] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3abfb1aaadb801134f9d65f2bbd80c01625d6bbb117a5980cf86f66ce1a09707" Namespace="calico-apiserver" Pod="calico-apiserver-6d65d8c755-qpx77" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d65d8c755--qpx77-eth0" Jan 20 03:07:25.799739 containerd[1558]: 2026-01-20 03:07:25.770 [INFO][4722] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3abfb1aaadb801134f9d65f2bbd80c01625d6bbb117a5980cf86f66ce1a09707" Namespace="calico-apiserver" Pod="calico-apiserver-6d65d8c755-qpx77" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d65d8c755--qpx77-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d65d8c755--qpx77-eth0", GenerateName:"calico-apiserver-6d65d8c755-", Namespace:"calico-apiserver", SelfLink:"", UID:"25480ded-99d4-43f9-a73a-0b4e4143afb7", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 7, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d65d8c755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3abfb1aaadb801134f9d65f2bbd80c01625d6bbb117a5980cf86f66ce1a09707", Pod:"calico-apiserver-6d65d8c755-qpx77", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic509eb81d28", MAC:"9a:52:3f:e2:ee:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:07:25.799739 containerd[1558]: 2026-01-20 03:07:25.786 [INFO][4722] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3abfb1aaadb801134f9d65f2bbd80c01625d6bbb117a5980cf86f66ce1a09707" Namespace="calico-apiserver" Pod="calico-apiserver-6d65d8c755-qpx77" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d65d8c755--qpx77-eth0" Jan 20 03:07:25.830399 containerd[1558]: time="2026-01-20T03:07:25.828606053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8485fc5f84-9b6gt,Uid:a91d0df2-2cb2-4cc3-b13c-61dfadc11b46,Namespace:calico-system,Attempt:0,} returns sandbox id \"970de14924b2de3c8216a9b1a10869954e6e29e03280bf542466039a9b4966f0\"" Jan 20 03:07:25.832001 containerd[1558]: time="2026-01-20T03:07:25.831918808Z" level=info msg="connecting to shim 3abfb1aaadb801134f9d65f2bbd80c01625d6bbb117a5980cf86f66ce1a09707" address="unix:///run/containerd/s/765e1f87bf0d6b4d8b91236a3a4cf55ea6d079d63589dc8aa9846b466e9eac5c" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:07:25.834567 containerd[1558]: time="2026-01-20T03:07:25.834088177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 03:07:25.872294 systemd[1]: Started 
cri-containerd-3abfb1aaadb801134f9d65f2bbd80c01625d6bbb117a5980cf86f66ce1a09707.scope - libcontainer container 3abfb1aaadb801134f9d65f2bbd80c01625d6bbb117a5980cf86f66ce1a09707. Jan 20 03:07:25.889859 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 03:07:25.897839 containerd[1558]: time="2026-01-20T03:07:25.897721337Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:07:25.901081 containerd[1558]: time="2026-01-20T03:07:25.900948050Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 03:07:25.901081 containerd[1558]: time="2026-01-20T03:07:25.901048268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 03:07:25.901600 kubelet[2701]: E0120 03:07:25.901417 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 03:07:25.901600 kubelet[2701]: E0120 03:07:25.901455 2701 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 03:07:25.901600 kubelet[2701]: E0120 03:07:25.901519 2701 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-8485fc5f84-9b6gt_calico-system(a91d0df2-2cb2-4cc3-b13c-61dfadc11b46): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 03:07:25.901600 kubelet[2701]: E0120 03:07:25.901570 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8485fc5f84-9b6gt" podUID="a91d0df2-2cb2-4cc3-b13c-61dfadc11b46" Jan 20 03:07:25.933310 containerd[1558]: time="2026-01-20T03:07:25.933262745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d65d8c755-qpx77,Uid:25480ded-99d4-43f9-a73a-0b4e4143afb7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3abfb1aaadb801134f9d65f2bbd80c01625d6bbb117a5980cf86f66ce1a09707\"" Jan 20 03:07:25.935355 containerd[1558]: time="2026-01-20T03:07:25.935273068Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 03:07:25.999514 containerd[1558]: time="2026-01-20T03:07:25.999386670Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:07:26.001068 containerd[1558]: time="2026-01-20T03:07:26.000944484Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 03:07:26.001068 containerd[1558]: time="2026-01-20T03:07:26.001049147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 03:07:26.001407 kubelet[2701]: E0120 03:07:26.001255 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:07:26.001407 kubelet[2701]: E0120 03:07:26.001334 2701 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:07:26.001584 kubelet[2701]: E0120 03:07:26.001507 2701 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6d65d8c755-qpx77_calico-apiserver(25480ded-99d4-43f9-a73a-0b4e4143afb7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 03:07:26.001584 kubelet[2701]: E0120 03:07:26.001553 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d65d8c755-qpx77" podUID="25480ded-99d4-43f9-a73a-0b4e4143afb7" Jan 20 03:07:26.233104 systemd-networkd[1460]: calidc2458a606e: Gained IPv6LL Jan 20 03:07:26.513525 kubelet[2701]: E0120 03:07:26.513319 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:26.514008 containerd[1558]: time="2026-01-20T03:07:26.513961248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mb2dm,Uid:44bc219a-0768-4e39-9391-3f5a2822a096,Namespace:kube-system,Attempt:0,}" Jan 20 03:07:26.671407 systemd-networkd[1460]: cali7b0ebd595af: Link UP Jan 20 03:07:26.673248 systemd-networkd[1460]: cali7b0ebd595af: Gained carrier Jan 20 03:07:26.691509 containerd[1558]: 2026-01-20 03:07:26.550 [INFO][4882] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--mb2dm-eth0 coredns-66bc5c9577- kube-system 44bc219a-0768-4e39-9391-3f5a2822a096 846 0 2026-01-20 03:06:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-mb2dm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7b0ebd595af [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c" Namespace="kube-system" Pod="coredns-66bc5c9577-mb2dm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mb2dm-" Jan 20 03:07:26.691509 containerd[1558]: 2026-01-20 03:07:26.551 [INFO][4882] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c" Namespace="kube-system" Pod="coredns-66bc5c9577-mb2dm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mb2dm-eth0" Jan 20 03:07:26.691509 containerd[1558]: 2026-01-20 03:07:26.581 [INFO][4896] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c" HandleID="k8s-pod-network.2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c" Workload="localhost-k8s-coredns--66bc5c9577--mb2dm-eth0" Jan 20 03:07:26.691509 containerd[1558]: 2026-01-20 03:07:26.581 [INFO][4896] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c" HandleID="k8s-pod-network.2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c" Workload="localhost-k8s-coredns--66bc5c9577--mb2dm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000124eb0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-mb2dm", "timestamp":"2026-01-20 03:07:26.581700652 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 03:07:26.691509 containerd[1558]: 2026-01-20 03:07:26.582 [INFO][4896] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 03:07:26.691509 containerd[1558]: 2026-01-20 03:07:26.582 [INFO][4896] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 03:07:26.691509 containerd[1558]: 2026-01-20 03:07:26.582 [INFO][4896] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 03:07:26.691509 containerd[1558]: 2026-01-20 03:07:26.602 [INFO][4896] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c" host="localhost" Jan 20 03:07:26.691509 containerd[1558]: 2026-01-20 03:07:26.612 [INFO][4896] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 03:07:26.691509 containerd[1558]: 2026-01-20 03:07:26.630 [INFO][4896] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 03:07:26.691509 containerd[1558]: 2026-01-20 03:07:26.633 [INFO][4896] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 03:07:26.691509 containerd[1558]: 2026-01-20 03:07:26.638 [INFO][4896] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 03:07:26.691509 containerd[1558]: 2026-01-20 03:07:26.639 [INFO][4896] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c" host="localhost" Jan 20 03:07:26.691509 containerd[1558]: 2026-01-20 03:07:26.645 [INFO][4896] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c Jan 20 03:07:26.691509 containerd[1558]: 2026-01-20 03:07:26.652 [INFO][4896] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c" host="localhost" Jan 20 03:07:26.691509 containerd[1558]: 2026-01-20 03:07:26.665 [INFO][4896] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c" host="localhost" Jan 20 03:07:26.691509 containerd[1558]: 2026-01-20 03:07:26.665 [INFO][4896] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c" host="localhost" Jan 20 03:07:26.691509 containerd[1558]: 2026-01-20 03:07:26.665 [INFO][4896] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
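The IPAM trace above shows Calico's full assignment walk: take the host-wide lock, look up the host's block affinity, load block 192.168.88.128/26, claim the next free address (192.168.88.137), and release the lock. A toy Go allocator sketching just the claim step, under the simplifying assumption that block state lives in memory rather than in Calico's datastore (this is illustrative, not Calico code):

package main

import (
	"fmt"
	"net"
)

// block is a toy stand-in for a Calico allocation block.
type block struct {
	cidr *net.IPNet
	used map[string]bool // addresses already handed out
}

// next returns ip+1 without mutating its argument.
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

// claim scans the block in order and takes the first free address.
func (b *block) claim() (net.IP, bool) {
	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = next(ip) {
		if !b.used[ip.String()] {
			b.used[ip.String()] = true
			return ip, true
		}
	}
	return nil, false
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	b := &block{cidr: cidr, used: map[string]bool{}}
	// Mark .128-.136 as taken, matching the state implied by the log.
	for ip := cidr.IP; ip.String() != "192.168.88.137"; ip = next(ip) {
		b.used[ip.String()] = true
	}
	ip, _ := b.claim()
	fmt.Println(ip) // 192.168.88.137, as claimed in the log
}

The real allocator runs the same in-order scan against a persisted allocation bitmap, which is why the log brackets the walk with lock acquire/release messages.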
Jan 20 03:07:26.691509 containerd[1558]: 2026-01-20 03:07:26.665 [INFO][4896] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c" HandleID="k8s-pod-network.2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c" Workload="localhost-k8s-coredns--66bc5c9577--mb2dm-eth0" Jan 20 03:07:26.692450 containerd[1558]: 2026-01-20 03:07:26.667 [INFO][4882] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c" Namespace="kube-system" Pod="coredns-66bc5c9577-mb2dm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mb2dm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--mb2dm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"44bc219a-0768-4e39-9391-3f5a2822a096", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 6, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-mb2dm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b0ebd595af", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:07:26.692450 containerd[1558]: 2026-01-20 03:07:26.668 [INFO][4882] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c" Namespace="kube-system" Pod="coredns-66bc5c9577-mb2dm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mb2dm-eth0" Jan 20 03:07:26.692450 containerd[1558]: 2026-01-20 03:07:26.668 [INFO][4882] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b0ebd595af ContainerID="2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c" Namespace="kube-system" Pod="coredns-66bc5c9577-mb2dm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mb2dm-eth0" Jan 20 03:07:26.692450 containerd[1558]: 2026-01-20 03:07:26.672 
[INFO][4882] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c" Namespace="kube-system" Pod="coredns-66bc5c9577-mb2dm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mb2dm-eth0" Jan 20 03:07:26.692450 containerd[1558]: 2026-01-20 03:07:26.673 [INFO][4882] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c" Namespace="kube-system" Pod="coredns-66bc5c9577-mb2dm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mb2dm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--mb2dm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"44bc219a-0768-4e39-9391-3f5a2822a096", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 6, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c", Pod:"coredns-66bc5c9577-mb2dm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b0ebd595af", MAC:"46:36:70:4b:8e:11", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:07:26.692450 containerd[1558]: 2026-01-20 03:07:26.687 [INFO][4882] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c" Namespace="kube-system" Pod="coredns-66bc5c9577-mb2dm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mb2dm-eth0" Jan 20 03:07:26.721967 containerd[1558]: time="2026-01-20T03:07:26.721541647Z" level=info msg="connecting to shim 2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c" address="unix:///run/containerd/s/6e5db93d4571e088d2ca5444f52e5f831337fc70a8e12c891149e747ffdfd56a" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:07:26.746849 systemd-networkd[1460]: cali8ccaa5fd021: Gained IPv6LL Jan 20 03:07:26.754217 systemd[1]: 
Started cri-containerd-2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c.scope - libcontainer container 2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c. Jan 20 03:07:26.772644 kubelet[2701]: E0120 03:07:26.770979 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8d49f8f6-9f5w7" podUID="9eb8a6d4-b655-41d5-bb6a-48e492c0056f" Jan 20 03:07:26.772644 kubelet[2701]: E0120 03:07:26.771513 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8485fc5f84-9b6gt" podUID="a91d0df2-2cb2-4cc3-b13c-61dfadc11b46" Jan 20 03:07:26.772644 kubelet[2701]: E0120 03:07:26.771702 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:26.775865 kubelet[2701]: E0120 03:07:26.775769 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgb8x" podUID="bb663801-c52b-48d5-9ddb-4fcd0f5aab67" Jan 20 03:07:26.780583 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 03:07:26.798247 kubelet[2701]: E0120 03:07:26.798121 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d65d8c755-qpx77" 
podUID="25480ded-99d4-43f9-a73a-0b4e4143afb7" Jan 20 03:07:26.848230 containerd[1558]: time="2026-01-20T03:07:26.848070926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mb2dm,Uid:44bc219a-0768-4e39-9391-3f5a2822a096,Namespace:kube-system,Attempt:0,} returns sandbox id \"2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c\"" Jan 20 03:07:26.850266 kubelet[2701]: E0120 03:07:26.850028 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:26.855020 containerd[1558]: time="2026-01-20T03:07:26.854991939Z" level=info msg="CreateContainer within sandbox \"2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 03:07:26.886668 containerd[1558]: time="2026-01-20T03:07:26.886611762Z" level=info msg="Container 55fabcf47e13eede99f28c97ba04f33a03e709f4d2c7e161a1d07f467f8a4749: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:07:26.888519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2113025958.mount: Deactivated successfully. Jan 20 03:07:26.895702 containerd[1558]: time="2026-01-20T03:07:26.895634563Z" level=info msg="CreateContainer within sandbox \"2aabade10ecfc2ef5346881a4a18065cd45e290067eee309d84d63a70956539c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"55fabcf47e13eede99f28c97ba04f33a03e709f4d2c7e161a1d07f467f8a4749\"" Jan 20 03:07:26.896579 containerd[1558]: time="2026-01-20T03:07:26.896554489Z" level=info msg="StartContainer for \"55fabcf47e13eede99f28c97ba04f33a03e709f4d2c7e161a1d07f467f8a4749\"" Jan 20 03:07:26.897548 containerd[1558]: time="2026-01-20T03:07:26.897451921Z" level=info msg="connecting to shim 55fabcf47e13eede99f28c97ba04f33a03e709f4d2c7e161a1d07f467f8a4749" address="unix:///run/containerd/s/6e5db93d4571e088d2ca5444f52e5f831337fc70a8e12c891149e747ffdfd56a" protocol=ttrpc version=3 Jan 20 03:07:26.923117 systemd[1]: Started cri-containerd-55fabcf47e13eede99f28c97ba04f33a03e709f4d2c7e161a1d07f467f8a4749.scope - libcontainer container 55fabcf47e13eede99f28c97ba04f33a03e709f4d2c7e161a1d07f467f8a4749. Jan 20 03:07:26.966935 containerd[1558]: time="2026-01-20T03:07:26.966820438Z" level=info msg="StartContainer for \"55fabcf47e13eede99f28c97ba04f33a03e709f4d2c7e161a1d07f467f8a4749\" returns successfully" Jan 20 03:07:27.065087 systemd-networkd[1460]: calic66b6370906: Gained IPv6LL Jan 20 03:07:27.472965 systemd-networkd[1460]: calic509eb81d28: Gained IPv6LL Jan 20 03:07:27.552796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2384974778.mount: Deactivated successfully. 
Jan 20 03:07:27.775053 kubelet[2701]: E0120 03:07:27.774295 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:27.775833 kubelet[2701]: E0120 03:07:27.775711 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8485fc5f84-9b6gt" podUID="a91d0df2-2cb2-4cc3-b13c-61dfadc11b46" Jan 20 03:07:27.775833 kubelet[2701]: E0120 03:07:27.775779 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d65d8c755-qpx77" podUID="25480ded-99d4-43f9-a73a-0b4e4143afb7" Jan 20 03:07:27.815023 kubelet[2701]: I0120 03:07:27.814770 2701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mb2dm" podStartSLOduration=41.814750601 podStartE2EDuration="41.814750601s" podCreationTimestamp="2026-01-20 03:06:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:07:27.814101371 +0000 UTC m=+47.444893924" watchObservedRunningTime="2026-01-20 03:07:27.814750601 +0000 UTC m=+47.445543144" Jan 20 03:07:27.834108 systemd-networkd[1460]: cali7b0ebd595af: Gained IPv6LL Jan 20 03:07:28.779162 kubelet[2701]: E0120 03:07:28.779065 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:29.138285 systemd[1]: Started sshd@7-10.0.0.6:22-10.0.0.1:56070.service - OpenSSH per-connection server daemon (10.0.0.1:56070). Jan 20 03:07:29.232758 sshd[5006]: Accepted publickey for core from 10.0.0.1 port 56070 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:07:29.235371 sshd-session[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:07:29.243646 systemd-logind[1542]: New session 8 of user core. Jan 20 03:07:29.258172 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 03:07:29.416033 sshd[5009]: Connection closed by 10.0.0.1 port 56070 Jan 20 03:07:29.416385 sshd-session[5006]: pam_unix(sshd:session): session closed for user core Jan 20 03:07:29.421305 systemd[1]: sshd@7-10.0.0.6:22-10.0.0.1:56070.service: Deactivated successfully. Jan 20 03:07:29.424448 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 03:07:29.425612 systemd-logind[1542]: Session 8 logged out. Waiting for processes to exit. Jan 20 03:07:29.427925 systemd-logind[1542]: Removed session 8. 
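The recurring dns.go warning is the kubelet coping with more than three nameserver lines: glibc's resolver honors at most three (MAXNS), so the kubelet keeps the first three — here 1.1.1.1 1.0.0.1 8.8.8.8 — and logs the rest as omitted. A sketch of that truncation over /etc/resolv.conf (parsing simplified; the three-server cap mirrors glibc's MAXNS):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; extra entries are ignored

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("dropping %d extra nameserver(s)\n", len(servers)-maxNameservers)
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied:", strings.Join(servers, " "))
}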
Jan 20 03:07:29.782431 kubelet[2701]: E0120 03:07:29.782332 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:32.512479 containerd[1558]: time="2026-01-20T03:07:32.512433576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 03:07:32.584902 containerd[1558]: time="2026-01-20T03:07:32.584788474Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:07:32.586379 containerd[1558]: time="2026-01-20T03:07:32.586213127Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 03:07:32.586450 containerd[1558]: time="2026-01-20T03:07:32.586379769Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 03:07:32.586745 kubelet[2701]: E0120 03:07:32.586648 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 03:07:32.586745 kubelet[2701]: E0120 03:07:32.586733 2701 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 03:07:32.587207 kubelet[2701]: E0120 03:07:32.586832 2701 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6f5d484d94-62ssc_calico-system(8c9999c8-e94a-48c3-bb30-f3c2a906e952): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 03:07:32.588914 containerd[1558]: time="2026-01-20T03:07:32.588815592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 03:07:32.658966 containerd[1558]: time="2026-01-20T03:07:32.658845457Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:07:32.660612 containerd[1558]: time="2026-01-20T03:07:32.660447427Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 03:07:32.660762 containerd[1558]: time="2026-01-20T03:07:32.660593149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 03:07:32.660974 kubelet[2701]: E0120 03:07:32.660846 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 03:07:32.660974 kubelet[2701]: E0120 03:07:32.660957 2701 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 03:07:32.661168 kubelet[2701]: E0120 03:07:32.661074 2701 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6f5d484d94-62ssc_calico-system(8c9999c8-e94a-48c3-bb30-f3c2a906e952): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 03:07:32.661168 kubelet[2701]: E0120 03:07:32.661130 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f5d484d94-62ssc" podUID="8c9999c8-e94a-48c3-bb30-f3c2a906e952" Jan 20 03:07:34.432954 systemd[1]: Started sshd@8-10.0.0.6:22-10.0.0.1:51652.service - OpenSSH per-connection server daemon (10.0.0.1:51652). Jan 20 03:07:34.496601 sshd[5034]: Accepted publickey for core from 10.0.0.1 port 51652 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:07:34.498798 sshd-session[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:07:34.506581 systemd-logind[1542]: New session 9 of user core. Jan 20 03:07:34.512195 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 03:07:34.644959 sshd[5037]: Connection closed by 10.0.0.1 port 51652 Jan 20 03:07:34.645488 sshd-session[5034]: pam_unix(sshd:session): session closed for user core Jan 20 03:07:34.650130 systemd[1]: sshd@8-10.0.0.6:22-10.0.0.1:51652.service: Deactivated successfully. Jan 20 03:07:34.653241 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 03:07:34.656841 systemd-logind[1542]: Session 9 logged out. Waiting for processes to exit. Jan 20 03:07:34.658446 systemd-logind[1542]: Removed session 9. 
Jan 20 03:07:37.512707 containerd[1558]: time="2026-01-20T03:07:37.512665426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 03:07:37.584715 containerd[1558]: time="2026-01-20T03:07:37.584639387Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:07:37.586080 containerd[1558]: time="2026-01-20T03:07:37.586007892Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 03:07:37.586157 containerd[1558]: time="2026-01-20T03:07:37.586036052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 03:07:37.586426 kubelet[2701]: E0120 03:07:37.586329 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 03:07:37.586426 kubelet[2701]: E0120 03:07:37.586407 2701 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 03:07:37.587092 kubelet[2701]: E0120 03:07:37.586472 2701 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-hgb8x_calico-system(bb663801-c52b-48d5-9ddb-4fcd0f5aab67): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 03:07:37.587307 containerd[1558]: time="2026-01-20T03:07:37.587256504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 03:07:37.653035 containerd[1558]: time="2026-01-20T03:07:37.652842107Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:07:37.654497 containerd[1558]: time="2026-01-20T03:07:37.654397167Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 03:07:37.654601 containerd[1558]: time="2026-01-20T03:07:37.654493523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 03:07:37.654755 kubelet[2701]: E0120 03:07:37.654671 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 
03:07:37.654755 kubelet[2701]: E0120 03:07:37.654746 2701 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 03:07:37.654870 kubelet[2701]: E0120 03:07:37.654827 2701 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-hgb8x_calico-system(bb663801-c52b-48d5-9ddb-4fcd0f5aab67): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 03:07:37.655029 kubelet[2701]: E0120 03:07:37.654866 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgb8x" podUID="bb663801-c52b-48d5-9ddb-4fcd0f5aab67" Jan 20 03:07:38.512584 containerd[1558]: time="2026-01-20T03:07:38.512484207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 03:07:38.570923 containerd[1558]: time="2026-01-20T03:07:38.570818728Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:07:38.572394 containerd[1558]: time="2026-01-20T03:07:38.572337824Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 03:07:38.572501 containerd[1558]: time="2026-01-20T03:07:38.572431730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 03:07:38.572799 kubelet[2701]: E0120 03:07:38.572736 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:07:38.572854 kubelet[2701]: E0120 03:07:38.572803 2701 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:07:38.573228 kubelet[2701]: E0120 03:07:38.573117 2701 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6c8d49f8f6-869f9_calico-apiserver(68dce8b3-c3fe-40f2-a705-b41b9b2da4a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 03:07:38.573228 kubelet[2701]: E0120 03:07:38.573192 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8d49f8f6-869f9" podUID="68dce8b3-c3fe-40f2-a705-b41b9b2da4a7" Jan 20 03:07:38.573730 containerd[1558]: time="2026-01-20T03:07:38.573596775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 03:07:38.651512 containerd[1558]: time="2026-01-20T03:07:38.651329011Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:07:38.653065 containerd[1558]: time="2026-01-20T03:07:38.652989344Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 03:07:38.653200 containerd[1558]: time="2026-01-20T03:07:38.653024502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 03:07:38.653477 kubelet[2701]: E0120 03:07:38.653343 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 03:07:38.653954 kubelet[2701]: E0120 03:07:38.653488 2701 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 03:07:38.653954 kubelet[2701]: E0120 03:07:38.653597 2701 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-8qts6_calico-system(d400130d-fd02-4b87-8160-4ba74bd8b376): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 03:07:38.653954 kubelet[2701]: E0120 03:07:38.653641 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8qts6" podUID="d400130d-fd02-4b87-8160-4ba74bd8b376" Jan 20 03:07:39.512073 containerd[1558]: time="2026-01-20T03:07:39.512014902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 03:07:39.574174 containerd[1558]: time="2026-01-20T03:07:39.574069272Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:07:39.575699 containerd[1558]: time="2026-01-20T03:07:39.575609959Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 03:07:39.575772 containerd[1558]: time="2026-01-20T03:07:39.575712939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 03:07:39.576049 kubelet[2701]: E0120 03:07:39.575965 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:07:39.576099 kubelet[2701]: E0120 03:07:39.576045 2701 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:07:39.576241 kubelet[2701]: E0120 03:07:39.576180 2701 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6c8d49f8f6-9f5w7_calico-apiserver(9eb8a6d4-b655-41d5-bb6a-48e492c0056f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 03:07:39.576273 kubelet[2701]: E0120 03:07:39.576243 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8d49f8f6-9f5w7" podUID="9eb8a6d4-b655-41d5-bb6a-48e492c0056f" Jan 20 03:07:39.661943 systemd[1]: Started sshd@9-10.0.0.6:22-10.0.0.1:51662.service - OpenSSH per-connection server daemon (10.0.0.1:51662). 
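Once a pull fails with ErrImagePull, the kubelet parks the container behind an exponential back-off, and later sync attempts surface as the ImagePullBackOff entries seen throughout this capture. Assuming the kubelet's stock parameters — a 10s initial delay doubling to a 5-minute cap; treat both numbers as assumptions, not confirmed from this log — the retry schedule looks like:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet defaults: 10s initial image-pull back-off,
	// doubled per failure, capped at 5m (both values are assumptions).
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("pull attempt %d failed; next retry in %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

This is why the failures for a given pod space out over time rather than repeating every sync loop.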
Jan 20 03:07:39.726444 sshd[5053]: Accepted publickey for core from 10.0.0.1 port 51662 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:07:39.728477 sshd-session[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:07:39.735295 systemd-logind[1542]: New session 10 of user core. Jan 20 03:07:39.744089 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 03:07:39.893357 sshd[5056]: Connection closed by 10.0.0.1 port 51662 Jan 20 03:07:39.894947 sshd-session[5053]: pam_unix(sshd:session): session closed for user core Jan 20 03:07:39.904146 systemd[1]: sshd@9-10.0.0.6:22-10.0.0.1:51662.service: Deactivated successfully. Jan 20 03:07:39.906347 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 03:07:39.907853 systemd-logind[1542]: Session 10 logged out. Waiting for processes to exit. Jan 20 03:07:39.912326 systemd[1]: Started sshd@10-10.0.0.6:22-10.0.0.1:51668.service - OpenSSH per-connection server daemon (10.0.0.1:51668). Jan 20 03:07:39.914767 systemd-logind[1542]: Removed session 10. Jan 20 03:07:39.976089 sshd[5076]: Accepted publickey for core from 10.0.0.1 port 51668 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:07:39.978749 sshd-session[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:07:39.986509 systemd-logind[1542]: New session 11 of user core. Jan 20 03:07:40.001230 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 20 03:07:40.198166 sshd[5082]: Connection closed by 10.0.0.1 port 51668 Jan 20 03:07:40.200552 sshd-session[5076]: pam_unix(sshd:session): session closed for user core Jan 20 03:07:40.215934 systemd[1]: sshd@10-10.0.0.6:22-10.0.0.1:51668.service: Deactivated successfully. Jan 20 03:07:40.219804 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 03:07:40.224041 systemd-logind[1542]: Session 11 logged out. Waiting for processes to exit. Jan 20 03:07:40.233277 systemd[1]: Started sshd@11-10.0.0.6:22-10.0.0.1:51680.service - OpenSSH per-connection server daemon (10.0.0.1:51680). Jan 20 03:07:40.235966 systemd-logind[1542]: Removed session 11. Jan 20 03:07:40.296262 sshd[5095]: Accepted publickey for core from 10.0.0.1 port 51680 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:07:40.298568 sshd-session[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:07:40.304168 systemd-logind[1542]: New session 12 of user core. Jan 20 03:07:40.312060 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 03:07:40.461248 sshd[5098]: Connection closed by 10.0.0.1 port 51680 Jan 20 03:07:40.461633 sshd-session[5095]: pam_unix(sshd:session): session closed for user core Jan 20 03:07:40.466827 systemd[1]: sshd@11-10.0.0.6:22-10.0.0.1:51680.service: Deactivated successfully. Jan 20 03:07:40.470468 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 03:07:40.473191 systemd-logind[1542]: Session 12 logged out. Waiting for processes to exit. Jan 20 03:07:40.474697 systemd-logind[1542]: Removed session 12. 
Jan 20 03:07:40.516389 containerd[1558]: time="2026-01-20T03:07:40.516020163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 03:07:40.573786 containerd[1558]: time="2026-01-20T03:07:40.573582823Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:07:40.575605 containerd[1558]: time="2026-01-20T03:07:40.575446325Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 03:07:40.575605 containerd[1558]: time="2026-01-20T03:07:40.575455881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 03:07:40.576130 kubelet[2701]: E0120 03:07:40.575634 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:07:40.576130 kubelet[2701]: E0120 03:07:40.575677 2701 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:07:40.576130 kubelet[2701]: E0120 03:07:40.575833 2701 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6d65d8c755-qpx77_calico-apiserver(25480ded-99d4-43f9-a73a-0b4e4143afb7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 03:07:40.576130 kubelet[2701]: E0120 03:07:40.575867 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d65d8c755-qpx77" podUID="25480ded-99d4-43f9-a73a-0b4e4143afb7" Jan 20 03:07:40.578080 containerd[1558]: time="2026-01-20T03:07:40.576222890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 03:07:40.642679 containerd[1558]: time="2026-01-20T03:07:40.642543250Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:07:40.644036 containerd[1558]: time="2026-01-20T03:07:40.643941870Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 03:07:40.644036 
containerd[1558]: time="2026-01-20T03:07:40.643982392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 03:07:40.644267 kubelet[2701]: E0120 03:07:40.644211 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 03:07:40.644332 kubelet[2701]: E0120 03:07:40.644267 2701 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 03:07:40.644380 kubelet[2701]: E0120 03:07:40.644361 2701 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-8485fc5f84-9b6gt_calico-system(a91d0df2-2cb2-4cc3-b13c-61dfadc11b46): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 03:07:40.644541 kubelet[2701]: E0120 03:07:40.644393 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8485fc5f84-9b6gt" podUID="a91d0df2-2cb2-4cc3-b13c-61dfadc11b46" Jan 20 03:07:44.513025 kubelet[2701]: E0120 03:07:44.512835 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f5d484d94-62ssc" podUID="8c9999c8-e94a-48c3-bb30-f3c2a906e952" Jan 20 03:07:45.473767 systemd[1]: Started sshd@12-10.0.0.6:22-10.0.0.1:44502.service - OpenSSH per-connection server daemon (10.0.0.1:44502). 
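The same handful of pods cycle through ErrImagePull and ImagePullBackOff for the rest of the capture. A short client-go sketch that lists every container stuck in either state cluster-wide (in-cluster config is assumed here; load a kubeconfig instead when running from outside the cluster):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes running inside the cluster
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			w := st.State.Waiting
			if w != nil && (w.Reason == "ImagePullBackOff" || w.Reason == "ErrImagePull") {
				fmt.Printf("%s/%s container=%s reason=%s\n",
					p.Namespace, p.Name, st.Name, w.Reason)
			}
		}
	}
}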
Jan 20 03:07:45.525703 sshd[5114]: Accepted publickey for core from 10.0.0.1 port 44502 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:07:45.527378 sshd-session[5114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:07:45.532618 systemd-logind[1542]: New session 13 of user core. Jan 20 03:07:45.542092 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 20 03:07:45.675436 sshd[5117]: Connection closed by 10.0.0.1 port 44502 Jan 20 03:07:45.675978 sshd-session[5114]: pam_unix(sshd:session): session closed for user core Jan 20 03:07:45.693123 systemd[1]: sshd@12-10.0.0.6:22-10.0.0.1:44502.service: Deactivated successfully. Jan 20 03:07:45.695387 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 03:07:45.696357 systemd-logind[1542]: Session 13 logged out. Waiting for processes to exit. Jan 20 03:07:45.699467 systemd[1]: Started sshd@13-10.0.0.6:22-10.0.0.1:44508.service - OpenSSH per-connection server daemon (10.0.0.1:44508). Jan 20 03:07:45.700221 systemd-logind[1542]: Removed session 13. Jan 20 03:07:45.753456 sshd[5130]: Accepted publickey for core from 10.0.0.1 port 44508 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:07:45.755527 sshd-session[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:07:45.761748 systemd-logind[1542]: New session 14 of user core. Jan 20 03:07:45.775192 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 03:07:46.052436 sshd[5134]: Connection closed by 10.0.0.1 port 44508 Jan 20 03:07:46.053269 sshd-session[5130]: pam_unix(sshd:session): session closed for user core Jan 20 03:07:46.065327 systemd[1]: sshd@13-10.0.0.6:22-10.0.0.1:44508.service: Deactivated successfully. Jan 20 03:07:46.068442 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 03:07:46.069795 systemd-logind[1542]: Session 14 logged out. Waiting for processes to exit. Jan 20 03:07:46.073800 systemd[1]: Started sshd@14-10.0.0.6:22-10.0.0.1:44516.service - OpenSSH per-connection server daemon (10.0.0.1:44516). Jan 20 03:07:46.075418 systemd-logind[1542]: Removed session 14. Jan 20 03:07:46.154301 sshd[5145]: Accepted publickey for core from 10.0.0.1 port 44516 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:07:46.156297 sshd-session[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:07:46.163983 systemd-logind[1542]: New session 15 of user core. Jan 20 03:07:46.170231 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 03:07:46.760235 sshd[5148]: Connection closed by 10.0.0.1 port 44516 Jan 20 03:07:46.762165 sshd-session[5145]: pam_unix(sshd:session): session closed for user core Jan 20 03:07:46.770577 systemd[1]: sshd@14-10.0.0.6:22-10.0.0.1:44516.service: Deactivated successfully. Jan 20 03:07:46.774115 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 03:07:46.778104 systemd-logind[1542]: Session 15 logged out. Waiting for processes to exit. Jan 20 03:07:46.782731 systemd[1]: Started sshd@15-10.0.0.6:22-10.0.0.1:44518.service - OpenSSH per-connection server daemon (10.0.0.1:44518). Jan 20 03:07:46.785010 systemd-logind[1542]: Removed session 15. 
Jan 20 03:07:46.835850 sshd[5166]: Accepted publickey for core from 10.0.0.1 port 44518 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ
Jan 20 03:07:46.837622 sshd-session[5166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:07:46.843817 systemd-logind[1542]: New session 16 of user core.
Jan 20 03:07:46.851083 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 20 03:07:47.097988 sshd[5169]: Connection closed by 10.0.0.1 port 44518
Jan 20 03:07:47.098193 sshd-session[5166]: pam_unix(sshd:session): session closed for user core
Jan 20 03:07:47.108776 systemd[1]: sshd@15-10.0.0.6:22-10.0.0.1:44518.service: Deactivated successfully.
Jan 20 03:07:47.112310 systemd[1]: session-16.scope: Deactivated successfully.
Jan 20 03:07:47.114132 systemd-logind[1542]: Session 16 logged out. Waiting for processes to exit.
Jan 20 03:07:47.117494 systemd[1]: Started sshd@16-10.0.0.6:22-10.0.0.1:44534.service - OpenSSH per-connection server daemon (10.0.0.1:44534).
Jan 20 03:07:47.119448 systemd-logind[1542]: Removed session 16.
Jan 20 03:07:47.177190 sshd[5181]: Accepted publickey for core from 10.0.0.1 port 44534 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ
Jan 20 03:07:47.179251 sshd-session[5181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:07:47.185981 systemd-logind[1542]: New session 17 of user core.
Jan 20 03:07:47.191127 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 20 03:07:47.336956 sshd[5184]: Connection closed by 10.0.0.1 port 44534
Jan 20 03:07:47.337314 sshd-session[5181]: pam_unix(sshd:session): session closed for user core
Jan 20 03:07:47.341222 systemd[1]: sshd@16-10.0.0.6:22-10.0.0.1:44534.service: Deactivated successfully.
Jan 20 03:07:47.343377 systemd[1]: session-17.scope: Deactivated successfully.
Jan 20 03:07:47.345554 systemd-logind[1542]: Session 17 logged out. Waiting for processes to exit.
Jan 20 03:07:47.347869 systemd-logind[1542]: Removed session 17.
Jan 20 03:07:48.819249 kubelet[2701]: E0120 03:07:48.819148 2701 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 03:07:49.515766 kubelet[2701]: E0120 03:07:49.515370 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgb8x" podUID="bb663801-c52b-48d5-9ddb-4fcd0f5aab67"
Jan 20 03:07:52.356174 systemd[1]: Started sshd@17-10.0.0.6:22-10.0.0.1:44550.service - OpenSSH per-connection server daemon (10.0.0.1:44550).
Jan 20 03:07:52.425757 sshd[5228]: Accepted publickey for core from 10.0.0.1 port 44550 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ
Jan 20 03:07:52.427701 sshd-session[5228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:07:52.433622 systemd-logind[1542]: New session 18 of user core.
Jan 20 03:07:52.443129 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 20 03:07:52.511998 kubelet[2701]: E0120 03:07:52.511622 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8d49f8f6-9f5w7" podUID="9eb8a6d4-b655-41d5-bb6a-48e492c0056f"
Jan 20 03:07:52.512551 kubelet[2701]: E0120 03:07:52.512505 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8qts6" podUID="d400130d-fd02-4b87-8160-4ba74bd8b376"
Jan 20 03:07:52.581580 sshd[5231]: Connection closed by 10.0.0.1 port 44550
Jan 20 03:07:52.581992 sshd-session[5228]: pam_unix(sshd:session): session closed for user core
Jan 20 03:07:52.586149 systemd[1]: sshd@17-10.0.0.6:22-10.0.0.1:44550.service: Deactivated successfully.
Jan 20 03:07:52.588621 systemd[1]: session-18.scope: Deactivated successfully.
Jan 20 03:07:52.590083 systemd-logind[1542]: Session 18 logged out. Waiting for processes to exit.
Jan 20 03:07:52.591372 systemd-logind[1542]: Removed session 18.
Jan 20 03:07:53.511390 kubelet[2701]: E0120 03:07:53.511297 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8485fc5f84-9b6gt" podUID="a91d0df2-2cb2-4cc3-b13c-61dfadc11b46"
Jan 20 03:07:53.511390 kubelet[2701]: E0120 03:07:53.511297 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8d49f8f6-869f9" podUID="68dce8b3-c3fe-40f2-a705-b41b9b2da4a7"
Jan 20 03:07:54.513334 kubelet[2701]: E0120 03:07:54.513199 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d65d8c755-qpx77" podUID="25480ded-99d4-43f9-a73a-0b4e4143afb7"
Jan 20 03:07:56.512721 containerd[1558]: time="2026-01-20T03:07:56.512655384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 20 03:07:56.574114 containerd[1558]: time="2026-01-20T03:07:56.574051484Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 03:07:56.577028 containerd[1558]: time="2026-01-20T03:07:56.575805570Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 20 03:07:56.577028 containerd[1558]: time="2026-01-20T03:07:56.575862712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 20 03:07:56.577195 kubelet[2701]: E0120 03:07:56.576221 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 20 03:07:56.577195 kubelet[2701]: E0120 03:07:56.576296 2701 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 20 03:07:56.577195 kubelet[2701]: E0120 03:07:56.576450 2701 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6f5d484d94-62ssc_calico-system(8c9999c8-e94a-48c3-bb30-f3c2a906e952): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 20 03:07:56.577981 containerd[1558]: time="2026-01-20T03:07:56.577833911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 20 03:07:56.686779 containerd[1558]: time="2026-01-20T03:07:56.686525770Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 03:07:56.689856 containerd[1558]: time="2026-01-20T03:07:56.688828043Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 20 03:07:56.689856 containerd[1558]: time="2026-01-20T03:07:56.689026798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 20 03:07:56.689984 kubelet[2701]: E0120 03:07:56.689292 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 20 03:07:56.689984 kubelet[2701]: E0120 03:07:56.689348 2701 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 20 03:07:56.689984 kubelet[2701]: E0120 03:07:56.689473 2701 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6f5d484d94-62ssc_calico-system(8c9999c8-e94a-48c3-bb30-f3c2a906e952): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 20 03:07:56.690068 kubelet[2701]: E0120 03:07:56.689525 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f5d484d94-62ssc" podUID="8c9999c8-e94a-48c3-bb30-f3c2a906e952"
Jan 20 03:07:57.601769 systemd[1]: Started sshd@18-10.0.0.6:22-10.0.0.1:53878.service - OpenSSH per-connection server daemon (10.0.0.1:53878).
Jan 20 03:07:57.683036 sshd[5247]: Accepted publickey for core from 10.0.0.1 port 53878 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ
Jan 20 03:07:57.685281 sshd-session[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:07:57.692189 systemd-logind[1542]: New session 19 of user core.
Jan 20 03:07:57.702149 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 20 03:07:57.854679 sshd[5250]: Connection closed by 10.0.0.1 port 53878
Jan 20 03:07:57.857561 sshd-session[5247]: pam_unix(sshd:session): session closed for user core
Jan 20 03:07:57.865471 systemd[1]: sshd@18-10.0.0.6:22-10.0.0.1:53878.service: Deactivated successfully.
Jan 20 03:07:57.873499 systemd[1]: session-19.scope: Deactivated successfully.
Jan 20 03:07:57.875853 systemd-logind[1542]: Session 19 logged out. Waiting for processes to exit.
Jan 20 03:07:57.878058 systemd-logind[1542]: Removed session 19.
Jan 20 03:08:00.513940 containerd[1558]: time="2026-01-20T03:08:00.513851231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 20 03:08:00.582962 containerd[1558]: time="2026-01-20T03:08:00.582231294Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 03:08:00.587350 containerd[1558]: time="2026-01-20T03:08:00.587235817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 20 03:08:00.587449 containerd[1558]: time="2026-01-20T03:08:00.587372539Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 20 03:08:00.588153 kubelet[2701]: E0120 03:08:00.587990 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 20 03:08:00.589063 kubelet[2701]: E0120 03:08:00.588477 2701 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 20 03:08:00.589674 kubelet[2701]: E0120 03:08:00.589587 2701 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-hgb8x_calico-system(bb663801-c52b-48d5-9ddb-4fcd0f5aab67): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 20 03:08:00.591275 containerd[1558]: time="2026-01-20T03:08:00.591074755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 20 03:08:00.656191 containerd[1558]: time="2026-01-20T03:08:00.655828701Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 03:08:00.662941 containerd[1558]: time="2026-01-20T03:08:00.662753419Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 20 03:08:00.662941 containerd[1558]: time="2026-01-20T03:08:00.662859792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 20 03:08:00.663479 kubelet[2701]: E0120 03:08:00.663211 2701 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 20 03:08:00.663479 kubelet[2701]: E0120 03:08:00.663254 2701 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 20 03:08:00.663479 kubelet[2701]: E0120 03:08:00.663376 2701 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-hgb8x_calico-system(bb663801-c52b-48d5-9ddb-4fcd0f5aab67): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 20 03:08:00.663698 kubelet[2701]: E0120 03:08:00.663434 2701 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgb8x" podUID="bb663801-c52b-48d5-9ddb-4fcd0f5aab67"
Jan 20 03:08:02.874405 systemd[1]: Started sshd@19-10.0.0.6:22-10.0.0.1:53904.service - OpenSSH per-connection server daemon (10.0.0.1:53904).
Jan 20 03:08:02.958293 sshd[5270]: Accepted publickey for core from 10.0.0.1 port 53904 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ
Jan 20 03:08:02.961968 sshd-session[5270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:08:02.971483 systemd-logind[1542]: New session 20 of user core.
Jan 20 03:08:02.975469 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 20 03:08:03.171770 sshd[5273]: Connection closed by 10.0.0.1 port 53904
Jan 20 03:08:03.172018 sshd-session[5270]: pam_unix(sshd:session): session closed for user core
Jan 20 03:08:03.182147 systemd[1]: sshd@19-10.0.0.6:22-10.0.0.1:53904.service: Deactivated successfully.
Jan 20 03:08:03.190457 systemd[1]: session-20.scope: Deactivated successfully.
Jan 20 03:08:03.192164 systemd-logind[1542]: Session 20 logged out. Waiting for processes to exit.
Jan 20 03:08:03.194284 systemd-logind[1542]: Removed session 20.