Oct 27 16:20:14.381699 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Oct 27 14:23:46 -00 2025 Oct 27 16:20:14.381736 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=37c8c81d19decb4eff6c5d5b00a7b1383969f269a46eefe3981c74c9d47fcf7b Oct 27 16:20:14.381748 kernel: BIOS-provided physical RAM map: Oct 27 16:20:14.381755 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 27 16:20:14.381762 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 27 16:20:14.381769 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 27 16:20:14.381777 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Oct 27 16:20:14.381784 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Oct 27 16:20:14.381794 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 27 16:20:14.381803 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Oct 27 16:20:14.381810 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 27 16:20:14.381817 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 27 16:20:14.381824 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 27 16:20:14.381831 kernel: NX (Execute Disable) protection: active Oct 27 16:20:14.381841 kernel: APIC: Static calls initialized Oct 27 16:20:14.381849 kernel: SMBIOS 2.8 present. 
Oct 27 16:20:14.381859 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Oct 27 16:20:14.381867 kernel: DMI: Memory slots populated: 1/1 Oct 27 16:20:14.381874 kernel: Hypervisor detected: KVM Oct 27 16:20:14.381882 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Oct 27 16:20:14.381889 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 27 16:20:14.381896 kernel: kvm-clock: using sched offset of 4066452409 cycles Oct 27 16:20:14.381904 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 27 16:20:14.381912 kernel: tsc: Detected 2794.748 MHz processor Oct 27 16:20:14.381923 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 27 16:20:14.381931 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 27 16:20:14.381939 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Oct 27 16:20:14.381947 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 27 16:20:14.381955 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 27 16:20:14.381963 kernel: Using GB pages for direct mapping Oct 27 16:20:14.381971 kernel: ACPI: Early table checksum verification disabled Oct 27 16:20:14.381981 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Oct 27 16:20:14.381990 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 16:20:14.381998 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 16:20:14.382006 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 16:20:14.382013 kernel: ACPI: FACS 0x000000009CFE0000 000040 Oct 27 16:20:14.382021 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 16:20:14.382029 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 16:20:14.382039 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 16:20:14.382048 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 16:20:14.382059 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Oct 27 16:20:14.382067 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Oct 27 16:20:14.382075 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Oct 27 16:20:14.382085 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Oct 27 16:20:14.382093 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Oct 27 16:20:14.382101 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Oct 27 16:20:14.382109 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Oct 27 16:20:14.382117 kernel: No NUMA configuration found Oct 27 16:20:14.382125 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Oct 27 16:20:14.382135 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Oct 27 16:20:14.382143 kernel: Zone ranges: Oct 27 16:20:14.382151 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 27 16:20:14.382159 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Oct 27 16:20:14.382167 kernel: Normal empty Oct 27 16:20:14.382175 kernel: Device empty Oct 27 16:20:14.382183 kernel: Movable zone start for each node Oct 27 16:20:14.382191 kernel: Early memory node ranges Oct 27 16:20:14.382216 kernel: node 0: [mem 
0x0000000000001000-0x000000000009efff] Oct 27 16:20:14.382242 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Oct 27 16:20:14.382252 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Oct 27 16:20:14.382261 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 27 16:20:14.382276 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 27 16:20:14.382284 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Oct 27 16:20:14.382295 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 27 16:20:14.382303 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 27 16:20:14.382315 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 27 16:20:14.382323 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 27 16:20:14.382334 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 27 16:20:14.382342 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 27 16:20:14.382350 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 27 16:20:14.382358 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 27 16:20:14.382366 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 27 16:20:14.382377 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 27 16:20:14.382385 kernel: TSC deadline timer available Oct 27 16:20:14.382393 kernel: CPU topo: Max. logical packages: 1 Oct 27 16:20:14.382401 kernel: CPU topo: Max. logical dies: 1 Oct 27 16:20:14.382409 kernel: CPU topo: Max. dies per package: 1 Oct 27 16:20:14.382417 kernel: CPU topo: Max. threads per core: 1 Oct 27 16:20:14.382425 kernel: CPU topo: Num. cores per package: 4 Oct 27 16:20:14.382432 kernel: CPU topo: Num. threads per package: 4 Oct 27 16:20:14.382443 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Oct 27 16:20:14.382450 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 27 16:20:14.382459 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 27 16:20:14.382467 kernel: kvm-guest: setup PV sched yield Oct 27 16:20:14.382475 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Oct 27 16:20:14.382482 kernel: Booting paravirtualized kernel on KVM Oct 27 16:20:14.382491 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 27 16:20:14.382501 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 27 16:20:14.382509 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Oct 27 16:20:14.382517 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Oct 27 16:20:14.382525 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 27 16:20:14.382533 kernel: kvm-guest: PV spinlocks enabled Oct 27 16:20:14.382541 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 27 16:20:14.382550 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=37c8c81d19decb4eff6c5d5b00a7b1383969f269a46eefe3981c74c9d47fcf7b Oct 27 16:20:14.382561 kernel: random: crng init done Oct 27 16:20:14.382569 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 27 16:20:14.382577 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 27 
16:20:14.382585 kernel: Fallback order for Node 0: 0 Oct 27 16:20:14.382593 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 Oct 27 16:20:14.382601 kernel: Policy zone: DMA32 Oct 27 16:20:14.382609 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 27 16:20:14.382620 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 27 16:20:14.382628 kernel: ftrace: allocating 40092 entries in 157 pages Oct 27 16:20:14.382636 kernel: ftrace: allocated 157 pages with 5 groups Oct 27 16:20:14.382644 kernel: Dynamic Preempt: voluntary Oct 27 16:20:14.382652 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 27 16:20:14.382661 kernel: rcu: RCU event tracing is enabled. Oct 27 16:20:14.382669 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 27 16:20:14.382679 kernel: Trampoline variant of Tasks RCU enabled. Oct 27 16:20:14.382690 kernel: Rude variant of Tasks RCU enabled. Oct 27 16:20:14.382698 kernel: Tracing variant of Tasks RCU enabled. Oct 27 16:20:14.382706 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 27 16:20:14.382714 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 27 16:20:14.382722 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 27 16:20:14.382730 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 27 16:20:14.382739 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 27 16:20:14.382749 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 27 16:20:14.382757 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 27 16:20:14.382773 kernel: Console: colour VGA+ 80x25 Oct 27 16:20:14.382783 kernel: printk: legacy console [ttyS0] enabled Oct 27 16:20:14.382792 kernel: ACPI: Core revision 20240827 Oct 27 16:20:14.382800 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 27 16:20:14.382809 kernel: APIC: Switch to symmetric I/O mode setup Oct 27 16:20:14.382817 kernel: x2apic enabled Oct 27 16:20:14.382825 kernel: APIC: Switched APIC routing to: physical x2apic Oct 27 16:20:14.382838 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 27 16:20:14.382847 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 27 16:20:14.382855 kernel: kvm-guest: setup PV IPIs Oct 27 16:20:14.382863 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 27 16:20:14.382875 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Oct 27 16:20:14.382883 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Oct 27 16:20:14.382892 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 27 16:20:14.382900 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 27 16:20:14.382908 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 27 16:20:14.382917 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 27 16:20:14.382925 kernel: Spectre V2 : Mitigation: Retpolines Oct 27 16:20:14.382936 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 27 16:20:14.382944 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 27 16:20:14.382953 kernel: active return thunk: retbleed_return_thunk Oct 27 16:20:14.382961 kernel: RETBleed: Mitigation: untrained return thunk Oct 27 16:20:14.382969 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 27 16:20:14.382978 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 27 16:20:14.382986 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 27 16:20:14.382998 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 27 16:20:14.383006 kernel: active return thunk: srso_return_thunk Oct 27 16:20:14.383014 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 27 16:20:14.383023 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 27 16:20:14.383031 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 27 16:20:14.383039 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 27 16:20:14.383050 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 27 16:20:14.383059 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 27 16:20:14.383067 kernel: Freeing SMP alternatives memory: 32K Oct 27 16:20:14.383075 kernel: pid_max: default: 32768 minimum: 301 Oct 27 16:20:14.383084 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 27 16:20:14.383092 kernel: landlock: Up and running. Oct 27 16:20:14.383100 kernel: SELinux: Initializing. Oct 27 16:20:14.383111 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 27 16:20:14.383122 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 27 16:20:14.383130 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 27 16:20:14.383139 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 27 16:20:14.383147 kernel: ... version: 0 Oct 27 16:20:14.383155 kernel: ... bit width: 48 Oct 27 16:20:14.383164 kernel: ... generic registers: 6 Oct 27 16:20:14.383172 kernel: ... value mask: 0000ffffffffffff Oct 27 16:20:14.383183 kernel: ... max period: 00007fffffffffff Oct 27 16:20:14.383191 kernel: ... fixed-purpose events: 0 Oct 27 16:20:14.383214 kernel: ... event mask: 000000000000003f Oct 27 16:20:14.383222 kernel: signal: max sigframe size: 1776 Oct 27 16:20:14.383230 kernel: rcu: Hierarchical SRCU implementation. Oct 27 16:20:14.383239 kernel: rcu: Max phase no-delay instances is 400. Oct 27 16:20:14.383247 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 27 16:20:14.383258 kernel: smp: Bringing up secondary CPUs ... 
Oct 27 16:20:14.383273 kernel: smpboot: x86: Booting SMP configuration: Oct 27 16:20:14.383281 kernel: .... node #0, CPUs: #1 #2 #3 Oct 27 16:20:14.383289 kernel: smp: Brought up 1 node, 4 CPUs Oct 27 16:20:14.383298 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 27 16:20:14.383307 kernel: Memory: 2451432K/2571752K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 114380K reserved, 0K cma-reserved) Oct 27 16:20:14.383316 kernel: devtmpfs: initialized Oct 27 16:20:14.383326 kernel: x86/mm: Memory block size: 128MB Oct 27 16:20:14.383335 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 27 16:20:14.383343 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 27 16:20:14.383351 kernel: pinctrl core: initialized pinctrl subsystem Oct 27 16:20:14.383360 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 27 16:20:14.383368 kernel: audit: initializing netlink subsys (disabled) Oct 27 16:20:14.383377 kernel: audit: type=2000 audit(1761582011.869:1): state=initialized audit_enabled=0 res=1 Oct 27 16:20:14.383387 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 27 16:20:14.383395 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 27 16:20:14.383404 kernel: cpuidle: using governor menu Oct 27 16:20:14.383412 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 27 16:20:14.383421 kernel: dca service started, version 1.12.1 Oct 27 16:20:14.383429 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Oct 27 16:20:14.383437 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Oct 27 16:20:14.383448 kernel: PCI: Using configuration type 1 for base access Oct 27 16:20:14.383456 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 27 16:20:14.383465 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 27 16:20:14.383473 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 27 16:20:14.383481 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 27 16:20:14.383490 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 27 16:20:14.383498 kernel: ACPI: Added _OSI(Module Device) Oct 27 16:20:14.383509 kernel: ACPI: Added _OSI(Processor Device) Oct 27 16:20:14.383517 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 27 16:20:14.383525 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 27 16:20:14.383536 kernel: ACPI: Interpreter enabled Oct 27 16:20:14.383544 kernel: ACPI: PM: (supports S0 S3 S5) Oct 27 16:20:14.383553 kernel: ACPI: Using IOAPIC for interrupt routing Oct 27 16:20:14.383561 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 27 16:20:14.383572 kernel: PCI: Using E820 reservations for host bridge windows Oct 27 16:20:14.383581 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 27 16:20:14.383589 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 27 16:20:14.383845 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 27 16:20:14.384036 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 27 16:20:14.384233 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 27 16:20:14.384250 kernel: PCI host bridge to bus 0000:00 Oct 27 16:20:14.384439 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 27 16:20:14.384606 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 27 16:20:14.384765 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 27 16:20:14.384925 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Oct 27 16:20:14.385097 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 27 16:20:14.385290 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Oct 27 16:20:14.385455 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 27 16:20:14.385653 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Oct 27 16:20:14.385842 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Oct 27 16:20:14.386023 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Oct 27 16:20:14.386465 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Oct 27 16:20:14.386642 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Oct 27 16:20:14.386822 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 27 16:20:14.387012 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Oct 27 16:20:14.387212 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Oct 27 16:20:14.387402 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Oct 27 16:20:14.387584 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Oct 27 16:20:14.387771 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Oct 27 16:20:14.387945 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Oct 27 16:20:14.388118 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Oct 27 16:20:14.388318 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Oct 27 16:20:14.388505 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Oct 27 16:20:14.388685 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Oct 27 16:20:14.388860 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Oct 27 16:20:14.389115 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Oct 27 16:20:14.389317 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Oct 27 16:20:14.389499 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Oct 27 16:20:14.389679 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 27 16:20:14.389863 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Oct 27 16:20:14.390035 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Oct 27 16:20:14.390224 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Oct 27 16:20:14.390418 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Oct 27 16:20:14.390591 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Oct 27 16:20:14.390607 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 27 16:20:14.390616 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 27 16:20:14.390624 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 27 16:20:14.390636 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 27 16:20:14.390645 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 27 16:20:14.390653 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 27 16:20:14.390662 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 27 16:20:14.390672 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 27 16:20:14.390681 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 27 16:20:14.390689 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 27 16:20:14.390698 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 27 16:20:14.390706 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 27 16:20:14.390714 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 27 16:20:14.390723 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 27 16:20:14.390733 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 27 16:20:14.390742 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 27 16:20:14.390750 kernel: iommu: Default domain type: Translated Oct 27 16:20:14.390759 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 27 16:20:14.390767 kernel: PCI: Using ACPI for IRQ routing Oct 27 16:20:14.390775 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 27 16:20:14.390784 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 27 16:20:14.390795 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Oct 27 16:20:14.390968 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 27 16:20:14.391140 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 27 16:20:14.391341 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 27 16:20:14.391353 kernel: vgaarb: loaded Oct 27 16:20:14.391362 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 27 16:20:14.391371 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 27 16:20:14.391383 kernel: clocksource: Switched to clocksource kvm-clock Oct 27 16:20:14.391392 kernel: VFS: Disk quotas dquot_6.6.0 Oct 27 
16:20:14.391400 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 27 16:20:14.391408 kernel: pnp: PnP ACPI init Oct 27 16:20:14.391595 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 27 16:20:14.391607 kernel: pnp: PnP ACPI: found 6 devices Oct 27 16:20:14.391619 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 27 16:20:14.391628 kernel: NET: Registered PF_INET protocol family Oct 27 16:20:14.391637 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 27 16:20:14.391645 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 27 16:20:14.391654 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 27 16:20:14.391662 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 27 16:20:14.391671 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 27 16:20:14.391682 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 27 16:20:14.391690 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 27 16:20:14.391699 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 27 16:20:14.391707 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 27 16:20:14.391716 kernel: NET: Registered PF_XDP protocol family Oct 27 16:20:14.391880 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 27 16:20:14.392045 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 27 16:20:14.392228 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 27 16:20:14.392398 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Oct 27 16:20:14.392559 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 27 16:20:14.392718 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Oct 27 16:20:14.392729 kernel: PCI: CLS 0 bytes, default 64 Oct 27 16:20:14.392738 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Oct 27 16:20:14.392747 kernel: Initialise system trusted keyrings Oct 27 16:20:14.392760 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 27 16:20:14.392769 kernel: Key type asymmetric registered Oct 27 16:20:14.392777 kernel: Asymmetric key parser 'x509' registered Oct 27 16:20:14.392786 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 27 16:20:14.392794 kernel: io scheduler mq-deadline registered Oct 27 16:20:14.392803 kernel: io scheduler kyber registered Oct 27 16:20:14.392811 kernel: io scheduler bfq registered Oct 27 16:20:14.392822 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 27 16:20:14.392831 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 27 16:20:14.392839 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 27 16:20:14.392848 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 27 16:20:14.392857 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 27 16:20:14.392865 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 27 16:20:14.392874 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 27 16:20:14.392884 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 27 16:20:14.392893 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 27 16:20:14.393073 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 27 
16:20:14.393085 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 27 16:20:14.393276 kernel: rtc_cmos 00:04: registered as rtc0 Oct 27 16:20:14.393447 kernel: rtc_cmos 00:04: setting system clock to 2025-10-27T16:20:12 UTC (1761582012) Oct 27 16:20:14.393619 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 27 16:20:14.393631 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 27 16:20:14.393639 kernel: NET: Registered PF_INET6 protocol family Oct 27 16:20:14.393648 kernel: Segment Routing with IPv6 Oct 27 16:20:14.393656 kernel: In-situ OAM (IOAM) with IPv6 Oct 27 16:20:14.393664 kernel: NET: Registered PF_PACKET protocol family Oct 27 16:20:14.393672 kernel: Key type dns_resolver registered Oct 27 16:20:14.393684 kernel: IPI shorthand broadcast: enabled Oct 27 16:20:14.393692 kernel: sched_clock: Marking stable (1229005379, 201410048)->(1476886850, -46471423) Oct 27 16:20:14.393701 kernel: registered taskstats version 1 Oct 27 16:20:14.393709 kernel: Loading compiled-in X.509 certificates Oct 27 16:20:14.393718 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 8564ea068298f514696abb31267a99edf4e953e3' Oct 27 16:20:14.393726 kernel: Demotion targets for Node 0: null Oct 27 16:20:14.393734 kernel: Key type .fscrypt registered Oct 27 16:20:14.393743 kernel: Key type fscrypt-provisioning registered Oct 27 16:20:14.393754 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 27 16:20:14.393762 kernel: ima: Allocated hash algorithm: sha1 Oct 27 16:20:14.393771 kernel: ima: No architecture policies found Oct 27 16:20:14.393779 kernel: clk: Disabling unused clocks Oct 27 16:20:14.393788 kernel: Freeing unused kernel image (initmem) memory: 15964K Oct 27 16:20:14.393796 kernel: Write protecting the kernel read-only data: 40960k Oct 27 16:20:14.393805 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Oct 27 16:20:14.393816 kernel: Run /init as init process Oct 27 16:20:14.393824 kernel: with arguments: Oct 27 16:20:14.393832 kernel: /init Oct 27 16:20:14.393840 kernel: with environment: Oct 27 16:20:14.393848 kernel: HOME=/ Oct 27 16:20:14.393856 kernel: TERM=linux Oct 27 16:20:14.393865 kernel: SCSI subsystem initialized Oct 27 16:20:14.393875 kernel: libata version 3.00 loaded. 
Oct 27 16:20:14.394051 kernel: ahci 0000:00:1f.2: version 3.0 Oct 27 16:20:14.394081 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 27 16:20:14.394277 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 27 16:20:14.394455 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 27 16:20:14.394708 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 27 16:20:14.394910 kernel: scsi host0: ahci Oct 27 16:20:14.395098 kernel: scsi host1: ahci Oct 27 16:20:14.395311 kernel: scsi host2: ahci Oct 27 16:20:14.395499 kernel: scsi host3: ahci Oct 27 16:20:14.395685 kernel: scsi host4: ahci Oct 27 16:20:14.395879 kernel: scsi host5: ahci Oct 27 16:20:14.395893 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1 Oct 27 16:20:14.395902 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1 Oct 27 16:20:14.395910 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1 Oct 27 16:20:14.395919 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1 Oct 27 16:20:14.395928 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1 Oct 27 16:20:14.395937 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1 Oct 27 16:20:14.395949 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 27 16:20:14.395958 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 27 16:20:14.395967 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 27 16:20:14.395976 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 27 16:20:14.395984 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 27 16:20:14.395993 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 27 16:20:14.396002 kernel: ata3.00: LPM support broken, forcing max_power Oct 27 16:20:14.396013 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 27 16:20:14.396022 kernel: ata3.00: applying bridge limits Oct 27 16:20:14.396031 kernel: ata3.00: LPM support broken, forcing max_power Oct 27 16:20:14.396039 kernel: ata3.00: configured for UDMA/100 Oct 27 16:20:14.396261 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 27 16:20:14.396464 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 27 16:20:14.396643 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 27 16:20:14.396655 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 27 16:20:14.396665 kernel: GPT:16515071 != 27000831 Oct 27 16:20:14.396674 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 27 16:20:14.396682 kernel: GPT:16515071 != 27000831 Oct 27 16:20:14.396691 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 27 16:20:14.396699 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 27 16:20:14.396711 kernel: Invalid ELF header magic: != \u007fELF Oct 27 16:20:14.396903 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 27 16:20:14.396915 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 27 16:20:14.397104 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 27 16:20:14.397116 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 27 16:20:14.397124 kernel: device-mapper: uevent: version 1.0.3 Oct 27 16:20:14.397137 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 27 16:20:14.397147 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 27 16:20:14.397158 kernel: Invalid ELF header magic: != \u007fELF Oct 27 16:20:14.397166 kernel: Invalid ELF header magic: != \u007fELF Oct 27 16:20:14.397175 kernel: raid6: avx2x4 gen() 29838 MB/s Oct 27 16:20:14.397186 kernel: raid6: avx2x2 gen() 30306 MB/s Oct 27 16:20:14.397211 kernel: raid6: avx2x1 gen() 25478 MB/s Oct 27 16:20:14.397220 kernel: raid6: using algorithm avx2x2 gen() 30306 MB/s Oct 27 16:20:14.397228 kernel: raid6: .... xor() 19790 MB/s, rmw enabled Oct 27 16:20:14.397237 kernel: raid6: using avx2x2 recovery algorithm Oct 27 16:20:14.397246 kernel: Invalid ELF header magic: != \u007fELF Oct 27 16:20:14.397254 kernel: Invalid ELF header magic: != \u007fELF Oct 27 16:20:14.397263 kernel: Invalid ELF header magic: != \u007fELF Oct 27 16:20:14.397282 kernel: xor: automatically using best checksumming function avx Oct 27 16:20:14.397291 kernel: Invalid ELF header magic: != \u007fELF Oct 27 16:20:14.397300 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 27 16:20:14.397308 kernel: BTRFS: device fsid 301257c1-1fa7-4024-bc0a-6c35fcbe5dcb devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (176) Oct 27 16:20:14.397317 kernel: BTRFS info (device dm-0): first mount of filesystem 301257c1-1fa7-4024-bc0a-6c35fcbe5dcb Oct 27 16:20:14.397346 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 27 16:20:14.397356 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 27 16:20:14.397370 kernel: BTRFS info (device dm-0): enabling free space tree Oct 27 16:20:14.397389 kernel: Invalid ELF header magic: != \u007fELF Oct 27 16:20:14.397414 kernel: loop: module loaded Oct 27 16:20:14.397424 kernel: loop0: detected capacity change from 0 to 100120 Oct 27 16:20:14.397433 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 27 16:20:14.397444 systemd[1]: Successfully made /usr/ read-only. Oct 27 16:20:14.397456 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 27 16:20:14.397469 systemd[1]: Detected virtualization kvm. Oct 27 16:20:14.397479 systemd[1]: Detected architecture x86-64. Oct 27 16:20:14.397487 systemd[1]: Running in initrd. Oct 27 16:20:14.397496 systemd[1]: No hostname configured, using default hostname. Oct 27 16:20:14.397506 systemd[1]: Hostname set to . Oct 27 16:20:14.397517 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 27 16:20:14.397527 systemd[1]: Queued start job for default target initrd.target. Oct 27 16:20:14.397536 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 27 16:20:14.397545 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 27 16:20:14.397555 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 27 16:20:14.397565 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Oct 27 16:20:14.397574 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 27 16:20:14.397586 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 27 16:20:14.397596 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 27 16:20:14.397605 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 27 16:20:14.397615 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 27 16:20:14.397625 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 27 16:20:14.397634 systemd[1]: Reached target paths.target - Path Units. Oct 27 16:20:14.397645 systemd[1]: Reached target slices.target - Slice Units. Oct 27 16:20:14.397655 systemd[1]: Reached target swap.target - Swaps. Oct 27 16:20:14.397664 systemd[1]: Reached target timers.target - Timer Units. Oct 27 16:20:14.397673 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 27 16:20:14.397682 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 27 16:20:14.397692 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 27 16:20:14.397701 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 27 16:20:14.397713 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 27 16:20:14.397722 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 27 16:20:14.397731 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 27 16:20:14.397741 systemd[1]: Reached target sockets.target - Socket Units. Oct 27 16:20:14.397750 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 27 16:20:14.397759 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 27 16:20:14.397771 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 27 16:20:14.397780 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 27 16:20:14.397790 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 27 16:20:14.397799 systemd[1]: Starting systemd-fsck-usr.service... Oct 27 16:20:14.397809 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 27 16:20:14.397818 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 27 16:20:14.397827 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 16:20:14.397839 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 27 16:20:14.397849 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 27 16:20:14.397858 systemd[1]: Finished systemd-fsck-usr.service. Oct 27 16:20:14.397868 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 27 16:20:14.397914 systemd-journald[310]: Collecting audit messages is disabled. Oct 27 16:20:14.397935 systemd-journald[310]: Journal started Oct 27 16:20:14.397956 systemd-journald[310]: Runtime Journal (/run/log/journal/c40e61cdce0e4b2d9359c6427a7797ae) is 6M, max 48.3M, 42.2M free. Oct 27 16:20:14.401229 systemd[1]: Started systemd-journald.service - Journal Service. 
Oct 27 16:20:14.403432 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 27 16:20:14.409645 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 27 16:20:14.415567 systemd-modules-load[312]: Inserted module 'br_netfilter' Oct 27 16:20:14.416837 kernel: Bridge firewalling registered Oct 27 16:20:14.416349 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 27 16:20:14.418231 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 27 16:20:14.418801 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 27 16:20:14.421754 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 27 16:20:14.440375 systemd-tmpfiles[325]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 27 16:20:14.440545 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 27 16:20:14.508120 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 16:20:14.512028 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 27 16:20:14.515777 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 27 16:20:14.521767 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 27 16:20:14.527613 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 27 16:20:14.554748 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 27 16:20:14.560480 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 27 16:20:14.580698 systemd-resolved[342]: Positive Trust Anchors: Oct 27 16:20:14.580711 systemd-resolved[342]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 27 16:20:14.580716 systemd-resolved[342]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 27 16:20:14.580750 systemd-resolved[342]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 27 16:20:14.636556 systemd-resolved[342]: Defaulting to hostname 'linux'. Oct 27 16:20:14.638010 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 27 16:20:14.639966 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Oct 27 16:20:14.661730 dracut-cmdline[354]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=37c8c81d19decb4eff6c5d5b00a7b1383969f269a46eefe3981c74c9d47fcf7b Oct 27 16:20:14.774235 kernel: Loading iSCSI transport class v2.0-870. Oct 27 16:20:14.788266 kernel: iscsi: registered transport (tcp) Oct 27 16:20:14.813712 kernel: iscsi: registered transport (qla4xxx) Oct 27 16:20:14.813749 kernel: QLogic iSCSI HBA Driver Oct 27 16:20:14.844503 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 27 16:20:14.880983 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 27 16:20:14.883816 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 27 16:20:14.945759 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 27 16:20:14.948181 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 27 16:20:14.952370 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 27 16:20:14.995140 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 27 16:20:14.998711 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 27 16:20:15.032493 systemd-udevd[591]: Using default interface naming scheme 'v257'. Oct 27 16:20:15.046673 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 27 16:20:15.053413 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 27 16:20:15.087166 dracut-pre-trigger[653]: rd.md=0: removing MD RAID activation Oct 27 16:20:15.101447 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 27 16:20:15.108120 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 27 16:20:15.137787 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 27 16:20:15.140540 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 27 16:20:15.197104 systemd-networkd[712]: lo: Link UP Oct 27 16:20:15.197114 systemd-networkd[712]: lo: Gained carrier Oct 27 16:20:15.197856 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 27 16:20:15.200305 systemd[1]: Reached target network.target - Network. Oct 27 16:20:15.260920 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 27 16:20:15.267319 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 27 16:20:15.321080 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 27 16:20:15.347188 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 27 16:20:15.371031 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 27 16:20:15.392812 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 27 16:20:15.454621 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Oct 27 16:20:15.461222 kernel: cryptd: max_cpu_qlen set to 1000 Oct 27 16:20:15.464504 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 27 16:20:15.466896 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 16:20:15.473259 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 16:20:15.481227 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Oct 27 16:20:15.482437 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 16:20:15.483577 systemd-networkd[712]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 16:20:15.483585 systemd-networkd[712]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 27 16:20:15.493412 disk-uuid[769]: Primary Header is updated. Oct 27 16:20:15.493412 disk-uuid[769]: Secondary Entries is updated. Oct 27 16:20:15.493412 disk-uuid[769]: Secondary Header is updated. Oct 27 16:20:15.501019 kernel: AES CTR mode by8 optimization enabled Oct 27 16:20:15.484096 systemd-networkd[712]: eth0: Link UP Oct 27 16:20:15.486347 systemd-networkd[712]: eth0: Gained carrier Oct 27 16:20:15.486365 systemd-networkd[712]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 16:20:15.514285 systemd-networkd[712]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 27 16:20:15.606871 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 27 16:20:15.638473 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 16:20:15.641866 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 27 16:20:15.644155 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 27 16:20:15.646101 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 27 16:20:15.650923 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 27 16:20:15.690545 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 27 16:20:16.561690 disk-uuid[770]: Warning: The kernel is still using the old partition table. Oct 27 16:20:16.561690 disk-uuid[770]: The new table will be used at the next reboot or after you Oct 27 16:20:16.561690 disk-uuid[770]: run partprobe(8) or kpartx(8) Oct 27 16:20:16.561690 disk-uuid[770]: The operation has completed successfully. Oct 27 16:20:16.575745 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 27 16:20:16.575912 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 27 16:20:16.577820 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Oct 27 16:20:16.611239 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (864) Oct 27 16:20:16.614902 kernel: BTRFS info (device vda6): first mount of filesystem 4d955b30-d972-403c-bfd0-05b2cc9e7d25 Oct 27 16:20:16.614923 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 27 16:20:16.618605 kernel: BTRFS info (device vda6): turning on async discard Oct 27 16:20:16.618623 kernel: BTRFS info (device vda6): enabling free space tree Oct 27 16:20:16.626232 kernel: BTRFS info (device vda6): last unmount of filesystem 4d955b30-d972-403c-bfd0-05b2cc9e7d25 Oct 27 16:20:16.627679 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 27 16:20:16.630070 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 27 16:20:16.884460 ignition[883]: Ignition 2.22.0 Oct 27 16:20:16.884473 ignition[883]: Stage: fetch-offline Oct 27 16:20:16.884526 ignition[883]: no configs at "/usr/lib/ignition/base.d" Oct 27 16:20:16.884539 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 16:20:16.884640 ignition[883]: parsed url from cmdline: "" Oct 27 16:20:16.884644 ignition[883]: no config URL provided Oct 27 16:20:16.884651 ignition[883]: reading system config file "/usr/lib/ignition/user.ign" Oct 27 16:20:16.884665 ignition[883]: no config at "/usr/lib/ignition/user.ign" Oct 27 16:20:16.884712 ignition[883]: op(1): [started] loading QEMU firmware config module Oct 27 16:20:16.884716 ignition[883]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 27 16:20:16.897305 ignition[883]: op(1): [finished] loading QEMU firmware config module Oct 27 16:20:16.975004 ignition[883]: parsing config with SHA512: fe2ad0449262c40e8402197f30d3e19f755ad63f1f178ab2f80b542a108fe4f1ee84b0735faaaec6dad2f00b6a0b72c11c7bfaa191552116b6e9e5b56fc4c781 Oct 27 16:20:16.982749 unknown[883]: fetched base config from "system" Oct 27 16:20:16.982761 unknown[883]: fetched user config from "qemu" Oct 27 16:20:16.983105 ignition[883]: fetch-offline: fetch-offline passed Oct 27 16:20:16.983170 ignition[883]: Ignition finished successfully Oct 27 16:20:16.991010 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 27 16:20:16.994910 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 27 16:20:16.998164 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 27 16:20:17.060067 ignition[896]: Ignition 2.22.0 Oct 27 16:20:17.060080 ignition[896]: Stage: kargs Oct 27 16:20:17.060276 ignition[896]: no configs at "/usr/lib/ignition/base.d" Oct 27 16:20:17.060288 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 16:20:17.066162 ignition[896]: kargs: kargs passed Oct 27 16:20:17.067214 ignition[896]: Ignition finished successfully Oct 27 16:20:17.071303 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 27 16:20:17.075439 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Oct 27 16:20:17.116702 ignition[904]: Ignition 2.22.0 Oct 27 16:20:17.116716 ignition[904]: Stage: disks Oct 27 16:20:17.116846 ignition[904]: no configs at "/usr/lib/ignition/base.d" Oct 27 16:20:17.116857 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 16:20:17.119994 ignition[904]: disks: disks passed Oct 27 16:20:17.120044 ignition[904]: Ignition finished successfully Oct 27 16:20:17.126892 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 27 16:20:17.127771 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 27 16:20:17.128111 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 27 16:20:17.133980 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 27 16:20:17.138695 systemd[1]: Reached target sysinit.target - System Initialization. Oct 27 16:20:17.141849 systemd[1]: Reached target basic.target - Basic System. Oct 27 16:20:17.145899 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 27 16:20:17.179347 systemd-networkd[712]: eth0: Gained IPv6LL Oct 27 16:20:17.195075 systemd-fsck[914]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 27 16:20:17.202955 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 27 16:20:17.205168 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 27 16:20:17.400222 kernel: EXT4-fs (vda9): mounted filesystem 65545207-aa71-4cd4-8d34-c558bd3ddef0 r/w with ordered data mode. Quota mode: none. Oct 27 16:20:17.400747 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 27 16:20:17.402278 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 27 16:20:17.405109 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 27 16:20:17.408469 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 27 16:20:17.410273 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 27 16:20:17.410313 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 27 16:20:17.410337 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 27 16:20:17.519829 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 27 16:20:17.527684 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (923) Oct 27 16:20:17.527709 kernel: BTRFS info (device vda6): first mount of filesystem 4d955b30-d972-403c-bfd0-05b2cc9e7d25 Oct 27 16:20:17.527725 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 27 16:20:17.526010 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 27 16:20:17.533581 kernel: BTRFS info (device vda6): turning on async discard Oct 27 16:20:17.533604 kernel: BTRFS info (device vda6): enabling free space tree Oct 27 16:20:17.534782 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 27 16:20:17.576867 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory Oct 27 16:20:17.582776 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory Oct 27 16:20:17.586605 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory Oct 27 16:20:17.591138 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory Oct 27 16:20:17.714635 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 27 16:20:17.717860 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 27 16:20:17.720383 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 27 16:20:17.767855 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 27 16:20:17.770343 kernel: BTRFS info (device vda6): last unmount of filesystem 4d955b30-d972-403c-bfd0-05b2cc9e7d25 Oct 27 16:20:17.785330 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 27 16:20:17.920134 ignition[1037]: INFO : Ignition 2.22.0 Oct 27 16:20:17.920134 ignition[1037]: INFO : Stage: mount Oct 27 16:20:17.922848 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 16:20:17.922848 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 16:20:17.926759 ignition[1037]: INFO : mount: mount passed Oct 27 16:20:17.927973 ignition[1037]: INFO : Ignition finished successfully Oct 27 16:20:17.931970 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 27 16:20:17.933809 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 27 16:20:17.958723 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 27 16:20:17.989813 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1049) Oct 27 16:20:17.989849 kernel: BTRFS info (device vda6): first mount of filesystem 4d955b30-d972-403c-bfd0-05b2cc9e7d25 Oct 27 16:20:17.989862 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 27 16:20:17.995559 kernel: BTRFS info (device vda6): turning on async discard Oct 27 16:20:17.995605 kernel: BTRFS info (device vda6): enabling free space tree Oct 27 16:20:17.997335 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 27 16:20:18.049226 ignition[1066]: INFO : Ignition 2.22.0 Oct 27 16:20:18.049226 ignition[1066]: INFO : Stage: files Oct 27 16:20:18.052283 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 16:20:18.052283 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 16:20:18.052283 ignition[1066]: DEBUG : files: compiled without relabeling support, skipping Oct 27 16:20:18.052283 ignition[1066]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 27 16:20:18.052283 ignition[1066]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 27 16:20:18.063357 ignition[1066]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 27 16:20:18.065784 ignition[1066]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 27 16:20:18.068438 unknown[1066]: wrote ssh authorized keys file for user: core Oct 27 16:20:18.070130 ignition[1066]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 27 16:20:18.074089 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 27 16:20:18.077460 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 27 16:20:18.124075 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 27 16:20:18.181316 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 27 16:20:18.184842 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 27 16:20:18.184842 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 27 16:20:18.184842 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 27 16:20:18.184842 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 27 16:20:18.184842 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 27 16:20:18.184842 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 27 16:20:18.184842 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 27 16:20:18.184842 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 27 16:20:18.207473 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 27 16:20:18.207473 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 27 16:20:18.207473 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 27 16:20:18.207473 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 27 16:20:18.207473 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 27 16:20:18.207473 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Oct 27 16:20:18.718092 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 27 16:20:19.636900 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 27 16:20:19.636900 ignition[1066]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 27 16:20:19.643335 ignition[1066]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 27 16:20:19.643335 ignition[1066]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 27 16:20:19.643335 ignition[1066]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 27 16:20:19.643335 ignition[1066]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 27 16:20:19.643335 ignition[1066]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 27 16:20:19.643335 ignition[1066]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 27 16:20:19.643335 ignition[1066]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 27 16:20:19.643335 ignition[1066]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 27 16:20:19.666620 ignition[1066]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 27 16:20:19.669655 ignition[1066]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 27 16:20:19.672302 ignition[1066]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 27 16:20:19.672302 ignition[1066]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 27 16:20:19.672302 ignition[1066]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 27 16:20:19.672302 ignition[1066]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 27 16:20:19.672302 ignition[1066]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 27 16:20:19.672302 ignition[1066]: INFO : files: files passed Oct 27 16:20:19.672302 ignition[1066]: INFO : Ignition finished successfully Oct 27 16:20:19.677547 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 27 16:20:19.687859 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 27 16:20:19.694320 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 27 16:20:19.705664 systemd[1]: ignition-quench.service: Deactivated successfully. 
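Every operation in the files stage above maps to an entry in the Ignition config this VM was booted with. A hypothetical, heavily trimmed Butane sketch that would produce similar operations; the SSH key and the unit body are placeholders, and only a subset of the logged files is shown:

# Write a Butane config and transpile it to the Ignition JSON the initrd consumes.
cat > config.bu <<'EOF'
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...placeholder
storage:
  files:
    - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
    - path: /opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw
      contents:
        source: https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw
systemd:
  units:
    - name: prepare-helm.service
      enabled: true                 # log: "setting preset to enabled"
      contents: |
        [Unit]
        Description=Unpack helm to /opt/bin
        [Install]
        WantedBy=multi-user.target
    - name: coreos-metadata.service
      enabled: false                # log: "setting preset to disabled"
EOF
butane --pretty --strict < config.bu > config.ign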
Oct 27 16:20:19.705787 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 27 16:20:19.714225 initrd-setup-root-after-ignition[1097]: grep: /sysroot/oem/oem-release: No such file or directory Oct 27 16:20:19.718507 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 27 16:20:19.721165 initrd-setup-root-after-ignition[1103]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 27 16:20:19.721962 initrd-setup-root-after-ignition[1099]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 27 16:20:19.721464 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 27 16:20:19.722724 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 27 16:20:19.731973 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 27 16:20:19.788843 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 27 16:20:19.788970 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 27 16:20:19.789988 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 27 16:20:19.794782 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 27 16:20:19.800628 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 27 16:20:19.803152 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 27 16:20:19.836891 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 27 16:20:19.840188 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 27 16:20:19.863307 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 27 16:20:19.863520 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 27 16:20:19.867131 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 27 16:20:19.868026 systemd[1]: Stopped target timers.target - Timer Units. Oct 27 16:20:19.872995 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 27 16:20:19.873119 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 27 16:20:19.878535 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 27 16:20:19.881857 systemd[1]: Stopped target basic.target - Basic System. Oct 27 16:20:19.884784 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 27 16:20:19.887837 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 27 16:20:19.891302 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 27 16:20:19.892165 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 27 16:20:19.896966 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 27 16:20:19.900636 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 27 16:20:19.906669 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 27 16:20:19.907643 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 27 16:20:19.910729 systemd[1]: Stopped target swap.target - Swaps. Oct 27 16:20:19.913725 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Oct 27 16:20:19.913836 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 27 16:20:19.918447 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 27 16:20:19.921719 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 27 16:20:19.925224 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 27 16:20:19.925544 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 27 16:20:19.926095 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 27 16:20:19.926228 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 27 16:20:19.934208 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 27 16:20:19.934324 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 27 16:20:19.937565 systemd[1]: Stopped target paths.target - Path Units. Oct 27 16:20:19.940517 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 27 16:20:19.945253 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 27 16:20:19.945936 systemd[1]: Stopped target slices.target - Slice Units. Oct 27 16:20:19.949753 systemd[1]: Stopped target sockets.target - Socket Units. Oct 27 16:20:19.953174 systemd[1]: iscsid.socket: Deactivated successfully. Oct 27 16:20:19.953280 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 27 16:20:19.955940 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 27 16:20:19.956021 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 27 16:20:19.958790 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 27 16:20:19.958901 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 27 16:20:19.961612 systemd[1]: ignition-files.service: Deactivated successfully. Oct 27 16:20:19.961719 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 27 16:20:19.967276 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 27 16:20:19.972015 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 27 16:20:19.975336 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 27 16:20:19.975515 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 27 16:20:19.976350 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 27 16:20:19.976459 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 27 16:20:19.980774 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 27 16:20:19.980878 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 27 16:20:19.992964 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 27 16:20:19.993075 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 27 16:20:20.015923 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Oct 27 16:20:20.033838 ignition[1123]: INFO : Ignition 2.22.0 Oct 27 16:20:20.033838 ignition[1123]: INFO : Stage: umount Oct 27 16:20:20.036651 ignition[1123]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 16:20:20.036651 ignition[1123]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 16:20:20.036651 ignition[1123]: INFO : umount: umount passed Oct 27 16:20:20.036651 ignition[1123]: INFO : Ignition finished successfully Oct 27 16:20:20.036986 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 27 16:20:20.037128 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 27 16:20:20.041866 systemd[1]: Stopped target network.target - Network. Oct 27 16:20:20.044378 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 27 16:20:20.044446 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 27 16:20:20.045560 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 27 16:20:20.045613 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 27 16:20:20.051744 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 27 16:20:20.051803 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 27 16:20:20.054589 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 27 16:20:20.054642 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 27 16:20:20.055833 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 27 16:20:20.059813 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 27 16:20:20.083570 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 27 16:20:20.084319 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 27 16:20:20.091538 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 27 16:20:20.091768 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 27 16:20:20.099392 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 27 16:20:20.099613 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 27 16:20:20.103180 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 27 16:20:20.105032 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 27 16:20:20.105346 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 27 16:20:20.108913 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 27 16:20:20.108972 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 27 16:20:20.115927 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 27 16:20:20.118849 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 27 16:20:20.118969 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 27 16:20:20.119925 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 27 16:20:20.119974 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 27 16:20:20.125696 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 27 16:20:20.125780 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 27 16:20:20.128919 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 27 16:20:20.152052 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Oct 27 16:20:20.152313 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 27 16:20:20.153905 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 27 16:20:20.154005 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 27 16:20:20.158837 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 27 16:20:20.158881 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 27 16:20:20.159345 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 27 16:20:20.159393 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 27 16:20:20.168357 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 27 16:20:20.168441 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 27 16:20:20.173141 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 27 16:20:20.173223 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 27 16:20:20.178780 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 27 16:20:20.179996 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 27 16:20:20.180056 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 27 16:20:20.183567 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 27 16:20:20.183625 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 27 16:20:20.187640 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 27 16:20:20.187694 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 27 16:20:20.191085 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 27 16:20:20.191148 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 27 16:20:20.194672 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 27 16:20:20.194727 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 16:20:20.201620 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 27 16:20:20.201732 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 27 16:20:20.235728 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 27 16:20:20.235866 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 27 16:20:20.239013 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 27 16:20:20.242703 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 27 16:20:20.253785 systemd[1]: Switching root. Oct 27 16:20:20.301772 systemd-journald[310]: Journal stopped Oct 27 16:20:21.820251 systemd-journald[310]: Received SIGTERM from PID 1 (systemd). 
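The hand-off itself is performed by initrd-switch-root.service. Conceptually it boils down to the call below (a sketch of the documented mechanism, not a transcript of the unit file): PID 1 re-executes itself with /sysroot as the new root, which is why the initrd journal receives SIGTERM and stops at this point.

# Issued from the initrd once Ignition and the sysroot mounts are complete.
systemctl --no-block switch-root /sysroot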
Oct 27 16:20:21.820322 kernel: SELinux: policy capability network_peer_controls=1 Oct 27 16:20:21.820336 kernel: SELinux: policy capability open_perms=1 Oct 27 16:20:21.820354 kernel: SELinux: policy capability extended_socket_class=1 Oct 27 16:20:21.820375 kernel: SELinux: policy capability always_check_network=0 Oct 27 16:20:21.820388 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 27 16:20:21.820400 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 27 16:20:21.820418 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 27 16:20:21.820431 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 27 16:20:21.820443 kernel: SELinux: policy capability userspace_initial_context=0 Oct 27 16:20:21.820455 kernel: audit: type=1403 audit(1761582020.816:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 27 16:20:21.820472 systemd[1]: Successfully loaded SELinux policy in 70.282ms. Oct 27 16:20:21.820498 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.006ms. Oct 27 16:20:21.820512 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 27 16:20:21.820526 systemd[1]: Detected virtualization kvm. Oct 27 16:20:21.820539 systemd[1]: Detected architecture x86-64. Oct 27 16:20:21.820552 systemd[1]: Detected first boot. Oct 27 16:20:21.820565 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 27 16:20:21.820580 kernel: Guest personality initialized and is inactive Oct 27 16:20:21.820593 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 27 16:20:21.820610 kernel: Initialized host personality Oct 27 16:20:21.820622 zram_generator::config[1170]: No configuration found. Oct 27 16:20:21.820637 kernel: NET: Registered PF_VSOCK protocol family Oct 27 16:20:21.820653 systemd[1]: Populated /etc with preset unit settings. Oct 27 16:20:21.820672 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 27 16:20:21.820685 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 27 16:20:21.820698 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 27 16:20:21.820713 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 27 16:20:21.820726 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 27 16:20:21.820739 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 27 16:20:21.820752 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 27 16:20:21.820769 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 27 16:20:21.820783 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 27 16:20:21.820797 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 27 16:20:21.820810 systemd[1]: Created slice user.slice - User and Session Slice. Oct 27 16:20:21.820823 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 27 16:20:21.820836 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Oct 27 16:20:21.820849 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 27 16:20:21.820865 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 27 16:20:21.820878 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 27 16:20:21.820894 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 27 16:20:21.820909 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 27 16:20:21.820923 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 27 16:20:21.820936 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 27 16:20:21.820955 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 27 16:20:21.820968 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 27 16:20:21.820986 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 27 16:20:21.820999 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 27 16:20:21.821012 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 27 16:20:21.821025 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 27 16:20:21.821038 systemd[1]: Reached target slices.target - Slice Units. Oct 27 16:20:21.821056 systemd[1]: Reached target swap.target - Swaps. Oct 27 16:20:21.821087 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 27 16:20:21.821104 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 27 16:20:21.821120 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 27 16:20:21.821137 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 27 16:20:21.821150 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 27 16:20:21.821163 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 27 16:20:21.821176 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 27 16:20:21.821274 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 27 16:20:21.821289 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 27 16:20:21.821303 systemd[1]: Mounting media.mount - External Media Directory... Oct 27 16:20:21.821316 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 16:20:21.821329 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 27 16:20:21.821342 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 27 16:20:21.821355 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 27 16:20:21.821372 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 27 16:20:21.821386 systemd[1]: Reached target machines.target - Containers. Oct 27 16:20:21.821399 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 27 16:20:21.821413 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Oct 27 16:20:21.821426 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 27 16:20:21.821438 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 27 16:20:21.821454 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 16:20:21.821467 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 27 16:20:21.821480 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 27 16:20:21.821493 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 27 16:20:21.821506 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 16:20:21.821519 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 27 16:20:21.821532 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 27 16:20:21.821552 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 27 16:20:21.821565 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 27 16:20:21.821577 systemd[1]: Stopped systemd-fsck-usr.service. Oct 27 16:20:21.821591 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 16:20:21.821604 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 27 16:20:21.821616 kernel: fuse: init (API version 7.41) Oct 27 16:20:21.821629 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 27 16:20:21.821649 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 27 16:20:21.821663 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 27 16:20:21.821676 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 27 16:20:21.821689 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 27 16:20:21.821707 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 16:20:21.821721 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 27 16:20:21.821734 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 27 16:20:21.821746 systemd[1]: Mounted media.mount - External Media Directory. Oct 27 16:20:21.821759 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 27 16:20:21.821772 kernel: ACPI: bus type drm_connector registered Oct 27 16:20:21.821784 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 27 16:20:21.821802 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 27 16:20:21.821816 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 27 16:20:21.821829 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 27 16:20:21.821842 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 27 16:20:21.821855 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 27 16:20:21.821886 systemd-journald[1252]: Collecting audit messages is disabled. 
Oct 27 16:20:21.821916 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 27 16:20:21.821929 systemd-journald[1252]: Journal started Oct 27 16:20:21.821951 systemd-journald[1252]: Runtime Journal (/run/log/journal/c40e61cdce0e4b2d9359c6427a7797ae) is 6M, max 48.3M, 42.2M free. Oct 27 16:20:21.449250 systemd[1]: Queued start job for default target multi-user.target. Oct 27 16:20:21.469431 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 27 16:20:21.470126 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 27 16:20:21.823843 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 16:20:21.827491 systemd[1]: Started systemd-journald.service - Journal Service. Oct 27 16:20:21.830271 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 27 16:20:21.830509 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 27 16:20:21.832512 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 27 16:20:21.832727 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 16:20:21.834935 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 27 16:20:21.835169 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 27 16:20:21.837187 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 16:20:21.837419 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 16:20:21.839680 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 27 16:20:21.841884 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 27 16:20:21.845041 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 27 16:20:21.847871 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 27 16:20:21.864879 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 27 16:20:21.867384 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 27 16:20:21.869413 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 27 16:20:21.869446 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 27 16:20:21.872045 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 27 16:20:21.874239 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 16:20:21.875881 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 27 16:20:21.878585 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 27 16:20:21.880463 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 27 16:20:21.882466 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 27 16:20:21.884681 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 27 16:20:21.894425 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 27 16:20:21.898327 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
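The modprobe@*.service start/finish pairs above are instances of systemd's modprobe@.service template, which loads the kernel module named after the '@'. The kernel lines "fuse: init (API version 7.41)" and "ACPI: bus type drm_connector registered" confirm two of those loads. The manual equivalent, with the module list taken from the log:

# Same effect as the template-unit instances started above.
for mod in configfs dm_mod drm efi_pstore fuse loop; do
  modprobe "$mod"
  # or, via systemd: systemctl start "modprobe@${mod}.service"
done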
Oct 27 16:20:21.901493 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 27 16:20:21.904313 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 27 16:20:21.905456 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 27 16:20:21.908318 systemd-journald[1252]: Time spent on flushing to /var/log/journal/c40e61cdce0e4b2d9359c6427a7797ae is 24.436ms for 974 entries. Oct 27 16:20:21.908318 systemd-journald[1252]: System Journal (/var/log/journal/c40e61cdce0e4b2d9359c6427a7797ae) is 8M, max 163.5M, 155.5M free. Oct 27 16:20:21.958429 systemd-journald[1252]: Received client request to flush runtime journal. Oct 27 16:20:21.958493 kernel: loop1: detected capacity change from 0 to 110984 Oct 27 16:20:21.911822 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 27 16:20:21.918393 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 27 16:20:21.935009 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. Oct 27 16:20:21.935023 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. Oct 27 16:20:21.938877 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 27 16:20:21.941844 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 27 16:20:21.946522 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 27 16:20:21.960655 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 27 16:20:21.972827 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 27 16:20:21.979238 kernel: loop2: detected capacity change from 0 to 219144 Oct 27 16:20:21.989018 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 27 16:20:21.993435 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 27 16:20:21.996287 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 27 16:20:22.005227 kernel: loop3: detected capacity change from 0 to 118328 Oct 27 16:20:22.012852 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 27 16:20:22.028472 systemd-tmpfiles[1306]: ACLs are not supported, ignoring. Oct 27 16:20:22.028500 systemd-tmpfiles[1306]: ACLs are not supported, ignoring. Oct 27 16:20:22.034302 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 27 16:20:22.037246 kernel: loop4: detected capacity change from 0 to 110984 Oct 27 16:20:22.049226 kernel: loop5: detected capacity change from 0 to 219144 Oct 27 16:20:22.058233 kernel: loop6: detected capacity change from 0 to 118328 Oct 27 16:20:22.065706 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 27 16:20:22.071278 (sd-merge)[1311]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Oct 27 16:20:22.075236 (sd-merge)[1311]: Merged extensions into '/usr'. Oct 27 16:20:22.079956 systemd[1]: Reload requested from client PID 1287 ('systemd-sysext') (unit systemd-sysext.service)... Oct 27 16:20:22.079977 systemd[1]: Reloading... Oct 27 16:20:22.147264 zram_generator::config[1344]: No configuration found. Oct 27 16:20:22.154943 systemd-resolved[1305]: Positive Trust Anchors: Oct 27 16:20:22.155405 systemd-resolved[1305]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 27 16:20:22.155455 systemd-resolved[1305]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 27 16:20:22.155528 systemd-resolved[1305]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 27 16:20:22.159931 systemd-resolved[1305]: Defaulting to hostname 'linux'. Oct 27 16:20:22.348078 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 27 16:20:22.348294 systemd[1]: Reloading finished in 267 ms. Oct 27 16:20:22.372425 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 27 16:20:22.376538 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 27 16:20:22.379810 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 27 16:20:22.395648 systemd[1]: Starting ensure-sysext.service... Oct 27 16:20:22.398311 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 27 16:20:22.416647 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 27 16:20:22.416686 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 27 16:20:22.416991 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 27 16:20:22.417310 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 27 16:20:22.418289 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 27 16:20:22.418562 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Oct 27 16:20:22.418639 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Oct 27 16:20:22.424460 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot. Oct 27 16:20:22.424472 systemd-tmpfiles[1382]: Skipping /boot Oct 27 16:20:22.429569 systemd[1]: Reload requested from client PID 1381 ('systemctl') (unit ensure-sysext.service)... Oct 27 16:20:22.429758 systemd[1]: Reloading... Oct 27 16:20:22.435513 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot. Oct 27 16:20:22.435595 systemd-tmpfiles[1382]: Skipping /boot Oct 27 16:20:22.511241 zram_generator::config[1415]: No configuration found. Oct 27 16:20:22.699918 systemd[1]: Reloading finished in 269 ms. Oct 27 16:20:22.721153 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 27 16:20:22.748858 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 27 16:20:22.764804 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 27 16:20:22.767938 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 27 16:20:22.790245 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
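The (sd-merge) messages earlier in this span show systemd-sysext overlaying the three extension images onto /usr, including the kubernetes.raw that Ignition symlinked into /etc/extensions during the files stage. The merge state can be inspected or redone at any time:

# Inspect or repeat the merge performed by systemd-sysext.service.
systemd-sysext list     # containerd-flatcar, docker-flatcar and kubernetes images
systemd-sysext status   # which hierarchies currently have extensions merged
systemd-sysext refresh  # unmerge and re-merge after changing /etc/extensions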
Oct 27 16:20:22.793152 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 27 16:20:22.795378 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 27 16:20:22.798860 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 27 16:20:22.802511 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 27 16:20:22.806747 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 27 16:20:22.812528 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 27 16:20:22.815891 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 27 16:20:22.824878 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 16:20:22.825701 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 16:20:22.828804 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 16:20:22.835507 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 27 16:20:22.841506 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 16:20:22.843348 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 16:20:22.843448 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 16:20:22.843550 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 16:20:22.845671 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 27 16:20:22.845898 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 16:20:22.848747 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 16:20:22.849305 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 16:20:22.851183 systemd-udevd[1458]: Using default interface naming scheme 'v257'. Oct 27 16:20:22.859939 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 16:20:22.860164 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 16:20:22.864293 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 16:20:22.867622 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 16:20:22.869384 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 16:20:22.869605 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 16:20:22.869808 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Oct 27 16:20:22.875403 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 27 16:20:22.878668 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 27 16:20:22.878950 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 16:20:22.881495 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 27 16:20:22.881717 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 16:20:22.884126 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 16:20:22.884377 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 16:20:22.889831 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 27 16:20:22.897527 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 16:20:22.897768 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 16:20:22.899120 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 16:20:22.901892 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 27 16:20:22.917637 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 27 16:20:22.920984 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 16:20:22.922750 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 16:20:22.922875 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 16:20:22.923077 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 16:20:22.924717 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 27 16:20:22.924957 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 16:20:22.929904 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 27 16:20:22.930404 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 27 16:20:22.933444 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 27 16:20:22.933692 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 16:20:22.936745 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 16:20:22.937147 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 16:20:22.942902 systemd[1]: Finished ensure-sysext.service. Oct 27 16:20:22.947920 augenrules[1498]: No rules Oct 27 16:20:22.950070 systemd[1]: audit-rules.service: Deactivated successfully. Oct 27 16:20:22.950468 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 27 16:20:22.953279 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 27 16:20:22.953382 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 27 16:20:22.956435 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Oct 27 16:20:22.965329 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 27 16:20:22.971865 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 27 16:20:23.005271 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 27 16:20:23.015462 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 27 16:20:23.033242 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 27 16:20:23.070152 systemd-networkd[1519]: lo: Link UP Oct 27 16:20:23.071048 systemd-networkd[1519]: lo: Gained carrier Oct 27 16:20:23.079085 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 27 16:20:23.083130 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 27 16:20:23.085176 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 27 16:20:23.087390 systemd[1]: Reached target network.target - Network. Oct 27 16:20:23.095982 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 27 16:20:23.099468 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 27 16:20:23.112510 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 27 16:20:23.114302 systemd[1]: Reached target time-set.target - System Time Set. Oct 27 16:20:23.120290 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 27 16:20:23.121218 kernel: mousedev: PS/2 mouse device common for all mice Oct 27 16:20:23.128362 systemd-networkd[1519]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 16:20:23.128373 systemd-networkd[1519]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 27 16:20:23.129116 systemd-networkd[1519]: eth0: Link UP Oct 27 16:20:23.129613 systemd-networkd[1519]: eth0: Gained carrier Oct 27 16:20:23.129634 systemd-networkd[1519]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 16:20:23.142766 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 27 16:20:23.145362 systemd-networkd[1519]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 27 16:20:23.146790 systemd-timesyncd[1508]: Network configuration changed, trying to establish connection. Oct 27 16:20:23.785825 systemd-timesyncd[1508]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 27 16:20:23.785877 systemd-timesyncd[1508]: Initial clock synchronization to Mon 2025-10-27 16:20:23.785745 UTC. Oct 27 16:20:23.785941 systemd-resolved[1305]: Clock change detected. Flushing caches. 
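eth0 is picked up by Flatcar's catch-all zz-default.network, which essentially just enables DHCP; the DHCPv4 lease 10.0.0.138/16 via 10.0.0.1 follows from that, and systemd-timesyncd then reaches the same gateway for NTP. A rough sketch of such a fallback .network file (the shipped file may carry more options) and of verifying the lease:

# Approximate shape of a catch-all DHCP .network file like the one matched above.
cat <<'EOF'
[Match]
Name=*

[Network]
DHCP=yes
EOF

# Confirm the address and gateway reported in the log.
networkctl status eth0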
Oct 27 16:20:23.791227 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 27 16:20:23.808688 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 27 16:20:23.809338 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 27 16:20:23.812271 kernel: ACPI: button: Power Button [PWRF] Oct 27 16:20:23.935910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 16:20:23.980111 kernel: kvm_amd: TSC scaling supported Oct 27 16:20:23.980177 kernel: kvm_amd: Nested Virtualization enabled Oct 27 16:20:23.980193 kernel: kvm_amd: Nested Paging enabled Oct 27 16:20:23.980205 kernel: kvm_amd: LBR virtualization supported Oct 27 16:20:23.981882 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 27 16:20:23.981925 kernel: kvm_amd: Virtual GIF supported Oct 27 16:20:24.038208 kernel: EDAC MC: Ver: 3.0.0 Oct 27 16:20:24.059934 ldconfig[1455]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 27 16:20:24.067031 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 27 16:20:24.124394 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 27 16:20:24.126683 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 16:20:24.152792 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 27 16:20:24.154820 systemd[1]: Reached target sysinit.target - System Initialization. Oct 27 16:20:24.156647 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 27 16:20:24.158665 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 27 16:20:24.160744 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 27 16:20:24.162765 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 27 16:20:24.164629 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 27 16:20:24.166659 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 27 16:20:24.168664 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 27 16:20:24.168695 systemd[1]: Reached target paths.target - Path Units. Oct 27 16:20:24.170149 systemd[1]: Reached target timers.target - Timer Units. Oct 27 16:20:24.172458 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 27 16:20:24.175681 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 27 16:20:24.179520 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 27 16:20:24.181703 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 27 16:20:24.183757 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 27 16:20:24.188318 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 27 16:20:24.190325 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 27 16:20:24.192822 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 27 16:20:24.195287 systemd[1]: Reached target sockets.target - Socket Units. Oct 27 16:20:24.196851 systemd[1]: Reached target basic.target - Basic System. 
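At this point the path, timer, and socket units listed above are active and the system has reached basic.target; socket-activated services such as docker and sshd only start on their first connection. The current set can be inspected directly:

# Inspect the units that were just reached or started.
systemctl list-timers --all
systemctl list-sockets
systemctl status basic.target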
Oct 27 16:20:24.198557 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 27 16:20:24.198587 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 27 16:20:24.199684 systemd[1]: Starting containerd.service - containerd container runtime... Oct 27 16:20:24.202450 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 27 16:20:24.204936 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 27 16:20:24.212548 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 27 16:20:24.215883 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 27 16:20:24.216533 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 27 16:20:24.218294 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 27 16:20:24.222080 jq[1582]: false Oct 27 16:20:24.223008 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 27 16:20:24.226831 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 27 16:20:24.229532 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 27 16:20:24.230894 oslogin_cache_refresh[1584]: Refreshing passwd entry cache Oct 27 16:20:24.232444 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Refreshing passwd entry cache Oct 27 16:20:24.233281 extend-filesystems[1583]: Found /dev/vda6 Oct 27 16:20:24.238789 extend-filesystems[1583]: Found /dev/vda9 Oct 27 16:20:24.238767 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 27 16:20:24.241503 extend-filesystems[1583]: Checking size of /dev/vda9 Oct 27 16:20:24.243234 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Failure getting users, quitting Oct 27 16:20:24.243228 oslogin_cache_refresh[1584]: Failure getting users, quitting Oct 27 16:20:24.243441 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 27 16:20:24.243441 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Refreshing group entry cache Oct 27 16:20:24.243253 oslogin_cache_refresh[1584]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 27 16:20:24.243306 oslogin_cache_refresh[1584]: Refreshing group entry cache Oct 27 16:20:24.245402 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 27 16:20:24.247132 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 27 16:20:24.247704 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 27 16:20:24.248689 systemd[1]: Starting update-engine.service - Update Engine... Oct 27 16:20:24.251609 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Failure getting groups, quitting Oct 27 16:20:24.251609 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Oct 27 16:20:24.251579 oslogin_cache_refresh[1584]: Failure getting groups, quitting Oct 27 16:20:24.251590 oslogin_cache_refresh[1584]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 27 16:20:24.256077 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 27 16:20:24.261431 extend-filesystems[1583]: Resized partition /dev/vda9 Oct 27 16:20:24.262662 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 27 16:20:24.265403 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 27 16:20:24.267337 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 27 16:20:24.267682 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 27 16:20:24.267926 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 27 16:20:24.279859 update_engine[1598]: I20251027 16:20:24.279774 1598 main.cc:92] Flatcar Update Engine starting Oct 27 16:20:24.410071 extend-filesystems[1608]: resize2fs 1.47.3 (8-Jul-2025) Oct 27 16:20:24.412783 systemd[1]: motdgen.service: Deactivated successfully. Oct 27 16:20:24.415556 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 27 16:20:24.422130 jq[1604]: true Oct 27 16:20:24.437240 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 27 16:20:24.425815 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 27 16:20:24.426091 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 27 16:20:24.448199 jq[1620]: true Oct 27 16:20:24.472175 tar[1617]: linux-amd64/LICENSE Oct 27 16:20:24.479692 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 27 16:20:24.512732 systemd-logind[1595]: Watching system buttons on /dev/input/event2 (Power Button) Oct 27 16:20:24.512766 systemd-logind[1595]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 27 16:20:24.513141 systemd-logind[1595]: New seat seat0. Oct 27 16:20:24.513343 extend-filesystems[1608]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 27 16:20:24.513343 extend-filesystems[1608]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 27 16:20:24.513343 extend-filesystems[1608]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 27 16:20:24.527333 extend-filesystems[1583]: Resized filesystem in /dev/vda9 Oct 27 16:20:24.529067 tar[1617]: linux-amd64/helm Oct 27 16:20:24.514918 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 27 16:20:24.515384 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 27 16:20:24.522646 systemd[1]: Started systemd-logind.service - User Login Management. Oct 27 16:20:24.532346 dbus-daemon[1580]: [system] SELinux support is enabled Oct 27 16:20:24.532601 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 27 16:20:24.540106 bash[1647]: Updated "/home/core/.ssh/authorized_keys" Oct 27 16:20:24.540318 update_engine[1598]: I20251027 16:20:24.539934 1598 update_check_scheduler.cc:74] Next update check in 7m11s Oct 27 16:20:24.544248 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 27 16:20:24.549426 systemd[1]: Started update-engine.service - Update Engine. 
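extend-filesystems grew the root ext4 on /dev/vda9 online, from 456704 to 1784827 4 KiB blocks (roughly 1.7 GiB to 6.8 GiB) while it stayed mounted at /. A sketch of the equivalent manual steps on a similar layout, assuming the disk is /dev/vda and the root partition is number 9 as in the log (growpart comes from cloud-utils and may not be present on every image):

    growpart /dev/vda 9     # enlarge the last partition to fill the disk (cloud-utils)
    resize2fs /dev/vda9     # ext4 supports growing online while mounted at /
    df -h /                 # confirm the new size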
Oct 27 16:20:24.550204 dbus-daemon[1580]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 27 16:20:24.552994 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 27 16:20:24.553078 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 27 16:20:24.553102 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 27 16:20:24.556189 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 27 16:20:24.556348 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 27 16:20:24.563300 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 27 16:20:24.710974 sshd_keygen[1616]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 27 16:20:24.774129 locksmithd[1651]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 27 16:20:24.794964 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 27 16:20:24.799645 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 27 16:20:24.829931 systemd[1]: issuegen.service: Deactivated successfully. Oct 27 16:20:24.830229 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 27 16:20:24.835527 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 27 16:20:24.865733 systemd-networkd[1519]: eth0: Gained IPv6LL Oct 27 16:20:24.904626 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 27 16:20:24.907671 systemd[1]: Reached target network-online.target - Network is Online. Oct 27 16:20:24.912360 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 27 16:20:24.918342 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 16:20:24.926378 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 27 16:20:24.929008 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 27 16:20:24.940531 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 27 16:20:24.944270 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 27 16:20:24.946485 systemd[1]: Reached target getty.target - Login Prompts. Oct 27 16:20:24.958041 containerd[1621]: time="2025-10-27T16:20:24Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 27 16:20:24.961652 containerd[1621]: time="2025-10-27T16:20:24.961593087Z" level=info msg="starting containerd" revision=cb1076646aa3740577fafbf3d914198b7fe8e3f7 version=v2.1.4 Oct 27 16:20:24.965473 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 27 16:20:24.965804 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 27 16:20:24.968889 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 27 16:20:24.972722 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
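containerd started against an older version-2 configuration and warned about an unknown `subreaper` key in /usr/share/containerd/config.toml; the migration warning that follows suggests `containerd config migrate`. A hedged sketch of regenerating the configuration offline, assuming the migrated copy is reviewed before it replaces anything (the output path is illustrative):

    containerd config migrate > /tmp/config.toml.migrated   # print the config rewritten for the current schema
    containerd config dump | head -n 40                     # show the effective merged configuration in use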
Oct 27 16:20:25.000991 containerd[1621]: time="2025-10-27T16:20:25.000565003Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="25.327µs" Oct 27 16:20:25.000991 containerd[1621]: time="2025-10-27T16:20:25.000616029Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 27 16:20:25.000991 containerd[1621]: time="2025-10-27T16:20:25.000679638Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 27 16:20:25.000991 containerd[1621]: time="2025-10-27T16:20:25.000691060Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 27 16:20:25.000991 containerd[1621]: time="2025-10-27T16:20:25.000903949Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 27 16:20:25.000991 containerd[1621]: time="2025-10-27T16:20:25.000918767Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 27 16:20:25.000991 containerd[1621]: time="2025-10-27T16:20:25.000997054Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 27 16:20:25.000991 containerd[1621]: time="2025-10-27T16:20:25.001008645Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 27 16:20:25.001353 containerd[1621]: time="2025-10-27T16:20:25.001323285Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 27 16:20:25.001353 containerd[1621]: time="2025-10-27T16:20:25.001347140Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 27 16:20:25.001404 containerd[1621]: time="2025-10-27T16:20:25.001358221Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 27 16:20:25.001404 containerd[1621]: time="2025-10-27T16:20:25.001366917Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Oct 27 16:20:25.001581 containerd[1621]: time="2025-10-27T16:20:25.001555531Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Oct 27 16:20:25.001581 containerd[1621]: time="2025-10-27T16:20:25.001575819Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 27 16:20:25.001709 containerd[1621]: time="2025-10-27T16:20:25.001683531Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 27 16:20:25.002014 containerd[1621]: time="2025-10-27T16:20:25.001986159Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 27 16:20:25.002045 containerd[1621]: time="2025-10-27T16:20:25.002033107Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Oct 27 16:20:25.002066 containerd[1621]: time="2025-10-27T16:20:25.002044698Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 27 16:20:25.003194 containerd[1621]: time="2025-10-27T16:20:25.002096606Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 27 16:20:25.003194 containerd[1621]: time="2025-10-27T16:20:25.002405375Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 27 16:20:25.003194 containerd[1621]: time="2025-10-27T16:20:25.002534968Z" level=info msg="metadata content store policy set" policy=shared Oct 27 16:20:25.008365 containerd[1621]: time="2025-10-27T16:20:25.008327020Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 27 16:20:25.008406 containerd[1621]: time="2025-10-27T16:20:25.008381723Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Oct 27 16:20:25.008519 containerd[1621]: time="2025-10-27T16:20:25.008485557Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Oct 27 16:20:25.008519 containerd[1621]: time="2025-10-27T16:20:25.008503020Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 27 16:20:25.008519 containerd[1621]: time="2025-10-27T16:20:25.008516575Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 27 16:20:25.008578 containerd[1621]: time="2025-10-27T16:20:25.008528448Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 27 16:20:25.008578 containerd[1621]: time="2025-10-27T16:20:25.008540220Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 27 16:20:25.008578 containerd[1621]: time="2025-10-27T16:20:25.008550078Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 27 16:20:25.008578 containerd[1621]: time="2025-10-27T16:20:25.008561389Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 27 16:20:25.008578 containerd[1621]: time="2025-10-27T16:20:25.008572550Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 27 16:20:25.008680 containerd[1621]: time="2025-10-27T16:20:25.008583381Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 27 16:20:25.008680 containerd[1621]: time="2025-10-27T16:20:25.008634446Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 27 16:20:25.008680 containerd[1621]: time="2025-10-27T16:20:25.008644495Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 27 16:20:25.008680 containerd[1621]: time="2025-10-27T16:20:25.008657951Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 27 16:20:25.008994 containerd[1621]: time="2025-10-27T16:20:25.008957252Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 27 16:20:25.008994 
containerd[1621]: time="2025-10-27T16:20:25.008986316Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 27 16:20:25.009040 containerd[1621]: time="2025-10-27T16:20:25.009001094Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 27 16:20:25.009040 containerd[1621]: time="2025-10-27T16:20:25.009012195Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 27 16:20:25.009040 containerd[1621]: time="2025-10-27T16:20:25.009023185Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 27 16:20:25.009040 containerd[1621]: time="2025-10-27T16:20:25.009034557Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 27 16:20:25.009128 containerd[1621]: time="2025-10-27T16:20:25.009046449Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 27 16:20:25.009128 containerd[1621]: time="2025-10-27T16:20:25.009061487Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 27 16:20:25.009128 containerd[1621]: time="2025-10-27T16:20:25.009085302Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 27 16:20:25.009128 containerd[1621]: time="2025-10-27T16:20:25.009097294Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 27 16:20:25.009128 containerd[1621]: time="2025-10-27T16:20:25.009108095Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 27 16:20:25.009242 containerd[1621]: time="2025-10-27T16:20:25.009139554Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 27 16:20:25.009242 containerd[1621]: time="2025-10-27T16:20:25.009230384Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 27 16:20:25.009282 containerd[1621]: time="2025-10-27T16:20:25.009247526Z" level=info msg="Start snapshots syncer" Oct 27 16:20:25.009335 containerd[1621]: time="2025-10-27T16:20:25.009310043Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 27 16:20:25.009828 containerd[1621]: time="2025-10-27T16:20:25.009762612Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 27 16:20:25.009944 containerd[1621]: time="2025-10-27T16:20:25.009839406Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 27 16:20:25.010034 containerd[1621]: time="2025-10-27T16:20:25.010007291Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 27 16:20:25.010214 containerd[1621]: time="2025-10-27T16:20:25.010188351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 27 16:20:25.010214 containerd[1621]: time="2025-10-27T16:20:25.010212997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 27 16:20:25.010274 containerd[1621]: time="2025-10-27T16:20:25.010223777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 27 16:20:25.010274 containerd[1621]: time="2025-10-27T16:20:25.010233686Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 27 16:20:25.010274 containerd[1621]: time="2025-10-27T16:20:25.010246179Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 27 16:20:25.010274 containerd[1621]: time="2025-10-27T16:20:25.010257641Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 27 16:20:25.010274 containerd[1621]: time="2025-10-27T16:20:25.010268130Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 27 16:20:25.010372 containerd[1621]: time="2025-10-27T16:20:25.010281235Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 27 
16:20:25.010372 containerd[1621]: time="2025-10-27T16:20:25.010306051Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 27 16:20:25.010372 containerd[1621]: time="2025-10-27T16:20:25.010349292Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 27 16:20:25.010372 containerd[1621]: time="2025-10-27T16:20:25.010361636Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 27 16:20:25.010372 containerd[1621]: time="2025-10-27T16:20:25.010369540Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 27 16:20:25.010469 containerd[1621]: time="2025-10-27T16:20:25.010379639Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 27 16:20:25.010498 containerd[1621]: time="2025-10-27T16:20:25.010388346Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 27 16:20:25.010498 containerd[1621]: time="2025-10-27T16:20:25.010489636Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 27 16:20:25.010535 containerd[1621]: time="2025-10-27T16:20:25.010500406Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 27 16:20:25.010535 containerd[1621]: time="2025-10-27T16:20:25.010528318Z" level=info msg="runtime interface created" Oct 27 16:20:25.010535 containerd[1621]: time="2025-10-27T16:20:25.010533618Z" level=info msg="created NRI interface" Oct 27 16:20:25.010590 containerd[1621]: time="2025-10-27T16:20:25.010542174Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 27 16:20:25.010590 containerd[1621]: time="2025-10-27T16:20:25.010552784Z" level=info msg="Connect containerd service" Oct 27 16:20:25.010632 containerd[1621]: time="2025-10-27T16:20:25.010589633Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 27 16:20:25.011676 containerd[1621]: time="2025-10-27T16:20:25.011634412Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 27 16:20:25.258124 tar[1617]: linux-amd64/README.md Oct 27 16:20:25.284217 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
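The "failed to load cni during init" error above is expected at this stage: nothing has populated /etc/cni/net.d yet, and the CRI plugin rechecks it later (the cni conf syncer start appears below). For reference, a minimal illustrative conflist using the standard bridge and loopback plugins from /opt/cni/bin (the file name, network name, and subnet are made up here; real clusters normally get this file from their network add-on):

    sudo tee /etc/cni/net.d/10-bridge.conflist <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "ranges": [[{ "subnet": "10.85.0.0/16" }]] }
        },
        { "type": "loopback" }
      ]
    }
    EOF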
Oct 27 16:20:25.342365 containerd[1621]: time="2025-10-27T16:20:25.342307222Z" level=info msg="Start subscribing containerd event" Oct 27 16:20:25.344375 containerd[1621]: time="2025-10-27T16:20:25.343283783Z" level=info msg="Start recovering state" Oct 27 16:20:25.344375 containerd[1621]: time="2025-10-27T16:20:25.343510489Z" level=info msg="Start event monitor" Oct 27 16:20:25.344375 containerd[1621]: time="2025-10-27T16:20:25.343543931Z" level=info msg="Start cni network conf syncer for default" Oct 27 16:20:25.344375 containerd[1621]: time="2025-10-27T16:20:25.343562686Z" level=info msg="Start streaming server" Oct 27 16:20:25.344375 containerd[1621]: time="2025-10-27T16:20:25.343577544Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 27 16:20:25.344375 containerd[1621]: time="2025-10-27T16:20:25.343590899Z" level=info msg="runtime interface starting up..." Oct 27 16:20:25.344375 containerd[1621]: time="2025-10-27T16:20:25.343600457Z" level=info msg="starting plugins..." Oct 27 16:20:25.344375 containerd[1621]: time="2025-10-27T16:20:25.343621637Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 27 16:20:25.345425 containerd[1621]: time="2025-10-27T16:20:25.345390404Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 27 16:20:25.345710 containerd[1621]: time="2025-10-27T16:20:25.345691639Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 27 16:20:25.345856 containerd[1621]: time="2025-10-27T16:20:25.345840799Z" level=info msg="containerd successfully booted in 0.388420s" Oct 27 16:20:25.346062 systemd[1]: Started containerd.service - containerd container runtime. Oct 27 16:20:26.608404 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 16:20:26.611275 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 27 16:20:26.613540 systemd[1]: Startup finished in 2.634s (kernel) + 6.820s (initrd) + 5.227s (userspace) = 14.682s. Oct 27 16:20:26.643860 (kubelet)[1720]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 16:20:26.812140 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 27 16:20:26.813358 systemd[1]: Started sshd@0-10.0.0.138:22-10.0.0.1:53494.service - OpenSSH per-connection server daemon (10.0.0.1:53494). Oct 27 16:20:26.968827 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 53494 ssh2: RSA SHA256:uZ2SsweoLt391JAi4nRlA6QBN0bR66RV/4rcYh4vsoI Oct 27 16:20:26.971868 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 16:20:26.981994 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 27 16:20:26.983361 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 27 16:20:26.991527 systemd-logind[1595]: New session 1 of user core. Oct 27 16:20:27.010770 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 27 16:20:27.015003 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 27 16:20:27.029707 (systemd)[1738]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 27 16:20:27.032376 systemd-logind[1595]: New session c1 of user core. Oct 27 16:20:27.292484 systemd[1738]: Queued start job for default target default.target. 
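The boot summary above (2.634 s kernel + 6.820 s initrd + 5.227 s userspace, about 14.68 s total) can be broken down further with systemd's own tooling; these are standard systemd-analyze invocations, not taken from this log:

    systemd-analyze                  # the same totals the log prints at boot
    systemd-analyze blame | head     # slowest units first
    systemd-analyze critical-chain   # the dependency chain that gated startup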
Oct 27 16:20:27.305267 kubelet[1720]: E1027 16:20:27.305212 1720 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 16:20:27.308531 systemd[1738]: Created slice app.slice - User Application Slice. Oct 27 16:20:27.308553 systemd[1738]: Reached target paths.target - Paths. Oct 27 16:20:27.308592 systemd[1738]: Reached target timers.target - Timers. Oct 27 16:20:27.309379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 16:20:27.309751 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 16:20:27.310250 systemd[1]: kubelet.service: Consumed 2.129s CPU time, 257.4M memory peak. Oct 27 16:20:27.310258 systemd[1738]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 27 16:20:27.323228 systemd[1738]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 27 16:20:27.323366 systemd[1738]: Reached target sockets.target - Sockets. Oct 27 16:20:27.323421 systemd[1738]: Reached target basic.target - Basic System. Oct 27 16:20:27.323480 systemd[1738]: Reached target default.target - Main User Target. Oct 27 16:20:27.323532 systemd[1738]: Startup finished in 203ms. Oct 27 16:20:27.323917 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 27 16:20:27.340316 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 27 16:20:27.407610 systemd[1]: Started sshd@1-10.0.0.138:22-10.0.0.1:53498.service - OpenSSH per-connection server daemon (10.0.0.1:53498). Oct 27 16:20:27.478373 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 53498 ssh2: RSA SHA256:uZ2SsweoLt391JAi4nRlA6QBN0bR66RV/4rcYh4vsoI Oct 27 16:20:27.480094 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 16:20:27.485662 systemd-logind[1595]: New session 2 of user core. Oct 27 16:20:27.500458 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 27 16:20:27.555749 sshd[1753]: Connection closed by 10.0.0.1 port 53498 Oct 27 16:20:27.556279 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Oct 27 16:20:27.571687 systemd[1]: sshd@1-10.0.0.138:22-10.0.0.1:53498.service: Deactivated successfully. Oct 27 16:20:27.574434 systemd[1]: session-2.scope: Deactivated successfully. Oct 27 16:20:27.575181 systemd-logind[1595]: Session 2 logged out. Waiting for processes to exit. Oct 27 16:20:27.578133 systemd[1]: Started sshd@2-10.0.0.138:22-10.0.0.1:53504.service - OpenSSH per-connection server daemon (10.0.0.1:53504). Oct 27 16:20:27.578680 systemd-logind[1595]: Removed session 2. Oct 27 16:20:27.641244 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 53504 ssh2: RSA SHA256:uZ2SsweoLt391JAi4nRlA6QBN0bR66RV/4rcYh4vsoI Oct 27 16:20:27.642613 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 16:20:27.646928 systemd-logind[1595]: New session 3 of user core. Oct 27 16:20:27.654314 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 27 16:20:27.705903 sshd[1763]: Connection closed by 10.0.0.1 port 53504 Oct 27 16:20:27.706398 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Oct 27 16:20:27.727932 systemd[1]: sshd@2-10.0.0.138:22-10.0.0.1:53504.service: Deactivated successfully. 
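The kubelet exit above ("/var/lib/kubelet/config.yaml: no such file or directory") is the normal state of a node before `kubeadm init` or `kubeadm join` has written that file; systemd keeps retrying the unit, which is why the same error recurs later in the log. A short way to confirm the state, using standard systemctl options (the kubeadm remark is general background, not something this log shows completing):

    ls -l /var/lib/kubelet/config.yaml 2>/dev/null || echo "not generated yet"
    systemctl show kubelet -p Restart -p RestartUSec   # the restart policy behind the retries
    # On a kubeadm-managed node the file appears once `kubeadm init` (control plane)
    # or `kubeadm join` (worker) has run.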
Oct 27 16:20:27.729852 systemd[1]: session-3.scope: Deactivated successfully. Oct 27 16:20:27.730828 systemd-logind[1595]: Session 3 logged out. Waiting for processes to exit. Oct 27 16:20:27.733733 systemd[1]: Started sshd@3-10.0.0.138:22-10.0.0.1:53512.service - OpenSSH per-connection server daemon (10.0.0.1:53512). Oct 27 16:20:27.734325 systemd-logind[1595]: Removed session 3. Oct 27 16:20:27.800260 sshd[1769]: Accepted publickey for core from 10.0.0.1 port 53512 ssh2: RSA SHA256:uZ2SsweoLt391JAi4nRlA6QBN0bR66RV/4rcYh4vsoI Oct 27 16:20:27.801474 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 16:20:27.805531 systemd-logind[1595]: New session 4 of user core. Oct 27 16:20:27.822283 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 27 16:20:27.878506 sshd[1772]: Connection closed by 10.0.0.1 port 53512 Oct 27 16:20:27.878785 sshd-session[1769]: pam_unix(sshd:session): session closed for user core Oct 27 16:20:27.891748 systemd[1]: sshd@3-10.0.0.138:22-10.0.0.1:53512.service: Deactivated successfully. Oct 27 16:20:27.893606 systemd[1]: session-4.scope: Deactivated successfully. Oct 27 16:20:27.894340 systemd-logind[1595]: Session 4 logged out. Waiting for processes to exit. Oct 27 16:20:27.896996 systemd[1]: Started sshd@4-10.0.0.138:22-10.0.0.1:53522.service - OpenSSH per-connection server daemon (10.0.0.1:53522). Oct 27 16:20:27.897614 systemd-logind[1595]: Removed session 4. Oct 27 16:20:27.953357 sshd[1778]: Accepted publickey for core from 10.0.0.1 port 53522 ssh2: RSA SHA256:uZ2SsweoLt391JAi4nRlA6QBN0bR66RV/4rcYh4vsoI Oct 27 16:20:27.954601 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 16:20:27.958880 systemd-logind[1595]: New session 5 of user core. Oct 27 16:20:27.969292 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 27 16:20:28.032574 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 27 16:20:28.032913 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 16:20:28.048701 sudo[1783]: pam_unix(sudo:session): session closed for user root Oct 27 16:20:28.050453 sshd[1782]: Connection closed by 10.0.0.1 port 53522 Oct 27 16:20:28.050882 sshd-session[1778]: pam_unix(sshd:session): session closed for user core Oct 27 16:20:28.059799 systemd[1]: sshd@4-10.0.0.138:22-10.0.0.1:53522.service: Deactivated successfully. Oct 27 16:20:28.061514 systemd[1]: session-5.scope: Deactivated successfully. Oct 27 16:20:28.062243 systemd-logind[1595]: Session 5 logged out. Waiting for processes to exit. Oct 27 16:20:28.064788 systemd[1]: Started sshd@5-10.0.0.138:22-10.0.0.1:53530.service - OpenSSH per-connection server daemon (10.0.0.1:53530). Oct 27 16:20:28.065455 systemd-logind[1595]: Removed session 5. Oct 27 16:20:28.230497 sshd[1789]: Accepted publickey for core from 10.0.0.1 port 53530 ssh2: RSA SHA256:uZ2SsweoLt391JAi4nRlA6QBN0bR66RV/4rcYh4vsoI Oct 27 16:20:28.231901 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 16:20:28.236270 systemd-logind[1595]: New session 6 of user core. Oct 27 16:20:28.250285 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 27 16:20:28.305839 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 27 16:20:28.306185 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 16:20:28.312463 sudo[1794]: pam_unix(sudo:session): session closed for user root Oct 27 16:20:28.320625 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 27 16:20:28.320942 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 16:20:28.331866 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 27 16:20:28.379186 augenrules[1816]: No rules Oct 27 16:20:28.380805 systemd[1]: audit-rules.service: Deactivated successfully. Oct 27 16:20:28.381103 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 27 16:20:28.382283 sudo[1793]: pam_unix(sudo:session): session closed for user root Oct 27 16:20:28.384093 sshd[1792]: Connection closed by 10.0.0.1 port 53530 Oct 27 16:20:28.384405 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Oct 27 16:20:28.396463 systemd[1]: sshd@5-10.0.0.138:22-10.0.0.1:53530.service: Deactivated successfully. Oct 27 16:20:28.398111 systemd[1]: session-6.scope: Deactivated successfully. Oct 27 16:20:28.398875 systemd-logind[1595]: Session 6 logged out. Waiting for processes to exit. Oct 27 16:20:28.401494 systemd[1]: Started sshd@6-10.0.0.138:22-10.0.0.1:53540.service - OpenSSH per-connection server daemon (10.0.0.1:53540). Oct 27 16:20:28.402117 systemd-logind[1595]: Removed session 6. Oct 27 16:20:28.455846 sshd[1825]: Accepted publickey for core from 10.0.0.1 port 53540 ssh2: RSA SHA256:uZ2SsweoLt391JAi4nRlA6QBN0bR66RV/4rcYh4vsoI Oct 27 16:20:28.457099 sshd-session[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 16:20:28.461767 systemd-logind[1595]: New session 7 of user core. Oct 27 16:20:28.475276 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 27 16:20:28.529656 sudo[1829]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 27 16:20:28.529990 sudo[1829]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 16:20:29.245367 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 27 16:20:29.271476 (dockerd)[1850]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 27 16:20:29.786214 dockerd[1850]: time="2025-10-27T16:20:29.786120799Z" level=info msg="Starting up" Oct 27 16:20:29.786947 dockerd[1850]: time="2025-10-27T16:20:29.786924817Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 27 16:20:29.812542 dockerd[1850]: time="2025-10-27T16:20:29.812484981Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 27 16:20:30.352943 dockerd[1850]: time="2025-10-27T16:20:30.352867160Z" level=info msg="Loading containers: start." Oct 27 16:20:30.364196 kernel: Initializing XFRM netlink socket Oct 27 16:20:30.645825 systemd-networkd[1519]: docker0: Link UP Oct 27 16:20:30.652231 dockerd[1850]: time="2025-10-27T16:20:30.652131488Z" level=info msg="Loading containers: done." Oct 27 16:20:30.667487 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2939330524-merged.mount: Deactivated successfully. 
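The sudo sequence above deletes the two rule files from /etc/audit/rules.d and restarts audit-rules, after which augenrules reports "No rules": the generated ruleset is simply empty now. A sketch of inspecting and rebuilding it by hand, assuming the usual augenrules workflow (whether audit-rules.service wraps augenrules on this image is not shown in the log):

    ls /etc/audit/rules.d/      # remaining per-package rule fragments, if any
    augenrules --check          # compare the merged rules against what is currently loaded
    augenrules --load           # regenerate /etc/audit/audit.rules and load it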
Oct 27 16:20:30.671016 dockerd[1850]: time="2025-10-27T16:20:30.670957313Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 27 16:20:30.671101 dockerd[1850]: time="2025-10-27T16:20:30.671086746Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 27 16:20:30.671244 dockerd[1850]: time="2025-10-27T16:20:30.671219855Z" level=info msg="Initializing buildkit" Oct 27 16:20:30.702629 dockerd[1850]: time="2025-10-27T16:20:30.702582802Z" level=info msg="Completed buildkit initialization" Oct 27 16:20:30.709004 dockerd[1850]: time="2025-10-27T16:20:30.708953610Z" level=info msg="Daemon has completed initialization" Oct 27 16:20:30.709122 dockerd[1850]: time="2025-10-27T16:20:30.709061923Z" level=info msg="API listen on /run/docker.sock" Oct 27 16:20:30.709358 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 27 16:20:31.292776 containerd[1621]: time="2025-10-27T16:20:31.292708789Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 27 16:20:31.842308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3756552469.mount: Deactivated successfully. Oct 27 16:20:32.591869 containerd[1621]: time="2025-10-27T16:20:32.591790942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:20:32.592645 containerd[1621]: time="2025-10-27T16:20:32.592464795Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=25393225" Oct 27 16:20:32.593781 containerd[1621]: time="2025-10-27T16:20:32.593719899Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:20:32.596369 containerd[1621]: time="2025-10-27T16:20:32.596320847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:20:32.600581 containerd[1621]: time="2025-10-27T16:20:32.600549367Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.30778331s" Oct 27 16:20:32.600635 containerd[1621]: time="2025-10-27T16:20:32.600581287Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Oct 27 16:20:32.601239 containerd[1621]: time="2025-10-27T16:20:32.601188696Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 27 16:20:34.111151 containerd[1621]: time="2025-10-27T16:20:34.111081494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:20:34.112737 containerd[1621]: time="2025-10-27T16:20:34.112679711Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21151604" Oct 27 16:20:34.113988 containerd[1621]: time="2025-10-27T16:20:34.113926980Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:20:34.117189 containerd[1621]: time="2025-10-27T16:20:34.117145155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:20:34.118109 containerd[1621]: time="2025-10-27T16:20:34.118069038Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.516846088s" Oct 27 16:20:34.118109 containerd[1621]: time="2025-10-27T16:20:34.118104535Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Oct 27 16:20:34.118739 containerd[1621]: time="2025-10-27T16:20:34.118566110Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 27 16:20:35.652431 containerd[1621]: time="2025-10-27T16:20:35.652362075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:20:35.653259 containerd[1621]: time="2025-10-27T16:20:35.653223971Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15716956" Oct 27 16:20:35.654396 containerd[1621]: time="2025-10-27T16:20:35.654358339Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:20:35.657198 containerd[1621]: time="2025-10-27T16:20:35.657134084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:20:35.658311 containerd[1621]: time="2025-10-27T16:20:35.658244887Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.539646667s" Oct 27 16:20:35.658311 containerd[1621]: time="2025-10-27T16:20:35.658285173Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Oct 27 16:20:35.658806 containerd[1621]: time="2025-10-27T16:20:35.658773980Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 27 16:20:37.258944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount880338084.mount: Deactivated successfully. 
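The image pulls above are performed by containerd's CRI plugin in the k8s.io namespace. A hedged sketch of inspecting or repeating one of them by hand, assuming crictl is pointed at /run/containerd/containerd.sock (neither invocation appears in this log):

    crictl images | grep registry.k8s.io                              # what the CRI plugin has pulled
    ctr -n k8s.io images pull registry.k8s.io/kube-scheduler:v1.34.1  # same namespace the CRI plugin uses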
Oct 27 16:20:37.462864 containerd[1621]: time="2025-10-27T16:20:37.462801275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:20:37.463821 containerd[1621]: time="2025-10-27T16:20:37.463783407Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25960977" Oct 27 16:20:37.465224 containerd[1621]: time="2025-10-27T16:20:37.465175237Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:20:37.467035 containerd[1621]: time="2025-10-27T16:20:37.466999518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:20:37.467568 containerd[1621]: time="2025-10-27T16:20:37.467521086Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.808716209s" Oct 27 16:20:37.467568 containerd[1621]: time="2025-10-27T16:20:37.467567243Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Oct 27 16:20:37.468379 containerd[1621]: time="2025-10-27T16:20:37.468338429Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 27 16:20:37.560102 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 27 16:20:37.561960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 16:20:37.834895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 16:20:37.840201 (kubelet)[2153]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 16:20:37.907106 kubelet[2153]: E1027 16:20:37.907035 2153 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 16:20:37.913126 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 16:20:37.913354 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 16:20:37.913786 systemd[1]: kubelet.service: Consumed 296ms CPU time, 110.6M memory peak. Oct 27 16:20:38.361009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount106415570.mount: Deactivated successfully. 
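kubelet has now failed twice with the same missing-config error and systemd reports the restart counter. To follow the loop on such a node (standard systemctl/journalctl usage, not from this log):

    systemctl status kubelet --no-pager              # restart counter and last exit status
    journalctl -u kubelet -b -o short-precise | tail -n 20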
Oct 27 16:20:39.886511 containerd[1621]: time="2025-10-27T16:20:39.886434111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:20:39.887402 containerd[1621]: time="2025-10-27T16:20:39.887318470Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=21693590" Oct 27 16:20:39.888819 containerd[1621]: time="2025-10-27T16:20:39.888775282Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:20:39.894478 containerd[1621]: time="2025-10-27T16:20:39.894426660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:20:39.895469 containerd[1621]: time="2025-10-27T16:20:39.895424401Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.427056848s" Oct 27 16:20:39.895469 containerd[1621]: time="2025-10-27T16:20:39.895463505Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Oct 27 16:20:39.896014 containerd[1621]: time="2025-10-27T16:20:39.895985864Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 27 16:20:40.632615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4116967313.mount: Deactivated successfully. 
Oct 27 16:20:40.638856 containerd[1621]: time="2025-10-27T16:20:40.638793704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:20:40.639719 containerd[1621]: time="2025-10-27T16:20:40.639679976Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Oct 27 16:20:40.641082 containerd[1621]: time="2025-10-27T16:20:40.641014489Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:20:40.643623 containerd[1621]: time="2025-10-27T16:20:40.643582184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:20:40.644292 containerd[1621]: time="2025-10-27T16:20:40.644255046Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 748.243534ms" Oct 27 16:20:40.644292 containerd[1621]: time="2025-10-27T16:20:40.644290563Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Oct 27 16:20:40.644856 containerd[1621]: time="2025-10-27T16:20:40.644814676Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 27 16:20:44.069362 containerd[1621]: time="2025-10-27T16:20:44.069293806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:20:44.070244 containerd[1621]: time="2025-10-27T16:20:44.070138531Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73504897" Oct 27 16:20:44.071519 containerd[1621]: time="2025-10-27T16:20:44.071484304Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:20:44.074858 containerd[1621]: time="2025-10-27T16:20:44.074824879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:20:44.075894 containerd[1621]: time="2025-10-27T16:20:44.075835344Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.430982657s" Oct 27 16:20:44.075894 containerd[1621]: time="2025-10-27T16:20:44.075887432Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Oct 27 16:20:46.691176 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 16:20:46.691433 systemd[1]: kubelet.service: Consumed 296ms CPU time, 110.6M memory peak. 
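The "Reload requested from client PID 2292 ('systemctl') (unit session-7.scope)" line that follows is what a manager reload issued from the SSH session looks like; the typical invocation producing it would be something like the following (assumed, not captured in the log):

    sudo systemctl daemon-reload      # a manager reload like the one logged below
    sudo systemctl restart kubelet    # pick up changed unit files or drop-ins for the unit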
Oct 27 16:20:46.694110 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 16:20:46.721076 systemd[1]: Reload requested from client PID 2292 ('systemctl') (unit session-7.scope)... Oct 27 16:20:46.721095 systemd[1]: Reloading... Oct 27 16:20:46.886190 zram_generator::config[2335]: No configuration found. Oct 27 16:20:47.473356 systemd[1]: Reloading finished in 751 ms. Oct 27 16:20:47.540088 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 27 16:20:47.540208 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 27 16:20:47.540561 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 16:20:47.540614 systemd[1]: kubelet.service: Consumed 163ms CPU time, 98.2M memory peak. Oct 27 16:20:47.542216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 16:20:47.721589 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 16:20:47.726265 (kubelet)[2383]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 27 16:20:47.776763 kubelet[2383]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 27 16:20:47.776763 kubelet[2383]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 27 16:20:47.777214 kubelet[2383]: I1027 16:20:47.776805 2383 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 27 16:20:47.977002 kubelet[2383]: I1027 16:20:47.976848 2383 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 27 16:20:47.977002 kubelet[2383]: I1027 16:20:47.976891 2383 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 27 16:20:47.979289 kubelet[2383]: I1027 16:20:47.979252 2383 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 27 16:20:47.979289 kubelet[2383]: I1027 16:20:47.979276 2383 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 27 16:20:47.979597 kubelet[2383]: I1027 16:20:47.979563 2383 server.go:956] "Client rotation is on, will bootstrap in background" Oct 27 16:20:48.261308 kubelet[2383]: E1027 16:20:48.258891 2383 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 27 16:20:48.261728 kubelet[2383]: I1027 16:20:48.261619 2383 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 27 16:20:48.267203 kubelet[2383]: I1027 16:20:48.267147 2383 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 27 16:20:48.272946 kubelet[2383]: I1027 16:20:48.272911 2383 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 27 16:20:48.273796 kubelet[2383]: I1027 16:20:48.273752 2383 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 27 16:20:48.273976 kubelet[2383]: I1027 16:20:48.273790 2383 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 27 16:20:48.274080 kubelet[2383]: I1027 16:20:48.273991 2383 topology_manager.go:138] "Creating topology manager with none policy" Oct 27 16:20:48.274080 kubelet[2383]: I1027 16:20:48.274000 2383 container_manager_linux.go:306] "Creating device plugin manager" Oct 27 16:20:48.274145 kubelet[2383]: I1027 16:20:48.274129 2383 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 27 16:20:48.278146 kubelet[2383]: I1027 16:20:48.278112 2383 state_mem.go:36] "Initialized new in-memory state store" Oct 27 16:20:48.278407 kubelet[2383]: I1027 16:20:48.278381 2383 kubelet.go:475] "Attempting to sync node with API server" Oct 27 16:20:48.278407 kubelet[2383]: I1027 16:20:48.278399 2383 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 27 16:20:48.278464 kubelet[2383]: I1027 16:20:48.278433 2383 kubelet.go:387] "Adding apiserver pod source" Oct 27 16:20:48.278464 kubelet[2383]: I1027 16:20:48.278457 2383 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 27 16:20:48.278934 kubelet[2383]: E1027 16:20:48.278895 2383 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 27 16:20:48.279270 kubelet[2383]: E1027 16:20:48.279231 2383 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
10.0.0.138:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 27 16:20:48.281855 kubelet[2383]: I1027 16:20:48.281817 2383 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Oct 27 16:20:48.283065 kubelet[2383]: I1027 16:20:48.283004 2383 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 27 16:20:48.283065 kubelet[2383]: I1027 16:20:48.283038 2383 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 27 16:20:48.283252 kubelet[2383]: W1027 16:20:48.283103 2383 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 27 16:20:48.287632 kubelet[2383]: I1027 16:20:48.287581 2383 server.go:1262] "Started kubelet" Oct 27 16:20:48.287774 kubelet[2383]: I1027 16:20:48.287672 2383 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 27 16:20:48.289624 kubelet[2383]: I1027 16:20:48.289190 2383 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 27 16:20:48.293905 kubelet[2383]: I1027 16:20:48.293862 2383 server.go:310] "Adding debug handlers to kubelet server" Oct 27 16:20:48.295761 kubelet[2383]: E1027 16:20:48.294664 2383 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.138:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.138:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1872658a26e24812 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-27 16:20:48.287541266 +0000 UTC m=+0.554126145,LastTimestamp:2025-10-27 16:20:48.287541266 +0000 UTC m=+0.554126145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 27 16:20:48.295962 kubelet[2383]: I1027 16:20:48.295947 2383 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 27 16:20:48.296425 kubelet[2383]: I1027 16:20:48.296406 2383 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 27 16:20:48.297578 kubelet[2383]: E1027 16:20:48.297535 2383 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 16:20:48.297870 kubelet[2383]: I1027 16:20:48.297852 2383 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 27 16:20:48.297999 kubelet[2383]: I1027 16:20:48.297987 2383 reconciler.go:29] "Reconciler: start to sync state" Oct 27 16:20:48.299695 kubelet[2383]: E1027 16:20:48.299643 2383 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 27 16:20:48.299872 kubelet[2383]: E1027 16:20:48.299808 2383 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="200ms" Oct 27 16:20:48.301926 kubelet[2383]: I1027 16:20:48.301729 2383 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 27 16:20:48.301926 kubelet[2383]: I1027 16:20:48.301845 2383 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 27 16:20:48.302010 kubelet[2383]: I1027 16:20:48.301944 2383 factory.go:223] Registration of the containerd container factory successfully Oct 27 16:20:48.302010 kubelet[2383]: I1027 16:20:48.301958 2383 factory.go:223] Registration of the systemd container factory successfully Oct 27 16:20:48.302260 kubelet[2383]: I1027 16:20:48.302224 2383 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 27 16:20:48.302443 kubelet[2383]: E1027 16:20:48.302419 2383 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 27 16:20:48.303460 kubelet[2383]: I1027 16:20:48.303425 2383 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 27 16:20:48.315366 kubelet[2383]: I1027 16:20:48.315321 2383 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 27 16:20:48.316946 kubelet[2383]: I1027 16:20:48.316929 2383 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 27 16:20:48.317013 kubelet[2383]: I1027 16:20:48.316955 2383 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 27 16:20:48.317013 kubelet[2383]: I1027 16:20:48.316990 2383 kubelet.go:2427] "Starting kubelet main sync loop" Oct 27 16:20:48.317085 kubelet[2383]: I1027 16:20:48.317056 2383 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 27 16:20:48.317085 kubelet[2383]: E1027 16:20:48.317052 2383 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 27 16:20:48.317133 kubelet[2383]: I1027 16:20:48.317083 2383 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 27 16:20:48.317133 kubelet[2383]: I1027 16:20:48.317121 2383 state_mem.go:36] "Initialized new in-memory state store" Oct 27 16:20:48.319988 kubelet[2383]: E1027 16:20:48.319862 2383 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 27 16:20:48.321463 kubelet[2383]: I1027 16:20:48.321439 2383 policy_none.go:49] "None policy: Start" Oct 27 16:20:48.321505 kubelet[2383]: I1027 16:20:48.321469 2383 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 27 16:20:48.321505 kubelet[2383]: I1027 16:20:48.321483 2383 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 27 16:20:48.323979 kubelet[2383]: I1027 16:20:48.323937 2383 policy_none.go:47] "Start" Oct 27 16:20:48.328099 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Oct 27 16:20:48.344081 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 27 16:20:48.349796 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 27 16:20:48.366556 kubelet[2383]: E1027 16:20:48.366494 2383 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 27 16:20:48.366785 kubelet[2383]: I1027 16:20:48.366739 2383 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 27 16:20:48.366785 kubelet[2383]: I1027 16:20:48.366758 2383 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 27 16:20:48.367045 kubelet[2383]: I1027 16:20:48.367024 2383 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 27 16:20:48.368369 kubelet[2383]: E1027 16:20:48.368314 2383 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 27 16:20:48.368439 kubelet[2383]: E1027 16:20:48.368376 2383 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 27 16:20:48.468271 kubelet[2383]: I1027 16:20:48.468208 2383 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 16:20:48.468790 kubelet[2383]: E1027 16:20:48.468755 2383 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Oct 27 16:20:48.499138 kubelet[2383]: I1027 16:20:48.499107 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1ea1f1982432d4b60e0b7f95d86608f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e1ea1f1982432d4b60e0b7f95d86608f\") " pod="kube-system/kube-apiserver-localhost" Oct 27 16:20:48.499220 kubelet[2383]: I1027 16:20:48.499147 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1ea1f1982432d4b60e0b7f95d86608f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e1ea1f1982432d4b60e0b7f95d86608f\") " pod="kube-system/kube-apiserver-localhost" Oct 27 16:20:48.499220 kubelet[2383]: I1027 16:20:48.499190 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e1ea1f1982432d4b60e0b7f95d86608f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e1ea1f1982432d4b60e0b7f95d86608f\") " pod="kube-system/kube-apiserver-localhost" Oct 27 16:20:48.500468 kubelet[2383]: E1027 16:20:48.500412 2383 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="400ms" Oct 27 16:20:48.599597 kubelet[2383]: I1027 16:20:48.599330 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 16:20:48.599597 kubelet[2383]: I1027 16:20:48.599415 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 16:20:48.599597 kubelet[2383]: I1027 16:20:48.599443 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 16:20:48.599597 kubelet[2383]: I1027 16:20:48.599582 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 16:20:48.601714 kubelet[2383]: I1027 16:20:48.599612 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 16:20:48.601714 kubelet[2383]: I1027 16:20:48.599636 2383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 27 16:20:48.600358 systemd[1]: Created slice kubepods-burstable-pode1ea1f1982432d4b60e0b7f95d86608f.slice - libcontainer container kubepods-burstable-pode1ea1f1982432d4b60e0b7f95d86608f.slice. Oct 27 16:20:48.609146 kubelet[2383]: E1027 16:20:48.609096 2383 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 16:20:48.611402 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. 
Oct 27 16:20:48.612391 kubelet[2383]: E1027 16:20:48.612359 2383 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:20:48.613245 containerd[1621]: time="2025-10-27T16:20:48.613210703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e1ea1f1982432d4b60e0b7f95d86608f,Namespace:kube-system,Attempt:0,}" Oct 27 16:20:48.619461 kubelet[2383]: E1027 16:20:48.619421 2383 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 16:20:48.622907 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Oct 27 16:20:48.629649 kubelet[2383]: E1027 16:20:48.629610 2383 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 16:20:48.670851 kubelet[2383]: I1027 16:20:48.670803 2383 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 16:20:48.671307 kubelet[2383]: E1027 16:20:48.671237 2383 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Oct 27 16:20:48.902494 kubelet[2383]: E1027 16:20:48.902368 2383 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="800ms" Oct 27 16:20:48.923699 kubelet[2383]: E1027 16:20:48.923623 2383 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:20:48.924409 containerd[1621]: time="2025-10-27T16:20:48.924341654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 27 16:20:48.933429 kubelet[2383]: E1027 16:20:48.933380 2383 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:20:48.933972 containerd[1621]: time="2025-10-27T16:20:48.933820270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Oct 27 16:20:49.073085 kubelet[2383]: I1027 16:20:49.073042 2383 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 16:20:49.073599 kubelet[2383]: E1027 16:20:49.073551 2383 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Oct 27 16:20:49.210218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1643953157.mount: Deactivated successfully. 
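The recurring dns.go "Nameserver limits exceeded" errors mean the host resolver configuration lists more nameservers than the kubelet will pass through to pods; it keeps at most three, and the applied line in this log is "1.1.1.1 1.0.0.1 8.8.8.8". A hypothetical resolver file that would produce the message (the fourth entry is illustrative; the omitted server is not named in the log):

    # /etc/resolv.conf (illustrative)
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 9.9.9.9   # anything past the third nameserver is dropped and triggers the warning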
Oct 27 16:20:49.216131 containerd[1621]: time="2025-10-27T16:20:49.216079747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 16:20:49.219307 containerd[1621]: time="2025-10-27T16:20:49.219277875Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 27 16:20:49.220646 containerd[1621]: time="2025-10-27T16:20:49.220593352Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 16:20:49.222550 containerd[1621]: time="2025-10-27T16:20:49.222498374Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 16:20:49.223495 containerd[1621]: time="2025-10-27T16:20:49.223455760Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 27 16:20:49.224657 containerd[1621]: time="2025-10-27T16:20:49.224611708Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 16:20:49.225873 containerd[1621]: time="2025-10-27T16:20:49.225832848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 16:20:49.226796 containerd[1621]: time="2025-10-27T16:20:49.226728407Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 610.443099ms" Oct 27 16:20:49.226897 containerd[1621]: time="2025-10-27T16:20:49.226876856Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 27 16:20:49.228853 containerd[1621]: time="2025-10-27T16:20:49.228328268Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 301.18002ms" Oct 27 16:20:49.228905 kubelet[2383]: E1027 16:20:49.228708 2383 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 27 16:20:49.232841 containerd[1621]: time="2025-10-27T16:20:49.232800114Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 296.608507ms" Oct 27 16:20:49.260392 containerd[1621]: time="2025-10-27T16:20:49.260329942Z" level=info msg="connecting to shim 69cf18cdb86a1789e591b9f7bdaa8a0a84563e8baadb2886541d1a069cb20309" address="unix:///run/containerd/s/d5fb68028857416ab4f1d41062669dc73fcf2acf06378f6973f322a723363759" namespace=k8s.io protocol=ttrpc version=3 Oct 27 16:20:49.271182 containerd[1621]: time="2025-10-27T16:20:49.270196095Z" level=info msg="connecting to shim a1ee817f3a6b8e85f5be1fe156852764f4d76e878228e0c3ff66d76cb2fa2fdb" address="unix:///run/containerd/s/655f6f1061b3701c8b3dc89730b2f98929d36e65eb051d8f0c3c453d16f94bd7" namespace=k8s.io protocol=ttrpc version=3 Oct 27 16:20:49.280987 containerd[1621]: time="2025-10-27T16:20:49.280936347Z" level=info msg="connecting to shim 30af8b91fae98276beca66efa662ccb6945c60a912bc0746ec4066132e0876df" address="unix:///run/containerd/s/6d9a0e040edf9bc0dbb88101209a7a496fb3b48decffc66980ace445882324e8" namespace=k8s.io protocol=ttrpc version=3 Oct 27 16:20:49.310378 systemd[1]: Started cri-containerd-69cf18cdb86a1789e591b9f7bdaa8a0a84563e8baadb2886541d1a069cb20309.scope - libcontainer container 69cf18cdb86a1789e591b9f7bdaa8a0a84563e8baadb2886541d1a069cb20309. Oct 27 16:20:49.315463 kubelet[2383]: E1027 16:20:49.315422 2383 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 27 16:20:49.315620 systemd[1]: Started cri-containerd-a1ee817f3a6b8e85f5be1fe156852764f4d76e878228e0c3ff66d76cb2fa2fdb.scope - libcontainer container a1ee817f3a6b8e85f5be1fe156852764f4d76e878228e0c3ff66d76cb2fa2fdb. Oct 27 16:20:49.328801 kubelet[2383]: E1027 16:20:49.328733 2383 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 27 16:20:49.371306 systemd[1]: Started cri-containerd-30af8b91fae98276beca66efa662ccb6945c60a912bc0746ec4066132e0876df.scope - libcontainer container 30af8b91fae98276beca66efa662ccb6945c60a912bc0746ec4066132e0876df. 
Oct 27 16:20:49.433381 containerd[1621]: time="2025-10-27T16:20:49.433334187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e1ea1f1982432d4b60e0b7f95d86608f,Namespace:kube-system,Attempt:0,} returns sandbox id \"69cf18cdb86a1789e591b9f7bdaa8a0a84563e8baadb2886541d1a069cb20309\"" Oct 27 16:20:49.434672 containerd[1621]: time="2025-10-27T16:20:49.434443607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"30af8b91fae98276beca66efa662ccb6945c60a912bc0746ec4066132e0876df\"" Oct 27 16:20:49.435444 kubelet[2383]: E1027 16:20:49.435408 2383 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:20:49.435658 kubelet[2383]: E1027 16:20:49.435636 2383 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:20:49.441987 containerd[1621]: time="2025-10-27T16:20:49.441927713Z" level=info msg="CreateContainer within sandbox \"69cf18cdb86a1789e591b9f7bdaa8a0a84563e8baadb2886541d1a069cb20309\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 27 16:20:49.442269 containerd[1621]: time="2025-10-27T16:20:49.442092342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1ee817f3a6b8e85f5be1fe156852764f4d76e878228e0c3ff66d76cb2fa2fdb\"" Oct 27 16:20:49.442704 kubelet[2383]: E1027 16:20:49.442685 2383 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:20:49.443937 containerd[1621]: time="2025-10-27T16:20:49.443910912Z" level=info msg="CreateContainer within sandbox \"30af8b91fae98276beca66efa662ccb6945c60a912bc0746ec4066132e0876df\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 27 16:20:49.447896 containerd[1621]: time="2025-10-27T16:20:49.447854588Z" level=info msg="CreateContainer within sandbox \"a1ee817f3a6b8e85f5be1fe156852764f4d76e878228e0c3ff66d76cb2fa2fdb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 27 16:20:49.455252 containerd[1621]: time="2025-10-27T16:20:49.455218648Z" level=info msg="Container 60211290faedce7037efb0d0e51b4a5f0061be8169db1c1656bac74c2f08de2c: CDI devices from CRI Config.CDIDevices: []" Oct 27 16:20:49.459050 containerd[1621]: time="2025-10-27T16:20:49.459011701Z" level=info msg="Container 5e90ac95234fd2ae7fd662a787f876ff7c6134886772c07ca4c453c4d6f6f318: CDI devices from CRI Config.CDIDevices: []" Oct 27 16:20:49.463861 containerd[1621]: time="2025-10-27T16:20:49.463774073Z" level=info msg="Container d217c7c4a08ffa5ddc46095920221d7f42e442b2a8ecdc8e6ba93ab6dac7c961: CDI devices from CRI Config.CDIDevices: []" Oct 27 16:20:49.469709 containerd[1621]: time="2025-10-27T16:20:49.469669719Z" level=info msg="CreateContainer within sandbox \"69cf18cdb86a1789e591b9f7bdaa8a0a84563e8baadb2886541d1a069cb20309\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"60211290faedce7037efb0d0e51b4a5f0061be8169db1c1656bac74c2f08de2c\"" Oct 27 16:20:49.470472 containerd[1621]: time="2025-10-27T16:20:49.470433351Z" 
level=info msg="StartContainer for \"60211290faedce7037efb0d0e51b4a5f0061be8169db1c1656bac74c2f08de2c\"" Oct 27 16:20:49.471698 containerd[1621]: time="2025-10-27T16:20:49.471652417Z" level=info msg="connecting to shim 60211290faedce7037efb0d0e51b4a5f0061be8169db1c1656bac74c2f08de2c" address="unix:///run/containerd/s/d5fb68028857416ab4f1d41062669dc73fcf2acf06378f6973f322a723363759" protocol=ttrpc version=3 Oct 27 16:20:49.473425 containerd[1621]: time="2025-10-27T16:20:49.473384265Z" level=info msg="CreateContainer within sandbox \"30af8b91fae98276beca66efa662ccb6945c60a912bc0746ec4066132e0876df\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5e90ac95234fd2ae7fd662a787f876ff7c6134886772c07ca4c453c4d6f6f318\"" Oct 27 16:20:49.473793 containerd[1621]: time="2025-10-27T16:20:49.473733530Z" level=info msg="StartContainer for \"5e90ac95234fd2ae7fd662a787f876ff7c6134886772c07ca4c453c4d6f6f318\"" Oct 27 16:20:49.474940 containerd[1621]: time="2025-10-27T16:20:49.474915497Z" level=info msg="connecting to shim 5e90ac95234fd2ae7fd662a787f876ff7c6134886772c07ca4c453c4d6f6f318" address="unix:///run/containerd/s/6d9a0e040edf9bc0dbb88101209a7a496fb3b48decffc66980ace445882324e8" protocol=ttrpc version=3 Oct 27 16:20:49.475936 containerd[1621]: time="2025-10-27T16:20:49.475900484Z" level=info msg="CreateContainer within sandbox \"a1ee817f3a6b8e85f5be1fe156852764f4d76e878228e0c3ff66d76cb2fa2fdb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d217c7c4a08ffa5ddc46095920221d7f42e442b2a8ecdc8e6ba93ab6dac7c961\"" Oct 27 16:20:49.476569 containerd[1621]: time="2025-10-27T16:20:49.476542608Z" level=info msg="StartContainer for \"d217c7c4a08ffa5ddc46095920221d7f42e442b2a8ecdc8e6ba93ab6dac7c961\"" Oct 27 16:20:49.477538 containerd[1621]: time="2025-10-27T16:20:49.477502628Z" level=info msg="connecting to shim d217c7c4a08ffa5ddc46095920221d7f42e442b2a8ecdc8e6ba93ab6dac7c961" address="unix:///run/containerd/s/655f6f1061b3701c8b3dc89730b2f98929d36e65eb051d8f0c3c453d16f94bd7" protocol=ttrpc version=3 Oct 27 16:20:49.497314 systemd[1]: Started cri-containerd-5e90ac95234fd2ae7fd662a787f876ff7c6134886772c07ca4c453c4d6f6f318.scope - libcontainer container 5e90ac95234fd2ae7fd662a787f876ff7c6134886772c07ca4c453c4d6f6f318. Oct 27 16:20:49.498547 systemd[1]: Started cri-containerd-60211290faedce7037efb0d0e51b4a5f0061be8169db1c1656bac74c2f08de2c.scope - libcontainer container 60211290faedce7037efb0d0e51b4a5f0061be8169db1c1656bac74c2f08de2c. Oct 27 16:20:49.502308 systemd[1]: Started cri-containerd-d217c7c4a08ffa5ddc46095920221d7f42e442b2a8ecdc8e6ba93ab6dac7c961.scope - libcontainer container d217c7c4a08ffa5ddc46095920221d7f42e442b2a8ecdc8e6ba93ab6dac7c961. 
Oct 27 16:20:49.564786 containerd[1621]: time="2025-10-27T16:20:49.564734179Z" level=info msg="StartContainer for \"5e90ac95234fd2ae7fd662a787f876ff7c6134886772c07ca4c453c4d6f6f318\" returns successfully" Oct 27 16:20:49.568171 containerd[1621]: time="2025-10-27T16:20:49.568089100Z" level=info msg="StartContainer for \"60211290faedce7037efb0d0e51b4a5f0061be8169db1c1656bac74c2f08de2c\" returns successfully" Oct 27 16:20:49.583732 containerd[1621]: time="2025-10-27T16:20:49.583674749Z" level=info msg="StartContainer for \"d217c7c4a08ffa5ddc46095920221d7f42e442b2a8ecdc8e6ba93ab6dac7c961\" returns successfully" Oct 27 16:20:49.875550 kubelet[2383]: I1027 16:20:49.875380 2383 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 16:20:50.337732 kubelet[2383]: E1027 16:20:50.337621 2383 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 16:20:50.338090 kubelet[2383]: E1027 16:20:50.337746 2383 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:20:50.340176 kubelet[2383]: E1027 16:20:50.339632 2383 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 16:20:50.340248 kubelet[2383]: E1027 16:20:50.340149 2383 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:20:50.341456 kubelet[2383]: E1027 16:20:50.341435 2383 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 16:20:50.341537 kubelet[2383]: E1027 16:20:50.341520 2383 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:20:51.067602 kubelet[2383]: E1027 16:20:51.067543 2383 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 27 16:20:51.197588 kubelet[2383]: I1027 16:20:51.197490 2383 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 27 16:20:51.197588 kubelet[2383]: E1027 16:20:51.197531 2383 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 27 16:20:51.206607 kubelet[2383]: E1027 16:20:51.206571 2383 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 16:20:51.307068 kubelet[2383]: E1027 16:20:51.307001 2383 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 16:20:51.344196 kubelet[2383]: E1027 16:20:51.344010 2383 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 16:20:51.344196 kubelet[2383]: E1027 16:20:51.344185 2383 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 16:20:51.344827 kubelet[2383]: E1027 16:20:51.344275 2383 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:20:51.344827 kubelet[2383]: E1027 16:20:51.344407 2383 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:20:51.407315 kubelet[2383]: E1027 16:20:51.407275 2383 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 16:20:51.508170 kubelet[2383]: E1027 16:20:51.508089 2383 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 16:20:51.608973 kubelet[2383]: E1027 16:20:51.608875 2383 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 16:20:51.709550 kubelet[2383]: E1027 16:20:51.709461 2383 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 16:20:51.810525 kubelet[2383]: E1027 16:20:51.810464 2383 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 16:20:51.911209 kubelet[2383]: E1027 16:20:51.911071 2383 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 16:20:52.011718 kubelet[2383]: E1027 16:20:52.011650 2383 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 16:20:52.198780 kubelet[2383]: I1027 16:20:52.198643 2383 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 27 16:20:52.205528 kubelet[2383]: I1027 16:20:52.205440 2383 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 16:20:52.209494 kubelet[2383]: I1027 16:20:52.209462 2383 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 27 16:20:52.279956 kubelet[2383]: I1027 16:20:52.279872 2383 apiserver.go:52] "Watching apiserver" Oct 27 16:20:52.282046 kubelet[2383]: E1027 16:20:52.282018 2383 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:20:52.298710 kubelet[2383]: I1027 16:20:52.298681 2383 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 27 16:20:52.344324 kubelet[2383]: E1027 16:20:52.344269 2383 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:20:52.344755 kubelet[2383]: E1027 16:20:52.344450 2383 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:20:53.171418 systemd[1]: Reload requested from client PID 2675 ('systemctl') (unit session-7.scope)... Oct 27 16:20:53.171439 systemd[1]: Reloading... Oct 27 16:20:53.256318 zram_generator::config[2719]: No configuration found. Oct 27 16:20:53.500908 systemd[1]: Reloading finished in 329 ms. Oct 27 16:20:53.526573 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 16:20:53.556484 systemd[1]: kubelet.service: Deactivated successfully. 
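Each kubelet.service start in this log is preceded by "Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS"; the unit references $KUBELET_EXTRA_ARGS, nothing defines it, so it expands to an empty string and the notice is harmless. A hypothetical way to define it, assuming the unit loads an environment file such as /etc/default/kubelet (path and flag below are illustrative, not taken from this host):

    # /etc/default/kubelet (illustrative)
    KUBELET_EXTRA_ARGS=--node-ip=10.0.0.138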
Oct 27 16:20:53.556837 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 16:20:53.556897 systemd[1]: kubelet.service: Consumed 785ms CPU time, 125.8M memory peak. Oct 27 16:20:53.559082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 16:20:53.783670 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 16:20:53.800701 (kubelet)[2764]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 27 16:20:53.846327 kubelet[2764]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 27 16:20:53.846327 kubelet[2764]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 27 16:20:53.846729 kubelet[2764]: I1027 16:20:53.846374 2764 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 27 16:20:53.855378 kubelet[2764]: I1027 16:20:53.855350 2764 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 27 16:20:53.855444 kubelet[2764]: I1027 16:20:53.855434 2764 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 27 16:20:53.855547 kubelet[2764]: I1027 16:20:53.855534 2764 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 27 16:20:53.855598 kubelet[2764]: I1027 16:20:53.855586 2764 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 27 16:20:53.856091 kubelet[2764]: I1027 16:20:53.856075 2764 server.go:956] "Client rotation is on, will bootstrap in background" Oct 27 16:20:53.858536 kubelet[2764]: I1027 16:20:53.858494 2764 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 27 16:20:53.861582 kubelet[2764]: I1027 16:20:53.861386 2764 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 27 16:20:53.865543 kubelet[2764]: I1027 16:20:53.865518 2764 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 27 16:20:53.871040 kubelet[2764]: I1027 16:20:53.871008 2764 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 27 16:20:53.871357 kubelet[2764]: I1027 16:20:53.871319 2764 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 27 16:20:53.871521 kubelet[2764]: I1027 16:20:53.871351 2764 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 27 16:20:53.871603 kubelet[2764]: I1027 16:20:53.871523 2764 topology_manager.go:138] "Creating topology manager with none policy" Oct 27 16:20:53.871603 kubelet[2764]: I1027 16:20:53.871532 2764 container_manager_linux.go:306] "Creating device plugin manager" Oct 27 16:20:53.871603 kubelet[2764]: I1027 16:20:53.871560 2764 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 27 16:20:53.872551 kubelet[2764]: I1027 16:20:53.872522 2764 state_mem.go:36] "Initialized new in-memory state store" Oct 27 16:20:53.872745 kubelet[2764]: I1027 16:20:53.872721 2764 kubelet.go:475] "Attempting to sync node with API server" Oct 27 16:20:53.872785 kubelet[2764]: I1027 16:20:53.872762 2764 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 27 16:20:53.873385 kubelet[2764]: I1027 16:20:53.872808 2764 kubelet.go:387] "Adding apiserver pod source" Oct 27 16:20:53.873385 kubelet[2764]: I1027 16:20:53.872841 2764 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 27 16:20:53.874528 kubelet[2764]: I1027 16:20:53.874480 2764 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Oct 27 16:20:53.875251 kubelet[2764]: I1027 16:20:53.875224 2764 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 27 16:20:53.875290 kubelet[2764]: I1027 16:20:53.875268 2764 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 27 16:20:53.880993 
kubelet[2764]: I1027 16:20:53.879710 2764 server.go:1262] "Started kubelet" Oct 27 16:20:53.880993 kubelet[2764]: I1027 16:20:53.879845 2764 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 27 16:20:53.880993 kubelet[2764]: I1027 16:20:53.880186 2764 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 27 16:20:53.880993 kubelet[2764]: I1027 16:20:53.880250 2764 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 27 16:20:53.880993 kubelet[2764]: I1027 16:20:53.880595 2764 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 27 16:20:53.883786 kubelet[2764]: I1027 16:20:53.882672 2764 server.go:310] "Adding debug handlers to kubelet server" Oct 27 16:20:53.884697 kubelet[2764]: I1027 16:20:53.884669 2764 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 27 16:20:53.888272 kubelet[2764]: I1027 16:20:53.888151 2764 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 27 16:20:53.888471 kubelet[2764]: E1027 16:20:53.888366 2764 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 16:20:53.888888 kubelet[2764]: I1027 16:20:53.888792 2764 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 27 16:20:53.891552 kubelet[2764]: I1027 16:20:53.890783 2764 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 27 16:20:53.891552 kubelet[2764]: I1027 16:20:53.890976 2764 reconciler.go:29] "Reconciler: start to sync state" Oct 27 16:20:53.892836 kubelet[2764]: I1027 16:20:53.892782 2764 factory.go:223] Registration of the systemd container factory successfully Oct 27 16:20:53.893044 kubelet[2764]: I1027 16:20:53.893024 2764 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 27 16:20:53.897543 kubelet[2764]: I1027 16:20:53.896588 2764 factory.go:223] Registration of the containerd container factory successfully Oct 27 16:20:53.903143 kubelet[2764]: I1027 16:20:53.903096 2764 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 27 16:20:53.904393 kubelet[2764]: E1027 16:20:53.904364 2764 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 27 16:20:53.904480 kubelet[2764]: I1027 16:20:53.904464 2764 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 27 16:20:53.904503 kubelet[2764]: I1027 16:20:53.904482 2764 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 27 16:20:53.904524 kubelet[2764]: I1027 16:20:53.904508 2764 kubelet.go:2427] "Starting kubelet main sync loop" Oct 27 16:20:53.904579 kubelet[2764]: E1027 16:20:53.904553 2764 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 27 16:20:53.944771 kubelet[2764]: I1027 16:20:53.943770 2764 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 27 16:20:53.944771 kubelet[2764]: I1027 16:20:53.943791 2764 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 27 16:20:53.944771 kubelet[2764]: I1027 16:20:53.943811 2764 state_mem.go:36] "Initialized new in-memory state store" Oct 27 16:20:53.944771 kubelet[2764]: I1027 16:20:53.943941 2764 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 27 16:20:53.944771 kubelet[2764]: I1027 16:20:53.943951 2764 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 27 16:20:53.944771 kubelet[2764]: I1027 16:20:53.943970 2764 policy_none.go:49] "None policy: Start" Oct 27 16:20:53.944771 kubelet[2764]: I1027 16:20:53.943979 2764 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 27 16:20:53.944771 kubelet[2764]: I1027 16:20:53.943989 2764 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 27 16:20:53.944771 kubelet[2764]: I1027 16:20:53.944073 2764 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 27 16:20:53.944771 kubelet[2764]: I1027 16:20:53.944081 2764 policy_none.go:47] "Start" Oct 27 16:20:53.949373 kubelet[2764]: E1027 16:20:53.949338 2764 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 27 16:20:53.949815 kubelet[2764]: I1027 16:20:53.949789 2764 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 27 16:20:53.949987 kubelet[2764]: I1027 16:20:53.949814 2764 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 27 16:20:53.950722 kubelet[2764]: I1027 16:20:53.950546 2764 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 27 16:20:53.951644 kubelet[2764]: E1027 16:20:53.951617 2764 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 27 16:20:54.005585 kubelet[2764]: I1027 16:20:54.005529 2764 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 27 16:20:54.005778 kubelet[2764]: I1027 16:20:54.005675 2764 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 27 16:20:54.005778 kubelet[2764]: I1027 16:20:54.005762 2764 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 16:20:54.011185 kubelet[2764]: E1027 16:20:54.011113 2764 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 27 16:20:54.011561 kubelet[2764]: E1027 16:20:54.011444 2764 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 27 16:20:54.011561 kubelet[2764]: E1027 16:20:54.011500 2764 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 27 16:20:54.054836 kubelet[2764]: I1027 16:20:54.054754 2764 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 16:20:54.062384 kubelet[2764]: I1027 16:20:54.062351 2764 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 27 16:20:54.062534 kubelet[2764]: I1027 16:20:54.062411 2764 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 27 16:20:54.093679 kubelet[2764]: I1027 16:20:54.093605 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1ea1f1982432d4b60e0b7f95d86608f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e1ea1f1982432d4b60e0b7f95d86608f\") " pod="kube-system/kube-apiserver-localhost" Oct 27 16:20:54.093679 kubelet[2764]: I1027 16:20:54.093660 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 16:20:54.093907 kubelet[2764]: I1027 16:20:54.093697 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 16:20:54.093907 kubelet[2764]: I1027 16:20:54.093724 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 16:20:54.093907 kubelet[2764]: I1027 16:20:54.093762 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 27 16:20:54.093907 kubelet[2764]: I1027 16:20:54.093807 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1ea1f1982432d4b60e0b7f95d86608f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e1ea1f1982432d4b60e0b7f95d86608f\") " pod="kube-system/kube-apiserver-localhost" Oct 27 16:20:54.093907 kubelet[2764]: I1027 16:20:54.093834 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e1ea1f1982432d4b60e0b7f95d86608f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e1ea1f1982432d4b60e0b7f95d86608f\") " pod="kube-system/kube-apiserver-localhost" Oct 27 16:20:54.094038 kubelet[2764]: I1027 16:20:54.093865 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 16:20:54.094038 kubelet[2764]: I1027 16:20:54.093912 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 16:20:54.312472 kubelet[2764]: E1027 16:20:54.312272 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:20:54.312472 kubelet[2764]: E1027 16:20:54.312281 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:20:54.312472 kubelet[2764]: E1027 16:20:54.312372 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:20:54.874863 kubelet[2764]: I1027 16:20:54.874818 2764 apiserver.go:52] "Watching apiserver" Oct 27 16:20:54.891743 kubelet[2764]: I1027 16:20:54.891645 2764 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 27 16:20:54.922090 kubelet[2764]: I1027 16:20:54.921967 2764 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 16:20:54.922706 kubelet[2764]: E1027 16:20:54.922667 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:20:54.923173 kubelet[2764]: E1027 16:20:54.923118 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:20:54.939001 kubelet[2764]: E1027 16:20:54.938714 2764 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 27 16:20:54.939001 kubelet[2764]: E1027 
16:20:54.938941 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:20:54.965021 kubelet[2764]: I1027 16:20:54.964615 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.9645988020000003 podStartE2EDuration="2.964598802s" podCreationTimestamp="2025-10-27 16:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 16:20:54.964438933 +0000 UTC m=+1.158770219" watchObservedRunningTime="2025-10-27 16:20:54.964598802 +0000 UTC m=+1.158930078" Oct 27 16:20:54.965021 kubelet[2764]: I1027 16:20:54.964725 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.96472011 podStartE2EDuration="2.96472011s" podCreationTimestamp="2025-10-27 16:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 16:20:54.925967054 +0000 UTC m=+1.120298340" watchObservedRunningTime="2025-10-27 16:20:54.96472011 +0000 UTC m=+1.159051386" Oct 27 16:20:54.972973 kubelet[2764]: I1027 16:20:54.972910 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.972888308 podStartE2EDuration="2.972888308s" podCreationTimestamp="2025-10-27 16:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 16:20:54.972732556 +0000 UTC m=+1.167063842" watchObservedRunningTime="2025-10-27 16:20:54.972888308 +0000 UTC m=+1.167219584" Oct 27 16:20:55.923548 kubelet[2764]: E1027 16:20:55.923488 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:20:55.923964 kubelet[2764]: E1027 16:20:55.923590 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:20:56.925724 kubelet[2764]: E1027 16:20:56.925672 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:20:57.367138 kubelet[2764]: E1027 16:20:57.367091 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:20:59.275927 kubelet[2764]: E1027 16:20:59.275880 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:20:59.931183 kubelet[2764]: E1027 16:20:59.931066 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:00.224090 kubelet[2764]: I1027 16:21:00.224049 2764 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 27 16:21:00.224522 containerd[1621]: time="2025-10-27T16:21:00.224483484Z" 
level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 27 16:21:00.224892 kubelet[2764]: I1027 16:21:00.224690 2764 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 27 16:21:00.932971 kubelet[2764]: E1027 16:21:00.932915 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:01.234413 systemd[1]: Created slice kubepods-besteffort-podd6584dc9_2e8c_4900_b8b6_ebec5b072016.slice - libcontainer container kubepods-besteffort-podd6584dc9_2e8c_4900_b8b6_ebec5b072016.slice. Oct 27 16:21:01.240006 kubelet[2764]: I1027 16:21:01.239966 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6584dc9-2e8c-4900-b8b6-ebec5b072016-xtables-lock\") pod \"kube-proxy-hzf95\" (UID: \"d6584dc9-2e8c-4900-b8b6-ebec5b072016\") " pod="kube-system/kube-proxy-hzf95" Oct 27 16:21:01.240119 kubelet[2764]: I1027 16:21:01.240014 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6584dc9-2e8c-4900-b8b6-ebec5b072016-lib-modules\") pod \"kube-proxy-hzf95\" (UID: \"d6584dc9-2e8c-4900-b8b6-ebec5b072016\") " pod="kube-system/kube-proxy-hzf95" Oct 27 16:21:01.240119 kubelet[2764]: I1027 16:21:01.240037 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvv68\" (UniqueName: \"kubernetes.io/projected/d6584dc9-2e8c-4900-b8b6-ebec5b072016-kube-api-access-tvv68\") pod \"kube-proxy-hzf95\" (UID: \"d6584dc9-2e8c-4900-b8b6-ebec5b072016\") " pod="kube-system/kube-proxy-hzf95" Oct 27 16:21:01.240119 kubelet[2764]: I1027 16:21:01.240061 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d6584dc9-2e8c-4900-b8b6-ebec5b072016-kube-proxy\") pod \"kube-proxy-hzf95\" (UID: \"d6584dc9-2e8c-4900-b8b6-ebec5b072016\") " pod="kube-system/kube-proxy-hzf95" Oct 27 16:21:01.420693 systemd[1]: Created slice kubepods-besteffort-pod222ed911_0bf3_4575_aedf_49154bd7908a.slice - libcontainer container kubepods-besteffort-pod222ed911_0bf3_4575_aedf_49154bd7908a.slice. 
Oct 27 16:21:01.441015 kubelet[2764]: I1027 16:21:01.440947 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/222ed911-0bf3-4575-aedf-49154bd7908a-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-jtq8k\" (UID: \"222ed911-0bf3-4575-aedf-49154bd7908a\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-jtq8k" Oct 27 16:21:01.441015 kubelet[2764]: I1027 16:21:01.441005 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txlrw\" (UniqueName: \"kubernetes.io/projected/222ed911-0bf3-4575-aedf-49154bd7908a-kube-api-access-txlrw\") pod \"tigera-operator-65cdcdfd6d-jtq8k\" (UID: \"222ed911-0bf3-4575-aedf-49154bd7908a\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-jtq8k" Oct 27 16:21:01.547081 kubelet[2764]: E1027 16:21:01.546388 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:01.547623 containerd[1621]: time="2025-10-27T16:21:01.547587535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hzf95,Uid:d6584dc9-2e8c-4900-b8b6-ebec5b072016,Namespace:kube-system,Attempt:0,}" Oct 27 16:21:01.570946 containerd[1621]: time="2025-10-27T16:21:01.570898825Z" level=info msg="connecting to shim 72db7b1204ffecc05fd98fb2af58b635c1e89470d73c620d5e58f2497a9fe6eb" address="unix:///run/containerd/s/440f9aea19d6aaecc0f13e2081744a8ae72aa474ff7c75cbf2fdb8fed18a5ae9" namespace=k8s.io protocol=ttrpc version=3 Oct 27 16:21:01.595320 systemd[1]: Started cri-containerd-72db7b1204ffecc05fd98fb2af58b635c1e89470d73c620d5e58f2497a9fe6eb.scope - libcontainer container 72db7b1204ffecc05fd98fb2af58b635c1e89470d73c620d5e58f2497a9fe6eb. 
Oct 27 16:21:01.711471 containerd[1621]: time="2025-10-27T16:21:01.711425411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hzf95,Uid:d6584dc9-2e8c-4900-b8b6-ebec5b072016,Namespace:kube-system,Attempt:0,} returns sandbox id \"72db7b1204ffecc05fd98fb2af58b635c1e89470d73c620d5e58f2497a9fe6eb\"" Oct 27 16:21:01.712404 kubelet[2764]: E1027 16:21:01.712367 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:01.795114 containerd[1621]: time="2025-10-27T16:21:01.795067873Z" level=info msg="CreateContainer within sandbox \"72db7b1204ffecc05fd98fb2af58b635c1e89470d73c620d5e58f2497a9fe6eb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 27 16:21:01.797237 containerd[1621]: time="2025-10-27T16:21:01.796908116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-jtq8k,Uid:222ed911-0bf3-4575-aedf-49154bd7908a,Namespace:tigera-operator,Attempt:0,}" Oct 27 16:21:01.823982 containerd[1621]: time="2025-10-27T16:21:01.823922865Z" level=info msg="Container b42194ab03b8a64199468d2c7778c3636a115fc0febb6bea270c645f500c4a3f: CDI devices from CRI Config.CDIDevices: []" Oct 27 16:21:01.841860 containerd[1621]: time="2025-10-27T16:21:01.841799630Z" level=info msg="CreateContainer within sandbox \"72db7b1204ffecc05fd98fb2af58b635c1e89470d73c620d5e58f2497a9fe6eb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b42194ab03b8a64199468d2c7778c3636a115fc0febb6bea270c645f500c4a3f\"" Oct 27 16:21:01.843079 containerd[1621]: time="2025-10-27T16:21:01.842913543Z" level=info msg="StartContainer for \"b42194ab03b8a64199468d2c7778c3636a115fc0febb6bea270c645f500c4a3f\"" Oct 27 16:21:01.845151 containerd[1621]: time="2025-10-27T16:21:01.845097584Z" level=info msg="connecting to shim b42194ab03b8a64199468d2c7778c3636a115fc0febb6bea270c645f500c4a3f" address="unix:///run/containerd/s/440f9aea19d6aaecc0f13e2081744a8ae72aa474ff7c75cbf2fdb8fed18a5ae9" protocol=ttrpc version=3 Oct 27 16:21:01.877855 containerd[1621]: time="2025-10-27T16:21:01.877794863Z" level=info msg="connecting to shim 2ffbc8d6935d95252bdaa11c1d85060e7252c6ed1c9a7e65575ade6331142dfb" address="unix:///run/containerd/s/bc6f5abe2576d1f74061b1fc67a180f7241b1336a776d6d855aeaabef53f1e9c" namespace=k8s.io protocol=ttrpc version=3 Oct 27 16:21:01.879348 systemd[1]: Started cri-containerd-b42194ab03b8a64199468d2c7778c3636a115fc0febb6bea270c645f500c4a3f.scope - libcontainer container b42194ab03b8a64199468d2c7778c3636a115fc0febb6bea270c645f500c4a3f. Oct 27 16:21:01.909432 systemd[1]: Started cri-containerd-2ffbc8d6935d95252bdaa11c1d85060e7252c6ed1c9a7e65575ade6331142dfb.scope - libcontainer container 2ffbc8d6935d95252bdaa11c1d85060e7252c6ed1c9a7e65575ade6331142dfb. 
Oct 27 16:21:01.944555 containerd[1621]: time="2025-10-27T16:21:01.944391561Z" level=info msg="StartContainer for \"b42194ab03b8a64199468d2c7778c3636a115fc0febb6bea270c645f500c4a3f\" returns successfully" Oct 27 16:21:01.959957 containerd[1621]: time="2025-10-27T16:21:01.959901665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-jtq8k,Uid:222ed911-0bf3-4575-aedf-49154bd7908a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2ffbc8d6935d95252bdaa11c1d85060e7252c6ed1c9a7e65575ade6331142dfb\"" Oct 27 16:21:01.965610 containerd[1621]: time="2025-10-27T16:21:01.965565899Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 27 16:21:02.942847 kubelet[2764]: E1027 16:21:02.942806 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:03.728318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount790323335.mount: Deactivated successfully. Oct 27 16:21:04.078533 containerd[1621]: time="2025-10-27T16:21:04.078397487Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:21:04.079339 containerd[1621]: time="2025-10-27T16:21:04.079318754Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=0" Oct 27 16:21:04.080767 containerd[1621]: time="2025-10-27T16:21:04.080740125Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:21:04.082927 containerd[1621]: time="2025-10-27T16:21:04.082892902Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:21:04.083546 containerd[1621]: time="2025-10-27T16:21:04.083503227Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.117903061s" Oct 27 16:21:04.083546 containerd[1621]: time="2025-10-27T16:21:04.083543072Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 27 16:21:04.088711 containerd[1621]: time="2025-10-27T16:21:04.088667829Z" level=info msg="CreateContainer within sandbox \"2ffbc8d6935d95252bdaa11c1d85060e7252c6ed1c9a7e65575ade6331142dfb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 27 16:21:04.097839 containerd[1621]: time="2025-10-27T16:21:04.097785351Z" level=info msg="Container c73158bf2dd903618a3ddb5ee0ecbc314b7e932d16dadf312ee18f3dc8c5c578: CDI devices from CRI Config.CDIDevices: []" Oct 27 16:21:04.102341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2602845878.mount: Deactivated successfully. 
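The kube-proxy container start and the tigera-operator image pull recorded above can be cross-checked from the host with crictl, assuming it is installed and pointed at the containerd socket (for example via --runtime-endpoint unix:///run/containerd/containerd.sock or an /etc/crictl.yaml); a sketch:

    crictl images | grep tigera/operator   # should list the image with id f2c1be207523...
    crictl pods                            # pod sandboxes, including 72db7b1204ff... and 2ffbc8d6935d...
    crictl ps                              # running containers, including kube-proxy (b42194ab03b8...)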
Oct 27 16:21:04.106176 containerd[1621]: time="2025-10-27T16:21:04.104604990Z" level=info msg="CreateContainer within sandbox \"2ffbc8d6935d95252bdaa11c1d85060e7252c6ed1c9a7e65575ade6331142dfb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c73158bf2dd903618a3ddb5ee0ecbc314b7e932d16dadf312ee18f3dc8c5c578\"" Oct 27 16:21:04.107180 containerd[1621]: time="2025-10-27T16:21:04.107121621Z" level=info msg="StartContainer for \"c73158bf2dd903618a3ddb5ee0ecbc314b7e932d16dadf312ee18f3dc8c5c578\"" Oct 27 16:21:04.108674 containerd[1621]: time="2025-10-27T16:21:04.108633695Z" level=info msg="connecting to shim c73158bf2dd903618a3ddb5ee0ecbc314b7e932d16dadf312ee18f3dc8c5c578" address="unix:///run/containerd/s/bc6f5abe2576d1f74061b1fc67a180f7241b1336a776d6d855aeaabef53f1e9c" protocol=ttrpc version=3 Oct 27 16:21:04.130337 systemd[1]: Started cri-containerd-c73158bf2dd903618a3ddb5ee0ecbc314b7e932d16dadf312ee18f3dc8c5c578.scope - libcontainer container c73158bf2dd903618a3ddb5ee0ecbc314b7e932d16dadf312ee18f3dc8c5c578. Oct 27 16:21:04.160702 containerd[1621]: time="2025-10-27T16:21:04.160642548Z" level=info msg="StartContainer for \"c73158bf2dd903618a3ddb5ee0ecbc314b7e932d16dadf312ee18f3dc8c5c578\" returns successfully" Oct 27 16:21:04.956964 kubelet[2764]: I1027 16:21:04.956887 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hzf95" podStartSLOduration=3.95686438 podStartE2EDuration="3.95686438s" podCreationTimestamp="2025-10-27 16:21:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 16:21:03.258824216 +0000 UTC m=+9.453155522" watchObservedRunningTime="2025-10-27 16:21:04.95686438 +0000 UTC m=+11.151195676" Oct 27 16:21:04.957590 kubelet[2764]: I1027 16:21:04.957054 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-jtq8k" podStartSLOduration=1.834086138 podStartE2EDuration="3.957047058s" podCreationTimestamp="2025-10-27 16:21:01 +0000 UTC" firstStartedPulling="2025-10-27 16:21:01.961310602 +0000 UTC m=+8.155641888" lastFinishedPulling="2025-10-27 16:21:04.084271522 +0000 UTC m=+10.278602808" observedRunningTime="2025-10-27 16:21:04.957025356 +0000 UTC m=+11.151356653" watchObservedRunningTime="2025-10-27 16:21:04.957047058 +0000 UTC m=+11.151378344" Oct 27 16:21:05.378321 kubelet[2764]: E1027 16:21:05.377501 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:07.376995 kubelet[2764]: E1027 16:21:07.376604 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:09.387070 sudo[1829]: pam_unix(sudo:session): session closed for user root Oct 27 16:21:09.389179 sshd[1828]: Connection closed by 10.0.0.1 port 53540 Oct 27 16:21:09.389703 sshd-session[1825]: pam_unix(sshd:session): session closed for user core Oct 27 16:21:09.395452 systemd[1]: sshd@6-10.0.0.138:22-10.0.0.1:53540.service: Deactivated successfully. Oct 27 16:21:09.398115 systemd[1]: session-7.scope: Deactivated successfully. Oct 27 16:21:09.398383 systemd[1]: session-7.scope: Consumed 5.205s CPU time, 226.8M memory peak. Oct 27 16:21:09.399978 systemd-logind[1595]: Session 7 logged out. Waiting for processes to exit. 
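In the tigera-operator startup-latency entry above, the two durations differ by exactly the image-pull window, i.e. podStartSLOduration is the end-to-end startup time minus the time spent pulling images:

    podStartE2EDuration = observedRunningTime - podCreationTimestamp
                        = 16:21:04.957047058 - 16:21:01           = 3.957047058s
    pull window         = lastFinishedPulling - firstStartedPulling
                        = 16:21:04.084271522 - 16:21:01.961310602 = 2.122960920s
    podStartSLOduration = 3.957047058s - 2.122960920s             = 1.834086138s

For the kube-proxy entry (and the earlier control-plane pods) the pull timestamps are zero values, so the two durations coincide.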
Oct 27 16:21:09.401764 systemd-logind[1595]: Removed session 7. Oct 27 16:21:09.933479 update_engine[1598]: I20251027 16:21:09.931232 1598 update_attempter.cc:509] Updating boot flags... Oct 27 16:21:13.483324 systemd[1]: Created slice kubepods-besteffort-poda04c98b3_cdb9_4fc6_8111_b79fd58c6139.slice - libcontainer container kubepods-besteffort-poda04c98b3_cdb9_4fc6_8111_b79fd58c6139.slice. Oct 27 16:21:13.525390 kubelet[2764]: I1027 16:21:13.525328 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a04c98b3-cdb9-4fc6-8111-b79fd58c6139-tigera-ca-bundle\") pod \"calico-typha-b8d96bc64-s4lrj\" (UID: \"a04c98b3-cdb9-4fc6-8111-b79fd58c6139\") " pod="calico-system/calico-typha-b8d96bc64-s4lrj" Oct 27 16:21:13.525786 kubelet[2764]: I1027 16:21:13.525420 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a04c98b3-cdb9-4fc6-8111-b79fd58c6139-typha-certs\") pod \"calico-typha-b8d96bc64-s4lrj\" (UID: \"a04c98b3-cdb9-4fc6-8111-b79fd58c6139\") " pod="calico-system/calico-typha-b8d96bc64-s4lrj" Oct 27 16:21:13.525786 kubelet[2764]: I1027 16:21:13.525454 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br57l\" (UniqueName: \"kubernetes.io/projected/a04c98b3-cdb9-4fc6-8111-b79fd58c6139-kube-api-access-br57l\") pod \"calico-typha-b8d96bc64-s4lrj\" (UID: \"a04c98b3-cdb9-4fc6-8111-b79fd58c6139\") " pod="calico-system/calico-typha-b8d96bc64-s4lrj" Oct 27 16:21:13.685472 systemd[1]: Created slice kubepods-besteffort-podaa7cc05c_700a_4827_86c0_fb48d23f1f97.slice - libcontainer container kubepods-besteffort-podaa7cc05c_700a_4827_86c0_fb48d23f1f97.slice. 
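The slice created above is for calico-node, whose pod mounts flexvol-driver-host. Until that driver is installed, kubelet's plugin prober keeps finding the nodeagent~uds FlexVolume directory without its uds executable, gets empty output from the init call, and fails to unmarshal it, which is what produces the long run of driver-call.go/plugins.go errors below; they normally stop once calico-node's flexvol-driver init container copies the binary into place. For reference, a FlexVolume driver answers init by printing a JSON status on stdout, roughly of this documented shape (a sketch, not this driver's actual output):

    {
      "status": "Success",
      "capabilities": { "attach": false }
    }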
Oct 27 16:21:13.727062 kubelet[2764]: I1027 16:21:13.726993 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/aa7cc05c-700a-4827-86c0-fb48d23f1f97-node-certs\") pod \"calico-node-vbqbv\" (UID: \"aa7cc05c-700a-4827-86c0-fb48d23f1f97\") " pod="calico-system/calico-node-vbqbv" Oct 27 16:21:13.727062 kubelet[2764]: I1027 16:21:13.727035 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa7cc05c-700a-4827-86c0-fb48d23f1f97-tigera-ca-bundle\") pod \"calico-node-vbqbv\" (UID: \"aa7cc05c-700a-4827-86c0-fb48d23f1f97\") " pod="calico-system/calico-node-vbqbv" Oct 27 16:21:13.727062 kubelet[2764]: I1027 16:21:13.727051 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aa7cc05c-700a-4827-86c0-fb48d23f1f97-var-lib-calico\") pod \"calico-node-vbqbv\" (UID: \"aa7cc05c-700a-4827-86c0-fb48d23f1f97\") " pod="calico-system/calico-node-vbqbv" Oct 27 16:21:13.727062 kubelet[2764]: I1027 16:21:13.727066 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h74xw\" (UniqueName: \"kubernetes.io/projected/aa7cc05c-700a-4827-86c0-fb48d23f1f97-kube-api-access-h74xw\") pod \"calico-node-vbqbv\" (UID: \"aa7cc05c-700a-4827-86c0-fb48d23f1f97\") " pod="calico-system/calico-node-vbqbv" Oct 27 16:21:13.727342 kubelet[2764]: I1027 16:21:13.727091 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/aa7cc05c-700a-4827-86c0-fb48d23f1f97-flexvol-driver-host\") pod \"calico-node-vbqbv\" (UID: \"aa7cc05c-700a-4827-86c0-fb48d23f1f97\") " pod="calico-system/calico-node-vbqbv" Oct 27 16:21:13.727342 kubelet[2764]: I1027 16:21:13.727110 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/aa7cc05c-700a-4827-86c0-fb48d23f1f97-policysync\") pod \"calico-node-vbqbv\" (UID: \"aa7cc05c-700a-4827-86c0-fb48d23f1f97\") " pod="calico-system/calico-node-vbqbv" Oct 27 16:21:13.727342 kubelet[2764]: I1027 16:21:13.727180 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/aa7cc05c-700a-4827-86c0-fb48d23f1f97-var-run-calico\") pod \"calico-node-vbqbv\" (UID: \"aa7cc05c-700a-4827-86c0-fb48d23f1f97\") " pod="calico-system/calico-node-vbqbv" Oct 27 16:21:13.727342 kubelet[2764]: I1027 16:21:13.727219 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/aa7cc05c-700a-4827-86c0-fb48d23f1f97-cni-bin-dir\") pod \"calico-node-vbqbv\" (UID: \"aa7cc05c-700a-4827-86c0-fb48d23f1f97\") " pod="calico-system/calico-node-vbqbv" Oct 27 16:21:13.727342 kubelet[2764]: I1027 16:21:13.727233 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/aa7cc05c-700a-4827-86c0-fb48d23f1f97-cni-log-dir\") pod \"calico-node-vbqbv\" (UID: \"aa7cc05c-700a-4827-86c0-fb48d23f1f97\") " pod="calico-system/calico-node-vbqbv" Oct 27 16:21:13.727521 kubelet[2764]: I1027 16:21:13.727249 2764 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa7cc05c-700a-4827-86c0-fb48d23f1f97-lib-modules\") pod \"calico-node-vbqbv\" (UID: \"aa7cc05c-700a-4827-86c0-fb48d23f1f97\") " pod="calico-system/calico-node-vbqbv" Oct 27 16:21:13.727521 kubelet[2764]: I1027 16:21:13.727279 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa7cc05c-700a-4827-86c0-fb48d23f1f97-xtables-lock\") pod \"calico-node-vbqbv\" (UID: \"aa7cc05c-700a-4827-86c0-fb48d23f1f97\") " pod="calico-system/calico-node-vbqbv" Oct 27 16:21:13.727521 kubelet[2764]: I1027 16:21:13.727320 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/aa7cc05c-700a-4827-86c0-fb48d23f1f97-cni-net-dir\") pod \"calico-node-vbqbv\" (UID: \"aa7cc05c-700a-4827-86c0-fb48d23f1f97\") " pod="calico-system/calico-node-vbqbv" Oct 27 16:21:13.792914 kubelet[2764]: E1027 16:21:13.792444 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:13.793025 containerd[1621]: time="2025-10-27T16:21:13.792978537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b8d96bc64-s4lrj,Uid:a04c98b3-cdb9-4fc6-8111-b79fd58c6139,Namespace:calico-system,Attempt:0,}" Oct 27 16:21:13.814236 containerd[1621]: time="2025-10-27T16:21:13.814141696Z" level=info msg="connecting to shim e439836921470dbbdc684b1bb52440a36bc9e044b71eff707b91c20420767d63" address="unix:///run/containerd/s/a640fb1a6f58e4587dafe027ba31305228401f260df8b61c41896878fc6da2c8" namespace=k8s.io protocol=ttrpc version=3 Oct 27 16:21:13.831431 kubelet[2764]: E1027 16:21:13.831390 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.831431 kubelet[2764]: W1027 16:21:13.831421 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.831623 kubelet[2764]: E1027 16:21:13.831469 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.841205 kubelet[2764]: E1027 16:21:13.841105 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.841205 kubelet[2764]: W1027 16:21:13.841127 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.841942 kubelet[2764]: E1027 16:21:13.841147 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:13.847893 kubelet[2764]: E1027 16:21:13.847748 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.847893 kubelet[2764]: W1027 16:21:13.847777 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.847893 kubelet[2764]: E1027 16:21:13.847803 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.859526 systemd[1]: Started cri-containerd-e439836921470dbbdc684b1bb52440a36bc9e044b71eff707b91c20420767d63.scope - libcontainer container e439836921470dbbdc684b1bb52440a36bc9e044b71eff707b91c20420767d63. Oct 27 16:21:13.868191 kubelet[2764]: E1027 16:21:13.867138 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtl2m" podUID="efa395f4-63b7-48dd-900f-15414929351b" Oct 27 16:21:13.907205 kubelet[2764]: E1027 16:21:13.907144 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.907511 kubelet[2764]: W1027 16:21:13.907366 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.907594 kubelet[2764]: E1027 16:21:13.907579 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.908027 kubelet[2764]: E1027 16:21:13.908014 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.908114 kubelet[2764]: W1027 16:21:13.908102 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.908257 kubelet[2764]: E1027 16:21:13.908188 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.908483 kubelet[2764]: E1027 16:21:13.908455 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.908554 kubelet[2764]: W1027 16:21:13.908533 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.908673 kubelet[2764]: E1027 16:21:13.908606 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:13.909628 kubelet[2764]: E1027 16:21:13.909583 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.909696 kubelet[2764]: W1027 16:21:13.909624 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.909696 kubelet[2764]: E1027 16:21:13.909656 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.909966 kubelet[2764]: E1027 16:21:13.909952 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.910027 kubelet[2764]: W1027 16:21:13.910016 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.910185 kubelet[2764]: E1027 16:21:13.910090 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.910389 kubelet[2764]: E1027 16:21:13.910376 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.910468 kubelet[2764]: W1027 16:21:13.910455 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.910538 kubelet[2764]: E1027 16:21:13.910528 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.910849 kubelet[2764]: E1027 16:21:13.910797 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.910849 kubelet[2764]: W1027 16:21:13.910809 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.910849 kubelet[2764]: E1027 16:21:13.910818 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:13.911030 containerd[1621]: time="2025-10-27T16:21:13.910974589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b8d96bc64-s4lrj,Uid:a04c98b3-cdb9-4fc6-8111-b79fd58c6139,Namespace:calico-system,Attempt:0,} returns sandbox id \"e439836921470dbbdc684b1bb52440a36bc9e044b71eff707b91c20420767d63\"" Oct 27 16:21:13.911361 kubelet[2764]: E1027 16:21:13.911348 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.911622 kubelet[2764]: W1027 16:21:13.911444 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.912205 kubelet[2764]: E1027 16:21:13.912188 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.912377 kubelet[2764]: E1027 16:21:13.911827 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:13.912522 kubelet[2764]: E1027 16:21:13.912499 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.912522 kubelet[2764]: W1027 16:21:13.912517 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.912652 kubelet[2764]: E1027 16:21:13.912529 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.912720 kubelet[2764]: E1027 16:21:13.912702 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.912720 kubelet[2764]: W1027 16:21:13.912719 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.912803 kubelet[2764]: E1027 16:21:13.912727 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.913424 kubelet[2764]: E1027 16:21:13.913345 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.913424 kubelet[2764]: W1027 16:21:13.913359 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.913424 kubelet[2764]: E1027 16:21:13.913369 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:13.914461 kubelet[2764]: E1027 16:21:13.913851 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.914461 kubelet[2764]: W1027 16:21:13.914217 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.914461 kubelet[2764]: E1027 16:21:13.914229 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.914585 containerd[1621]: time="2025-10-27T16:21:13.914299956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 27 16:21:13.914636 kubelet[2764]: E1027 16:21:13.914609 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.914636 kubelet[2764]: W1027 16:21:13.914619 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.914636 kubelet[2764]: E1027 16:21:13.914629 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.914874 kubelet[2764]: E1027 16:21:13.914853 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.914913 kubelet[2764]: W1027 16:21:13.914889 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.914913 kubelet[2764]: E1027 16:21:13.914900 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.915255 kubelet[2764]: E1027 16:21:13.915234 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.915255 kubelet[2764]: W1027 16:21:13.915249 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.915456 kubelet[2764]: E1027 16:21:13.915259 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.915538 kubelet[2764]: E1027 16:21:13.915470 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.915538 kubelet[2764]: W1027 16:21:13.915479 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.915538 kubelet[2764]: E1027 16:21:13.915488 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:13.915787 kubelet[2764]: E1027 16:21:13.915745 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.916000 kubelet[2764]: W1027 16:21:13.915962 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.916000 kubelet[2764]: E1027 16:21:13.915980 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.916285 kubelet[2764]: E1027 16:21:13.916259 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.916393 kubelet[2764]: W1027 16:21:13.916342 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.916393 kubelet[2764]: E1027 16:21:13.916355 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.917370 kubelet[2764]: E1027 16:21:13.917269 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.917370 kubelet[2764]: W1027 16:21:13.917289 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.917370 kubelet[2764]: E1027 16:21:13.917301 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.917514 kubelet[2764]: E1027 16:21:13.917497 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.917514 kubelet[2764]: W1027 16:21:13.917509 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.917600 kubelet[2764]: E1027 16:21:13.917518 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.929735 kubelet[2764]: E1027 16:21:13.929701 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.929735 kubelet[2764]: W1027 16:21:13.929723 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.929823 kubelet[2764]: E1027 16:21:13.929744 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:13.929823 kubelet[2764]: I1027 16:21:13.929775 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/efa395f4-63b7-48dd-900f-15414929351b-varrun\") pod \"csi-node-driver-wtl2m\" (UID: \"efa395f4-63b7-48dd-900f-15414929351b\") " pod="calico-system/csi-node-driver-wtl2m" Oct 27 16:21:13.929992 kubelet[2764]: E1027 16:21:13.929975 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.929992 kubelet[2764]: W1027 16:21:13.929987 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.930047 kubelet[2764]: E1027 16:21:13.929996 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.930047 kubelet[2764]: I1027 16:21:13.930015 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/efa395f4-63b7-48dd-900f-15414929351b-kubelet-dir\") pod \"csi-node-driver-wtl2m\" (UID: \"efa395f4-63b7-48dd-900f-15414929351b\") " pod="calico-system/csi-node-driver-wtl2m" Oct 27 16:21:13.930288 kubelet[2764]: E1027 16:21:13.930269 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.930288 kubelet[2764]: W1027 16:21:13.930285 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.930349 kubelet[2764]: E1027 16:21:13.930296 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.930555 kubelet[2764]: E1027 16:21:13.930542 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.930555 kubelet[2764]: W1027 16:21:13.930553 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.930607 kubelet[2764]: E1027 16:21:13.930561 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.930846 kubelet[2764]: E1027 16:21:13.930820 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.930884 kubelet[2764]: W1027 16:21:13.930844 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.930884 kubelet[2764]: E1027 16:21:13.930870 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:13.930925 kubelet[2764]: I1027 16:21:13.930908 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/efa395f4-63b7-48dd-900f-15414929351b-socket-dir\") pod \"csi-node-driver-wtl2m\" (UID: \"efa395f4-63b7-48dd-900f-15414929351b\") " pod="calico-system/csi-node-driver-wtl2m" Oct 27 16:21:13.931196 kubelet[2764]: E1027 16:21:13.931178 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.931196 kubelet[2764]: W1027 16:21:13.931193 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.931288 kubelet[2764]: E1027 16:21:13.931205 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.931423 kubelet[2764]: E1027 16:21:13.931398 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.931423 kubelet[2764]: W1027 16:21:13.931408 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.931423 kubelet[2764]: E1027 16:21:13.931418 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.931690 kubelet[2764]: E1027 16:21:13.931665 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.931690 kubelet[2764]: W1027 16:21:13.931675 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.931690 kubelet[2764]: E1027 16:21:13.931684 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.931771 kubelet[2764]: I1027 16:21:13.931711 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls469\" (UniqueName: \"kubernetes.io/projected/efa395f4-63b7-48dd-900f-15414929351b-kube-api-access-ls469\") pod \"csi-node-driver-wtl2m\" (UID: \"efa395f4-63b7-48dd-900f-15414929351b\") " pod="calico-system/csi-node-driver-wtl2m" Oct 27 16:21:13.932030 kubelet[2764]: E1027 16:21:13.932007 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.932030 kubelet[2764]: W1027 16:21:13.932027 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.932102 kubelet[2764]: E1027 16:21:13.932042 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:13.932300 kubelet[2764]: E1027 16:21:13.932283 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.932300 kubelet[2764]: W1027 16:21:13.932298 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.932387 kubelet[2764]: E1027 16:21:13.932308 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.932540 kubelet[2764]: E1027 16:21:13.932524 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.932540 kubelet[2764]: W1027 16:21:13.932538 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.932599 kubelet[2764]: E1027 16:21:13.932549 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.932599 kubelet[2764]: I1027 16:21:13.932572 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/efa395f4-63b7-48dd-900f-15414929351b-registration-dir\") pod \"csi-node-driver-wtl2m\" (UID: \"efa395f4-63b7-48dd-900f-15414929351b\") " pod="calico-system/csi-node-driver-wtl2m" Oct 27 16:21:13.932831 kubelet[2764]: E1027 16:21:13.932795 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.932831 kubelet[2764]: W1027 16:21:13.932810 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.932831 kubelet[2764]: E1027 16:21:13.932818 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.933081 kubelet[2764]: E1027 16:21:13.933010 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.933081 kubelet[2764]: W1027 16:21:13.933021 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.933081 kubelet[2764]: E1027 16:21:13.933034 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:13.933406 kubelet[2764]: E1027 16:21:13.933386 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.933406 kubelet[2764]: W1027 16:21:13.933400 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.933406 kubelet[2764]: E1027 16:21:13.933410 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.933591 kubelet[2764]: E1027 16:21:13.933573 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:13.933591 kubelet[2764]: W1027 16:21:13.933584 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:13.933591 kubelet[2764]: E1027 16:21:13.933591 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:13.993219 kubelet[2764]: E1027 16:21:13.992543 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:13.993511 containerd[1621]: time="2025-10-27T16:21:13.993462181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vbqbv,Uid:aa7cc05c-700a-4827-86c0-fb48d23f1f97,Namespace:calico-system,Attempt:0,}" Oct 27 16:21:14.021916 containerd[1621]: time="2025-10-27T16:21:14.021868312Z" level=info msg="connecting to shim 482299890f12916aebecdbeb609132a71f52329d380944ea58bbc23ead99d254" address="unix:///run/containerd/s/d0e9c11b116869f3646262f241efa6155e2b315fd2e6daba1538362776d3035f" namespace=k8s.io protocol=ttrpc version=3 Oct 27 16:21:14.033903 kubelet[2764]: E1027 16:21:14.033754 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.033903 kubelet[2764]: W1027 16:21:14.033774 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.033903 kubelet[2764]: E1027 16:21:14.033795 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:14.034117 kubelet[2764]: E1027 16:21:14.034104 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.034239 kubelet[2764]: W1027 16:21:14.034205 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.034375 kubelet[2764]: E1027 16:21:14.034284 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:14.034711 kubelet[2764]: E1027 16:21:14.034676 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.034711 kubelet[2764]: W1027 16:21:14.034688 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.034711 kubelet[2764]: E1027 16:21:14.034698 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:14.035050 kubelet[2764]: E1027 16:21:14.035038 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.035182 kubelet[2764]: W1027 16:21:14.035101 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.035182 kubelet[2764]: E1027 16:21:14.035114 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:14.035696 kubelet[2764]: E1027 16:21:14.035535 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.035696 kubelet[2764]: W1027 16:21:14.035547 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.035696 kubelet[2764]: E1027 16:21:14.035559 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:14.035923 kubelet[2764]: E1027 16:21:14.035848 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.035923 kubelet[2764]: W1027 16:21:14.035860 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.035923 kubelet[2764]: E1027 16:21:14.035869 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:14.036230 kubelet[2764]: E1027 16:21:14.036216 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.036409 kubelet[2764]: W1027 16:21:14.036290 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.036409 kubelet[2764]: E1027 16:21:14.036305 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:14.036598 kubelet[2764]: E1027 16:21:14.036585 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.036680 kubelet[2764]: W1027 16:21:14.036666 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.036748 kubelet[2764]: E1027 16:21:14.036736 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:14.037070 kubelet[2764]: E1027 16:21:14.037002 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.037070 kubelet[2764]: W1027 16:21:14.037013 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.037070 kubelet[2764]: E1027 16:21:14.037023 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:14.037370 kubelet[2764]: E1027 16:21:14.037358 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.037437 kubelet[2764]: W1027 16:21:14.037426 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.037496 kubelet[2764]: E1027 16:21:14.037484 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:14.037748 kubelet[2764]: E1027 16:21:14.037684 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.037748 kubelet[2764]: W1027 16:21:14.037694 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.037748 kubelet[2764]: E1027 16:21:14.037703 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:14.037999 kubelet[2764]: E1027 16:21:14.037987 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.038141 kubelet[2764]: W1027 16:21:14.038043 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.038141 kubelet[2764]: E1027 16:21:14.038056 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:14.038323 kubelet[2764]: E1027 16:21:14.038311 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.038373 kubelet[2764]: W1027 16:21:14.038362 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.038421 kubelet[2764]: E1027 16:21:14.038411 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:14.038697 kubelet[2764]: E1027 16:21:14.038643 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.038697 kubelet[2764]: W1027 16:21:14.038654 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.038697 kubelet[2764]: E1027 16:21:14.038663 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:14.039047 kubelet[2764]: E1027 16:21:14.038981 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.039047 kubelet[2764]: W1027 16:21:14.038992 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.039047 kubelet[2764]: E1027 16:21:14.039002 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:14.039368 kubelet[2764]: E1027 16:21:14.039303 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.039368 kubelet[2764]: W1027 16:21:14.039315 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.039368 kubelet[2764]: E1027 16:21:14.039325 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:14.039684 kubelet[2764]: E1027 16:21:14.039622 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.039684 kubelet[2764]: W1027 16:21:14.039633 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.039684 kubelet[2764]: E1027 16:21:14.039642 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:14.040022 kubelet[2764]: E1027 16:21:14.039932 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.040022 kubelet[2764]: W1027 16:21:14.039943 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.040022 kubelet[2764]: E1027 16:21:14.039974 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:14.040326 kubelet[2764]: E1027 16:21:14.040313 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.040442 kubelet[2764]: W1027 16:21:14.040384 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.040442 kubelet[2764]: E1027 16:21:14.040397 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:14.040661 kubelet[2764]: E1027 16:21:14.040650 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.040829 kubelet[2764]: W1027 16:21:14.040706 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.040829 kubelet[2764]: E1027 16:21:14.040718 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:14.040965 kubelet[2764]: E1027 16:21:14.040954 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.041014 kubelet[2764]: W1027 16:21:14.041004 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.041131 kubelet[2764]: E1027 16:21:14.041059 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:14.041392 kubelet[2764]: E1027 16:21:14.041380 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.041570 kubelet[2764]: W1027 16:21:14.041450 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.041570 kubelet[2764]: E1027 16:21:14.041465 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:14.041872 kubelet[2764]: E1027 16:21:14.041715 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.042094 kubelet[2764]: W1027 16:21:14.041918 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.043228 kubelet[2764]: E1027 16:21:14.042391 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:14.043392 kubelet[2764]: E1027 16:21:14.043344 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.043392 kubelet[2764]: W1027 16:21:14.043369 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.043392 kubelet[2764]: E1027 16:21:14.043379 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:14.043884 kubelet[2764]: E1027 16:21:14.043815 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.043884 kubelet[2764]: W1027 16:21:14.043827 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.043884 kubelet[2764]: E1027 16:21:14.043836 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:14.046552 systemd[1]: Started cri-containerd-482299890f12916aebecdbeb609132a71f52329d380944ea58bbc23ead99d254.scope - libcontainer container 482299890f12916aebecdbeb609132a71f52329d380944ea58bbc23ead99d254. Oct 27 16:21:14.051050 kubelet[2764]: E1027 16:21:14.050988 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:14.051050 kubelet[2764]: W1027 16:21:14.051005 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:14.051050 kubelet[2764]: E1027 16:21:14.051017 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:14.077783 containerd[1621]: time="2025-10-27T16:21:14.077672057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vbqbv,Uid:aa7cc05c-700a-4827-86c0-fb48d23f1f97,Namespace:calico-system,Attempt:0,} returns sandbox id \"482299890f12916aebecdbeb609132a71f52329d380944ea58bbc23ead99d254\"" Oct 27 16:21:14.078899 kubelet[2764]: E1027 16:21:14.078811 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:15.905982 kubelet[2764]: E1027 16:21:15.905771 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtl2m" podUID="efa395f4-63b7-48dd-900f-15414929351b" Oct 27 16:21:16.580789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1631654770.mount: Deactivated successfully. Oct 27 16:21:17.362019 containerd[1621]: time="2025-10-27T16:21:17.361927137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:21:17.363571 containerd[1621]: time="2025-10-27T16:21:17.363536477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Oct 27 16:21:17.364643 containerd[1621]: time="2025-10-27T16:21:17.364578997Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:21:17.367329 containerd[1621]: time="2025-10-27T16:21:17.367255243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:21:17.367734 containerd[1621]: time="2025-10-27T16:21:17.367689353Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.453357347s" Oct 27 16:21:17.367787 containerd[1621]: time="2025-10-27T16:21:17.367732746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 27 16:21:17.370354 containerd[1621]: time="2025-10-27T16:21:17.370270430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 27 16:21:17.390511 containerd[1621]: time="2025-10-27T16:21:17.390453391Z" level=info msg="CreateContainer within sandbox \"e439836921470dbbdc684b1bb52440a36bc9e044b71eff707b91c20420767d63\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 27 16:21:17.403742 containerd[1621]: time="2025-10-27T16:21:17.403690220Z" level=info msg="Container 834ed4da3caabefbfafa3fb4fcaff1ba3f096109f9f6b52efd8d23e04735b17e: CDI devices from CRI Config.CDIDevices: []" Oct 27 16:21:17.417995 containerd[1621]: time="2025-10-27T16:21:17.417309300Z" level=info msg="CreateContainer within sandbox 
\"e439836921470dbbdc684b1bb52440a36bc9e044b71eff707b91c20420767d63\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"834ed4da3caabefbfafa3fb4fcaff1ba3f096109f9f6b52efd8d23e04735b17e\"" Oct 27 16:21:17.419189 containerd[1621]: time="2025-10-27T16:21:17.418691491Z" level=info msg="StartContainer for \"834ed4da3caabefbfafa3fb4fcaff1ba3f096109f9f6b52efd8d23e04735b17e\"" Oct 27 16:21:17.421314 containerd[1621]: time="2025-10-27T16:21:17.421291573Z" level=info msg="connecting to shim 834ed4da3caabefbfafa3fb4fcaff1ba3f096109f9f6b52efd8d23e04735b17e" address="unix:///run/containerd/s/a640fb1a6f58e4587dafe027ba31305228401f260df8b61c41896878fc6da2c8" protocol=ttrpc version=3 Oct 27 16:21:17.471460 systemd[1]: Started cri-containerd-834ed4da3caabefbfafa3fb4fcaff1ba3f096109f9f6b52efd8d23e04735b17e.scope - libcontainer container 834ed4da3caabefbfafa3fb4fcaff1ba3f096109f9f6b52efd8d23e04735b17e. Oct 27 16:21:17.535449 containerd[1621]: time="2025-10-27T16:21:17.535314075Z" level=info msg="StartContainer for \"834ed4da3caabefbfafa3fb4fcaff1ba3f096109f9f6b52efd8d23e04735b17e\" returns successfully" Oct 27 16:21:17.905920 kubelet[2764]: E1027 16:21:17.905855 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtl2m" podUID="efa395f4-63b7-48dd-900f-15414929351b" Oct 27 16:21:17.980859 kubelet[2764]: E1027 16:21:17.980796 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:18.048858 kubelet[2764]: E1027 16:21:18.048818 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.048858 kubelet[2764]: W1027 16:21:18.048844 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.048858 kubelet[2764]: E1027 16:21:18.048870 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.049225 kubelet[2764]: E1027 16:21:18.049206 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.049225 kubelet[2764]: W1027 16:21:18.049219 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.049225 kubelet[2764]: E1027 16:21:18.049231 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:18.049509 kubelet[2764]: E1027 16:21:18.049471 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.049509 kubelet[2764]: W1027 16:21:18.049485 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.049509 kubelet[2764]: E1027 16:21:18.049497 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.050199 kubelet[2764]: E1027 16:21:18.049789 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.050199 kubelet[2764]: W1027 16:21:18.049800 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.050199 kubelet[2764]: E1027 16:21:18.049810 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.050199 kubelet[2764]: E1027 16:21:18.049994 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.050199 kubelet[2764]: W1027 16:21:18.050002 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.050199 kubelet[2764]: E1027 16:21:18.050011 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.050352 kubelet[2764]: E1027 16:21:18.050287 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.050352 kubelet[2764]: W1027 16:21:18.050298 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.050352 kubelet[2764]: E1027 16:21:18.050310 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.050545 kubelet[2764]: E1027 16:21:18.050516 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.050578 kubelet[2764]: W1027 16:21:18.050532 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.050578 kubelet[2764]: E1027 16:21:18.050556 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:18.050765 kubelet[2764]: E1027 16:21:18.050746 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.050765 kubelet[2764]: W1027 16:21:18.050758 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.050816 kubelet[2764]: E1027 16:21:18.050770 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.050964 kubelet[2764]: E1027 16:21:18.050948 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.050964 kubelet[2764]: W1027 16:21:18.050959 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.051012 kubelet[2764]: E1027 16:21:18.050970 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.051182 kubelet[2764]: E1027 16:21:18.051138 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.051182 kubelet[2764]: W1027 16:21:18.051150 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.051246 kubelet[2764]: E1027 16:21:18.051189 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.051411 kubelet[2764]: E1027 16:21:18.051381 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.051411 kubelet[2764]: W1027 16:21:18.051396 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.051411 kubelet[2764]: E1027 16:21:18.051405 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.051608 kubelet[2764]: E1027 16:21:18.051591 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.051608 kubelet[2764]: W1027 16:21:18.051603 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.051657 kubelet[2764]: E1027 16:21:18.051613 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:18.051850 kubelet[2764]: E1027 16:21:18.051828 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.051850 kubelet[2764]: W1027 16:21:18.051842 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.051850 kubelet[2764]: E1027 16:21:18.051852 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.052040 kubelet[2764]: E1027 16:21:18.052031 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.052063 kubelet[2764]: W1027 16:21:18.052040 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.052063 kubelet[2764]: E1027 16:21:18.052050 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.052285 kubelet[2764]: E1027 16:21:18.052263 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.052285 kubelet[2764]: W1027 16:21:18.052277 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.052285 kubelet[2764]: E1027 16:21:18.052287 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.067869 kubelet[2764]: E1027 16:21:18.067823 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.067869 kubelet[2764]: W1027 16:21:18.067854 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.067944 kubelet[2764]: E1027 16:21:18.067879 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.068124 kubelet[2764]: E1027 16:21:18.068091 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.068124 kubelet[2764]: W1027 16:21:18.068104 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.068124 kubelet[2764]: E1027 16:21:18.068123 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:18.068402 kubelet[2764]: E1027 16:21:18.068373 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.068402 kubelet[2764]: W1027 16:21:18.068389 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.068402 kubelet[2764]: E1027 16:21:18.068400 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.068641 kubelet[2764]: E1027 16:21:18.068613 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.068641 kubelet[2764]: W1027 16:21:18.068631 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.068694 kubelet[2764]: E1027 16:21:18.068642 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.068811 kubelet[2764]: E1027 16:21:18.068796 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.068811 kubelet[2764]: W1027 16:21:18.068806 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.068853 kubelet[2764]: E1027 16:21:18.068814 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.068978 kubelet[2764]: E1027 16:21:18.068962 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.068978 kubelet[2764]: W1027 16:21:18.068972 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.069025 kubelet[2764]: E1027 16:21:18.068981 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.069213 kubelet[2764]: E1027 16:21:18.069195 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.069213 kubelet[2764]: W1027 16:21:18.069207 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.069258 kubelet[2764]: E1027 16:21:18.069216 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:18.069537 kubelet[2764]: E1027 16:21:18.069516 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.069537 kubelet[2764]: W1027 16:21:18.069533 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.069602 kubelet[2764]: E1027 16:21:18.069545 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.069728 kubelet[2764]: E1027 16:21:18.069712 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.069728 kubelet[2764]: W1027 16:21:18.069724 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.069771 kubelet[2764]: E1027 16:21:18.069735 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.069914 kubelet[2764]: E1027 16:21:18.069898 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.069914 kubelet[2764]: W1027 16:21:18.069910 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.069966 kubelet[2764]: E1027 16:21:18.069920 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.070094 kubelet[2764]: E1027 16:21:18.070075 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.070094 kubelet[2764]: W1027 16:21:18.070088 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.070177 kubelet[2764]: E1027 16:21:18.070096 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.070328 kubelet[2764]: E1027 16:21:18.070310 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.070328 kubelet[2764]: W1027 16:21:18.070323 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.070384 kubelet[2764]: E1027 16:21:18.070333 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:18.070533 kubelet[2764]: E1027 16:21:18.070516 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.070533 kubelet[2764]: W1027 16:21:18.070528 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.070580 kubelet[2764]: E1027 16:21:18.070539 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.070878 kubelet[2764]: E1027 16:21:18.070849 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.070878 kubelet[2764]: W1027 16:21:18.070867 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.070878 kubelet[2764]: E1027 16:21:18.070878 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.071125 kubelet[2764]: E1027 16:21:18.071101 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.071168 kubelet[2764]: W1027 16:21:18.071124 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.071168 kubelet[2764]: E1027 16:21:18.071136 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.071444 kubelet[2764]: E1027 16:21:18.071418 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.071444 kubelet[2764]: W1027 16:21:18.071434 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.071503 kubelet[2764]: E1027 16:21:18.071445 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.071730 kubelet[2764]: E1027 16:21:18.071711 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.071730 kubelet[2764]: W1027 16:21:18.071726 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.071776 kubelet[2764]: E1027 16:21:18.071738 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:18.072439 kubelet[2764]: E1027 16:21:18.072412 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:18.072467 kubelet[2764]: W1027 16:21:18.072443 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:18.072467 kubelet[2764]: E1027 16:21:18.072456 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:18.976903 kubelet[2764]: I1027 16:21:18.976865 2764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 27 16:21:18.977319 kubelet[2764]: E1027 16:21:18.977214 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:19.058511 kubelet[2764]: E1027 16:21:19.058475 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.058511 kubelet[2764]: W1027 16:21:19.058497 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.058511 kubelet[2764]: E1027 16:21:19.058521 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.058759 kubelet[2764]: E1027 16:21:19.058739 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.058759 kubelet[2764]: W1027 16:21:19.058751 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.058829 kubelet[2764]: E1027 16:21:19.058763 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.058998 kubelet[2764]: E1027 16:21:19.058970 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.058998 kubelet[2764]: W1027 16:21:19.058982 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.058998 kubelet[2764]: E1027 16:21:19.058993 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:19.059243 kubelet[2764]: E1027 16:21:19.059221 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.059243 kubelet[2764]: W1027 16:21:19.059235 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.059320 kubelet[2764]: E1027 16:21:19.059248 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.059473 kubelet[2764]: E1027 16:21:19.059455 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.059473 kubelet[2764]: W1027 16:21:19.059467 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.059564 kubelet[2764]: E1027 16:21:19.059477 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.059699 kubelet[2764]: E1027 16:21:19.059682 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.059699 kubelet[2764]: W1027 16:21:19.059694 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.059797 kubelet[2764]: E1027 16:21:19.059704 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.059949 kubelet[2764]: E1027 16:21:19.059914 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.059949 kubelet[2764]: W1027 16:21:19.059931 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.059949 kubelet[2764]: E1027 16:21:19.059943 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.060198 kubelet[2764]: E1027 16:21:19.060179 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.060198 kubelet[2764]: W1027 16:21:19.060194 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.060288 kubelet[2764]: E1027 16:21:19.060206 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:19.060425 kubelet[2764]: E1027 16:21:19.060412 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.060425 kubelet[2764]: W1027 16:21:19.060423 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.060471 kubelet[2764]: E1027 16:21:19.060432 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.060644 kubelet[2764]: E1027 16:21:19.060611 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.060644 kubelet[2764]: W1027 16:21:19.060622 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.060644 kubelet[2764]: E1027 16:21:19.060632 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.060836 kubelet[2764]: E1027 16:21:19.060818 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.060836 kubelet[2764]: W1027 16:21:19.060830 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.060917 kubelet[2764]: E1027 16:21:19.060843 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.061057 kubelet[2764]: E1027 16:21:19.061037 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.061057 kubelet[2764]: W1027 16:21:19.061049 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.061141 kubelet[2764]: E1027 16:21:19.061061 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.061315 kubelet[2764]: E1027 16:21:19.061298 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.061315 kubelet[2764]: W1027 16:21:19.061309 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.061391 kubelet[2764]: E1027 16:21:19.061320 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:19.061531 kubelet[2764]: E1027 16:21:19.061513 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.061531 kubelet[2764]: W1027 16:21:19.061525 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.061604 kubelet[2764]: E1027 16:21:19.061535 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.061768 kubelet[2764]: E1027 16:21:19.061740 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.061768 kubelet[2764]: W1027 16:21:19.061753 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.061768 kubelet[2764]: E1027 16:21:19.061764 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.075137 kubelet[2764]: E1027 16:21:19.075084 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.075137 kubelet[2764]: W1027 16:21:19.075131 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.075232 kubelet[2764]: E1027 16:21:19.075169 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.075427 kubelet[2764]: E1027 16:21:19.075389 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.075427 kubelet[2764]: W1027 16:21:19.075411 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.075427 kubelet[2764]: E1027 16:21:19.075420 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.075682 kubelet[2764]: E1027 16:21:19.075651 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.075682 kubelet[2764]: W1027 16:21:19.075666 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.075682 kubelet[2764]: E1027 16:21:19.075678 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:19.075916 kubelet[2764]: E1027 16:21:19.075888 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.075916 kubelet[2764]: W1027 16:21:19.075900 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.075916 kubelet[2764]: E1027 16:21:19.075911 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.076166 kubelet[2764]: E1027 16:21:19.076131 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.076166 kubelet[2764]: W1027 16:21:19.076143 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.076244 kubelet[2764]: E1027 16:21:19.076171 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.076429 kubelet[2764]: E1027 16:21:19.076405 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.076429 kubelet[2764]: W1027 16:21:19.076418 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.076429 kubelet[2764]: E1027 16:21:19.076429 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.076728 kubelet[2764]: E1027 16:21:19.076711 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.076728 kubelet[2764]: W1027 16:21:19.076724 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.076817 kubelet[2764]: E1027 16:21:19.076734 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.076960 kubelet[2764]: E1027 16:21:19.076941 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.076960 kubelet[2764]: W1027 16:21:19.076954 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.077042 kubelet[2764]: E1027 16:21:19.076975 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:19.077216 kubelet[2764]: E1027 16:21:19.077195 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.077216 kubelet[2764]: W1027 16:21:19.077207 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.077216 kubelet[2764]: E1027 16:21:19.077218 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.077457 kubelet[2764]: E1027 16:21:19.077436 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.077457 kubelet[2764]: W1027 16:21:19.077453 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.077557 kubelet[2764]: E1027 16:21:19.077467 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.077742 kubelet[2764]: E1027 16:21:19.077723 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.077742 kubelet[2764]: W1027 16:21:19.077737 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.077835 kubelet[2764]: E1027 16:21:19.077749 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.078220 kubelet[2764]: E1027 16:21:19.078200 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.078220 kubelet[2764]: W1027 16:21:19.078213 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.078301 kubelet[2764]: E1027 16:21:19.078225 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.078457 kubelet[2764]: E1027 16:21:19.078432 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.078457 kubelet[2764]: W1027 16:21:19.078454 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.078534 kubelet[2764]: E1027 16:21:19.078468 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:19.078704 kubelet[2764]: E1027 16:21:19.078685 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.078704 kubelet[2764]: W1027 16:21:19.078697 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.078776 kubelet[2764]: E1027 16:21:19.078708 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.078926 kubelet[2764]: E1027 16:21:19.078907 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.078926 kubelet[2764]: W1027 16:21:19.078919 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.079004 kubelet[2764]: E1027 16:21:19.078929 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.079174 kubelet[2764]: E1027 16:21:19.079137 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.079174 kubelet[2764]: W1027 16:21:19.079149 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.079262 kubelet[2764]: E1027 16:21:19.079181 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.079408 kubelet[2764]: E1027 16:21:19.079391 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.079408 kubelet[2764]: W1027 16:21:19.079403 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.079482 kubelet[2764]: E1027 16:21:19.079414 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 16:21:19.079833 kubelet[2764]: E1027 16:21:19.079813 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 16:21:19.079833 kubelet[2764]: W1027 16:21:19.079825 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 16:21:19.079909 kubelet[2764]: E1027 16:21:19.079836 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 16:21:19.905296 kubelet[2764]: E1027 16:21:19.905234 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtl2m" podUID="efa395f4-63b7-48dd-900f-15414929351b" Oct 27 16:21:20.295727 containerd[1621]: time="2025-10-27T16:21:20.295651732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:21:20.296687 containerd[1621]: time="2025-10-27T16:21:20.296650127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4442579" Oct 27 16:21:20.297946 containerd[1621]: time="2025-10-27T16:21:20.297912829Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:21:20.299822 containerd[1621]: time="2025-10-27T16:21:20.299788168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:21:20.300316 containerd[1621]: time="2025-10-27T16:21:20.300285946Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.929935556s" Oct 27 16:21:20.300316 containerd[1621]: time="2025-10-27T16:21:20.300311936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 27 16:21:20.309372 containerd[1621]: time="2025-10-27T16:21:20.309315975Z" level=info msg="CreateContainer within sandbox \"482299890f12916aebecdbeb609132a71f52329d380944ea58bbc23ead99d254\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 27 16:21:20.317823 containerd[1621]: time="2025-10-27T16:21:20.317798680Z" level=info msg="Container 5c57c1729af22923f5f7fa5a6455e495806283b6364eac4d30c96abc67672e13: CDI devices from CRI Config.CDIDevices: []" Oct 27 16:21:20.326100 containerd[1621]: time="2025-10-27T16:21:20.326051673Z" level=info msg="CreateContainer within sandbox \"482299890f12916aebecdbeb609132a71f52329d380944ea58bbc23ead99d254\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5c57c1729af22923f5f7fa5a6455e495806283b6364eac4d30c96abc67672e13\"" Oct 27 16:21:20.326613 containerd[1621]: time="2025-10-27T16:21:20.326575421Z" level=info msg="StartContainer for \"5c57c1729af22923f5f7fa5a6455e495806283b6364eac4d30c96abc67672e13\"" Oct 27 16:21:20.327938 containerd[1621]: time="2025-10-27T16:21:20.327913164Z" level=info msg="connecting to shim 5c57c1729af22923f5f7fa5a6455e495806283b6364eac4d30c96abc67672e13" address="unix:///run/containerd/s/d0e9c11b116869f3646262f241efa6155e2b315fd2e6daba1538362776d3035f" protocol=ttrpc version=3 Oct 27 16:21:20.356311 systemd[1]: Started 
cri-containerd-5c57c1729af22923f5f7fa5a6455e495806283b6364eac4d30c96abc67672e13.scope - libcontainer container 5c57c1729af22923f5f7fa5a6455e495806283b6364eac4d30c96abc67672e13. Oct 27 16:21:20.398812 containerd[1621]: time="2025-10-27T16:21:20.398771222Z" level=info msg="StartContainer for \"5c57c1729af22923f5f7fa5a6455e495806283b6364eac4d30c96abc67672e13\" returns successfully" Oct 27 16:21:20.411301 systemd[1]: cri-containerd-5c57c1729af22923f5f7fa5a6455e495806283b6364eac4d30c96abc67672e13.scope: Deactivated successfully. Oct 27 16:21:20.412977 containerd[1621]: time="2025-10-27T16:21:20.412946819Z" level=info msg="received exit event container_id:\"5c57c1729af22923f5f7fa5a6455e495806283b6364eac4d30c96abc67672e13\" id:\"5c57c1729af22923f5f7fa5a6455e495806283b6364eac4d30c96abc67672e13\" pid:3506 exited_at:{seconds:1761582080 nanos:412519002}" Oct 27 16:21:20.436745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c57c1729af22923f5f7fa5a6455e495806283b6364eac4d30c96abc67672e13-rootfs.mount: Deactivated successfully. Oct 27 16:21:20.983306 kubelet[2764]: E1027 16:21:20.983258 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:20.984551 containerd[1621]: time="2025-10-27T16:21:20.984513673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 27 16:21:21.089279 kubelet[2764]: I1027 16:21:21.089205 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b8d96bc64-s4lrj" podStartSLOduration=4.634432232 podStartE2EDuration="8.089142046s" podCreationTimestamp="2025-10-27 16:21:13 +0000 UTC" firstStartedPulling="2025-10-27 16:21:13.913954121 +0000 UTC m=+20.108285407" lastFinishedPulling="2025-10-27 16:21:17.368663935 +0000 UTC m=+23.562995221" observedRunningTime="2025-10-27 16:21:18.393828534 +0000 UTC m=+24.588159820" watchObservedRunningTime="2025-10-27 16:21:21.089142046 +0000 UTC m=+27.283473332" Oct 27 16:21:21.905573 kubelet[2764]: E1027 16:21:21.905494 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtl2m" podUID="efa395f4-63b7-48dd-900f-15414929351b" Oct 27 16:21:23.905709 kubelet[2764]: E1027 16:21:23.905646 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtl2m" podUID="efa395f4-63b7-48dd-900f-15414929351b" Oct 27 16:21:25.907213 kubelet[2764]: E1027 16:21:25.907120 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtl2m" podUID="efa395f4-63b7-48dd-900f-15414929351b" Oct 27 16:21:26.517217 containerd[1621]: time="2025-10-27T16:21:26.517147755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:21:26.518102 containerd[1621]: time="2025-10-27T16:21:26.518067457Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Oct 27 16:21:26.519145 containerd[1621]: time="2025-10-27T16:21:26.519118517Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:21:26.521115 containerd[1621]: time="2025-10-27T16:21:26.521088316Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:21:26.521648 containerd[1621]: time="2025-10-27T16:21:26.521623784Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.537065276s" Oct 27 16:21:26.521688 containerd[1621]: time="2025-10-27T16:21:26.521651416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 27 16:21:26.525631 containerd[1621]: time="2025-10-27T16:21:26.525602036Z" level=info msg="CreateContainer within sandbox \"482299890f12916aebecdbeb609132a71f52329d380944ea58bbc23ead99d254\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 27 16:21:26.535877 containerd[1621]: time="2025-10-27T16:21:26.535821271Z" level=info msg="Container 821db0f760c47ff1fd13b6b480aac5c81dbb169f098732c5a4b824e0d9c56066: CDI devices from CRI Config.CDIDevices: []" Oct 27 16:21:26.544247 containerd[1621]: time="2025-10-27T16:21:26.544189669Z" level=info msg="CreateContainer within sandbox \"482299890f12916aebecdbeb609132a71f52329d380944ea58bbc23ead99d254\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"821db0f760c47ff1fd13b6b480aac5c81dbb169f098732c5a4b824e0d9c56066\"" Oct 27 16:21:26.544696 containerd[1621]: time="2025-10-27T16:21:26.544666868Z" level=info msg="StartContainer for \"821db0f760c47ff1fd13b6b480aac5c81dbb169f098732c5a4b824e0d9c56066\"" Oct 27 16:21:26.546284 containerd[1621]: time="2025-10-27T16:21:26.546258185Z" level=info msg="connecting to shim 821db0f760c47ff1fd13b6b480aac5c81dbb169f098732c5a4b824e0d9c56066" address="unix:///run/containerd/s/d0e9c11b116869f3646262f241efa6155e2b315fd2e6daba1538362776d3035f" protocol=ttrpc version=3 Oct 27 16:21:26.572316 systemd[1]: Started cri-containerd-821db0f760c47ff1fd13b6b480aac5c81dbb169f098732c5a4b824e0d9c56066.scope - libcontainer container 821db0f760c47ff1fd13b6b480aac5c81dbb169f098732c5a4b824e0d9c56066. 
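The flexvol-driver and install-cni containers created above are the init steps of the calico-node pod; the first is presumably what populates the kubelet FlexVolume plugin directory (/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/) that the repeated probe failures earlier in this log found empty. Kubelet drives a FlexVolume driver by exec'ing the binary with a subcommand (init, mount, unmount, ...) and parsing its stdout as JSON, so a missing or silent executable produces exactly the "unexpected end of JSON input" unmarshal errors seen above. A minimal sketch of that contract, not Calico's actual uds driver:

```python
#!/usr/bin/env python3
# Minimal sketch of the kubelet FlexVolume driver contract: the driver is
# exec'd with a subcommand and must reply with a JSON object on stdout.
# Illustrative stand-in only, not the real nodeagent~uds binary.
import json
import sys

def main() -> int:
    cmd = sys.argv[1] if len(sys.argv) > 1 else ""
    if cmd == "init":
        # Report success and advertise that attach/detach is not implemented.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Every other call must still answer with valid JSON, otherwise kubelet
    # logs "Failed to unmarshal output" as in the lines above.
    print(json.dumps({"status": "Not supported", "message": f"unsupported call: {cmd}"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())
```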
Oct 27 16:21:26.651725 containerd[1621]: time="2025-10-27T16:21:26.651676053Z" level=info msg="StartContainer for \"821db0f760c47ff1fd13b6b480aac5c81dbb169f098732c5a4b824e0d9c56066\" returns successfully" Oct 27 16:21:26.996625 kubelet[2764]: E1027 16:21:26.996562 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:27.905432 kubelet[2764]: E1027 16:21:27.905369 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtl2m" podUID="efa395f4-63b7-48dd-900f-15414929351b" Oct 27 16:21:27.998641 kubelet[2764]: E1027 16:21:27.998599 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:28.210344 systemd[1]: cri-containerd-821db0f760c47ff1fd13b6b480aac5c81dbb169f098732c5a4b824e0d9c56066.scope: Deactivated successfully. Oct 27 16:21:28.210978 systemd[1]: cri-containerd-821db0f760c47ff1fd13b6b480aac5c81dbb169f098732c5a4b824e0d9c56066.scope: Consumed 668ms CPU time, 176.7M memory peak, 4.4M read from disk, 171.3M written to disk. Oct 27 16:21:28.211423 containerd[1621]: time="2025-10-27T16:21:28.211379465Z" level=info msg="received exit event container_id:\"821db0f760c47ff1fd13b6b480aac5c81dbb169f098732c5a4b824e0d9c56066\" id:\"821db0f760c47ff1fd13b6b480aac5c81dbb169f098732c5a4b824e0d9c56066\" pid:3566 exited_at:{seconds:1761582088 nanos:211083608}" Oct 27 16:21:28.236637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-821db0f760c47ff1fd13b6b480aac5c81dbb169f098732c5a4b824e0d9c56066-rootfs.mount: Deactivated successfully. Oct 27 16:21:28.287762 kubelet[2764]: I1027 16:21:28.287715 2764 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 27 16:21:28.845019 systemd[1]: Created slice kubepods-besteffort-pod2092531b_55e7_4501_8eb9_15912d303128.slice - libcontainer container kubepods-besteffort-pod2092531b_55e7_4501_8eb9_15912d303128.slice. Oct 27 16:21:28.882802 systemd[1]: Created slice kubepods-besteffort-pod7dbd484d_304e_4f95_8ee6_738c940331ca.slice - libcontainer container kubepods-besteffort-pod7dbd484d_304e_4f95_8ee6_738c940331ca.slice. Oct 27 16:21:28.913352 systemd[1]: Created slice kubepods-burstable-podb94d81d5_ee9f_418b_a5a5_65a18ed654f2.slice - libcontainer container kubepods-burstable-podb94d81d5_ee9f_418b_a5a5_65a18ed654f2.slice. Oct 27 16:21:28.920550 systemd[1]: Created slice kubepods-burstable-pod8eb6d556_0299_408d_b34c_9332a4f8317d.slice - libcontainer container kubepods-burstable-pod8eb6d556_0299_408d_b34c_9332a4f8317d.slice. Oct 27 16:21:28.927904 systemd[1]: Created slice kubepods-besteffort-pod5566825d_6dfc_4a28_b349_7aea4d744119.slice - libcontainer container kubepods-besteffort-pod5566825d_6dfc_4a28_b349_7aea4d744119.slice. Oct 27 16:21:28.934347 systemd[1]: Created slice kubepods-besteffort-pod1364f957_566a_4e9b_a994_8a554341484a.slice - libcontainer container kubepods-besteffort-pod1364f957_566a_4e9b_a994_8a554341484a.slice. 
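With the node now reporting ready, each newly scheduled pod gets a transient cgroup slice, visible above as the kubepods-besteffort-pod… and kubepods-burstable-pod….slice units (kubelet is evidently using the systemd cgroup driver). The naming can be reproduced from the pod UID and QoS class: dashes in the UID become underscores and the lower-cased QoS class is folded into the unit name. A small helper covering only the two QoS classes that appear in these lines, as an illustration rather than an exhaustive rule:

```python
# Reproduces the kubepods slice names created above (systemd cgroup driver).
# Only the BestEffort and Burstable forms seen in this log are covered.
def kubepods_slice(pod_uid: str, qos_class: str) -> str:
    return f"kubepods-{qos_class.lower()}-pod{pod_uid.replace('-', '_')}.slice"

assert kubepods_slice("2092531b-55e7-4501-8eb9-15912d303128", "BestEffort") == \
    "kubepods-besteffort-pod2092531b_55e7_4501_8eb9_15912d303128.slice"
assert kubepods_slice("b94d81d5-ee9f-418b-a5a5-65a18ed654f2", "Burstable") == \
    "kubepods-burstable-podb94d81d5_ee9f_418b_a5a5_65a18ed654f2.slice"
```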
Oct 27 16:21:28.938926 kubelet[2764]: I1027 16:21:28.938884 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7dbd484d-304e-4f95-8ee6-738c940331ca-goldmane-key-pair\") pod \"goldmane-7c778bb748-z78l5\" (UID: \"7dbd484d-304e-4f95-8ee6-738c940331ca\") " pod="calico-system/goldmane-7c778bb748-z78l5" Oct 27 16:21:28.939008 kubelet[2764]: I1027 16:21:28.938930 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b94d81d5-ee9f-418b-a5a5-65a18ed654f2-config-volume\") pod \"coredns-66bc5c9577-jjnzr\" (UID: \"b94d81d5-ee9f-418b-a5a5-65a18ed654f2\") " pod="kube-system/coredns-66bc5c9577-jjnzr" Oct 27 16:21:28.939008 kubelet[2764]: I1027 16:21:28.938952 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljkp4\" (UniqueName: \"kubernetes.io/projected/8eb6d556-0299-408d-b34c-9332a4f8317d-kube-api-access-ljkp4\") pod \"coredns-66bc5c9577-g4ljl\" (UID: \"8eb6d556-0299-408d-b34c-9332a4f8317d\") " pod="kube-system/coredns-66bc5c9577-g4ljl" Oct 27 16:21:28.939008 kubelet[2764]: I1027 16:21:28.938968 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1364f957-566a-4e9b-a994-8a554341484a-tigera-ca-bundle\") pod \"calico-kube-controllers-67bbccbd4-s5p5q\" (UID: \"1364f957-566a-4e9b-a994-8a554341484a\") " pod="calico-system/calico-kube-controllers-67bbccbd4-s5p5q" Oct 27 16:21:28.939008 kubelet[2764]: I1027 16:21:28.939002 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dbd484d-304e-4f95-8ee6-738c940331ca-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-z78l5\" (UID: \"7dbd484d-304e-4f95-8ee6-738c940331ca\") " pod="calico-system/goldmane-7c778bb748-z78l5" Oct 27 16:21:28.939114 kubelet[2764]: I1027 16:21:28.939017 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8eb6d556-0299-408d-b34c-9332a4f8317d-config-volume\") pod \"coredns-66bc5c9577-g4ljl\" (UID: \"8eb6d556-0299-408d-b34c-9332a4f8317d\") " pod="kube-system/coredns-66bc5c9577-g4ljl" Oct 27 16:21:28.939114 kubelet[2764]: I1027 16:21:28.939031 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2092531b-55e7-4501-8eb9-15912d303128-whisker-backend-key-pair\") pod \"whisker-579d6d9948-sxnp2\" (UID: \"2092531b-55e7-4501-8eb9-15912d303128\") " pod="calico-system/whisker-579d6d9948-sxnp2" Oct 27 16:21:28.939114 kubelet[2764]: I1027 16:21:28.939049 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbd484d-304e-4f95-8ee6-738c940331ca-config\") pod \"goldmane-7c778bb748-z78l5\" (UID: \"7dbd484d-304e-4f95-8ee6-738c940331ca\") " pod="calico-system/goldmane-7c778bb748-z78l5" Oct 27 16:21:28.939114 kubelet[2764]: I1027 16:21:28.939065 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2092531b-55e7-4501-8eb9-15912d303128-whisker-ca-bundle\") pod 
\"whisker-579d6d9948-sxnp2\" (UID: \"2092531b-55e7-4501-8eb9-15912d303128\") " pod="calico-system/whisker-579d6d9948-sxnp2" Oct 27 16:21:28.939114 kubelet[2764]: I1027 16:21:28.939082 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d5c04391-35e3-4b29-b954-2c4df3aa5299-calico-apiserver-certs\") pod \"calico-apiserver-7967f997df-jl28h\" (UID: \"d5c04391-35e3-4b29-b954-2c4df3aa5299\") " pod="calico-apiserver/calico-apiserver-7967f997df-jl28h" Oct 27 16:21:28.939255 kubelet[2764]: I1027 16:21:28.939108 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnpjv\" (UniqueName: \"kubernetes.io/projected/d5c04391-35e3-4b29-b954-2c4df3aa5299-kube-api-access-jnpjv\") pod \"calico-apiserver-7967f997df-jl28h\" (UID: \"d5c04391-35e3-4b29-b954-2c4df3aa5299\") " pod="calico-apiserver/calico-apiserver-7967f997df-jl28h" Oct 27 16:21:28.939255 kubelet[2764]: I1027 16:21:28.939139 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5566825d-6dfc-4a28-b349-7aea4d744119-calico-apiserver-certs\") pod \"calico-apiserver-7967f997df-vmzqg\" (UID: \"5566825d-6dfc-4a28-b349-7aea4d744119\") " pod="calico-apiserver/calico-apiserver-7967f997df-vmzqg" Oct 27 16:21:28.939255 kubelet[2764]: I1027 16:21:28.939200 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mjcz\" (UniqueName: \"kubernetes.io/projected/7dbd484d-304e-4f95-8ee6-738c940331ca-kube-api-access-2mjcz\") pod \"goldmane-7c778bb748-z78l5\" (UID: \"7dbd484d-304e-4f95-8ee6-738c940331ca\") " pod="calico-system/goldmane-7c778bb748-z78l5" Oct 27 16:21:28.939255 kubelet[2764]: I1027 16:21:28.939217 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87mvz\" (UniqueName: \"kubernetes.io/projected/b94d81d5-ee9f-418b-a5a5-65a18ed654f2-kube-api-access-87mvz\") pod \"coredns-66bc5c9577-jjnzr\" (UID: \"b94d81d5-ee9f-418b-a5a5-65a18ed654f2\") " pod="kube-system/coredns-66bc5c9577-jjnzr" Oct 27 16:21:28.939255 kubelet[2764]: I1027 16:21:28.939231 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8xd2\" (UniqueName: \"kubernetes.io/projected/2092531b-55e7-4501-8eb9-15912d303128-kube-api-access-g8xd2\") pod \"whisker-579d6d9948-sxnp2\" (UID: \"2092531b-55e7-4501-8eb9-15912d303128\") " pod="calico-system/whisker-579d6d9948-sxnp2" Oct 27 16:21:28.939381 kubelet[2764]: I1027 16:21:28.939249 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5gfg\" (UniqueName: \"kubernetes.io/projected/5566825d-6dfc-4a28-b349-7aea4d744119-kube-api-access-v5gfg\") pod \"calico-apiserver-7967f997df-vmzqg\" (UID: \"5566825d-6dfc-4a28-b349-7aea4d744119\") " pod="calico-apiserver/calico-apiserver-7967f997df-vmzqg" Oct 27 16:21:28.939381 kubelet[2764]: I1027 16:21:28.939266 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5cpx\" (UniqueName: \"kubernetes.io/projected/1364f957-566a-4e9b-a994-8a554341484a-kube-api-access-g5cpx\") pod \"calico-kube-controllers-67bbccbd4-s5p5q\" (UID: \"1364f957-566a-4e9b-a994-8a554341484a\") " 
pod="calico-system/calico-kube-controllers-67bbccbd4-s5p5q" Oct 27 16:21:28.939590 systemd[1]: Created slice kubepods-besteffort-podd5c04391_35e3_4b29_b954_2c4df3aa5299.slice - libcontainer container kubepods-besteffort-podd5c04391_35e3_4b29_b954_2c4df3aa5299.slice. Oct 27 16:21:29.003211 kubelet[2764]: E1027 16:21:29.003151 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:29.004140 containerd[1621]: time="2025-10-27T16:21:29.004083878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 27 16:21:29.151227 containerd[1621]: time="2025-10-27T16:21:29.151080132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-579d6d9948-sxnp2,Uid:2092531b-55e7-4501-8eb9-15912d303128,Namespace:calico-system,Attempt:0,}" Oct 27 16:21:29.189103 containerd[1621]: time="2025-10-27T16:21:29.189052497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-z78l5,Uid:7dbd484d-304e-4f95-8ee6-738c940331ca,Namespace:calico-system,Attempt:0,}" Oct 27 16:21:29.219960 kubelet[2764]: E1027 16:21:29.219894 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:29.220609 containerd[1621]: time="2025-10-27T16:21:29.220544168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jjnzr,Uid:b94d81d5-ee9f-418b-a5a5-65a18ed654f2,Namespace:kube-system,Attempt:0,}" Oct 27 16:21:29.226919 kubelet[2764]: E1027 16:21:29.226858 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:29.234085 containerd[1621]: time="2025-10-27T16:21:29.233620314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g4ljl,Uid:8eb6d556-0299-408d-b34c-9332a4f8317d,Namespace:kube-system,Attempt:0,}" Oct 27 16:21:29.234405 containerd[1621]: time="2025-10-27T16:21:29.234371298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7967f997df-vmzqg,Uid:5566825d-6dfc-4a28-b349-7aea4d744119,Namespace:calico-apiserver,Attempt:0,}" Oct 27 16:21:29.243132 containerd[1621]: time="2025-10-27T16:21:29.242776915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67bbccbd4-s5p5q,Uid:1364f957-566a-4e9b-a994-8a554341484a,Namespace:calico-system,Attempt:0,}" Oct 27 16:21:29.245259 containerd[1621]: time="2025-10-27T16:21:29.245238808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7967f997df-jl28h,Uid:d5c04391-35e3-4b29-b954-2c4df3aa5299,Namespace:calico-apiserver,Attempt:0,}" Oct 27 16:21:29.299775 containerd[1621]: time="2025-10-27T16:21:29.299642153Z" level=error msg="Failed to destroy network for sandbox \"d2b9b2813613ddd56206ade73276f43e9930e4fcf879254e5d4cb56efa38b642\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 16:21:29.308630 containerd[1621]: time="2025-10-27T16:21:29.308565004Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-579d6d9948-sxnp2,Uid:2092531b-55e7-4501-8eb9-15912d303128,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"d2b9b2813613ddd56206ade73276f43e9930e4fcf879254e5d4cb56efa38b642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 16:21:29.320484 kubelet[2764]: E1027 16:21:29.319989 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2b9b2813613ddd56206ade73276f43e9930e4fcf879254e5d4cb56efa38b642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 16:21:29.320484 kubelet[2764]: E1027 16:21:29.320072 2764 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2b9b2813613ddd56206ade73276f43e9930e4fcf879254e5d4cb56efa38b642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-579d6d9948-sxnp2" Oct 27 16:21:29.320484 kubelet[2764]: E1027 16:21:29.320098 2764 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2b9b2813613ddd56206ade73276f43e9930e4fcf879254e5d4cb56efa38b642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-579d6d9948-sxnp2" Oct 27 16:21:29.320728 kubelet[2764]: E1027 16:21:29.320200 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-579d6d9948-sxnp2_calico-system(2092531b-55e7-4501-8eb9-15912d303128)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-579d6d9948-sxnp2_calico-system(2092531b-55e7-4501-8eb9-15912d303128)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2b9b2813613ddd56206ade73276f43e9930e4fcf879254e5d4cb56efa38b642\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-579d6d9948-sxnp2" podUID="2092531b-55e7-4501-8eb9-15912d303128" Oct 27 16:21:29.341701 containerd[1621]: time="2025-10-27T16:21:29.341641316Z" level=error msg="Failed to destroy network for sandbox \"9f92fd84041414d27c5bb312750f07c81514107cab605907aa53fc00e87bbeba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 16:21:29.344587 containerd[1621]: time="2025-10-27T16:21:29.344487212Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-z78l5,Uid:7dbd484d-304e-4f95-8ee6-738c940331ca,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f92fd84041414d27c5bb312750f07c81514107cab605907aa53fc00e87bbeba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 16:21:29.344909 kubelet[2764]: E1027 16:21:29.344855 2764 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f92fd84041414d27c5bb312750f07c81514107cab605907aa53fc00e87bbeba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 16:21:29.345011 kubelet[2764]: E1027 16:21:29.344984 2764 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f92fd84041414d27c5bb312750f07c81514107cab605907aa53fc00e87bbeba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-z78l5" Oct 27 16:21:29.345050 kubelet[2764]: E1027 16:21:29.345013 2764 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f92fd84041414d27c5bb312750f07c81514107cab605907aa53fc00e87bbeba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-z78l5" Oct 27 16:21:29.345340 kubelet[2764]: E1027 16:21:29.345296 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-z78l5_calico-system(7dbd484d-304e-4f95-8ee6-738c940331ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-z78l5_calico-system(7dbd484d-304e-4f95-8ee6-738c940331ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f92fd84041414d27c5bb312750f07c81514107cab605907aa53fc00e87bbeba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-z78l5" podUID="7dbd484d-304e-4f95-8ee6-738c940331ca" Oct 27 16:21:29.358375 containerd[1621]: time="2025-10-27T16:21:29.358313079Z" level=error msg="Failed to destroy network for sandbox \"d6ed793c07ba022ecd892f90bddc27731d0e57dc2888f0e68754e06939003c16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 16:21:29.361011 containerd[1621]: time="2025-10-27T16:21:29.360970590Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jjnzr,Uid:b94d81d5-ee9f-418b-a5a5-65a18ed654f2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6ed793c07ba022ecd892f90bddc27731d0e57dc2888f0e68754e06939003c16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 16:21:29.361421 kubelet[2764]: E1027 16:21:29.361371 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6ed793c07ba022ecd892f90bddc27731d0e57dc2888f0e68754e06939003c16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 27 16:21:29.361478 kubelet[2764]: E1027 16:21:29.361447 2764 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6ed793c07ba022ecd892f90bddc27731d0e57dc2888f0e68754e06939003c16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-jjnzr" Oct 27 16:21:29.361478 kubelet[2764]: E1027 16:21:29.361468 2764 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6ed793c07ba022ecd892f90bddc27731d0e57dc2888f0e68754e06939003c16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-jjnzr" Oct 27 16:21:29.361561 kubelet[2764]: E1027 16:21:29.361529 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-jjnzr_kube-system(b94d81d5-ee9f-418b-a5a5-65a18ed654f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-jjnzr_kube-system(b94d81d5-ee9f-418b-a5a5-65a18ed654f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6ed793c07ba022ecd892f90bddc27731d0e57dc2888f0e68754e06939003c16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-jjnzr" podUID="b94d81d5-ee9f-418b-a5a5-65a18ed654f2" Oct 27 16:21:29.371406 containerd[1621]: time="2025-10-27T16:21:29.371325064Z" level=error msg="Failed to destroy network for sandbox \"599818ef931abc63e0fbdce35330cfbce735023bf11aeab5ce362df621353b4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 16:21:29.373588 containerd[1621]: time="2025-10-27T16:21:29.373543029Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g4ljl,Uid:8eb6d556-0299-408d-b34c-9332a4f8317d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"599818ef931abc63e0fbdce35330cfbce735023bf11aeab5ce362df621353b4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 16:21:29.373917 kubelet[2764]: E1027 16:21:29.373794 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"599818ef931abc63e0fbdce35330cfbce735023bf11aeab5ce362df621353b4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 16:21:29.373917 kubelet[2764]: E1027 16:21:29.373864 2764 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"599818ef931abc63e0fbdce35330cfbce735023bf11aeab5ce362df621353b4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-g4ljl" Oct 27 16:21:29.373917 kubelet[2764]: E1027 16:21:29.373885 2764 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"599818ef931abc63e0fbdce35330cfbce735023bf11aeab5ce362df621353b4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-g4ljl" Oct 27 16:21:29.374027 kubelet[2764]: E1027 16:21:29.373957 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-g4ljl_kube-system(8eb6d556-0299-408d-b34c-9332a4f8317d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-g4ljl_kube-system(8eb6d556-0299-408d-b34c-9332a4f8317d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"599818ef931abc63e0fbdce35330cfbce735023bf11aeab5ce362df621353b4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-g4ljl" podUID="8eb6d556-0299-408d-b34c-9332a4f8317d" Oct 27 16:21:29.375196 containerd[1621]: time="2025-10-27T16:21:29.375062768Z" level=error msg="Failed to destroy network for sandbox \"9d30413c98639dfa8288e53a35863bb619beaebd68a9b38fbd06d3cb2b072a06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 16:21:29.376846 containerd[1621]: time="2025-10-27T16:21:29.376821779Z" level=error msg="Failed to destroy network for sandbox \"c55ff158e4c7f72904a24df6d12e60b621838d5a7bedad818bd4b82d3a3f673e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 16:21:29.377546 containerd[1621]: time="2025-10-27T16:21:29.377513470Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7967f997df-jl28h,Uid:d5c04391-35e3-4b29-b954-2c4df3aa5299,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d30413c98639dfa8288e53a35863bb619beaebd68a9b38fbd06d3cb2b072a06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 16:21:29.377829 kubelet[2764]: E1027 16:21:29.377805 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d30413c98639dfa8288e53a35863bb619beaebd68a9b38fbd06d3cb2b072a06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 16:21:29.377885 kubelet[2764]: E1027 16:21:29.377847 2764 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d30413c98639dfa8288e53a35863bb619beaebd68a9b38fbd06d3cb2b072a06\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7967f997df-jl28h" Oct 27 16:21:29.377885 kubelet[2764]: E1027 16:21:29.377862 2764 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d30413c98639dfa8288e53a35863bb619beaebd68a9b38fbd06d3cb2b072a06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7967f997df-jl28h" Oct 27 16:21:29.377951 kubelet[2764]: E1027 16:21:29.377920 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7967f997df-jl28h_calico-apiserver(d5c04391-35e3-4b29-b954-2c4df3aa5299)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7967f997df-jl28h_calico-apiserver(d5c04391-35e3-4b29-b954-2c4df3aa5299)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d30413c98639dfa8288e53a35863bb619beaebd68a9b38fbd06d3cb2b072a06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7967f997df-jl28h" podUID="d5c04391-35e3-4b29-b954-2c4df3aa5299" Oct 27 16:21:29.379709 containerd[1621]: time="2025-10-27T16:21:29.379648758Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67bbccbd4-s5p5q,Uid:1364f957-566a-4e9b-a994-8a554341484a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c55ff158e4c7f72904a24df6d12e60b621838d5a7bedad818bd4b82d3a3f673e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 16:21:29.380147 kubelet[2764]: E1027 16:21:29.380115 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c55ff158e4c7f72904a24df6d12e60b621838d5a7bedad818bd4b82d3a3f673e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 16:21:29.380306 kubelet[2764]: E1027 16:21:29.380183 2764 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c55ff158e4c7f72904a24df6d12e60b621838d5a7bedad818bd4b82d3a3f673e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67bbccbd4-s5p5q" Oct 27 16:21:29.380306 kubelet[2764]: E1027 16:21:29.380204 2764 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c55ff158e4c7f72904a24df6d12e60b621838d5a7bedad818bd4b82d3a3f673e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-67bbccbd4-s5p5q" Oct 27 16:21:29.380306 kubelet[2764]: E1027 16:21:29.380263 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67bbccbd4-s5p5q_calico-system(1364f957-566a-4e9b-a994-8a554341484a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67bbccbd4-s5p5q_calico-system(1364f957-566a-4e9b-a994-8a554341484a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c55ff158e4c7f72904a24df6d12e60b621838d5a7bedad818bd4b82d3a3f673e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67bbccbd4-s5p5q" podUID="1364f957-566a-4e9b-a994-8a554341484a" Oct 27 16:21:29.385124 containerd[1621]: time="2025-10-27T16:21:29.385070711Z" level=error msg="Failed to destroy network for sandbox \"3d4273d020bcbd1427e169c89e6bb794d213a70257feaf4ba391b3b5099ed839\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 16:21:29.387100 containerd[1621]: time="2025-10-27T16:21:29.387049324Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7967f997df-vmzqg,Uid:5566825d-6dfc-4a28-b349-7aea4d744119,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d4273d020bcbd1427e169c89e6bb794d213a70257feaf4ba391b3b5099ed839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 16:21:29.387286 kubelet[2764]: E1027 16:21:29.387228 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d4273d020bcbd1427e169c89e6bb794d213a70257feaf4ba391b3b5099ed839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 16:21:29.387286 kubelet[2764]: E1027 16:21:29.387263 2764 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d4273d020bcbd1427e169c89e6bb794d213a70257feaf4ba391b3b5099ed839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7967f997df-vmzqg" Oct 27 16:21:29.387286 kubelet[2764]: E1027 16:21:29.387278 2764 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d4273d020bcbd1427e169c89e6bb794d213a70257feaf4ba391b3b5099ed839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7967f997df-vmzqg" Oct 27 16:21:29.387386 kubelet[2764]: E1027 16:21:29.387322 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-7967f997df-vmzqg_calico-apiserver(5566825d-6dfc-4a28-b349-7aea4d744119)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7967f997df-vmzqg_calico-apiserver(5566825d-6dfc-4a28-b349-7aea4d744119)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d4273d020bcbd1427e169c89e6bb794d213a70257feaf4ba391b3b5099ed839\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7967f997df-vmzqg" podUID="5566825d-6dfc-4a28-b349-7aea4d744119" Oct 27 16:21:29.912512 systemd[1]: Created slice kubepods-besteffort-podefa395f4_63b7_48dd_900f_15414929351b.slice - libcontainer container kubepods-besteffort-podefa395f4_63b7_48dd_900f_15414929351b.slice. Oct 27 16:21:29.917000 containerd[1621]: time="2025-10-27T16:21:29.916955270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtl2m,Uid:efa395f4-63b7-48dd-900f-15414929351b,Namespace:calico-system,Attempt:0,}" Oct 27 16:21:29.985353 containerd[1621]: time="2025-10-27T16:21:29.985276316Z" level=error msg="Failed to destroy network for sandbox \"d5a1e6b452aa1dcbf8481997d2c7327bbc17cfbd4e75cbfb00df78d373f3c49b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 16:21:29.988038 containerd[1621]: time="2025-10-27T16:21:29.987988679Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtl2m,Uid:efa395f4-63b7-48dd-900f-15414929351b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5a1e6b452aa1dcbf8481997d2c7327bbc17cfbd4e75cbfb00df78d373f3c49b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 16:21:29.988377 kubelet[2764]: E1027 16:21:29.988305 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5a1e6b452aa1dcbf8481997d2c7327bbc17cfbd4e75cbfb00df78d373f3c49b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 16:21:29.988440 kubelet[2764]: E1027 16:21:29.988391 2764 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5a1e6b452aa1dcbf8481997d2c7327bbc17cfbd4e75cbfb00df78d373f3c49b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wtl2m" Oct 27 16:21:29.988475 kubelet[2764]: E1027 16:21:29.988417 2764 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5a1e6b452aa1dcbf8481997d2c7327bbc17cfbd4e75cbfb00df78d373f3c49b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wtl2m" Oct 27 16:21:29.988611 kubelet[2764]: 
E1027 16:21:29.988530 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wtl2m_calico-system(efa395f4-63b7-48dd-900f-15414929351b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wtl2m_calico-system(efa395f4-63b7-48dd-900f-15414929351b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5a1e6b452aa1dcbf8481997d2c7327bbc17cfbd4e75cbfb00df78d373f3c49b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wtl2m" podUID="efa395f4-63b7-48dd-900f-15414929351b" Oct 27 16:21:30.237865 systemd[1]: run-netns-cni\x2d33dc900c\x2d1cfe\x2d1e61\x2d7037\x2d244e5ccfea54.mount: Deactivated successfully. Oct 27 16:21:30.237994 systemd[1]: run-netns-cni\x2d18ee5302\x2d6921\x2df670\x2da77f\x2d73deea9e5d7f.mount: Deactivated successfully. Oct 27 16:21:30.238063 systemd[1]: run-netns-cni\x2db04f5ee7\x2d89ff\x2d8162\x2d535e\x2dd27eb54e0eba.mount: Deactivated successfully. Oct 27 16:21:30.238129 systemd[1]: run-netns-cni\x2d4973d412\x2de2dd\x2d535a\x2d21cb\x2d7c21d9b3065b.mount: Deactivated successfully. Oct 27 16:21:30.238218 systemd[1]: run-netns-cni\x2d26be8b89\x2d5dd6\x2dc389\x2dfea0\x2d5be808926629.mount: Deactivated successfully. Oct 27 16:21:30.238287 systemd[1]: run-netns-cni\x2d4b736909\x2d6167\x2d3957\x2d31c7\x2d747b6bb26b1c.mount: Deactivated successfully. Oct 27 16:21:30.238357 systemd[1]: run-netns-cni\x2d3cd842fa\x2d95c6\x2d4f6e\x2d4fbb\x2dbb415486e888.mount: Deactivated successfully. Oct 27 16:21:36.279945 kubelet[2764]: I1027 16:21:36.279887 2764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 27 16:21:36.284434 kubelet[2764]: E1027 16:21:36.283517 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:36.665918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1495691094.mount: Deactivated successfully. 
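Every RunPodSandbox attempt in this stretch fails the same way: the Calico CNI plugin stats /var/lib/calico/nodename, a file that the calico/node container writes once it is running, and that container has not been pulled yet at this point, so each sandbox is torn down again (the run-netns-cni… mounts deactivated above). A quick host-side check mirroring that precondition, assuming the standard Calico file location shown in the error text:

```python
# Host-side check mirroring the precondition behind the sandbox failures
# above: the Calico CNI plugin requires /var/lib/calico/nodename, which the
# calico/node container writes once it is up.
from pathlib import Path

NODENAME = Path("/var/lib/calico/nodename")

def calico_node_registered() -> bool:
    try:
        return bool(NODENAME.read_text().strip())
    except FileNotFoundError:
        return False

if __name__ == "__main__":
    if calico_node_registered():
        print(f"calico/node is up; node name: {NODENAME.read_text().strip()}")
    else:
        print(f"{NODENAME} missing: calico/node has not started yet")
```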
Oct 27 16:21:37.019931 kubelet[2764]: E1027 16:21:37.019898 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:37.183569 containerd[1621]: time="2025-10-27T16:21:37.183499054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:21:37.201934 containerd[1621]: time="2025-10-27T16:21:37.184468406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Oct 27 16:21:37.201995 containerd[1621]: time="2025-10-27T16:21:37.185850854Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:21:37.202061 containerd[1621]: time="2025-10-27T16:21:37.188095462Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.183952944s" Oct 27 16:21:37.202091 containerd[1621]: time="2025-10-27T16:21:37.202064784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 27 16:21:37.202609 containerd[1621]: time="2025-10-27T16:21:37.202553242Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 16:21:37.233367 containerd[1621]: time="2025-10-27T16:21:37.233294490Z" level=info msg="CreateContainer within sandbox \"482299890f12916aebecdbeb609132a71f52329d380944ea58bbc23ead99d254\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 27 16:21:37.242085 containerd[1621]: time="2025-10-27T16:21:37.242028563Z" level=info msg="Container 543999cade1b42c398eb1d8caaa14aa9f07a3a7dbecd32a70d919e62a8f384a1: CDI devices from CRI Config.CDIDevices: []" Oct 27 16:21:37.253646 containerd[1621]: time="2025-10-27T16:21:37.253596722Z" level=info msg="CreateContainer within sandbox \"482299890f12916aebecdbeb609132a71f52329d380944ea58bbc23ead99d254\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"543999cade1b42c398eb1d8caaa14aa9f07a3a7dbecd32a70d919e62a8f384a1\"" Oct 27 16:21:37.254114 containerd[1621]: time="2025-10-27T16:21:37.254064731Z" level=info msg="StartContainer for \"543999cade1b42c398eb1d8caaa14aa9f07a3a7dbecd32a70d919e62a8f384a1\"" Oct 27 16:21:37.255964 containerd[1621]: time="2025-10-27T16:21:37.255933814Z" level=info msg="connecting to shim 543999cade1b42c398eb1d8caaa14aa9f07a3a7dbecd32a70d919e62a8f384a1" address="unix:///run/containerd/s/d0e9c11b116869f3646262f241efa6155e2b315fd2e6daba1538362776d3035f" protocol=ttrpc version=3 Oct 27 16:21:37.277296 systemd[1]: Started cri-containerd-543999cade1b42c398eb1d8caaa14aa9f07a3a7dbecd32a70d919e62a8f384a1.scope - libcontainer container 543999cade1b42c398eb1d8caaa14aa9f07a3a7dbecd32a70d919e62a8f384a1. 
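The calico-node container created here is the third container started inside the same pod sandbox as flexvol-driver and install-cni, and all three "connecting to shim" lines use the same containerd shim socket. A small ad-hoc parser over abbreviated copies of those containerd messages (sandbox and container IDs shortened for readability) makes that grouping explicit:

```python
import re
from collections import defaultdict

# Abbreviated copies of the containerd CreateContainer messages in this log
# (IDs shortened); the point is only to show that the three containers share
# one sandbox.
LINES = [
    r'msg="CreateContainer within sandbox \"482299890f12\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"',
    r'msg="CreateContainer within sandbox \"482299890f12\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"',
    r'msg="CreateContainer within sandbox \"482299890f12\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"',
]

PATTERN = re.compile(
    r'CreateContainer within sandbox \\"(?P<sandbox>[0-9a-f]+)\\" '
    r'for container &ContainerMetadata\{Name:(?P<name>[^,]+),'
)

def containers_by_sandbox(lines):
    groups = defaultdict(list)
    for line in lines:
        m = PATTERN.search(line)
        if m:
            groups[m.group("sandbox")].append(m.group("name"))
    return dict(groups)

print(containers_by_sandbox(LINES))
# {'482299890f12': ['flexvol-driver', 'install-cni', 'calico-node']}
```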
Oct 27 16:21:37.322903 containerd[1621]: time="2025-10-27T16:21:37.322850754Z" level=info msg="StartContainer for \"543999cade1b42c398eb1d8caaa14aa9f07a3a7dbecd32a70d919e62a8f384a1\" returns successfully" Oct 27 16:21:37.401492 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 27 16:21:37.402649 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 27 16:21:37.594263 kubelet[2764]: I1027 16:21:37.593857 2764 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2092531b-55e7-4501-8eb9-15912d303128-whisker-ca-bundle\") pod \"2092531b-55e7-4501-8eb9-15912d303128\" (UID: \"2092531b-55e7-4501-8eb9-15912d303128\") " Oct 27 16:21:37.594263 kubelet[2764]: I1027 16:21:37.593909 2764 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2092531b-55e7-4501-8eb9-15912d303128-whisker-backend-key-pair\") pod \"2092531b-55e7-4501-8eb9-15912d303128\" (UID: \"2092531b-55e7-4501-8eb9-15912d303128\") " Oct 27 16:21:37.594263 kubelet[2764]: I1027 16:21:37.593927 2764 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8xd2\" (UniqueName: \"kubernetes.io/projected/2092531b-55e7-4501-8eb9-15912d303128-kube-api-access-g8xd2\") pod \"2092531b-55e7-4501-8eb9-15912d303128\" (UID: \"2092531b-55e7-4501-8eb9-15912d303128\") " Oct 27 16:21:37.595237 kubelet[2764]: I1027 16:21:37.594481 2764 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2092531b-55e7-4501-8eb9-15912d303128-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2092531b-55e7-4501-8eb9-15912d303128" (UID: "2092531b-55e7-4501-8eb9-15912d303128"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 27 16:21:37.599484 kubelet[2764]: I1027 16:21:37.599434 2764 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2092531b-55e7-4501-8eb9-15912d303128-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2092531b-55e7-4501-8eb9-15912d303128" (UID: "2092531b-55e7-4501-8eb9-15912d303128"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 27 16:21:37.600294 kubelet[2764]: I1027 16:21:37.600269 2764 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2092531b-55e7-4501-8eb9-15912d303128-kube-api-access-g8xd2" (OuterVolumeSpecName: "kube-api-access-g8xd2") pod "2092531b-55e7-4501-8eb9-15912d303128" (UID: "2092531b-55e7-4501-8eb9-15912d303128"). InnerVolumeSpecName "kube-api-access-g8xd2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 27 16:21:37.666981 systemd[1]: var-lib-kubelet-pods-2092531b\x2d55e7\x2d4501\x2d8eb9\x2d15912d303128-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg8xd2.mount: Deactivated successfully. Oct 27 16:21:37.667123 systemd[1]: var-lib-kubelet-pods-2092531b\x2d55e7\x2d4501\x2d8eb9\x2d15912d303128-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
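The var-lib-kubelet-pods-… mount units deactivated here are systemd-escaped forms of the per-pod volume directories under /var/lib/kubelet/pods/<pod-UID>/volumes/<plugin>/<volume-name>: "-" separates path components, and bytes such as "-" and "~" inside a component appear as \x2d and \x7e. systemd-escape --unescape --path does the decoding properly; the following is a simplified decoder that is sufficient for the unit names shown in these lines:

```python
import re

def unescape_mount_unit(unit: str) -> str:
    """Recover the path behind a systemd mount unit name such as the
    var-lib-kubelet-pods-... units above: drop the .mount suffix, turn the
    '-' separators back into '/', then decode \\xNN escapes
    (\\x2d -> '-', \\x7e -> '~'). Simplified; use systemd-escape for real."""
    name = unit.removesuffix(".mount")
    path = "/" + name.replace("-", "/")
    return re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), path)

unit = (r"var-lib-kubelet-pods-2092531b\x2d55e7\x2d4501\x2d8eb9\x2d15912d303128"
        r"-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg8xd2.mount")
print(unescape_mount_unit(unit))
# /var/lib/kubelet/pods/2092531b-55e7-4501-8eb9-15912d303128/volumes/kubernetes.io~projected/kube-api-access-g8xd2
```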
Oct 27 16:21:37.694771 kubelet[2764]: I1027 16:21:37.694661 2764 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2092531b-55e7-4501-8eb9-15912d303128-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 27 16:21:37.694771 kubelet[2764]: I1027 16:21:37.694698 2764 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2092531b-55e7-4501-8eb9-15912d303128-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 27 16:21:37.694771 kubelet[2764]: I1027 16:21:37.694707 2764 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g8xd2\" (UniqueName: \"kubernetes.io/projected/2092531b-55e7-4501-8eb9-15912d303128-kube-api-access-g8xd2\") on node \"localhost\" DevicePath \"\"" Oct 27 16:21:37.915325 systemd[1]: Removed slice kubepods-besteffort-pod2092531b_55e7_4501_8eb9_15912d303128.slice - libcontainer container kubepods-besteffort-pod2092531b_55e7_4501_8eb9_15912d303128.slice. Oct 27 16:21:38.027694 kubelet[2764]: E1027 16:21:38.027486 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:38.043784 kubelet[2764]: I1027 16:21:38.043716 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vbqbv" podStartSLOduration=1.920306344 podStartE2EDuration="25.043688323s" podCreationTimestamp="2025-10-27 16:21:13 +0000 UTC" firstStartedPulling="2025-10-27 16:21:14.079796848 +0000 UTC m=+20.274128134" lastFinishedPulling="2025-10-27 16:21:37.203178837 +0000 UTC m=+43.397510113" observedRunningTime="2025-10-27 16:21:38.043502253 +0000 UTC m=+44.237833539" watchObservedRunningTime="2025-10-27 16:21:38.043688323 +0000 UTC m=+44.238019599" Oct 27 16:21:38.098368 systemd[1]: Created slice kubepods-besteffort-pod19c809b5_4ca8_41b2_8557_3eba1104bc4e.slice - libcontainer container kubepods-besteffort-pod19c809b5_4ca8_41b2_8557_3eba1104bc4e.slice. 
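The "Observed pod startup duration" line above can be reproduced by hand: the E2E duration is observedRunningTime minus podCreationTimestamp, and the SLO duration additionally subtracts the image-pull window (firstStartedPulling to lastFinishedPulling). A small Go check of that relationship, using the timestamps from the log; it matches the logged 25.04s / 1.92s values to within rounding, though it is an arithmetic illustration rather than kubelet's pod_startup_latency_tracker code:

```go
// Sketch: recompute podStartE2EDuration and podStartSLOduration from the
// timestamps printed in the kubelet log line above.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-10-27 16:21:13 +0000 UTC")
	firstPull := mustParse("2025-10-27 16:21:14.079796848 +0000 UTC")
	lastPull := mustParse("2025-10-27 16:21:37.203178837 +0000 UTC")
	running := mustParse("2025-10-27 16:21:38.043688323 +0000 UTC")

	e2e := running.Sub(created)        // ~25.043688323s, the logged podStartE2EDuration
	pulling := lastPull.Sub(firstPull) // ~23.123381989s spent pulling images
	slo := e2e - pulling               // ~1.920306s, the logged podStartSLOduration

	fmt.Printf("E2E=%v pulling=%v SLO=%v\n", e2e, pulling, slo)
}
```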
Oct 27 16:21:38.199046 kubelet[2764]: I1027 16:21:38.198994 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19c809b5-4ca8-41b2-8557-3eba1104bc4e-whisker-ca-bundle\") pod \"whisker-84cdc6b978-wfp8c\" (UID: \"19c809b5-4ca8-41b2-8557-3eba1104bc4e\") " pod="calico-system/whisker-84cdc6b978-wfp8c" Oct 27 16:21:38.199046 kubelet[2764]: I1027 16:21:38.199051 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/19c809b5-4ca8-41b2-8557-3eba1104bc4e-whisker-backend-key-pair\") pod \"whisker-84cdc6b978-wfp8c\" (UID: \"19c809b5-4ca8-41b2-8557-3eba1104bc4e\") " pod="calico-system/whisker-84cdc6b978-wfp8c" Oct 27 16:21:38.199285 kubelet[2764]: I1027 16:21:38.199131 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt59n\" (UniqueName: \"kubernetes.io/projected/19c809b5-4ca8-41b2-8557-3eba1104bc4e-kube-api-access-zt59n\") pod \"whisker-84cdc6b978-wfp8c\" (UID: \"19c809b5-4ca8-41b2-8557-3eba1104bc4e\") " pod="calico-system/whisker-84cdc6b978-wfp8c" Oct 27 16:21:38.404767 containerd[1621]: time="2025-10-27T16:21:38.404717017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84cdc6b978-wfp8c,Uid:19c809b5-4ca8-41b2-8557-3eba1104bc4e,Namespace:calico-system,Attempt:0,}" Oct 27 16:21:38.546121 systemd-networkd[1519]: cali0f82b4a99f3: Link UP Oct 27 16:21:38.546745 systemd-networkd[1519]: cali0f82b4a99f3: Gained carrier Oct 27 16:21:38.560295 containerd[1621]: 2025-10-27 16:21:38.426 [INFO][3972] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 16:21:38.560295 containerd[1621]: 2025-10-27 16:21:38.443 [INFO][3972] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--84cdc6b978--wfp8c-eth0 whisker-84cdc6b978- calico-system 19c809b5-4ca8-41b2-8557-3eba1104bc4e 924 0 2025-10-27 16:21:38 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:84cdc6b978 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-84cdc6b978-wfp8c eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0f82b4a99f3 [] [] }} ContainerID="2a30faee329f647dff1e93c5ba49bd4863d164e444c5cc26aa1bd691d812c8f7" Namespace="calico-system" Pod="whisker-84cdc6b978-wfp8c" WorkloadEndpoint="localhost-k8s-whisker--84cdc6b978--wfp8c-" Oct 27 16:21:38.560295 containerd[1621]: 2025-10-27 16:21:38.444 [INFO][3972] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a30faee329f647dff1e93c5ba49bd4863d164e444c5cc26aa1bd691d812c8f7" Namespace="calico-system" Pod="whisker-84cdc6b978-wfp8c" WorkloadEndpoint="localhost-k8s-whisker--84cdc6b978--wfp8c-eth0" Oct 27 16:21:38.560295 containerd[1621]: 2025-10-27 16:21:38.505 [INFO][3986] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a30faee329f647dff1e93c5ba49bd4863d164e444c5cc26aa1bd691d812c8f7" HandleID="k8s-pod-network.2a30faee329f647dff1e93c5ba49bd4863d164e444c5cc26aa1bd691d812c8f7" Workload="localhost-k8s-whisker--84cdc6b978--wfp8c-eth0" Oct 27 16:21:38.560515 containerd[1621]: 2025-10-27 16:21:38.506 [INFO][3986] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="2a30faee329f647dff1e93c5ba49bd4863d164e444c5cc26aa1bd691d812c8f7" HandleID="k8s-pod-network.2a30faee329f647dff1e93c5ba49bd4863d164e444c5cc26aa1bd691d812c8f7" Workload="localhost-k8s-whisker--84cdc6b978--wfp8c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fda0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-84cdc6b978-wfp8c", "timestamp":"2025-10-27 16:21:38.505624729 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 16:21:38.560515 containerd[1621]: 2025-10-27 16:21:38.506 [INFO][3986] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 16:21:38.560515 containerd[1621]: 2025-10-27 16:21:38.506 [INFO][3986] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 16:21:38.560515 containerd[1621]: 2025-10-27 16:21:38.506 [INFO][3986] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 16:21:38.560515 containerd[1621]: 2025-10-27 16:21:38.513 [INFO][3986] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a30faee329f647dff1e93c5ba49bd4863d164e444c5cc26aa1bd691d812c8f7" host="localhost" Oct 27 16:21:38.560515 containerd[1621]: 2025-10-27 16:21:38.518 [INFO][3986] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 16:21:38.560515 containerd[1621]: 2025-10-27 16:21:38.522 [INFO][3986] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 16:21:38.560515 containerd[1621]: 2025-10-27 16:21:38.523 [INFO][3986] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 16:21:38.560515 containerd[1621]: 2025-10-27 16:21:38.525 [INFO][3986] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 16:21:38.560515 containerd[1621]: 2025-10-27 16:21:38.525 [INFO][3986] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2a30faee329f647dff1e93c5ba49bd4863d164e444c5cc26aa1bd691d812c8f7" host="localhost" Oct 27 16:21:38.560780 containerd[1621]: 2025-10-27 16:21:38.526 [INFO][3986] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2a30faee329f647dff1e93c5ba49bd4863d164e444c5cc26aa1bd691d812c8f7 Oct 27 16:21:38.560780 containerd[1621]: 2025-10-27 16:21:38.529 [INFO][3986] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2a30faee329f647dff1e93c5ba49bd4863d164e444c5cc26aa1bd691d812c8f7" host="localhost" Oct 27 16:21:38.560780 containerd[1621]: 2025-10-27 16:21:38.534 [INFO][3986] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.2a30faee329f647dff1e93c5ba49bd4863d164e444c5cc26aa1bd691d812c8f7" host="localhost" Oct 27 16:21:38.560780 containerd[1621]: 2025-10-27 16:21:38.534 [INFO][3986] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.2a30faee329f647dff1e93c5ba49bd4863d164e444c5cc26aa1bd691d812c8f7" host="localhost" Oct 27 16:21:38.560780 containerd[1621]: 2025-10-27 16:21:38.534 [INFO][3986] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 16:21:38.560780 containerd[1621]: 2025-10-27 16:21:38.534 [INFO][3986] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="2a30faee329f647dff1e93c5ba49bd4863d164e444c5cc26aa1bd691d812c8f7" HandleID="k8s-pod-network.2a30faee329f647dff1e93c5ba49bd4863d164e444c5cc26aa1bd691d812c8f7" Workload="localhost-k8s-whisker--84cdc6b978--wfp8c-eth0" Oct 27 16:21:38.560909 containerd[1621]: 2025-10-27 16:21:38.537 [INFO][3972] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a30faee329f647dff1e93c5ba49bd4863d164e444c5cc26aa1bd691d812c8f7" Namespace="calico-system" Pod="whisker-84cdc6b978-wfp8c" WorkloadEndpoint="localhost-k8s-whisker--84cdc6b978--wfp8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84cdc6b978--wfp8c-eth0", GenerateName:"whisker-84cdc6b978-", Namespace:"calico-system", SelfLink:"", UID:"19c809b5-4ca8-41b2-8557-3eba1104bc4e", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 16, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84cdc6b978", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-84cdc6b978-wfp8c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0f82b4a99f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 16:21:38.560909 containerd[1621]: 2025-10-27 16:21:38.538 [INFO][3972] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="2a30faee329f647dff1e93c5ba49bd4863d164e444c5cc26aa1bd691d812c8f7" Namespace="calico-system" Pod="whisker-84cdc6b978-wfp8c" WorkloadEndpoint="localhost-k8s-whisker--84cdc6b978--wfp8c-eth0" Oct 27 16:21:38.560981 containerd[1621]: 2025-10-27 16:21:38.538 [INFO][3972] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f82b4a99f3 ContainerID="2a30faee329f647dff1e93c5ba49bd4863d164e444c5cc26aa1bd691d812c8f7" Namespace="calico-system" Pod="whisker-84cdc6b978-wfp8c" WorkloadEndpoint="localhost-k8s-whisker--84cdc6b978--wfp8c-eth0" Oct 27 16:21:38.560981 containerd[1621]: 2025-10-27 16:21:38.547 [INFO][3972] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a30faee329f647dff1e93c5ba49bd4863d164e444c5cc26aa1bd691d812c8f7" Namespace="calico-system" Pod="whisker-84cdc6b978-wfp8c" WorkloadEndpoint="localhost-k8s-whisker--84cdc6b978--wfp8c-eth0" Oct 27 16:21:38.561152 containerd[1621]: 2025-10-27 16:21:38.548 [INFO][3972] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a30faee329f647dff1e93c5ba49bd4863d164e444c5cc26aa1bd691d812c8f7" Namespace="calico-system" Pod="whisker-84cdc6b978-wfp8c" WorkloadEndpoint="localhost-k8s-whisker--84cdc6b978--wfp8c-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84cdc6b978--wfp8c-eth0", GenerateName:"whisker-84cdc6b978-", Namespace:"calico-system", SelfLink:"", UID:"19c809b5-4ca8-41b2-8557-3eba1104bc4e", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 16, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84cdc6b978", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2a30faee329f647dff1e93c5ba49bd4863d164e444c5cc26aa1bd691d812c8f7", Pod:"whisker-84cdc6b978-wfp8c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0f82b4a99f3", MAC:"ba:64:bb:01:57:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 16:21:38.561264 containerd[1621]: 2025-10-27 16:21:38.555 [INFO][3972] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a30faee329f647dff1e93c5ba49bd4863d164e444c5cc26aa1bd691d812c8f7" Namespace="calico-system" Pod="whisker-84cdc6b978-wfp8c" WorkloadEndpoint="localhost-k8s-whisker--84cdc6b978--wfp8c-eth0" Oct 27 16:21:38.755136 containerd[1621]: time="2025-10-27T16:21:38.753302793Z" level=info msg="connecting to shim 2a30faee329f647dff1e93c5ba49bd4863d164e444c5cc26aa1bd691d812c8f7" address="unix:///run/containerd/s/f4e57838e5bd46f09a1293b7fcc879e3efdec37cd873912d30f535ab44b3a97f" namespace=k8s.io protocol=ttrpc version=3 Oct 27 16:21:38.799989 systemd[1]: Started cri-containerd-2a30faee329f647dff1e93c5ba49bd4863d164e444c5cc26aa1bd691d812c8f7.scope - libcontainer container 2a30faee329f647dff1e93c5ba49bd4863d164e444c5cc26aa1bd691d812c8f7. 
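The "connecting to shim ... protocol=ttrpc version=3" lines above describe the runtime dialing a per-sandbox unix socket under /run/containerd/s/ and speaking ttrpc over it. A minimal sketch of just the dialing step, standard library only; the socket path is copied from the log and will not exist on another machine, and a real client would wrap the connection with github.com/containerd/ttrpc rather than stopping here:

```go
// Sketch: dial the containerd shim socket named in the log above.
package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

func main() {
	addr := "unix:///run/containerd/s/f4e57838e5bd46f09a1293b7fcc879e3efdec37cd873912d30f535ab44b3a97f"
	path := strings.TrimPrefix(addr, "unix://")

	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		fmt.Println("dial failed (expected outside this host):", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to shim socket:", path)
	// A real caller would now layer a ttrpc client on conn and issue Task API calls.
}
```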
Oct 27 16:21:38.828808 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 16:21:39.029332 kubelet[2764]: E1027 16:21:39.029295 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:39.182194 containerd[1621]: time="2025-10-27T16:21:39.182025882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84cdc6b978-wfp8c,Uid:19c809b5-4ca8-41b2-8557-3eba1104bc4e,Namespace:calico-system,Attempt:0,} returns sandbox id \"2a30faee329f647dff1e93c5ba49bd4863d164e444c5cc26aa1bd691d812c8f7\"" Oct 27 16:21:39.183987 containerd[1621]: time="2025-10-27T16:21:39.183957932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 16:21:39.235189 systemd-networkd[1519]: vxlan.calico: Link UP Oct 27 16:21:39.235200 systemd-networkd[1519]: vxlan.calico: Gained carrier Oct 27 16:21:39.549798 containerd[1621]: time="2025-10-27T16:21:39.549740447Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 16:21:39.551085 containerd[1621]: time="2025-10-27T16:21:39.551044807Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 16:21:39.551217 containerd[1621]: time="2025-10-27T16:21:39.551089310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Oct 27 16:21:39.551395 kubelet[2764]: E1027 16:21:39.551353 2764 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 16:21:39.551463 kubelet[2764]: E1027 16:21:39.551422 2764 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 16:21:39.551585 kubelet[2764]: E1027 16:21:39.551556 2764 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-84cdc6b978-wfp8c_calico-system(19c809b5-4ca8-41b2-8557-3eba1104bc4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 16:21:39.553098 containerd[1621]: time="2025-10-27T16:21:39.553059642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 16:21:39.908632 kubelet[2764]: I1027 16:21:39.908498 2764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2092531b-55e7-4501-8eb9-15912d303128" path="/var/lib/kubelet/pods/2092531b-55e7-4501-8eb9-15912d303128/volumes" Oct 27 16:21:39.927405 systemd-networkd[1519]: cali0f82b4a99f3: Gained IPv6LL Oct 27 16:21:40.078523 containerd[1621]: time="2025-10-27T16:21:40.078445632Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 16:21:40.079782 containerd[1621]: 
time="2025-10-27T16:21:40.079738140Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 16:21:40.079858 containerd[1621]: time="2025-10-27T16:21:40.079813562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Oct 27 16:21:40.080086 kubelet[2764]: E1027 16:21:40.080024 2764 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 16:21:40.080529 kubelet[2764]: E1027 16:21:40.080086 2764 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 16:21:40.080529 kubelet[2764]: E1027 16:21:40.080221 2764 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-84cdc6b978-wfp8c_calico-system(19c809b5-4ca8-41b2-8557-3eba1104bc4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 16:21:40.080529 kubelet[2764]: E1027 16:21:40.080286 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84cdc6b978-wfp8c" podUID="19c809b5-4ca8-41b2-8557-3eba1104bc4e" Oct 27 16:21:40.910433 containerd[1621]: time="2025-10-27T16:21:40.910379201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67bbccbd4-s5p5q,Uid:1364f957-566a-4e9b-a994-8a554341484a,Namespace:calico-system,Attempt:0,}" Oct 27 16:21:40.912337 containerd[1621]: time="2025-10-27T16:21:40.912288778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7967f997df-vmzqg,Uid:5566825d-6dfc-4a28-b349-7aea4d744119,Namespace:calico-apiserver,Attempt:0,}" Oct 27 16:21:40.913458 containerd[1621]: time="2025-10-27T16:21:40.913426035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7967f997df-jl28h,Uid:d5c04391-35e3-4b29-b954-2c4df3aa5299,Namespace:calico-apiserver,Attempt:0,}" Oct 27 16:21:41.053060 kubelet[2764]: E1027 16:21:41.052963 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84cdc6b978-wfp8c" podUID="19c809b5-4ca8-41b2-8557-3eba1104bc4e" Oct 27 16:21:41.057549 systemd-networkd[1519]: cali72cff2b1949: Link UP Oct 27 16:21:41.057763 systemd-networkd[1519]: cali72cff2b1949: Gained carrier Oct 27 16:21:41.076546 containerd[1621]: 2025-10-27 16:21:40.976 [INFO][4290] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7967f997df--jl28h-eth0 calico-apiserver-7967f997df- calico-apiserver d5c04391-35e3-4b29-b954-2c4df3aa5299 843 0 2025-10-27 16:21:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7967f997df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7967f997df-jl28h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali72cff2b1949 [] [] }} ContainerID="6dbe5bf26e268d5057adc58317c6c87fbabb0f0111138ff191df421eb2558141" Namespace="calico-apiserver" Pod="calico-apiserver-7967f997df-jl28h" WorkloadEndpoint="localhost-k8s-calico--apiserver--7967f997df--jl28h-" Oct 27 16:21:41.076546 containerd[1621]: 2025-10-27 16:21:40.976 [INFO][4290] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6dbe5bf26e268d5057adc58317c6c87fbabb0f0111138ff191df421eb2558141" Namespace="calico-apiserver" Pod="calico-apiserver-7967f997df-jl28h" WorkloadEndpoint="localhost-k8s-calico--apiserver--7967f997df--jl28h-eth0" Oct 27 16:21:41.076546 containerd[1621]: 2025-10-27 16:21:41.008 [INFO][4327] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6dbe5bf26e268d5057adc58317c6c87fbabb0f0111138ff191df421eb2558141" HandleID="k8s-pod-network.6dbe5bf26e268d5057adc58317c6c87fbabb0f0111138ff191df421eb2558141" Workload="localhost-k8s-calico--apiserver--7967f997df--jl28h-eth0" Oct 27 16:21:41.076780 containerd[1621]: 2025-10-27 16:21:41.008 [INFO][4327] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6dbe5bf26e268d5057adc58317c6c87fbabb0f0111138ff191df421eb2558141" HandleID="k8s-pod-network.6dbe5bf26e268d5057adc58317c6c87fbabb0f0111138ff191df421eb2558141" Workload="localhost-k8s-calico--apiserver--7967f997df--jl28h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df0f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7967f997df-jl28h", "timestamp":"2025-10-27 16:21:41.00873368 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 16:21:41.076780 containerd[1621]: 2025-10-27 16:21:41.008 [INFO][4327] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
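The whisker and whisker-backend pulls above fail because ghcr.io answers 404 for those tags. One way to reproduce the check outside containerd is to probe the registry's OCI distribution API manifest endpoint; note that ghcr.io usually wants a bearer token (even an anonymous one), so an unauthenticated probe may see 401 instead of the 404 containerd received through its own fetch. This is a sketch, not a full registry client:

```go
// Sketch: ask ghcr.io whether the manifest for a tag exists.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	repo, tag := "flatcar/calico/whisker", "v3.30.4"
	url := fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag)

	req, err := http.NewRequest(http.MethodHead, url, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(url, "->", resp.Status) // 404 means the tag is not published
}
```

The kubelet keeps retrying the pull with an increasing back-off, which is why the same failure resurfaces a few lines above as ImagePullBackOff in the "Error syncing pod" message.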
Oct 27 16:21:41.076780 containerd[1621]: 2025-10-27 16:21:41.008 [INFO][4327] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 16:21:41.076780 containerd[1621]: 2025-10-27 16:21:41.009 [INFO][4327] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 16:21:41.076780 containerd[1621]: 2025-10-27 16:21:41.015 [INFO][4327] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6dbe5bf26e268d5057adc58317c6c87fbabb0f0111138ff191df421eb2558141" host="localhost" Oct 27 16:21:41.076780 containerd[1621]: 2025-10-27 16:21:41.020 [INFO][4327] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 16:21:41.076780 containerd[1621]: 2025-10-27 16:21:41.025 [INFO][4327] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 16:21:41.076780 containerd[1621]: 2025-10-27 16:21:41.026 [INFO][4327] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 16:21:41.076780 containerd[1621]: 2025-10-27 16:21:41.028 [INFO][4327] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 16:21:41.076780 containerd[1621]: 2025-10-27 16:21:41.028 [INFO][4327] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6dbe5bf26e268d5057adc58317c6c87fbabb0f0111138ff191df421eb2558141" host="localhost" Oct 27 16:21:41.077190 containerd[1621]: 2025-10-27 16:21:41.029 [INFO][4327] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6dbe5bf26e268d5057adc58317c6c87fbabb0f0111138ff191df421eb2558141 Oct 27 16:21:41.077190 containerd[1621]: 2025-10-27 16:21:41.033 [INFO][4327] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6dbe5bf26e268d5057adc58317c6c87fbabb0f0111138ff191df421eb2558141" host="localhost" Oct 27 16:21:41.077190 containerd[1621]: 2025-10-27 16:21:41.044 [INFO][4327] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.6dbe5bf26e268d5057adc58317c6c87fbabb0f0111138ff191df421eb2558141" host="localhost" Oct 27 16:21:41.077190 containerd[1621]: 2025-10-27 16:21:41.044 [INFO][4327] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6dbe5bf26e268d5057adc58317c6c87fbabb0f0111138ff191df421eb2558141" host="localhost" Oct 27 16:21:41.077190 containerd[1621]: 2025-10-27 16:21:41.044 [INFO][4327] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
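The IPAM lines above repeat the block-affinity dance for the apiserver pod: the host's affine block is again 192.168.88.128/26 and the next free address, 192.168.88.130, is claimed (.129 already went to the whisker pod). A toy illustration of that arithmetic only; Calico's real allocator also tracks handles, attributes, and block affinities in the datastore:

```go
// Sketch: pick the first unallocated address in the host's affine /26 block.
package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks the block and returns the first address that is neither the
// block's network address nor already allocated.
func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	// .129 was handed to the whisker pod earlier in the log.
	allocated := map[netip.Addr]bool{netip.MustParseAddr("192.168.88.129"): true}
	if ip, ok := nextFree(block, allocated); ok {
		fmt.Println("claimed", ip.String()+"/26") // 192.168.88.130/26
	}
}
```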
Oct 27 16:21:41.077190 containerd[1621]: 2025-10-27 16:21:41.046 [INFO][4327] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6dbe5bf26e268d5057adc58317c6c87fbabb0f0111138ff191df421eb2558141" HandleID="k8s-pod-network.6dbe5bf26e268d5057adc58317c6c87fbabb0f0111138ff191df421eb2558141" Workload="localhost-k8s-calico--apiserver--7967f997df--jl28h-eth0" Oct 27 16:21:41.077561 containerd[1621]: 2025-10-27 16:21:41.050 [INFO][4290] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6dbe5bf26e268d5057adc58317c6c87fbabb0f0111138ff191df421eb2558141" Namespace="calico-apiserver" Pod="calico-apiserver-7967f997df-jl28h" WorkloadEndpoint="localhost-k8s-calico--apiserver--7967f997df--jl28h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7967f997df--jl28h-eth0", GenerateName:"calico-apiserver-7967f997df-", Namespace:"calico-apiserver", SelfLink:"", UID:"d5c04391-35e3-4b29-b954-2c4df3aa5299", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 16, 21, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7967f997df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7967f997df-jl28h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72cff2b1949", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 16:21:41.077634 containerd[1621]: 2025-10-27 16:21:41.050 [INFO][4290] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6dbe5bf26e268d5057adc58317c6c87fbabb0f0111138ff191df421eb2558141" Namespace="calico-apiserver" Pod="calico-apiserver-7967f997df-jl28h" WorkloadEndpoint="localhost-k8s-calico--apiserver--7967f997df--jl28h-eth0" Oct 27 16:21:41.077634 containerd[1621]: 2025-10-27 16:21:41.050 [INFO][4290] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72cff2b1949 ContainerID="6dbe5bf26e268d5057adc58317c6c87fbabb0f0111138ff191df421eb2558141" Namespace="calico-apiserver" Pod="calico-apiserver-7967f997df-jl28h" WorkloadEndpoint="localhost-k8s-calico--apiserver--7967f997df--jl28h-eth0" Oct 27 16:21:41.077634 containerd[1621]: 2025-10-27 16:21:41.056 [INFO][4290] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6dbe5bf26e268d5057adc58317c6c87fbabb0f0111138ff191df421eb2558141" Namespace="calico-apiserver" Pod="calico-apiserver-7967f997df-jl28h" WorkloadEndpoint="localhost-k8s-calico--apiserver--7967f997df--jl28h-eth0" Oct 27 16:21:41.077699 containerd[1621]: 2025-10-27 16:21:41.057 [INFO][4290] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6dbe5bf26e268d5057adc58317c6c87fbabb0f0111138ff191df421eb2558141" Namespace="calico-apiserver" Pod="calico-apiserver-7967f997df-jl28h" WorkloadEndpoint="localhost-k8s-calico--apiserver--7967f997df--jl28h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7967f997df--jl28h-eth0", GenerateName:"calico-apiserver-7967f997df-", Namespace:"calico-apiserver", SelfLink:"", UID:"d5c04391-35e3-4b29-b954-2c4df3aa5299", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 16, 21, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7967f997df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6dbe5bf26e268d5057adc58317c6c87fbabb0f0111138ff191df421eb2558141", Pod:"calico-apiserver-7967f997df-jl28h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72cff2b1949", MAC:"3e:64:ac:7f:8c:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 16:21:41.077755 containerd[1621]: 2025-10-27 16:21:41.069 [INFO][4290] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6dbe5bf26e268d5057adc58317c6c87fbabb0f0111138ff191df421eb2558141" Namespace="calico-apiserver" Pod="calico-apiserver-7967f997df-jl28h" WorkloadEndpoint="localhost-k8s-calico--apiserver--7967f997df--jl28h-eth0" Oct 27 16:21:41.079808 systemd-networkd[1519]: vxlan.calico: Gained IPv6LL Oct 27 16:21:41.109207 containerd[1621]: time="2025-10-27T16:21:41.109051999Z" level=info msg="connecting to shim 6dbe5bf26e268d5057adc58317c6c87fbabb0f0111138ff191df421eb2558141" address="unix:///run/containerd/s/0ebee45321b6b78a64477cabd99ee2b114b34037cbdfa60211852aa9d3eb0e21" namespace=k8s.io protocol=ttrpc version=3 Oct 27 16:21:41.145198 systemd[1]: Started cri-containerd-6dbe5bf26e268d5057adc58317c6c87fbabb0f0111138ff191df421eb2558141.scope - libcontainer container 6dbe5bf26e268d5057adc58317c6c87fbabb0f0111138ff191df421eb2558141. 
Oct 27 16:21:41.162394 systemd-networkd[1519]: cali04a2b053ea6: Link UP Oct 27 16:21:41.162658 systemd-networkd[1519]: cali04a2b053ea6: Gained carrier Oct 27 16:21:41.168654 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 16:21:41.182183 containerd[1621]: 2025-10-27 16:21:40.965 [INFO][4284] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7967f997df--vmzqg-eth0 calico-apiserver-7967f997df- calico-apiserver 5566825d-6dfc-4a28-b349-7aea4d744119 844 0 2025-10-27 16:21:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7967f997df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7967f997df-vmzqg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali04a2b053ea6 [] [] }} ContainerID="931f6d1350ef82488d61e27516decebe590e07b0de42190ca60ba852e1ce5d3e" Namespace="calico-apiserver" Pod="calico-apiserver-7967f997df-vmzqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7967f997df--vmzqg-" Oct 27 16:21:41.182183 containerd[1621]: 2025-10-27 16:21:40.965 [INFO][4284] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="931f6d1350ef82488d61e27516decebe590e07b0de42190ca60ba852e1ce5d3e" Namespace="calico-apiserver" Pod="calico-apiserver-7967f997df-vmzqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7967f997df--vmzqg-eth0" Oct 27 16:21:41.182183 containerd[1621]: 2025-10-27 16:21:41.008 [INFO][4315] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="931f6d1350ef82488d61e27516decebe590e07b0de42190ca60ba852e1ce5d3e" HandleID="k8s-pod-network.931f6d1350ef82488d61e27516decebe590e07b0de42190ca60ba852e1ce5d3e" Workload="localhost-k8s-calico--apiserver--7967f997df--vmzqg-eth0" Oct 27 16:21:41.182393 containerd[1621]: 2025-10-27 16:21:41.009 [INFO][4315] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="931f6d1350ef82488d61e27516decebe590e07b0de42190ca60ba852e1ce5d3e" HandleID="k8s-pod-network.931f6d1350ef82488d61e27516decebe590e07b0de42190ca60ba852e1ce5d3e" Workload="localhost-k8s-calico--apiserver--7967f997df--vmzqg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ea10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7967f997df-vmzqg", "timestamp":"2025-10-27 16:21:41.008779596 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 16:21:41.182393 containerd[1621]: 2025-10-27 16:21:41.009 [INFO][4315] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 16:21:41.182393 containerd[1621]: 2025-10-27 16:21:41.044 [INFO][4315] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
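The two concurrent CNI ADDs for the apiserver pods serialize on Calico's host-wide IPAM lock: request [4315] logs "About to acquire host-wide IPAM lock" at 41.009 but only acquires it at 41.044, once [4327] has released it. A plain mutex sketch of that serialization, not Calico's implementation, which locks per host so concurrent ADDs cannot write the same allocation block at once:

```go
// Sketch: two pod-network setups contending for one host-wide lock.
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var hostWide sync.Mutex
	var wg sync.WaitGroup

	assign := func(pod string, work time.Duration) {
		defer wg.Done()
		fmt.Println(pod, "about to acquire host-wide IPAM lock")
		hostWide.Lock()
		fmt.Println(pod, "acquired host-wide IPAM lock")
		time.Sleep(work) // stand-in for reading/writing the allocation block
		hostWide.Unlock()
		fmt.Println(pod, "released host-wide IPAM lock")
	}

	wg.Add(2)
	go assign("calico-apiserver-7967f997df-jl28h", 35*time.Millisecond)
	go assign("calico-apiserver-7967f997df-vmzqg", 10*time.Millisecond)
	wg.Wait()
}
```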
Oct 27 16:21:41.182393 containerd[1621]: 2025-10-27 16:21:41.046 [INFO][4315] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 16:21:41.182393 containerd[1621]: 2025-10-27 16:21:41.121 [INFO][4315] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.931f6d1350ef82488d61e27516decebe590e07b0de42190ca60ba852e1ce5d3e" host="localhost" Oct 27 16:21:41.182393 containerd[1621]: 2025-10-27 16:21:41.128 [INFO][4315] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 16:21:41.182393 containerd[1621]: 2025-10-27 16:21:41.133 [INFO][4315] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 16:21:41.182393 containerd[1621]: 2025-10-27 16:21:41.135 [INFO][4315] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 16:21:41.182393 containerd[1621]: 2025-10-27 16:21:41.138 [INFO][4315] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 16:21:41.182393 containerd[1621]: 2025-10-27 16:21:41.138 [INFO][4315] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.931f6d1350ef82488d61e27516decebe590e07b0de42190ca60ba852e1ce5d3e" host="localhost" Oct 27 16:21:41.182622 containerd[1621]: 2025-10-27 16:21:41.139 [INFO][4315] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.931f6d1350ef82488d61e27516decebe590e07b0de42190ca60ba852e1ce5d3e Oct 27 16:21:41.182622 containerd[1621]: 2025-10-27 16:21:41.142 [INFO][4315] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.931f6d1350ef82488d61e27516decebe590e07b0de42190ca60ba852e1ce5d3e" host="localhost" Oct 27 16:21:41.182622 containerd[1621]: 2025-10-27 16:21:41.150 [INFO][4315] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.931f6d1350ef82488d61e27516decebe590e07b0de42190ca60ba852e1ce5d3e" host="localhost" Oct 27 16:21:41.182622 containerd[1621]: 2025-10-27 16:21:41.150 [INFO][4315] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.931f6d1350ef82488d61e27516decebe590e07b0de42190ca60ba852e1ce5d3e" host="localhost" Oct 27 16:21:41.182622 containerd[1621]: 2025-10-27 16:21:41.151 [INFO][4315] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 16:21:41.182622 containerd[1621]: 2025-10-27 16:21:41.151 [INFO][4315] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="931f6d1350ef82488d61e27516decebe590e07b0de42190ca60ba852e1ce5d3e" HandleID="k8s-pod-network.931f6d1350ef82488d61e27516decebe590e07b0de42190ca60ba852e1ce5d3e" Workload="localhost-k8s-calico--apiserver--7967f997df--vmzqg-eth0" Oct 27 16:21:41.182753 containerd[1621]: 2025-10-27 16:21:41.155 [INFO][4284] cni-plugin/k8s.go 418: Populated endpoint ContainerID="931f6d1350ef82488d61e27516decebe590e07b0de42190ca60ba852e1ce5d3e" Namespace="calico-apiserver" Pod="calico-apiserver-7967f997df-vmzqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7967f997df--vmzqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7967f997df--vmzqg-eth0", GenerateName:"calico-apiserver-7967f997df-", Namespace:"calico-apiserver", SelfLink:"", UID:"5566825d-6dfc-4a28-b349-7aea4d744119", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 16, 21, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7967f997df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7967f997df-vmzqg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali04a2b053ea6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 16:21:41.182814 containerd[1621]: 2025-10-27 16:21:41.158 [INFO][4284] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="931f6d1350ef82488d61e27516decebe590e07b0de42190ca60ba852e1ce5d3e" Namespace="calico-apiserver" Pod="calico-apiserver-7967f997df-vmzqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7967f997df--vmzqg-eth0" Oct 27 16:21:41.182814 containerd[1621]: 2025-10-27 16:21:41.158 [INFO][4284] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04a2b053ea6 ContainerID="931f6d1350ef82488d61e27516decebe590e07b0de42190ca60ba852e1ce5d3e" Namespace="calico-apiserver" Pod="calico-apiserver-7967f997df-vmzqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7967f997df--vmzqg-eth0" Oct 27 16:21:41.182814 containerd[1621]: 2025-10-27 16:21:41.164 [INFO][4284] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="931f6d1350ef82488d61e27516decebe590e07b0de42190ca60ba852e1ce5d3e" Namespace="calico-apiserver" Pod="calico-apiserver-7967f997df-vmzqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7967f997df--vmzqg-eth0" Oct 27 16:21:41.182881 containerd[1621]: 2025-10-27 16:21:41.165 [INFO][4284] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="931f6d1350ef82488d61e27516decebe590e07b0de42190ca60ba852e1ce5d3e" Namespace="calico-apiserver" Pod="calico-apiserver-7967f997df-vmzqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7967f997df--vmzqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7967f997df--vmzqg-eth0", GenerateName:"calico-apiserver-7967f997df-", Namespace:"calico-apiserver", SelfLink:"", UID:"5566825d-6dfc-4a28-b349-7aea4d744119", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 16, 21, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7967f997df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"931f6d1350ef82488d61e27516decebe590e07b0de42190ca60ba852e1ce5d3e", Pod:"calico-apiserver-7967f997df-vmzqg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali04a2b053ea6", MAC:"ea:ea:b4:02:f2:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 16:21:41.182929 containerd[1621]: 2025-10-27 16:21:41.174 [INFO][4284] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="931f6d1350ef82488d61e27516decebe590e07b0de42190ca60ba852e1ce5d3e" Namespace="calico-apiserver" Pod="calico-apiserver-7967f997df-vmzqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7967f997df--vmzqg-eth0" Oct 27 16:21:41.209959 containerd[1621]: time="2025-10-27T16:21:41.209855971Z" level=info msg="connecting to shim 931f6d1350ef82488d61e27516decebe590e07b0de42190ca60ba852e1ce5d3e" address="unix:///run/containerd/s/affef339eb9ed9d039ba86519d918de33cbd2575c85b6334178df308eaf2dc34" namespace=k8s.io protocol=ttrpc version=3 Oct 27 16:21:41.212524 containerd[1621]: time="2025-10-27T16:21:41.212482424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7967f997df-jl28h,Uid:d5c04391-35e3-4b29-b954-2c4df3aa5299,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6dbe5bf26e268d5057adc58317c6c87fbabb0f0111138ff191df421eb2558141\"" Oct 27 16:21:41.214780 containerd[1621]: time="2025-10-27T16:21:41.214749002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 16:21:41.238339 systemd[1]: Started cri-containerd-931f6d1350ef82488d61e27516decebe590e07b0de42190ca60ba852e1ce5d3e.scope - libcontainer container 931f6d1350ef82488d61e27516decebe590e07b0de42190ca60ba852e1ce5d3e. 
Oct 27 16:21:41.255582 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 16:21:41.257944 systemd-networkd[1519]: cali29f097d8c76: Link UP Oct 27 16:21:41.258254 systemd-networkd[1519]: cali29f097d8c76: Gained carrier Oct 27 16:21:41.275603 containerd[1621]: 2025-10-27 16:21:40.968 [INFO][4274] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--67bbccbd4--s5p5q-eth0 calico-kube-controllers-67bbccbd4- calico-system 1364f957-566a-4e9b-a994-8a554341484a 840 0 2025-10-27 16:21:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67bbccbd4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-67bbccbd4-s5p5q eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali29f097d8c76 [] [] }} ContainerID="6ec77a0c4739b2b2412debb8e2b3b2746cb67859e726e16e86a0dcebf554e4f9" Namespace="calico-system" Pod="calico-kube-controllers-67bbccbd4-s5p5q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67bbccbd4--s5p5q-" Oct 27 16:21:41.275603 containerd[1621]: 2025-10-27 16:21:40.968 [INFO][4274] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ec77a0c4739b2b2412debb8e2b3b2746cb67859e726e16e86a0dcebf554e4f9" Namespace="calico-system" Pod="calico-kube-controllers-67bbccbd4-s5p5q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67bbccbd4--s5p5q-eth0" Oct 27 16:21:41.275603 containerd[1621]: 2025-10-27 16:21:41.012 [INFO][4317] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ec77a0c4739b2b2412debb8e2b3b2746cb67859e726e16e86a0dcebf554e4f9" HandleID="k8s-pod-network.6ec77a0c4739b2b2412debb8e2b3b2746cb67859e726e16e86a0dcebf554e4f9" Workload="localhost-k8s-calico--kube--controllers--67bbccbd4--s5p5q-eth0" Oct 27 16:21:41.275836 containerd[1621]: 2025-10-27 16:21:41.012 [INFO][4317] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6ec77a0c4739b2b2412debb8e2b3b2746cb67859e726e16e86a0dcebf554e4f9" HandleID="k8s-pod-network.6ec77a0c4739b2b2412debb8e2b3b2746cb67859e726e16e86a0dcebf554e4f9" Workload="localhost-k8s-calico--kube--controllers--67bbccbd4--s5p5q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df1f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-67bbccbd4-s5p5q", "timestamp":"2025-10-27 16:21:41.01201828 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 16:21:41.275836 containerd[1621]: 2025-10-27 16:21:41.012 [INFO][4317] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 16:21:41.275836 containerd[1621]: 2025-10-27 16:21:41.151 [INFO][4317] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 16:21:41.275836 containerd[1621]: 2025-10-27 16:21:41.151 [INFO][4317] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 16:21:41.275836 containerd[1621]: 2025-10-27 16:21:41.218 [INFO][4317] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6ec77a0c4739b2b2412debb8e2b3b2746cb67859e726e16e86a0dcebf554e4f9" host="localhost" Oct 27 16:21:41.275836 containerd[1621]: 2025-10-27 16:21:41.229 [INFO][4317] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 16:21:41.275836 containerd[1621]: 2025-10-27 16:21:41.233 [INFO][4317] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 16:21:41.275836 containerd[1621]: 2025-10-27 16:21:41.235 [INFO][4317] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 16:21:41.275836 containerd[1621]: 2025-10-27 16:21:41.237 [INFO][4317] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 16:21:41.275836 containerd[1621]: 2025-10-27 16:21:41.237 [INFO][4317] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6ec77a0c4739b2b2412debb8e2b3b2746cb67859e726e16e86a0dcebf554e4f9" host="localhost" Oct 27 16:21:41.276055 containerd[1621]: 2025-10-27 16:21:41.238 [INFO][4317] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6ec77a0c4739b2b2412debb8e2b3b2746cb67859e726e16e86a0dcebf554e4f9 Oct 27 16:21:41.276055 containerd[1621]: 2025-10-27 16:21:41.245 [INFO][4317] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6ec77a0c4739b2b2412debb8e2b3b2746cb67859e726e16e86a0dcebf554e4f9" host="localhost" Oct 27 16:21:41.276055 containerd[1621]: 2025-10-27 16:21:41.249 [INFO][4317] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.6ec77a0c4739b2b2412debb8e2b3b2746cb67859e726e16e86a0dcebf554e4f9" host="localhost" Oct 27 16:21:41.276055 containerd[1621]: 2025-10-27 16:21:41.249 [INFO][4317] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.6ec77a0c4739b2b2412debb8e2b3b2746cb67859e726e16e86a0dcebf554e4f9" host="localhost" Oct 27 16:21:41.276055 containerd[1621]: 2025-10-27 16:21:41.249 [INFO][4317] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 16:21:41.276055 containerd[1621]: 2025-10-27 16:21:41.249 [INFO][4317] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="6ec77a0c4739b2b2412debb8e2b3b2746cb67859e726e16e86a0dcebf554e4f9" HandleID="k8s-pod-network.6ec77a0c4739b2b2412debb8e2b3b2746cb67859e726e16e86a0dcebf554e4f9" Workload="localhost-k8s-calico--kube--controllers--67bbccbd4--s5p5q-eth0" Oct 27 16:21:41.276197 containerd[1621]: 2025-10-27 16:21:41.253 [INFO][4274] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ec77a0c4739b2b2412debb8e2b3b2746cb67859e726e16e86a0dcebf554e4f9" Namespace="calico-system" Pod="calico-kube-controllers-67bbccbd4-s5p5q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67bbccbd4--s5p5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67bbccbd4--s5p5q-eth0", GenerateName:"calico-kube-controllers-67bbccbd4-", Namespace:"calico-system", SelfLink:"", UID:"1364f957-566a-4e9b-a994-8a554341484a", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 16, 21, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67bbccbd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-67bbccbd4-s5p5q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali29f097d8c76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 16:21:41.276255 containerd[1621]: 2025-10-27 16:21:41.253 [INFO][4274] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="6ec77a0c4739b2b2412debb8e2b3b2746cb67859e726e16e86a0dcebf554e4f9" Namespace="calico-system" Pod="calico-kube-controllers-67bbccbd4-s5p5q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67bbccbd4--s5p5q-eth0" Oct 27 16:21:41.276255 containerd[1621]: 2025-10-27 16:21:41.254 [INFO][4274] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29f097d8c76 ContainerID="6ec77a0c4739b2b2412debb8e2b3b2746cb67859e726e16e86a0dcebf554e4f9" Namespace="calico-system" Pod="calico-kube-controllers-67bbccbd4-s5p5q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67bbccbd4--s5p5q-eth0" Oct 27 16:21:41.276255 containerd[1621]: 2025-10-27 16:21:41.256 [INFO][4274] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ec77a0c4739b2b2412debb8e2b3b2746cb67859e726e16e86a0dcebf554e4f9" Namespace="calico-system" Pod="calico-kube-controllers-67bbccbd4-s5p5q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67bbccbd4--s5p5q-eth0" Oct 27 16:21:41.276324 containerd[1621]: 2025-10-27 16:21:41.259 [INFO][4274] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="6ec77a0c4739b2b2412debb8e2b3b2746cb67859e726e16e86a0dcebf554e4f9" Namespace="calico-system" Pod="calico-kube-controllers-67bbccbd4-s5p5q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67bbccbd4--s5p5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67bbccbd4--s5p5q-eth0", GenerateName:"calico-kube-controllers-67bbccbd4-", Namespace:"calico-system", SelfLink:"", UID:"1364f957-566a-4e9b-a994-8a554341484a", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 16, 21, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67bbccbd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ec77a0c4739b2b2412debb8e2b3b2746cb67859e726e16e86a0dcebf554e4f9", Pod:"calico-kube-controllers-67bbccbd4-s5p5q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali29f097d8c76", MAC:"66:4f:85:96:46:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 16:21:41.276376 containerd[1621]: 2025-10-27 16:21:41.269 [INFO][4274] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ec77a0c4739b2b2412debb8e2b3b2746cb67859e726e16e86a0dcebf554e4f9" Namespace="calico-system" Pod="calico-kube-controllers-67bbccbd4-s5p5q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67bbccbd4--s5p5q-eth0" Oct 27 16:21:41.305452 containerd[1621]: time="2025-10-27T16:21:41.305402398Z" level=info msg="connecting to shim 6ec77a0c4739b2b2412debb8e2b3b2746cb67859e726e16e86a0dcebf554e4f9" address="unix:///run/containerd/s/eac046e73ec4a2fae00898f95700dc8a0614a1980c5102acd58c7f05d03f8ada" namespace=k8s.io protocol=ttrpc version=3 Oct 27 16:21:41.305617 containerd[1621]: time="2025-10-27T16:21:41.305588597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7967f997df-vmzqg,Uid:5566825d-6dfc-4a28-b349-7aea4d744119,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"931f6d1350ef82488d61e27516decebe590e07b0de42190ca60ba852e1ce5d3e\"" Oct 27 16:21:41.330338 systemd[1]: Started cri-containerd-6ec77a0c4739b2b2412debb8e2b3b2746cb67859e726e16e86a0dcebf554e4f9.scope - libcontainer container 6ec77a0c4739b2b2412debb8e2b3b2746cb67859e726e16e86a0dcebf554e4f9. 
Oct 27 16:21:41.345454 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 16:21:41.379304 containerd[1621]: time="2025-10-27T16:21:41.379259285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67bbccbd4-s5p5q,Uid:1364f957-566a-4e9b-a994-8a554341484a,Namespace:calico-system,Attempt:0,} returns sandbox id \"6ec77a0c4739b2b2412debb8e2b3b2746cb67859e726e16e86a0dcebf554e4f9\"" Oct 27 16:21:41.658675 containerd[1621]: time="2025-10-27T16:21:41.658591970Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 16:21:41.659959 containerd[1621]: time="2025-10-27T16:21:41.659905597Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 16:21:41.660060 containerd[1621]: time="2025-10-27T16:21:41.660004973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Oct 27 16:21:41.660292 kubelet[2764]: E1027 16:21:41.660245 2764 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 16:21:41.660687 kubelet[2764]: E1027 16:21:41.660301 2764 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 16:21:41.660687 kubelet[2764]: E1027 16:21:41.660622 2764 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7967f997df-jl28h_calico-apiserver(d5c04391-35e3-4b29-b954-2c4df3aa5299): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 16:21:41.660760 kubelet[2764]: E1027 16:21:41.660690 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7967f997df-jl28h" podUID="d5c04391-35e3-4b29-b954-2c4df3aa5299" Oct 27 16:21:41.660801 containerd[1621]: time="2025-10-27T16:21:41.660706291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 16:21:41.908243 kubelet[2764]: E1027 16:21:41.908151 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:41.908786 containerd[1621]: time="2025-10-27T16:21:41.908656621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g4ljl,Uid:8eb6d556-0299-408d-b34c-9332a4f8317d,Namespace:kube-system,Attempt:0,}" Oct 27 16:21:42.019871 systemd-networkd[1519]: 
cali95b9dbaae76: Link UP Oct 27 16:21:42.020845 systemd-networkd[1519]: cali95b9dbaae76: Gained carrier Oct 27 16:21:42.034305 containerd[1621]: 2025-10-27 16:21:41.949 [INFO][4510] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--g4ljl-eth0 coredns-66bc5c9577- kube-system 8eb6d556-0299-408d-b34c-9332a4f8317d 842 0 2025-10-27 16:21:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-g4ljl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali95b9dbaae76 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88" Namespace="kube-system" Pod="coredns-66bc5c9577-g4ljl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g4ljl-" Oct 27 16:21:42.034305 containerd[1621]: 2025-10-27 16:21:41.950 [INFO][4510] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88" Namespace="kube-system" Pod="coredns-66bc5c9577-g4ljl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g4ljl-eth0" Oct 27 16:21:42.034305 containerd[1621]: 2025-10-27 16:21:41.979 [INFO][4525] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88" HandleID="k8s-pod-network.4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88" Workload="localhost-k8s-coredns--66bc5c9577--g4ljl-eth0" Oct 27 16:21:42.034763 containerd[1621]: 2025-10-27 16:21:41.980 [INFO][4525] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88" HandleID="k8s-pod-network.4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88" Workload="localhost-k8s-coredns--66bc5c9577--g4ljl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7080), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-g4ljl", "timestamp":"2025-10-27 16:21:41.97995967 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 16:21:42.034763 containerd[1621]: 2025-10-27 16:21:41.980 [INFO][4525] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 16:21:42.034763 containerd[1621]: 2025-10-27 16:21:41.980 [INFO][4525] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 16:21:42.034763 containerd[1621]: 2025-10-27 16:21:41.980 [INFO][4525] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 16:21:42.034763 containerd[1621]: 2025-10-27 16:21:41.989 [INFO][4525] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88" host="localhost" Oct 27 16:21:42.034763 containerd[1621]: 2025-10-27 16:21:41.993 [INFO][4525] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 16:21:42.034763 containerd[1621]: 2025-10-27 16:21:41.997 [INFO][4525] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 16:21:42.034763 containerd[1621]: 2025-10-27 16:21:42.000 [INFO][4525] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 16:21:42.034763 containerd[1621]: 2025-10-27 16:21:42.002 [INFO][4525] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 16:21:42.034763 containerd[1621]: 2025-10-27 16:21:42.002 [INFO][4525] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88" host="localhost" Oct 27 16:21:42.034986 containerd[1621]: 2025-10-27 16:21:42.003 [INFO][4525] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88 Oct 27 16:21:42.034986 containerd[1621]: 2025-10-27 16:21:42.007 [INFO][4525] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88" host="localhost" Oct 27 16:21:42.034986 containerd[1621]: 2025-10-27 16:21:42.013 [INFO][4525] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88" host="localhost" Oct 27 16:21:42.034986 containerd[1621]: 2025-10-27 16:21:42.013 [INFO][4525] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88" host="localhost" Oct 27 16:21:42.034986 containerd[1621]: 2025-10-27 16:21:42.013 [INFO][4525] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
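[editor's note] The IPAM sequence just above claims 192.168.88.133/26 for coredns-66bc5c9577-g4ljl out of the host-affine block 192.168.88.128/26, and the earlier kube-controllers pod received .132 from the same block; the later assignments in this log (.134 through .136) follow in order. A /26 leaves 6 host bits, so the block spans 64 addresses, 192.168.88.128 through 192.168.88.191, and every address handed out here fits inside it. A minimal Go sketch of that arithmetic, using only values taken from the log (this is not Calico's IPAM code):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Host-affine block reported by ipam/ipam.go in the log.
        block := netip.MustParsePrefix("192.168.88.128/26")

        // Addresses the plugin claims for pods over the course of this log.
        claimed := []string{
            "192.168.88.132", // calico-kube-controllers-67bbccbd4-s5p5q
            "192.168.88.133", // coredns-66bc5c9577-g4ljl
            "192.168.88.134", // goldmane-7c778bb748-z78l5
            "192.168.88.135", // csi-node-driver-wtl2m
            "192.168.88.136", // coredns-66bc5c9577-jjnzr
        }

        // 32 - 26 = 6 host bits, i.e. 64 addresses in the block.
        fmt.Printf("block %s holds %d addresses\n", block, 1<<(32-block.Bits()))

        for _, s := range claimed {
            addr := netip.MustParseAddr(s)
            fmt.Printf("%s inside %s: %v\n", addr, block, block.Contains(addr))
        }
    }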
Oct 27 16:21:42.034986 containerd[1621]: 2025-10-27 16:21:42.013 [INFO][4525] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88" HandleID="k8s-pod-network.4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88" Workload="localhost-k8s-coredns--66bc5c9577--g4ljl-eth0" Oct 27 16:21:42.035109 containerd[1621]: 2025-10-27 16:21:42.016 [INFO][4510] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88" Namespace="kube-system" Pod="coredns-66bc5c9577-g4ljl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g4ljl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--g4ljl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8eb6d556-0299-408d-b34c-9332a4f8317d", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 16, 21, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-g4ljl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95b9dbaae76", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 16:21:42.035109 containerd[1621]: 2025-10-27 16:21:42.016 [INFO][4510] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88" Namespace="kube-system" Pod="coredns-66bc5c9577-g4ljl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g4ljl-eth0" Oct 27 16:21:42.035109 containerd[1621]: 2025-10-27 16:21:42.016 [INFO][4510] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95b9dbaae76 ContainerID="4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88" Namespace="kube-system" Pod="coredns-66bc5c9577-g4ljl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g4ljl-eth0" Oct 27 16:21:42.035109 containerd[1621]: 2025-10-27 16:21:42.021 
[INFO][4510] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88" Namespace="kube-system" Pod="coredns-66bc5c9577-g4ljl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g4ljl-eth0" Oct 27 16:21:42.035109 containerd[1621]: 2025-10-27 16:21:42.021 [INFO][4510] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88" Namespace="kube-system" Pod="coredns-66bc5c9577-g4ljl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g4ljl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--g4ljl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8eb6d556-0299-408d-b34c-9332a4f8317d", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 16, 21, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88", Pod:"coredns-66bc5c9577-g4ljl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95b9dbaae76", MAC:"52:f3:bd:3a:d0:f8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 16:21:42.035109 containerd[1621]: 2025-10-27 16:21:42.030 [INFO][4510] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88" Namespace="kube-system" Pod="coredns-66bc5c9577-g4ljl" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g4ljl-eth0" Oct 27 16:21:42.040423 kubelet[2764]: E1027 16:21:42.040376 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7967f997df-jl28h" podUID="d5c04391-35e3-4b29-b954-2c4df3aa5299" Oct 27 16:21:42.045910 containerd[1621]: time="2025-10-27T16:21:42.045866887Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 16:21:42.047714 containerd[1621]: time="2025-10-27T16:21:42.047576818Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 16:21:42.047849 containerd[1621]: time="2025-10-27T16:21:42.047802532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Oct 27 16:21:42.048071 kubelet[2764]: E1027 16:21:42.048014 2764 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 16:21:42.048142 kubelet[2764]: E1027 16:21:42.048088 2764 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 16:21:42.048499 kubelet[2764]: E1027 16:21:42.048439 2764 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7967f997df-vmzqg_calico-apiserver(5566825d-6dfc-4a28-b349-7aea4d744119): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 16:21:42.048571 kubelet[2764]: E1027 16:21:42.048514 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7967f997df-vmzqg" podUID="5566825d-6dfc-4a28-b349-7aea4d744119" Oct 27 16:21:42.049034 containerd[1621]: time="2025-10-27T16:21:42.048988680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 16:21:42.072764 containerd[1621]: time="2025-10-27T16:21:42.072698710Z" level=info msg="connecting to shim 4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88" address="unix:///run/containerd/s/d06e04951b6fa4f8d4a59d551d8dba66a26ef57d708203b1d4ae472e093c9243" namespace=k8s.io protocol=ttrpc version=3 Oct 27 16:21:42.101507 systemd[1]: Started cri-containerd-4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88.scope - libcontainer container 4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88. 
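[editor's note] The pull failures above ("fetch failed after status: 404 Not Found" host=ghcr.io, then ErrImagePull) mean the tag ghcr.io/flatcar/calico/apiserver:v3.30.4 simply does not resolve in that registry. One way to reproduce the 404 outside the kubelet is to probe the standard OCI distribution manifest endpoint directly. The sketch below assumes anonymous access; ghcr.io normally also expects a bearer token from its token endpoint, which is omitted here, so a 401 rather than a 404 is possible.

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // OCI distribution API: HEAD /v2/<name>/manifests/<reference>.
        // Hypothetical direct probe of the tag the kubelet failed to pull.
        url := "https://ghcr.io/v2/flatcar/calico/apiserver/manifests/v3.30.4"

        req, err := http.NewRequest(http.MethodHead, url, nil)
        if err != nil {
            panic(err)
        }
        // Ask for a manifest media type rather than an HTML redirect.
        req.Header.Set("Accept", "application/vnd.oci.image.manifest.v1+json")
        req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.v2+json")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        // A 404 here matches the containerd "not found" error in the log;
        // a 401 means the anonymous-token step mentioned above was skipped.
        fmt.Println(resp.Status)
    }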
Oct 27 16:21:42.116413 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 16:21:42.149040 containerd[1621]: time="2025-10-27T16:21:42.148992991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g4ljl,Uid:8eb6d556-0299-408d-b34c-9332a4f8317d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88\"" Oct 27 16:21:42.149801 kubelet[2764]: E1027 16:21:42.149774 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:42.154576 containerd[1621]: time="2025-10-27T16:21:42.154530121Z" level=info msg="CreateContainer within sandbox \"4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 27 16:21:42.168979 containerd[1621]: time="2025-10-27T16:21:42.168830289Z" level=info msg="Container 203b7eccc5a6c71045a18ea66c867045ddde576bd8d0433fd8841aa598536ef0: CDI devices from CRI Config.CDIDevices: []" Oct 27 16:21:42.172912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount649063360.mount: Deactivated successfully. Oct 27 16:21:42.175829 containerd[1621]: time="2025-10-27T16:21:42.175791933Z" level=info msg="CreateContainer within sandbox \"4af6510d1931ba97bdce0e7e2bb7b74e012921a2f0adf9b4f96f57e81f3fba88\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"203b7eccc5a6c71045a18ea66c867045ddde576bd8d0433fd8841aa598536ef0\"" Oct 27 16:21:42.176452 containerd[1621]: time="2025-10-27T16:21:42.176387312Z" level=info msg="StartContainer for \"203b7eccc5a6c71045a18ea66c867045ddde576bd8d0433fd8841aa598536ef0\"" Oct 27 16:21:42.177520 containerd[1621]: time="2025-10-27T16:21:42.177493509Z" level=info msg="connecting to shim 203b7eccc5a6c71045a18ea66c867045ddde576bd8d0433fd8841aa598536ef0" address="unix:///run/containerd/s/d06e04951b6fa4f8d4a59d551d8dba66a26ef57d708203b1d4ae472e093c9243" protocol=ttrpc version=3 Oct 27 16:21:42.201314 systemd[1]: Started cri-containerd-203b7eccc5a6c71045a18ea66c867045ddde576bd8d0433fd8841aa598536ef0.scope - libcontainer container 203b7eccc5a6c71045a18ea66c867045ddde576bd8d0433fd8841aa598536ef0. 
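[editor's note] The recurring "Nameserver limits exceeded" errors from kubelet's dns.go reflect the classic resolver cap of three nameserver entries: the node's resolv.conf lists more than three, so kubelet keeps only 1.1.1.1, 1.0.0.1 and 8.8.8.8 when it writes pod resolv.conf files. A small Go sketch of the same check; the limit of 3 is the glibc MAXNS value kubelet mirrors, while the file path and trimming behaviour here are assumptions for illustration only.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // glibc MAXNS; kubelet applies the same cap

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            fields := strings.Fields(scanner.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if err := scanner.Err(); err != nil {
            panic(err)
        }

        if len(servers) > maxNameservers {
            fmt.Printf("nameserver limit exceeded: keeping %v, dropping %v\n",
                servers[:maxNameservers], servers[maxNameservers:])
        } else {
            fmt.Printf("nameservers within limit: %v\n", servers)
        }
    }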
Oct 27 16:21:42.236708 containerd[1621]: time="2025-10-27T16:21:42.236667034Z" level=info msg="StartContainer for \"203b7eccc5a6c71045a18ea66c867045ddde576bd8d0433fd8841aa598536ef0\" returns successfully" Oct 27 16:21:42.459846 containerd[1621]: time="2025-10-27T16:21:42.459791242Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 16:21:42.460984 containerd[1621]: time="2025-10-27T16:21:42.460951180Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 16:21:42.461040 containerd[1621]: time="2025-10-27T16:21:42.461031230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Oct 27 16:21:42.461365 kubelet[2764]: E1027 16:21:42.461258 2764 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 16:21:42.461365 kubelet[2764]: E1027 16:21:42.461363 2764 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 16:21:42.461649 kubelet[2764]: E1027 16:21:42.461468 2764 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-67bbccbd4-s5p5q_calico-system(1364f957-566a-4e9b-a994-8a554341484a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 16:21:42.461649 kubelet[2764]: E1027 16:21:42.461502 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67bbccbd4-s5p5q" podUID="1364f957-566a-4e9b-a994-8a554341484a" Oct 27 16:21:42.679365 systemd-networkd[1519]: cali72cff2b1949: Gained IPv6LL Oct 27 16:21:42.999379 systemd-networkd[1519]: cali29f097d8c76: Gained IPv6LL Oct 27 16:21:43.049091 kubelet[2764]: E1027 16:21:43.049043 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:43.049787 kubelet[2764]: E1027 16:21:43.049698 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7967f997df-vmzqg" podUID="5566825d-6dfc-4a28-b349-7aea4d744119" Oct 27 16:21:43.049787 kubelet[2764]: E1027 16:21:43.049728 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7967f997df-jl28h" podUID="d5c04391-35e3-4b29-b954-2c4df3aa5299" Oct 27 16:21:43.049906 kubelet[2764]: E1027 16:21:43.049855 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67bbccbd4-s5p5q" podUID="1364f957-566a-4e9b-a994-8a554341484a" Oct 27 16:21:43.063351 systemd-networkd[1519]: cali04a2b053ea6: Gained IPv6LL Oct 27 16:21:43.383366 systemd-networkd[1519]: cali95b9dbaae76: Gained IPv6LL Oct 27 16:21:43.989897 containerd[1621]: time="2025-10-27T16:21:43.989836998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-z78l5,Uid:7dbd484d-304e-4f95-8ee6-738c940331ca,Namespace:calico-system,Attempt:0,}" Oct 27 16:21:44.052008 kubelet[2764]: E1027 16:21:44.051673 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:44.095806 kubelet[2764]: I1027 16:21:44.095750 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-g4ljl" podStartSLOduration=43.095730427 podStartE2EDuration="43.095730427s" podCreationTimestamp="2025-10-27 16:21:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 16:21:43.092424114 +0000 UTC m=+49.286755400" watchObservedRunningTime="2025-10-27 16:21:44.095730427 +0000 UTC m=+50.290061713" Oct 27 16:21:44.304067 systemd-networkd[1519]: cali4541f352c6b: Link UP Oct 27 16:21:44.305050 systemd-networkd[1519]: cali4541f352c6b: Gained carrier Oct 27 16:21:44.318877 containerd[1621]: 2025-10-27 16:21:44.223 [INFO][4632] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--z78l5-eth0 goldmane-7c778bb748- calico-system 7dbd484d-304e-4f95-8ee6-738c940331ca 839 0 2025-10-27 16:21:11 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-z78l5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4541f352c6b [] [] }} ContainerID="62d1b3f88d21031db392b6ceba47c384839bda3608c85e192aadcc0c512cfcc4" Namespace="calico-system" Pod="goldmane-7c778bb748-z78l5" 
WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--z78l5-" Oct 27 16:21:44.318877 containerd[1621]: 2025-10-27 16:21:44.224 [INFO][4632] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="62d1b3f88d21031db392b6ceba47c384839bda3608c85e192aadcc0c512cfcc4" Namespace="calico-system" Pod="goldmane-7c778bb748-z78l5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--z78l5-eth0" Oct 27 16:21:44.318877 containerd[1621]: 2025-10-27 16:21:44.263 [INFO][4647] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62d1b3f88d21031db392b6ceba47c384839bda3608c85e192aadcc0c512cfcc4" HandleID="k8s-pod-network.62d1b3f88d21031db392b6ceba47c384839bda3608c85e192aadcc0c512cfcc4" Workload="localhost-k8s-goldmane--7c778bb748--z78l5-eth0" Oct 27 16:21:44.318877 containerd[1621]: 2025-10-27 16:21:44.263 [INFO][4647] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="62d1b3f88d21031db392b6ceba47c384839bda3608c85e192aadcc0c512cfcc4" HandleID="k8s-pod-network.62d1b3f88d21031db392b6ceba47c384839bda3608c85e192aadcc0c512cfcc4" Workload="localhost-k8s-goldmane--7c778bb748--z78l5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000554aa0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-z78l5", "timestamp":"2025-10-27 16:21:44.263412299 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 16:21:44.318877 containerd[1621]: 2025-10-27 16:21:44.263 [INFO][4647] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 16:21:44.318877 containerd[1621]: 2025-10-27 16:21:44.263 [INFO][4647] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 16:21:44.318877 containerd[1621]: 2025-10-27 16:21:44.263 [INFO][4647] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 16:21:44.318877 containerd[1621]: 2025-10-27 16:21:44.270 [INFO][4647] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.62d1b3f88d21031db392b6ceba47c384839bda3608c85e192aadcc0c512cfcc4" host="localhost" Oct 27 16:21:44.318877 containerd[1621]: 2025-10-27 16:21:44.276 [INFO][4647] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 16:21:44.318877 containerd[1621]: 2025-10-27 16:21:44.280 [INFO][4647] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 16:21:44.318877 containerd[1621]: 2025-10-27 16:21:44.282 [INFO][4647] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 16:21:44.318877 containerd[1621]: 2025-10-27 16:21:44.285 [INFO][4647] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 16:21:44.318877 containerd[1621]: 2025-10-27 16:21:44.285 [INFO][4647] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.62d1b3f88d21031db392b6ceba47c384839bda3608c85e192aadcc0c512cfcc4" host="localhost" Oct 27 16:21:44.318877 containerd[1621]: 2025-10-27 16:21:44.286 [INFO][4647] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.62d1b3f88d21031db392b6ceba47c384839bda3608c85e192aadcc0c512cfcc4 Oct 27 16:21:44.318877 containerd[1621]: 2025-10-27 16:21:44.290 [INFO][4647] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.62d1b3f88d21031db392b6ceba47c384839bda3608c85e192aadcc0c512cfcc4" host="localhost" Oct 27 16:21:44.318877 containerd[1621]: 2025-10-27 16:21:44.297 [INFO][4647] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.62d1b3f88d21031db392b6ceba47c384839bda3608c85e192aadcc0c512cfcc4" host="localhost" Oct 27 16:21:44.318877 containerd[1621]: 2025-10-27 16:21:44.297 [INFO][4647] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.62d1b3f88d21031db392b6ceba47c384839bda3608c85e192aadcc0c512cfcc4" host="localhost" Oct 27 16:21:44.318877 containerd[1621]: 2025-10-27 16:21:44.297 [INFO][4647] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 16:21:44.318877 containerd[1621]: 2025-10-27 16:21:44.297 [INFO][4647] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="62d1b3f88d21031db392b6ceba47c384839bda3608c85e192aadcc0c512cfcc4" HandleID="k8s-pod-network.62d1b3f88d21031db392b6ceba47c384839bda3608c85e192aadcc0c512cfcc4" Workload="localhost-k8s-goldmane--7c778bb748--z78l5-eth0" Oct 27 16:21:44.319472 containerd[1621]: 2025-10-27 16:21:44.301 [INFO][4632] cni-plugin/k8s.go 418: Populated endpoint ContainerID="62d1b3f88d21031db392b6ceba47c384839bda3608c85e192aadcc0c512cfcc4" Namespace="calico-system" Pod="goldmane-7c778bb748-z78l5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--z78l5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--z78l5-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"7dbd484d-304e-4f95-8ee6-738c940331ca", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 16, 21, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-z78l5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4541f352c6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 16:21:44.319472 containerd[1621]: 2025-10-27 16:21:44.301 [INFO][4632] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="62d1b3f88d21031db392b6ceba47c384839bda3608c85e192aadcc0c512cfcc4" Namespace="calico-system" Pod="goldmane-7c778bb748-z78l5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--z78l5-eth0" Oct 27 16:21:44.319472 containerd[1621]: 2025-10-27 16:21:44.301 [INFO][4632] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4541f352c6b ContainerID="62d1b3f88d21031db392b6ceba47c384839bda3608c85e192aadcc0c512cfcc4" Namespace="calico-system" Pod="goldmane-7c778bb748-z78l5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--z78l5-eth0" Oct 27 16:21:44.319472 containerd[1621]: 2025-10-27 16:21:44.303 [INFO][4632] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62d1b3f88d21031db392b6ceba47c384839bda3608c85e192aadcc0c512cfcc4" Namespace="calico-system" Pod="goldmane-7c778bb748-z78l5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--z78l5-eth0" Oct 27 16:21:44.319472 containerd[1621]: 2025-10-27 16:21:44.305 [INFO][4632] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="62d1b3f88d21031db392b6ceba47c384839bda3608c85e192aadcc0c512cfcc4" Namespace="calico-system" Pod="goldmane-7c778bb748-z78l5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--z78l5-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--z78l5-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"7dbd484d-304e-4f95-8ee6-738c940331ca", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 16, 21, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"62d1b3f88d21031db392b6ceba47c384839bda3608c85e192aadcc0c512cfcc4", Pod:"goldmane-7c778bb748-z78l5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4541f352c6b", MAC:"ae:e2:ef:ad:23:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 16:21:44.319472 containerd[1621]: 2025-10-27 16:21:44.313 [INFO][4632] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="62d1b3f88d21031db392b6ceba47c384839bda3608c85e192aadcc0c512cfcc4" Namespace="calico-system" Pod="goldmane-7c778bb748-z78l5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--z78l5-eth0" Oct 27 16:21:44.347759 containerd[1621]: time="2025-10-27T16:21:44.347691869Z" level=info msg="connecting to shim 62d1b3f88d21031db392b6ceba47c384839bda3608c85e192aadcc0c512cfcc4" address="unix:///run/containerd/s/7a42db028c220712672a0ebdf316d065f9eedf33a213108bb254ddd70195efa3" namespace=k8s.io protocol=ttrpc version=3 Oct 27 16:21:44.372320 systemd[1]: Started cri-containerd-62d1b3f88d21031db392b6ceba47c384839bda3608c85e192aadcc0c512cfcc4.scope - libcontainer container 62d1b3f88d21031db392b6ceba47c384839bda3608c85e192aadcc0c512cfcc4. Oct 27 16:21:44.389792 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 16:21:44.440630 containerd[1621]: time="2025-10-27T16:21:44.440580665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-z78l5,Uid:7dbd484d-304e-4f95-8ee6-738c940331ca,Namespace:calico-system,Attempt:0,} returns sandbox id \"62d1b3f88d21031db392b6ceba47c384839bda3608c85e192aadcc0c512cfcc4\"" Oct 27 16:21:44.442861 containerd[1621]: time="2025-10-27T16:21:44.442814610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 16:21:44.544779 systemd[1]: Started sshd@7-10.0.0.138:22-10.0.0.1:45630.service - OpenSSH per-connection server daemon (10.0.0.1:45630). Oct 27 16:21:44.625228 sshd[4717]: Accepted publickey for core from 10.0.0.1 port 45630 ssh2: RSA SHA256:uZ2SsweoLt391JAi4nRlA6QBN0bR66RV/4rcYh4vsoI Oct 27 16:21:44.627679 sshd-session[4717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 16:21:44.632800 systemd-logind[1595]: New session 8 of user core. Oct 27 16:21:44.638315 systemd[1]: Started session-8.scope - Session 8 of User core. 
Oct 27 16:21:44.804892 sshd[4724]: Connection closed by 10.0.0.1 port 45630 Oct 27 16:21:44.805257 sshd-session[4717]: pam_unix(sshd:session): session closed for user core Oct 27 16:21:44.810112 systemd[1]: sshd@7-10.0.0.138:22-10.0.0.1:45630.service: Deactivated successfully. Oct 27 16:21:44.812190 systemd[1]: session-8.scope: Deactivated successfully. Oct 27 16:21:44.813099 systemd-logind[1595]: Session 8 logged out. Waiting for processes to exit. Oct 27 16:21:44.814423 systemd-logind[1595]: Removed session 8. Oct 27 16:21:44.827570 containerd[1621]: time="2025-10-27T16:21:44.827526250Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 16:21:44.829089 containerd[1621]: time="2025-10-27T16:21:44.829049971Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 16:21:44.829194 containerd[1621]: time="2025-10-27T16:21:44.829141122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Oct 27 16:21:44.829381 kubelet[2764]: E1027 16:21:44.829334 2764 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 16:21:44.829452 kubelet[2764]: E1027 16:21:44.829387 2764 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 16:21:44.829576 kubelet[2764]: E1027 16:21:44.829524 2764 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-z78l5_calico-system(7dbd484d-304e-4f95-8ee6-738c940331ca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 16:21:44.829617 kubelet[2764]: E1027 16:21:44.829571 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-z78l5" podUID="7dbd484d-304e-4f95-8ee6-738c940331ca" Oct 27 16:21:44.908969 kubelet[2764]: E1027 16:21:44.908816 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:44.909561 containerd[1621]: time="2025-10-27T16:21:44.909325342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jjnzr,Uid:b94d81d5-ee9f-418b-a5a5-65a18ed654f2,Namespace:kube-system,Attempt:0,}" Oct 27 16:21:44.911296 containerd[1621]: time="2025-10-27T16:21:44.911252080Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-wtl2m,Uid:efa395f4-63b7-48dd-900f-15414929351b,Namespace:calico-system,Attempt:0,}" Oct 27 16:21:45.029243 systemd-networkd[1519]: calie43bb1c29bf: Link UP Oct 27 16:21:45.029827 systemd-networkd[1519]: calie43bb1c29bf: Gained carrier Oct 27 16:21:45.045390 containerd[1621]: 2025-10-27 16:21:44.961 [INFO][4747] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--wtl2m-eth0 csi-node-driver- calico-system efa395f4-63b7-48dd-900f-15414929351b 718 0 2025-10-27 16:21:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-wtl2m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie43bb1c29bf [] [] }} ContainerID="6bd65f4143e1561d2b080fe3209ed4585672a0825af848014f81b704bbc49bd3" Namespace="calico-system" Pod="csi-node-driver-wtl2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--wtl2m-" Oct 27 16:21:45.045390 containerd[1621]: 2025-10-27 16:21:44.961 [INFO][4747] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6bd65f4143e1561d2b080fe3209ed4585672a0825af848014f81b704bbc49bd3" Namespace="calico-system" Pod="csi-node-driver-wtl2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--wtl2m-eth0" Oct 27 16:21:45.045390 containerd[1621]: 2025-10-27 16:21:44.989 [INFO][4776] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6bd65f4143e1561d2b080fe3209ed4585672a0825af848014f81b704bbc49bd3" HandleID="k8s-pod-network.6bd65f4143e1561d2b080fe3209ed4585672a0825af848014f81b704bbc49bd3" Workload="localhost-k8s-csi--node--driver--wtl2m-eth0" Oct 27 16:21:45.045390 containerd[1621]: 2025-10-27 16:21:44.989 [INFO][4776] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6bd65f4143e1561d2b080fe3209ed4585672a0825af848014f81b704bbc49bd3" HandleID="k8s-pod-network.6bd65f4143e1561d2b080fe3209ed4585672a0825af848014f81b704bbc49bd3" Workload="localhost-k8s-csi--node--driver--wtl2m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-wtl2m", "timestamp":"2025-10-27 16:21:44.98911484 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 16:21:45.045390 containerd[1621]: 2025-10-27 16:21:44.989 [INFO][4776] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 16:21:45.045390 containerd[1621]: 2025-10-27 16:21:44.989 [INFO][4776] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 16:21:45.045390 containerd[1621]: 2025-10-27 16:21:44.989 [INFO][4776] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 16:21:45.045390 containerd[1621]: 2025-10-27 16:21:44.996 [INFO][4776] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6bd65f4143e1561d2b080fe3209ed4585672a0825af848014f81b704bbc49bd3" host="localhost" Oct 27 16:21:45.045390 containerd[1621]: 2025-10-27 16:21:45.000 [INFO][4776] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 16:21:45.045390 containerd[1621]: 2025-10-27 16:21:45.005 [INFO][4776] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 16:21:45.045390 containerd[1621]: 2025-10-27 16:21:45.007 [INFO][4776] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 16:21:45.045390 containerd[1621]: 2025-10-27 16:21:45.009 [INFO][4776] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 16:21:45.045390 containerd[1621]: 2025-10-27 16:21:45.009 [INFO][4776] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6bd65f4143e1561d2b080fe3209ed4585672a0825af848014f81b704bbc49bd3" host="localhost" Oct 27 16:21:45.045390 containerd[1621]: 2025-10-27 16:21:45.011 [INFO][4776] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6bd65f4143e1561d2b080fe3209ed4585672a0825af848014f81b704bbc49bd3 Oct 27 16:21:45.045390 containerd[1621]: 2025-10-27 16:21:45.016 [INFO][4776] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6bd65f4143e1561d2b080fe3209ed4585672a0825af848014f81b704bbc49bd3" host="localhost" Oct 27 16:21:45.045390 containerd[1621]: 2025-10-27 16:21:45.023 [INFO][4776] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.6bd65f4143e1561d2b080fe3209ed4585672a0825af848014f81b704bbc49bd3" host="localhost" Oct 27 16:21:45.045390 containerd[1621]: 2025-10-27 16:21:45.023 [INFO][4776] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.6bd65f4143e1561d2b080fe3209ed4585672a0825af848014f81b704bbc49bd3" host="localhost" Oct 27 16:21:45.045390 containerd[1621]: 2025-10-27 16:21:45.023 [INFO][4776] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 16:21:45.045390 containerd[1621]: 2025-10-27 16:21:45.023 [INFO][4776] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="6bd65f4143e1561d2b080fe3209ed4585672a0825af848014f81b704bbc49bd3" HandleID="k8s-pod-network.6bd65f4143e1561d2b080fe3209ed4585672a0825af848014f81b704bbc49bd3" Workload="localhost-k8s-csi--node--driver--wtl2m-eth0" Oct 27 16:21:45.046522 containerd[1621]: 2025-10-27 16:21:45.026 [INFO][4747] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6bd65f4143e1561d2b080fe3209ed4585672a0825af848014f81b704bbc49bd3" Namespace="calico-system" Pod="csi-node-driver-wtl2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--wtl2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wtl2m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"efa395f4-63b7-48dd-900f-15414929351b", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 16, 21, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-wtl2m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie43bb1c29bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 16:21:45.046522 containerd[1621]: 2025-10-27 16:21:45.027 [INFO][4747] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="6bd65f4143e1561d2b080fe3209ed4585672a0825af848014f81b704bbc49bd3" Namespace="calico-system" Pod="csi-node-driver-wtl2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--wtl2m-eth0" Oct 27 16:21:45.046522 containerd[1621]: 2025-10-27 16:21:45.027 [INFO][4747] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie43bb1c29bf ContainerID="6bd65f4143e1561d2b080fe3209ed4585672a0825af848014f81b704bbc49bd3" Namespace="calico-system" Pod="csi-node-driver-wtl2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--wtl2m-eth0" Oct 27 16:21:45.046522 containerd[1621]: 2025-10-27 16:21:45.031 [INFO][4747] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6bd65f4143e1561d2b080fe3209ed4585672a0825af848014f81b704bbc49bd3" Namespace="calico-system" Pod="csi-node-driver-wtl2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--wtl2m-eth0" Oct 27 16:21:45.046522 containerd[1621]: 2025-10-27 16:21:45.031 [INFO][4747] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6bd65f4143e1561d2b080fe3209ed4585672a0825af848014f81b704bbc49bd3" Namespace="calico-system" Pod="csi-node-driver-wtl2m" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--wtl2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wtl2m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"efa395f4-63b7-48dd-900f-15414929351b", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 16, 21, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6bd65f4143e1561d2b080fe3209ed4585672a0825af848014f81b704bbc49bd3", Pod:"csi-node-driver-wtl2m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie43bb1c29bf", MAC:"b2:ad:bd:cc:bb:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 16:21:45.046522 containerd[1621]: 2025-10-27 16:21:45.039 [INFO][4747] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6bd65f4143e1561d2b080fe3209ed4585672a0825af848014f81b704bbc49bd3" Namespace="calico-system" Pod="csi-node-driver-wtl2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--wtl2m-eth0" Oct 27 16:21:45.056339 kubelet[2764]: E1027 16:21:45.056306 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-z78l5" podUID="7dbd484d-304e-4f95-8ee6-738c940331ca" Oct 27 16:21:45.056757 kubelet[2764]: E1027 16:21:45.056598 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:45.086206 containerd[1621]: time="2025-10-27T16:21:45.086132547Z" level=info msg="connecting to shim 6bd65f4143e1561d2b080fe3209ed4585672a0825af848014f81b704bbc49bd3" address="unix:///run/containerd/s/70b5486ab3d99c37b26f792d9ba33d5e4363896a0edf4134a58be429697105c9" namespace=k8s.io protocol=ttrpc version=3 Oct 27 16:21:45.115372 systemd[1]: Started cri-containerd-6bd65f4143e1561d2b080fe3209ed4585672a0825af848014f81b704bbc49bd3.scope - libcontainer container 6bd65f4143e1561d2b080fe3209ed4585672a0825af848014f81b704bbc49bd3. 
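[editor's note] Once a pull fails with ErrImagePull, later sync attempts for the same container are reported as ImagePullBackOff (seen above for the goldmane pod and earlier for the apiservers): the kubelet retries with an exponentially growing delay instead of hammering the registry on every sync. The numbers in the sketch below (10 s initial delay doubling up to a 5 minute cap) are the usual kubelet defaults, stated here as an assumption rather than read from this node's configuration.

    package main

    import (
        "fmt"
        "time"
    )

    // nextBackoff doubles the delay up to a cap, mirroring typical
    // kubelet image-pull back-off defaults (assumed: 10s initial, 5m max).
    func nextBackoff(current, max time.Duration) time.Duration {
        if current == 0 {
            return 10 * time.Second
        }
        if next := current * 2; next < max {
            return next
        }
        return max
    }

    func main() {
        var delay time.Duration
        for attempt := 1; attempt <= 7; attempt++ {
            delay = nextBackoff(delay, 5*time.Minute)
            fmt.Printf("attempt %d: wait %s before retrying the pull\n", attempt, delay)
        }
    }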
Oct 27 16:21:45.131427 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 16:21:45.136997 systemd-networkd[1519]: cali4ab8d3d0235: Link UP Oct 27 16:21:45.138117 systemd-networkd[1519]: cali4ab8d3d0235: Gained carrier Oct 27 16:21:45.159478 containerd[1621]: 2025-10-27 16:21:44.952 [INFO][4740] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--jjnzr-eth0 coredns-66bc5c9577- kube-system b94d81d5-ee9f-418b-a5a5-65a18ed654f2 841 0 2025-10-27 16:21:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-jjnzr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4ab8d3d0235 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7" Namespace="kube-system" Pod="coredns-66bc5c9577-jjnzr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--jjnzr-" Oct 27 16:21:45.159478 containerd[1621]: 2025-10-27 16:21:44.954 [INFO][4740] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7" Namespace="kube-system" Pod="coredns-66bc5c9577-jjnzr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--jjnzr-eth0" Oct 27 16:21:45.159478 containerd[1621]: 2025-10-27 16:21:44.993 [INFO][4770] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7" HandleID="k8s-pod-network.0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7" Workload="localhost-k8s-coredns--66bc5c9577--jjnzr-eth0" Oct 27 16:21:45.159478 containerd[1621]: 2025-10-27 16:21:44.994 [INFO][4770] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7" HandleID="k8s-pod-network.0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7" Workload="localhost-k8s-coredns--66bc5c9577--jjnzr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139750), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-jjnzr", "timestamp":"2025-10-27 16:21:44.993937687 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 16:21:45.159478 containerd[1621]: 2025-10-27 16:21:44.994 [INFO][4770] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 16:21:45.159478 containerd[1621]: 2025-10-27 16:21:45.023 [INFO][4770] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 16:21:45.159478 containerd[1621]: 2025-10-27 16:21:45.023 [INFO][4770] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 16:21:45.159478 containerd[1621]: 2025-10-27 16:21:45.098 [INFO][4770] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7" host="localhost" Oct 27 16:21:45.159478 containerd[1621]: 2025-10-27 16:21:45.106 [INFO][4770] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 16:21:45.159478 containerd[1621]: 2025-10-27 16:21:45.113 [INFO][4770] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 16:21:45.159478 containerd[1621]: 2025-10-27 16:21:45.115 [INFO][4770] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 16:21:45.159478 containerd[1621]: 2025-10-27 16:21:45.118 [INFO][4770] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 16:21:45.159478 containerd[1621]: 2025-10-27 16:21:45.118 [INFO][4770] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7" host="localhost" Oct 27 16:21:45.159478 containerd[1621]: 2025-10-27 16:21:45.119 [INFO][4770] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7 Oct 27 16:21:45.159478 containerd[1621]: 2025-10-27 16:21:45.123 [INFO][4770] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7" host="localhost" Oct 27 16:21:45.159478 containerd[1621]: 2025-10-27 16:21:45.130 [INFO][4770] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7" host="localhost" Oct 27 16:21:45.159478 containerd[1621]: 2025-10-27 16:21:45.130 [INFO][4770] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7" host="localhost" Oct 27 16:21:45.159478 containerd[1621]: 2025-10-27 16:21:45.131 [INFO][4770] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
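
The IPAM messages above claim 192.168.88.136 from the block 192.168.88.128/26 that is affine to the node "localhost". As a quick sanity check of that block arithmetic (an illustration only, not Calico's IPAM code), Python's standard ipaddress module gives:

import ipaddress

# Block and address taken from the Calico IPAM log lines above.
block = ipaddress.ip_network("192.168.88.128/26")
assigned = ipaddress.ip_address("192.168.88.136")

print(assigned in block)      # True: the claimed IP falls inside the affine block
print(block.num_addresses)    # 64 addresses per /26 block
print(block[0], block[-1])    # 192.168.88.128 192.168.88.191
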
Oct 27 16:21:45.159478 containerd[1621]: 2025-10-27 16:21:45.131 [INFO][4770] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7" HandleID="k8s-pod-network.0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7" Workload="localhost-k8s-coredns--66bc5c9577--jjnzr-eth0" Oct 27 16:21:45.160505 containerd[1621]: 2025-10-27 16:21:45.134 [INFO][4740] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7" Namespace="kube-system" Pod="coredns-66bc5c9577-jjnzr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--jjnzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--jjnzr-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b94d81d5-ee9f-418b-a5a5-65a18ed654f2", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 16, 21, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-jjnzr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4ab8d3d0235", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 16:21:45.160505 containerd[1621]: 2025-10-27 16:21:45.135 [INFO][4740] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7" Namespace="kube-system" Pod="coredns-66bc5c9577-jjnzr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--jjnzr-eth0" Oct 27 16:21:45.160505 containerd[1621]: 2025-10-27 16:21:45.135 [INFO][4740] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ab8d3d0235 ContainerID="0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7" Namespace="kube-system" Pod="coredns-66bc5c9577-jjnzr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--jjnzr-eth0" Oct 27 16:21:45.160505 containerd[1621]: 2025-10-27 16:21:45.138 
[INFO][4740] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7" Namespace="kube-system" Pod="coredns-66bc5c9577-jjnzr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--jjnzr-eth0" Oct 27 16:21:45.160505 containerd[1621]: 2025-10-27 16:21:45.139 [INFO][4740] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7" Namespace="kube-system" Pod="coredns-66bc5c9577-jjnzr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--jjnzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--jjnzr-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b94d81d5-ee9f-418b-a5a5-65a18ed654f2", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 16, 21, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7", Pod:"coredns-66bc5c9577-jjnzr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4ab8d3d0235", MAC:"1a:2d:6b:23:08:af", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 16:21:45.160505 containerd[1621]: 2025-10-27 16:21:45.154 [INFO][4740] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7" Namespace="kube-system" Pod="coredns-66bc5c9577-jjnzr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--jjnzr-eth0" Oct 27 16:21:45.166593 containerd[1621]: time="2025-10-27T16:21:45.166479520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtl2m,Uid:efa395f4-63b7-48dd-900f-15414929351b,Namespace:calico-system,Attempt:0,} returns sandbox id \"6bd65f4143e1561d2b080fe3209ed4585672a0825af848014f81b704bbc49bd3\"" Oct 27 16:21:45.168418 containerd[1621]: time="2025-10-27T16:21:45.168390779Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 16:21:45.197021 containerd[1621]: time="2025-10-27T16:21:45.196949431Z" level=info msg="connecting to shim 0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7" address="unix:///run/containerd/s/4bbe4ffa4ae478adfec5da19b7ef350f7692164176a04115ea4f4e8492161169" namespace=k8s.io protocol=ttrpc version=3 Oct 27 16:21:45.226339 systemd[1]: Started cri-containerd-0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7.scope - libcontainer container 0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7. Oct 27 16:21:45.245029 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 16:21:45.288473 containerd[1621]: time="2025-10-27T16:21:45.288385677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jjnzr,Uid:b94d81d5-ee9f-418b-a5a5-65a18ed654f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7\"" Oct 27 16:21:45.300918 kubelet[2764]: E1027 16:21:45.300871 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:45.309182 containerd[1621]: time="2025-10-27T16:21:45.307963727Z" level=info msg="CreateContainer within sandbox \"0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 27 16:21:45.322940 containerd[1621]: time="2025-10-27T16:21:45.322889613Z" level=info msg="Container 810e64aaf6fa1e910eac12968fce115a8b2d1a720c0bc2ee7083d0dca679e9d2: CDI devices from CRI Config.CDIDevices: []" Oct 27 16:21:45.330621 containerd[1621]: time="2025-10-27T16:21:45.330591273Z" level=info msg="CreateContainer within sandbox \"0b948552f98751b98c088dc45bff72e89eda420bac2997280d5d3d1ec3e64fa7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"810e64aaf6fa1e910eac12968fce115a8b2d1a720c0bc2ee7083d0dca679e9d2\"" Oct 27 16:21:45.331771 containerd[1621]: time="2025-10-27T16:21:45.331731745Z" level=info msg="StartContainer for \"810e64aaf6fa1e910eac12968fce115a8b2d1a720c0bc2ee7083d0dca679e9d2\"" Oct 27 16:21:45.336651 containerd[1621]: time="2025-10-27T16:21:45.336555823Z" level=info msg="connecting to shim 810e64aaf6fa1e910eac12968fce115a8b2d1a720c0bc2ee7083d0dca679e9d2" address="unix:///run/containerd/s/4bbe4ffa4ae478adfec5da19b7ef350f7692164176a04115ea4f4e8492161169" protocol=ttrpc version=3 Oct 27 16:21:45.363360 systemd[1]: Started cri-containerd-810e64aaf6fa1e910eac12968fce115a8b2d1a720c0bc2ee7083d0dca679e9d2.scope - libcontainer container 810e64aaf6fa1e910eac12968fce115a8b2d1a720c0bc2ee7083d0dca679e9d2. 
Oct 27 16:21:45.369198 systemd-networkd[1519]: cali4541f352c6b: Gained IPv6LL Oct 27 16:21:45.400505 containerd[1621]: time="2025-10-27T16:21:45.400458248Z" level=info msg="StartContainer for \"810e64aaf6fa1e910eac12968fce115a8b2d1a720c0bc2ee7083d0dca679e9d2\" returns successfully" Oct 27 16:21:45.529468 containerd[1621]: time="2025-10-27T16:21:45.529411006Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 16:21:45.530669 containerd[1621]: time="2025-10-27T16:21:45.530610068Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 16:21:45.530848 containerd[1621]: time="2025-10-27T16:21:45.530675140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Oct 27 16:21:45.530921 kubelet[2764]: E1027 16:21:45.530868 2764 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 16:21:45.530921 kubelet[2764]: E1027 16:21:45.530925 2764 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 16:21:45.531109 kubelet[2764]: E1027 16:21:45.531017 2764 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-wtl2m_calico-system(efa395f4-63b7-48dd-900f-15414929351b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 16:21:45.531715 containerd[1621]: time="2025-10-27T16:21:45.531683102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 16:21:45.898011 containerd[1621]: time="2025-10-27T16:21:45.897857016Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 16:21:45.937458 containerd[1621]: time="2025-10-27T16:21:45.937395877Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 16:21:45.937570 containerd[1621]: time="2025-10-27T16:21:45.937492588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Oct 27 16:21:45.937658 kubelet[2764]: E1027 16:21:45.937617 2764 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 16:21:45.937705 kubelet[2764]: E1027 16:21:45.937665 2764 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 16:21:45.937767 kubelet[2764]: E1027 16:21:45.937744 2764 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-wtl2m_calico-system(efa395f4-63b7-48dd-900f-15414929351b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 16:21:45.937826 kubelet[2764]: E1027 16:21:45.937787 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wtl2m" podUID="efa395f4-63b7-48dd-900f-15414929351b" Oct 27 16:21:46.060150 kubelet[2764]: E1027 16:21:46.060066 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:46.062831 kubelet[2764]: E1027 16:21:46.062624 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-z78l5" podUID="7dbd484d-304e-4f95-8ee6-738c940331ca" Oct 27 16:21:46.064327 kubelet[2764]: E1027 16:21:46.064140 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wtl2m" podUID="efa395f4-63b7-48dd-900f-15414929351b" Oct 27 16:21:46.072099 kubelet[2764]: I1027 16:21:46.072033 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jjnzr" podStartSLOduration=45.072015314 podStartE2EDuration="45.072015314s" podCreationTimestamp="2025-10-27 16:21:01 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 16:21:46.071054059 +0000 UTC m=+52.265385355" watchObservedRunningTime="2025-10-27 16:21:46.072015314 +0000 UTC m=+52.266346600" Oct 27 16:21:46.200386 systemd-networkd[1519]: calie43bb1c29bf: Gained IPv6LL Oct 27 16:21:46.903347 systemd-networkd[1519]: cali4ab8d3d0235: Gained IPv6LL Oct 27 16:21:47.065045 kubelet[2764]: E1027 16:21:47.064643 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:47.068373 kubelet[2764]: E1027 16:21:47.068323 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wtl2m" podUID="efa395f4-63b7-48dd-900f-15414929351b" Oct 27 16:21:48.066765 kubelet[2764]: E1027 16:21:48.066717 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:21:49.822903 systemd[1]: Started sshd@8-10.0.0.138:22-10.0.0.1:45642.service - OpenSSH per-connection server daemon (10.0.0.1:45642). Oct 27 16:21:49.899883 sshd[4946]: Accepted publickey for core from 10.0.0.1 port 45642 ssh2: RSA SHA256:uZ2SsweoLt391JAi4nRlA6QBN0bR66RV/4rcYh4vsoI Oct 27 16:21:49.901687 sshd-session[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 16:21:49.907037 systemd-logind[1595]: New session 9 of user core. Oct 27 16:21:49.914298 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 27 16:21:50.102878 sshd[4949]: Connection closed by 10.0.0.1 port 45642 Oct 27 16:21:50.105066 sshd-session[4946]: pam_unix(sshd:session): session closed for user core Oct 27 16:21:50.110491 systemd[1]: sshd@8-10.0.0.138:22-10.0.0.1:45642.service: Deactivated successfully. Oct 27 16:21:50.112649 systemd[1]: session-9.scope: Deactivated successfully. Oct 27 16:21:50.113505 systemd-logind[1595]: Session 9 logged out. Waiting for processes to exit. Oct 27 16:21:50.114828 systemd-logind[1595]: Removed session 9. 
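
The recurring "Nameserver limits exceeded" warnings come from kubelet trimming the node's resolv.conf to at most three nameservers (the classic glibc resolver limit); the applied line in the warning keeps only 1.1.1.1, 1.0.0.1 and 8.8.8.8. A minimal sketch of that truncation, assuming a plain resolv.conf as input (this is not kubelet's code):

MAX_NAMESERVERS = 3  # resolver limit enforced when building the applied nameserver line

def applied_nameservers(resolv_conf_text: str) -> list[str]:
    servers = []
    for line in resolv_conf_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "nameserver":
            servers.append(fields[1])
    return servers[:MAX_NAMESERVERS]  # extra entries are dropped, hence the warning

sample = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
print(applied_nameservers(sample))   # ['1.1.1.1', '1.0.0.1', '8.8.8.8']
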
Oct 27 16:21:53.909681 containerd[1621]: time="2025-10-27T16:21:53.909179129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 16:21:54.345390 containerd[1621]: time="2025-10-27T16:21:54.345316707Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 16:21:54.346655 containerd[1621]: time="2025-10-27T16:21:54.346597391Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 16:21:54.346726 containerd[1621]: time="2025-10-27T16:21:54.346636935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Oct 27 16:21:54.346970 kubelet[2764]: E1027 16:21:54.346918 2764 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 16:21:54.347413 kubelet[2764]: E1027 16:21:54.346977 2764 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 16:21:54.347413 kubelet[2764]: E1027 16:21:54.347086 2764 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-84cdc6b978-wfp8c_calico-system(19c809b5-4ca8-41b2-8557-3eba1104bc4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 16:21:54.348022 containerd[1621]: time="2025-10-27T16:21:54.347979625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 16:21:54.710178 containerd[1621]: time="2025-10-27T16:21:54.710108229Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 16:21:54.750389 containerd[1621]: time="2025-10-27T16:21:54.750308790Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 16:21:54.750563 containerd[1621]: time="2025-10-27T16:21:54.750341852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Oct 27 16:21:54.750654 kubelet[2764]: E1027 16:21:54.750606 2764 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 16:21:54.750722 kubelet[2764]: E1027 16:21:54.750665 2764 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 16:21:54.750873 kubelet[2764]: E1027 16:21:54.750839 2764 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-84cdc6b978-wfp8c_calico-system(19c809b5-4ca8-41b2-8557-3eba1104bc4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 16:21:54.750917 kubelet[2764]: E1027 16:21:54.750896 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84cdc6b978-wfp8c" podUID="19c809b5-4ca8-41b2-8557-3eba1104bc4e" Oct 27 16:21:54.906623 containerd[1621]: time="2025-10-27T16:21:54.906577249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 16:21:55.116716 systemd[1]: Started sshd@9-10.0.0.138:22-10.0.0.1:36192.service - OpenSSH per-connection server daemon (10.0.0.1:36192). Oct 27 16:21:55.184235 sshd[4968]: Accepted publickey for core from 10.0.0.1 port 36192 ssh2: RSA SHA256:uZ2SsweoLt391JAi4nRlA6QBN0bR66RV/4rcYh4vsoI Oct 27 16:21:55.185943 sshd-session[4968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 16:21:55.190491 systemd-logind[1595]: New session 10 of user core. Oct 27 16:21:55.204298 systemd[1]: Started session-10.scope - Session 10 of User core. 
Oct 27 16:21:55.302699 containerd[1621]: time="2025-10-27T16:21:55.302639099Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 16:21:55.337193 containerd[1621]: time="2025-10-27T16:21:55.337122168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Oct 27 16:21:55.337359 containerd[1621]: time="2025-10-27T16:21:55.337213029Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 16:21:55.337554 kubelet[2764]: E1027 16:21:55.337511 2764 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 16:21:55.337629 kubelet[2764]: E1027 16:21:55.337568 2764 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 16:21:55.337710 kubelet[2764]: E1027 16:21:55.337661 2764 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7967f997df-jl28h_calico-apiserver(d5c04391-35e3-4b29-b954-2c4df3aa5299): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 16:21:55.337710 kubelet[2764]: E1027 16:21:55.337701 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7967f997df-jl28h" podUID="d5c04391-35e3-4b29-b954-2c4df3aa5299" Oct 27 16:21:55.441910 sshd[4971]: Connection closed by 10.0.0.1 port 36192 Oct 27 16:21:55.442312 sshd-session[4968]: pam_unix(sshd:session): session closed for user core Oct 27 16:21:55.446709 systemd[1]: sshd@9-10.0.0.138:22-10.0.0.1:36192.service: Deactivated successfully. Oct 27 16:21:55.449042 systemd[1]: session-10.scope: Deactivated successfully. Oct 27 16:21:55.450777 systemd-logind[1595]: Session 10 logged out. Waiting for processes to exit. Oct 27 16:21:55.452119 systemd-logind[1595]: Removed session 10. 
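
Every pull failure in this log is the same symptom: the registry answers 404 for the requested tag ("fetch failed after status: 404 Not Found"), so containerd reports NotFound and kubelet surfaces ErrImagePull. A standalone diagnostic sketch that reproduces the check against the OCI distribution API follows; the anonymous token endpoint and Accept headers are assumptions about ghcr.io's public pull flow, not something taken from this log.

import json
import urllib.error
import urllib.request

def manifest_status(registry: str, repository: str, tag: str) -> int:
    # Anonymous pull token (assumed token-auth flow for public ghcr.io images).
    token_url = f"https://{registry}/token?scope=repository:{repository}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    # HEAD the manifest for the tag; 200 means the tag resolves, 404 matches the log.
    req = urllib.request.Request(
        f"https://{registry}/v2/{repository}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

print(manifest_status("ghcr.io", "flatcar/calico/apiserver", "v3.30.4"))
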
Oct 27 16:21:55.906869 containerd[1621]: time="2025-10-27T16:21:55.906732985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 16:21:56.261645 containerd[1621]: time="2025-10-27T16:21:56.261576550Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 16:21:56.333556 containerd[1621]: time="2025-10-27T16:21:56.333479204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Oct 27 16:21:56.333556 containerd[1621]: time="2025-10-27T16:21:56.333535039Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 16:21:56.334008 kubelet[2764]: E1027 16:21:56.333837 2764 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 16:21:56.334008 kubelet[2764]: E1027 16:21:56.333918 2764 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 16:21:56.334315 kubelet[2764]: E1027 16:21:56.334029 2764 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7967f997df-vmzqg_calico-apiserver(5566825d-6dfc-4a28-b349-7aea4d744119): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 16:21:56.334315 kubelet[2764]: E1027 16:21:56.334068 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7967f997df-vmzqg" podUID="5566825d-6dfc-4a28-b349-7aea4d744119" Oct 27 16:21:56.906194 containerd[1621]: time="2025-10-27T16:21:56.906126837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 16:21:57.270048 containerd[1621]: time="2025-10-27T16:21:57.269982685Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 16:21:57.271303 containerd[1621]: time="2025-10-27T16:21:57.271249573Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 16:21:57.271422 containerd[1621]: time="2025-10-27T16:21:57.271325025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Oct 27 16:21:57.271576 kubelet[2764]: E1027 16:21:57.271538 2764 log.go:32] "PullImage from image service failed" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 16:21:57.271631 kubelet[2764]: E1027 16:21:57.271590 2764 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 16:21:57.271686 kubelet[2764]: E1027 16:21:57.271672 2764 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-67bbccbd4-s5p5q_calico-system(1364f957-566a-4e9b-a994-8a554341484a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 16:21:57.271738 kubelet[2764]: E1027 16:21:57.271703 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67bbccbd4-s5p5q" podUID="1364f957-566a-4e9b-a994-8a554341484a" Oct 27 16:21:58.907118 containerd[1621]: time="2025-10-27T16:21:58.906721121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 16:21:59.268423 containerd[1621]: time="2025-10-27T16:21:59.268351377Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 16:21:59.269859 containerd[1621]: time="2025-10-27T16:21:59.269745703Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 16:21:59.269859 containerd[1621]: time="2025-10-27T16:21:59.269799284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Oct 27 16:21:59.270100 kubelet[2764]: E1027 16:21:59.270034 2764 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 16:21:59.270100 kubelet[2764]: E1027 16:21:59.270095 2764 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 16:21:59.270536 kubelet[2764]: E1027 16:21:59.270328 2764 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-wtl2m_calico-system(efa395f4-63b7-48dd-900f-15414929351b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 16:21:59.270647 containerd[1621]: time="2025-10-27T16:21:59.270629692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 16:21:59.668025 containerd[1621]: time="2025-10-27T16:21:59.667863385Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 16:21:59.669371 containerd[1621]: time="2025-10-27T16:21:59.669336519Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 16:21:59.669532 containerd[1621]: time="2025-10-27T16:21:59.669414886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Oct 27 16:21:59.669631 kubelet[2764]: E1027 16:21:59.669580 2764 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 16:21:59.669680 kubelet[2764]: E1027 16:21:59.669635 2764 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 16:21:59.669941 kubelet[2764]: E1027 16:21:59.669878 2764 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-z78l5_calico-system(7dbd484d-304e-4f95-8ee6-738c940331ca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 16:21:59.669941 kubelet[2764]: E1027 16:21:59.669937 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-z78l5" podUID="7dbd484d-304e-4f95-8ee6-738c940331ca" Oct 27 16:21:59.670208 containerd[1621]: time="2025-10-27T16:21:59.670004412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 16:22:00.024333 containerd[1621]: time="2025-10-27T16:22:00.024271997Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 16:22:00.099017 containerd[1621]: time="2025-10-27T16:22:00.098934982Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 16:22:00.099177 containerd[1621]: time="2025-10-27T16:22:00.098991218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Oct 27 16:22:00.099246 kubelet[2764]: E1027 16:22:00.099202 2764 log.go:32] "PullImage from 
image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 16:22:00.099317 kubelet[2764]: E1027 16:22:00.099248 2764 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 16:22:00.099363 kubelet[2764]: E1027 16:22:00.099341 2764 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-wtl2m_calico-system(efa395f4-63b7-48dd-900f-15414929351b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 16:22:00.099435 kubelet[2764]: E1027 16:22:00.099393 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wtl2m" podUID="efa395f4-63b7-48dd-900f-15414929351b" Oct 27 16:22:00.458487 systemd[1]: Started sshd@10-10.0.0.138:22-10.0.0.1:60240.service - OpenSSH per-connection server daemon (10.0.0.1:60240). Oct 27 16:22:00.521807 sshd[4994]: Accepted publickey for core from 10.0.0.1 port 60240 ssh2: RSA SHA256:uZ2SsweoLt391JAi4nRlA6QBN0bR66RV/4rcYh4vsoI Oct 27 16:22:00.523567 sshd-session[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 16:22:00.528635 systemd-logind[1595]: New session 11 of user core. Oct 27 16:22:00.538327 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 27 16:22:00.696382 sshd[4997]: Connection closed by 10.0.0.1 port 60240 Oct 27 16:22:00.696849 sshd-session[4994]: pam_unix(sshd:session): session closed for user core Oct 27 16:22:00.710902 systemd[1]: sshd@10-10.0.0.138:22-10.0.0.1:60240.service: Deactivated successfully. Oct 27 16:22:00.712968 systemd[1]: session-11.scope: Deactivated successfully. Oct 27 16:22:00.713880 systemd-logind[1595]: Session 11 logged out. Waiting for processes to exit. Oct 27 16:22:00.717298 systemd[1]: Started sshd@11-10.0.0.138:22-10.0.0.1:60248.service - OpenSSH per-connection server daemon (10.0.0.1:60248). Oct 27 16:22:00.718145 systemd-logind[1595]: Removed session 11. Oct 27 16:22:00.778983 sshd[5011]: Accepted publickey for core from 10.0.0.1 port 60248 ssh2: RSA SHA256:uZ2SsweoLt391JAi4nRlA6QBN0bR66RV/4rcYh4vsoI Oct 27 16:22:00.780690 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 16:22:00.785752 systemd-logind[1595]: New session 12 of user core. 
Oct 27 16:22:00.796336 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 27 16:22:00.963319 sshd[5014]: Connection closed by 10.0.0.1 port 60248 Oct 27 16:22:00.964653 sshd-session[5011]: pam_unix(sshd:session): session closed for user core Oct 27 16:22:00.976646 systemd[1]: Started sshd@12-10.0.0.138:22-10.0.0.1:60256.service - OpenSSH per-connection server daemon (10.0.0.1:60256). Oct 27 16:22:00.978018 systemd[1]: sshd@11-10.0.0.138:22-10.0.0.1:60248.service: Deactivated successfully. Oct 27 16:22:00.981883 systemd[1]: session-12.scope: Deactivated successfully. Oct 27 16:22:00.982915 systemd-logind[1595]: Session 12 logged out. Waiting for processes to exit. Oct 27 16:22:00.986001 systemd-logind[1595]: Removed session 12. Oct 27 16:22:01.045037 sshd[5023]: Accepted publickey for core from 10.0.0.1 port 60256 ssh2: RSA SHA256:uZ2SsweoLt391JAi4nRlA6QBN0bR66RV/4rcYh4vsoI Oct 27 16:22:01.046618 sshd-session[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 16:22:01.053361 systemd-logind[1595]: New session 13 of user core. Oct 27 16:22:01.072371 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 27 16:22:01.190701 sshd[5029]: Connection closed by 10.0.0.1 port 60256 Oct 27 16:22:01.191097 sshd-session[5023]: pam_unix(sshd:session): session closed for user core Oct 27 16:22:01.196858 systemd[1]: sshd@12-10.0.0.138:22-10.0.0.1:60256.service: Deactivated successfully. Oct 27 16:22:01.199015 systemd[1]: session-13.scope: Deactivated successfully. Oct 27 16:22:01.199985 systemd-logind[1595]: Session 13 logged out. Waiting for processes to exit. Oct 27 16:22:01.201548 systemd-logind[1595]: Removed session 13. Oct 27 16:22:06.211743 systemd[1]: Started sshd@13-10.0.0.138:22-10.0.0.1:60268.service - OpenSSH per-connection server daemon (10.0.0.1:60268). Oct 27 16:22:06.267876 sshd[5050]: Accepted publickey for core from 10.0.0.1 port 60268 ssh2: RSA SHA256:uZ2SsweoLt391JAi4nRlA6QBN0bR66RV/4rcYh4vsoI Oct 27 16:22:06.269244 sshd-session[5050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 16:22:06.273669 systemd-logind[1595]: New session 14 of user core. Oct 27 16:22:06.280308 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 27 16:22:06.396511 sshd[5053]: Connection closed by 10.0.0.1 port 60268 Oct 27 16:22:06.396892 sshd-session[5050]: pam_unix(sshd:session): session closed for user core Oct 27 16:22:06.401211 systemd[1]: sshd@13-10.0.0.138:22-10.0.0.1:60268.service: Deactivated successfully. Oct 27 16:22:06.403467 systemd[1]: session-14.scope: Deactivated successfully. Oct 27 16:22:06.404249 systemd-logind[1595]: Session 14 logged out. Waiting for processes to exit. Oct 27 16:22:06.405727 systemd-logind[1595]: Removed session 14. 
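
Interleaved with the pull errors, systemd-logind opens and closes a series of short SSH sessions (9 through 14 so far). A small helper for pairing those journal lines and reporting per-session duration; the regexes and the hard-coded year are assumptions made for this excerpt's timestamp format.

import re
from datetime import datetime

NEW = re.compile(r"^(?P<ts>\w{3} \d+ [\d:.]+) .*New session (?P<id>\d+) of user")
REMOVED = re.compile(r"^(?P<ts>\w{3} \d+ [\d:.]+) .*Removed session (?P<id>\d+)\.")
FMT = "%Y %b %d %H:%M:%S.%f"

def session_durations(lines, year=2025):
    opened, durations = {}, {}
    for line in lines:
        if m := NEW.match(line):
            opened[m["id"]] = datetime.strptime(f"{year} {m['ts']}", FMT)
        elif (m := REMOVED.match(line)) and m["id"] in opened:
            closed = datetime.strptime(f"{year} {m['ts']}", FMT)
            durations[m["id"]] = (closed - opened.pop(m["id"])).total_seconds()
    return durations

sample = [
    "Oct 27 16:21:49.907037 systemd-logind[1595]: New session 9 of user core.",
    "Oct 27 16:21:50.114828 systemd-logind[1595]: Removed session 9.",
]
print(session_durations(sample))   # {'9': 0.207791}
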
Oct 27 16:22:06.906469 kubelet[2764]: E1027 16:22:06.906410 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7967f997df-jl28h" podUID="d5c04391-35e3-4b29-b954-2c4df3aa5299" Oct 27 16:22:07.905723 kubelet[2764]: E1027 16:22:07.905682 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:22:09.906360 kubelet[2764]: E1027 16:22:09.906292 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84cdc6b978-wfp8c" podUID="19c809b5-4ca8-41b2-8557-3eba1104bc4e" Oct 27 16:22:10.906576 kubelet[2764]: E1027 16:22:10.906525 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67bbccbd4-s5p5q" podUID="1364f957-566a-4e9b-a994-8a554341484a" Oct 27 16:22:10.906576 kubelet[2764]: E1027 16:22:10.906525 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7967f997df-vmzqg" podUID="5566825d-6dfc-4a28-b349-7aea4d744119" Oct 27 16:22:10.908536 kubelet[2764]: E1027 16:22:10.908484 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wtl2m" podUID="efa395f4-63b7-48dd-900f-15414929351b" Oct 27 16:22:11.412704 systemd[1]: Started sshd@14-10.0.0.138:22-10.0.0.1:57862.service - OpenSSH per-connection server daemon (10.0.0.1:57862). Oct 27 16:22:11.461664 sshd[5092]: Accepted publickey for core from 10.0.0.1 port 57862 ssh2: RSA SHA256:uZ2SsweoLt391JAi4nRlA6QBN0bR66RV/4rcYh4vsoI Oct 27 16:22:11.462938 sshd-session[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 16:22:11.466986 systemd-logind[1595]: New session 15 of user core. Oct 27 16:22:11.475288 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 27 16:22:11.594833 sshd[5095]: Connection closed by 10.0.0.1 port 57862 Oct 27 16:22:11.595124 sshd-session[5092]: pam_unix(sshd:session): session closed for user core Oct 27 16:22:11.600518 systemd[1]: sshd@14-10.0.0.138:22-10.0.0.1:57862.service: Deactivated successfully. Oct 27 16:22:11.602678 systemd[1]: session-15.scope: Deactivated successfully. Oct 27 16:22:11.603520 systemd-logind[1595]: Session 15 logged out. Waiting for processes to exit. Oct 27 16:22:11.604625 systemd-logind[1595]: Removed session 15. Oct 27 16:22:13.907383 kubelet[2764]: E1027 16:22:13.907322 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-z78l5" podUID="7dbd484d-304e-4f95-8ee6-738c940331ca" Oct 27 16:22:16.607170 systemd[1]: Started sshd@15-10.0.0.138:22-10.0.0.1:57872.service - OpenSSH per-connection server daemon (10.0.0.1:57872). Oct 27 16:22:16.690559 sshd[5108]: Accepted publickey for core from 10.0.0.1 port 57872 ssh2: RSA SHA256:uZ2SsweoLt391JAi4nRlA6QBN0bR66RV/4rcYh4vsoI Oct 27 16:22:16.692398 sshd-session[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 16:22:16.697633 systemd-logind[1595]: New session 16 of user core. Oct 27 16:22:16.707298 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 27 16:22:16.848722 sshd[5111]: Connection closed by 10.0.0.1 port 57872 Oct 27 16:22:16.851074 sshd-session[5108]: pam_unix(sshd:session): session closed for user core Oct 27 16:22:16.854964 systemd[1]: sshd@15-10.0.0.138:22-10.0.0.1:57872.service: Deactivated successfully. Oct 27 16:22:16.857878 systemd[1]: session-16.scope: Deactivated successfully. Oct 27 16:22:16.861092 systemd-logind[1595]: Session 16 logged out. Waiting for processes to exit. Oct 27 16:22:16.862785 systemd-logind[1595]: Removed session 16. 
Oct 27 16:22:19.907925 containerd[1621]: time="2025-10-27T16:22:19.907882633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 16:22:20.240279 containerd[1621]: time="2025-10-27T16:22:20.240224777Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 16:22:20.241711 containerd[1621]: time="2025-10-27T16:22:20.241674562Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 16:22:20.241810 containerd[1621]: time="2025-10-27T16:22:20.241741429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Oct 27 16:22:20.241964 kubelet[2764]: E1027 16:22:20.241907 2764 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 16:22:20.241964 kubelet[2764]: E1027 16:22:20.241972 2764 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 16:22:20.242468 kubelet[2764]: E1027 16:22:20.242091 2764 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7967f997df-jl28h_calico-apiserver(d5c04391-35e3-4b29-b954-2c4df3aa5299): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 16:22:20.242468 kubelet[2764]: E1027 16:22:20.242128 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7967f997df-jl28h" podUID="d5c04391-35e3-4b29-b954-2c4df3aa5299" Oct 27 16:22:20.906778 containerd[1621]: time="2025-10-27T16:22:20.906710082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 16:22:21.292863 containerd[1621]: time="2025-10-27T16:22:21.292810175Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 16:22:21.293939 containerd[1621]: time="2025-10-27T16:22:21.293897856Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 16:22:21.294005 containerd[1621]: time="2025-10-27T16:22:21.293970545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Oct 27 16:22:21.294404 kubelet[2764]: E1027 16:22:21.294108 2764 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 16:22:21.294404 kubelet[2764]: E1027 16:22:21.294188 2764 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 16:22:21.294404 kubelet[2764]: E1027 16:22:21.294264 2764 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-84cdc6b978-wfp8c_calico-system(19c809b5-4ca8-41b2-8557-3eba1104bc4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 16:22:21.295302 containerd[1621]: time="2025-10-27T16:22:21.295266454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 16:22:21.681244 containerd[1621]: time="2025-10-27T16:22:21.681050822Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 16:22:21.682282 containerd[1621]: time="2025-10-27T16:22:21.682232301Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 16:22:21.682382 containerd[1621]: time="2025-10-27T16:22:21.682332533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Oct 27 16:22:21.682573 kubelet[2764]: E1027 16:22:21.682515 2764 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 16:22:21.682635 kubelet[2764]: E1027 16:22:21.682584 2764 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 16:22:21.682701 kubelet[2764]: E1027 16:22:21.682680 2764 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-84cdc6b978-wfp8c_calico-system(19c809b5-4ca8-41b2-8557-3eba1104bc4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 16:22:21.682756 kubelet[2764]: E1027 16:22:21.682722 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84cdc6b978-wfp8c" podUID="19c809b5-4ca8-41b2-8557-3eba1104bc4e" Oct 27 16:22:21.862034 systemd[1]: Started sshd@16-10.0.0.138:22-10.0.0.1:60514.service - OpenSSH per-connection server daemon (10.0.0.1:60514). Oct 27 16:22:21.908352 containerd[1621]: time="2025-10-27T16:22:21.908291571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 16:22:21.932283 sshd[5133]: Accepted publickey for core from 10.0.0.1 port 60514 ssh2: RSA SHA256:uZ2SsweoLt391JAi4nRlA6QBN0bR66RV/4rcYh4vsoI Oct 27 16:22:21.933882 sshd-session[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 16:22:21.938566 systemd-logind[1595]: New session 17 of user core. Oct 27 16:22:21.949338 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 27 16:22:22.079576 sshd[5136]: Connection closed by 10.0.0.1 port 60514 Oct 27 16:22:22.079931 sshd-session[5133]: pam_unix(sshd:session): session closed for user core Oct 27 16:22:22.093109 systemd[1]: sshd@16-10.0.0.138:22-10.0.0.1:60514.service: Deactivated successfully. Oct 27 16:22:22.095347 systemd[1]: session-17.scope: Deactivated successfully. Oct 27 16:22:22.096351 systemd-logind[1595]: Session 17 logged out. Waiting for processes to exit. Oct 27 16:22:22.099819 systemd[1]: Started sshd@17-10.0.0.138:22-10.0.0.1:60528.service - OpenSSH per-connection server daemon (10.0.0.1:60528). Oct 27 16:22:22.101050 systemd-logind[1595]: Removed session 17. Oct 27 16:22:22.161761 sshd[5149]: Accepted publickey for core from 10.0.0.1 port 60528 ssh2: RSA SHA256:uZ2SsweoLt391JAi4nRlA6QBN0bR66RV/4rcYh4vsoI Oct 27 16:22:22.163477 sshd-session[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 16:22:22.169198 systemd-logind[1595]: New session 18 of user core. Oct 27 16:22:22.176375 systemd[1]: Started session-18.scope - Session 18 of User core. 
Oct 27 16:22:22.291047 containerd[1621]: time="2025-10-27T16:22:22.290798426Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 16:22:22.292369 containerd[1621]: time="2025-10-27T16:22:22.292313101Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 16:22:22.292556 containerd[1621]: time="2025-10-27T16:22:22.292384708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Oct 27 16:22:22.292598 kubelet[2764]: E1027 16:22:22.292549 2764 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 16:22:22.292640 kubelet[2764]: E1027 16:22:22.292610 2764 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 16:22:22.293142 kubelet[2764]: E1027 16:22:22.292787 2764 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-67bbccbd4-s5p5q_calico-system(1364f957-566a-4e9b-a994-8a554341484a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 16:22:22.293142 kubelet[2764]: E1027 16:22:22.292838 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67bbccbd4-s5p5q" podUID="1364f957-566a-4e9b-a994-8a554341484a" Oct 27 16:22:22.293301 containerd[1621]: time="2025-10-27T16:22:22.293000605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 16:22:22.484919 sshd[5152]: Connection closed by 10.0.0.1 port 60528 Oct 27 16:22:22.485250 sshd-session[5149]: pam_unix(sshd:session): session closed for user core Oct 27 16:22:22.493947 systemd[1]: sshd@17-10.0.0.138:22-10.0.0.1:60528.service: Deactivated successfully. Oct 27 16:22:22.496534 systemd[1]: session-18.scope: Deactivated successfully. Oct 27 16:22:22.497505 systemd-logind[1595]: Session 18 logged out. Waiting for processes to exit. Oct 27 16:22:22.500192 systemd[1]: Started sshd@18-10.0.0.138:22-10.0.0.1:60542.service - OpenSSH per-connection server daemon (10.0.0.1:60542). Oct 27 16:22:22.500876 systemd-logind[1595]: Removed session 18. 
Oct 27 16:22:22.555827 sshd[5163]: Accepted publickey for core from 10.0.0.1 port 60542 ssh2: RSA SHA256:uZ2SsweoLt391JAi4nRlA6QBN0bR66RV/4rcYh4vsoI Oct 27 16:22:22.557021 sshd-session[5163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 16:22:22.561533 systemd-logind[1595]: New session 19 of user core. Oct 27 16:22:22.571304 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 27 16:22:22.642020 containerd[1621]: time="2025-10-27T16:22:22.641967409Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 16:22:22.645953 containerd[1621]: time="2025-10-27T16:22:22.645916630Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 16:22:22.646018 containerd[1621]: time="2025-10-27T16:22:22.645996853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Oct 27 16:22:22.646224 kubelet[2764]: E1027 16:22:22.646184 2764 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 16:22:22.646546 kubelet[2764]: E1027 16:22:22.646234 2764 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 16:22:22.646546 kubelet[2764]: E1027 16:22:22.646319 2764 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7967f997df-vmzqg_calico-apiserver(5566825d-6dfc-4a28-b349-7aea4d744119): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 16:22:22.646546 kubelet[2764]: E1027 16:22:22.646368 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7967f997df-vmzqg" podUID="5566825d-6dfc-4a28-b349-7aea4d744119" Oct 27 16:22:22.906222 kubelet[2764]: E1027 16:22:22.906070 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:22:23.038049 sshd[5166]: Connection closed by 10.0.0.1 port 60542 Oct 27 16:22:23.039464 sshd-session[5163]: pam_unix(sshd:session): session closed for user core Oct 27 16:22:23.050230 systemd[1]: sshd@18-10.0.0.138:22-10.0.0.1:60542.service: Deactivated successfully. Oct 27 16:22:23.053051 systemd[1]: session-19.scope: Deactivated successfully. Oct 27 16:22:23.055689 systemd-logind[1595]: Session 19 logged out. Waiting for processes to exit. 
Oct 27 16:22:23.059210 systemd[1]: Started sshd@19-10.0.0.138:22-10.0.0.1:60548.service - OpenSSH per-connection server daemon (10.0.0.1:60548). Oct 27 16:22:23.060237 systemd-logind[1595]: Removed session 19. Oct 27 16:22:23.125228 sshd[5183]: Accepted publickey for core from 10.0.0.1 port 60548 ssh2: RSA SHA256:uZ2SsweoLt391JAi4nRlA6QBN0bR66RV/4rcYh4vsoI Oct 27 16:22:23.127136 sshd-session[5183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 16:22:23.131867 systemd-logind[1595]: New session 20 of user core. Oct 27 16:22:23.141305 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 27 16:22:23.389137 sshd[5186]: Connection closed by 10.0.0.1 port 60548 Oct 27 16:22:23.390347 sshd-session[5183]: pam_unix(sshd:session): session closed for user core Oct 27 16:22:23.400205 systemd[1]: sshd@19-10.0.0.138:22-10.0.0.1:60548.service: Deactivated successfully. Oct 27 16:22:23.402105 systemd[1]: session-20.scope: Deactivated successfully. Oct 27 16:22:23.403085 systemd-logind[1595]: Session 20 logged out. Waiting for processes to exit. Oct 27 16:22:23.406694 systemd[1]: Started sshd@20-10.0.0.138:22-10.0.0.1:60556.service - OpenSSH per-connection server daemon (10.0.0.1:60556). Oct 27 16:22:23.407375 systemd-logind[1595]: Removed session 20. Oct 27 16:22:23.463288 sshd[5198]: Accepted publickey for core from 10.0.0.1 port 60556 ssh2: RSA SHA256:uZ2SsweoLt391JAi4nRlA6QBN0bR66RV/4rcYh4vsoI Oct 27 16:22:23.465059 sshd-session[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 16:22:23.469644 systemd-logind[1595]: New session 21 of user core. Oct 27 16:22:23.478289 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 27 16:22:23.592409 sshd[5201]: Connection closed by 10.0.0.1 port 60556 Oct 27 16:22:23.592747 sshd-session[5198]: pam_unix(sshd:session): session closed for user core Oct 27 16:22:23.597504 systemd[1]: sshd@20-10.0.0.138:22-10.0.0.1:60556.service: Deactivated successfully. Oct 27 16:22:23.599656 systemd[1]: session-21.scope: Deactivated successfully. Oct 27 16:22:23.600566 systemd-logind[1595]: Session 21 logged out. Waiting for processes to exit. Oct 27 16:22:23.601772 systemd-logind[1595]: Removed session 21. 
Oct 27 16:22:24.906577 containerd[1621]: time="2025-10-27T16:22:24.906506463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 16:22:25.466436 containerd[1621]: time="2025-10-27T16:22:25.466355646Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 16:22:25.467640 containerd[1621]: time="2025-10-27T16:22:25.467575996Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 16:22:25.467703 containerd[1621]: time="2025-10-27T16:22:25.467616964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Oct 27 16:22:25.467861 kubelet[2764]: E1027 16:22:25.467817 2764 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 16:22:25.468259 kubelet[2764]: E1027 16:22:25.467872 2764 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 16:22:25.468259 kubelet[2764]: E1027 16:22:25.467958 2764 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-z78l5_calico-system(7dbd484d-304e-4f95-8ee6-738c940331ca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 16:22:25.468259 kubelet[2764]: E1027 16:22:25.467992 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-z78l5" podUID="7dbd484d-304e-4f95-8ee6-738c940331ca" Oct 27 16:22:25.906821 containerd[1621]: time="2025-10-27T16:22:25.906674546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 16:22:26.344229 containerd[1621]: time="2025-10-27T16:22:26.344153659Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 16:22:26.345486 containerd[1621]: time="2025-10-27T16:22:26.345453450Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 16:22:26.345557 containerd[1621]: time="2025-10-27T16:22:26.345529636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Oct 27 16:22:26.345737 kubelet[2764]: E1027 16:22:26.345691 2764 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 16:22:26.345793 kubelet[2764]: E1027 16:22:26.345743 2764 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 16:22:26.345851 kubelet[2764]: E1027 16:22:26.345830 2764 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-wtl2m_calico-system(efa395f4-63b7-48dd-900f-15414929351b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 16:22:26.346740 containerd[1621]: time="2025-10-27T16:22:26.346700129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 16:22:26.758420 containerd[1621]: time="2025-10-27T16:22:26.758356975Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 16:22:26.759541 containerd[1621]: time="2025-10-27T16:22:26.759508362Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 16:22:26.759617 containerd[1621]: time="2025-10-27T16:22:26.759587113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Oct 27 16:22:26.759834 kubelet[2764]: E1027 16:22:26.759774 2764 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 16:22:26.760143 kubelet[2764]: E1027 16:22:26.759841 2764 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 16:22:26.760143 kubelet[2764]: E1027 16:22:26.759948 2764 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-wtl2m_calico-system(efa395f4-63b7-48dd-900f-15414929351b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 16:22:26.760143 kubelet[2764]: E1027 16:22:26.760003 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wtl2m" podUID="efa395f4-63b7-48dd-900f-15414929351b" Oct 27 16:22:27.905248 kubelet[2764]: E1027 16:22:27.905206 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:22:28.608989 systemd[1]: Started sshd@21-10.0.0.138:22-10.0.0.1:60558.service - OpenSSH per-connection server daemon (10.0.0.1:60558). Oct 27 16:22:28.668566 sshd[5220]: Accepted publickey for core from 10.0.0.1 port 60558 ssh2: RSA SHA256:uZ2SsweoLt391JAi4nRlA6QBN0bR66RV/4rcYh4vsoI Oct 27 16:22:28.669802 sshd-session[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 16:22:28.674424 systemd-logind[1595]: New session 22 of user core. Oct 27 16:22:28.686311 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 27 16:22:28.805923 sshd[5223]: Connection closed by 10.0.0.1 port 60558 Oct 27 16:22:28.806254 sshd-session[5220]: pam_unix(sshd:session): session closed for user core Oct 27 16:22:28.810805 systemd[1]: sshd@21-10.0.0.138:22-10.0.0.1:60558.service: Deactivated successfully. Oct 27 16:22:28.812812 systemd[1]: session-22.scope: Deactivated successfully. Oct 27 16:22:28.813563 systemd-logind[1595]: Session 22 logged out. Waiting for processes to exit. Oct 27 16:22:28.814544 systemd-logind[1595]: Removed session 22. Oct 27 16:22:33.822725 systemd[1]: Started sshd@22-10.0.0.138:22-10.0.0.1:41642.service - OpenSSH per-connection server daemon (10.0.0.1:41642). Oct 27 16:22:33.886935 sshd[5239]: Accepted publickey for core from 10.0.0.1 port 41642 ssh2: RSA SHA256:uZ2SsweoLt391JAi4nRlA6QBN0bR66RV/4rcYh4vsoI Oct 27 16:22:33.888326 sshd-session[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 16:22:33.893048 systemd-logind[1595]: New session 23 of user core. Oct 27 16:22:33.901296 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 27 16:22:33.907066 kubelet[2764]: E1027 16:22:33.906705 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7967f997df-jl28h" podUID="d5c04391-35e3-4b29-b954-2c4df3aa5299" Oct 27 16:22:34.014114 sshd[5242]: Connection closed by 10.0.0.1 port 41642 Oct 27 16:22:34.014531 sshd-session[5239]: pam_unix(sshd:session): session closed for user core Oct 27 16:22:34.020435 systemd[1]: sshd@22-10.0.0.138:22-10.0.0.1:41642.service: Deactivated successfully. Oct 27 16:22:34.022519 systemd[1]: session-23.scope: Deactivated successfully. Oct 27 16:22:34.023470 systemd-logind[1595]: Session 23 logged out. Waiting for processes to exit. Oct 27 16:22:34.024670 systemd-logind[1595]: Removed session 23. 
Oct 27 16:22:34.906331 kubelet[2764]: E1027 16:22:34.906215 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:22:34.906585 kubelet[2764]: E1027 16:22:34.906512 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67bbccbd4-s5p5q" podUID="1364f957-566a-4e9b-a994-8a554341484a" Oct 27 16:22:36.906575 kubelet[2764]: E1027 16:22:36.906493 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7967f997df-vmzqg" podUID="5566825d-6dfc-4a28-b349-7aea4d744119" Oct 27 16:22:36.908180 kubelet[2764]: E1027 16:22:36.907749 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84cdc6b978-wfp8c" podUID="19c809b5-4ca8-41b2-8557-3eba1104bc4e" Oct 27 16:22:38.906760 kubelet[2764]: E1027 16:22:38.906623 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-z78l5" podUID="7dbd484d-304e-4f95-8ee6-738c940331ca" Oct 27 16:22:38.907490 kubelet[2764]: E1027 16:22:38.907400 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wtl2m" podUID="efa395f4-63b7-48dd-900f-15414929351b" Oct 27 16:22:39.028136 systemd[1]: Started sshd@23-10.0.0.138:22-10.0.0.1:41648.service - OpenSSH per-connection server daemon (10.0.0.1:41648). Oct 27 16:22:39.098669 sshd[5257]: Accepted publickey for core from 10.0.0.1 port 41648 ssh2: RSA SHA256:uZ2SsweoLt391JAi4nRlA6QBN0bR66RV/4rcYh4vsoI Oct 27 16:22:39.100976 sshd-session[5257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 16:22:39.106219 systemd-logind[1595]: New session 24 of user core. Oct 27 16:22:39.110448 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 27 16:22:39.112192 kubelet[2764]: E1027 16:22:39.111143 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 16:22:39.237000 sshd[5285]: Connection closed by 10.0.0.1 port 41648 Oct 27 16:22:39.237309 sshd-session[5257]: pam_unix(sshd:session): session closed for user core Oct 27 16:22:39.241906 systemd[1]: sshd@23-10.0.0.138:22-10.0.0.1:41648.service: Deactivated successfully. Oct 27 16:22:39.244034 systemd[1]: session-24.scope: Deactivated successfully. Oct 27 16:22:39.244888 systemd-logind[1595]: Session 24 logged out. Waiting for processes to exit. Oct 27 16:22:39.246131 systemd-logind[1595]: Removed session 24. Oct 27 16:22:40.905800 kubelet[2764]: E1027 16:22:40.905750 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"