Oct 28 13:20:13.598455 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 28 11:22:35 -00 2025 Oct 28 13:20:13.598478 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3b5773c335d9782dd41351ceb8da09cfd1ec290db8d35827245f7b6eed48895b Oct 28 13:20:13.598490 kernel: BIOS-provided physical RAM map: Oct 28 13:20:13.598497 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 28 13:20:13.598504 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 28 13:20:13.598510 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 28 13:20:13.598518 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Oct 28 13:20:13.598525 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Oct 28 13:20:13.598535 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 28 13:20:13.598557 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Oct 28 13:20:13.598564 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 28 13:20:13.598571 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 28 13:20:13.598578 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 28 13:20:13.598585 kernel: NX (Execute Disable) protection: active Oct 28 13:20:13.598595 kernel: APIC: Static calls initialized Oct 28 13:20:13.598603 kernel: SMBIOS 2.8 present. 
Oct 28 13:20:13.598614 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Oct 28 13:20:13.598621 kernel: DMI: Memory slots populated: 1/1 Oct 28 13:20:13.598629 kernel: Hypervisor detected: KVM Oct 28 13:20:13.598636 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Oct 28 13:20:13.598651 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 28 13:20:13.598659 kernel: kvm-clock: using sched offset of 4223223221 cycles Oct 28 13:20:13.598668 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 28 13:20:13.598678 kernel: tsc: Detected 2794.748 MHz processor Oct 28 13:20:13.598686 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 28 13:20:13.598695 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 28 13:20:13.598703 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Oct 28 13:20:13.598711 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 28 13:20:13.598719 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 28 13:20:13.598727 kernel: Using GB pages for direct mapping Oct 28 13:20:13.598734 kernel: ACPI: Early table checksum verification disabled Oct 28 13:20:13.598744 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Oct 28 13:20:13.598752 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 13:20:13.598760 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 13:20:13.598768 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 13:20:13.598776 kernel: ACPI: FACS 0x000000009CFE0000 000040 Oct 28 13:20:13.598784 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 13:20:13.598792 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 13:20:13.598801 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 13:20:13.598810 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 13:20:13.598821 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Oct 28 13:20:13.598829 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Oct 28 13:20:13.598837 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Oct 28 13:20:13.598848 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Oct 28 13:20:13.598856 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Oct 28 13:20:13.598864 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Oct 28 13:20:13.598872 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Oct 28 13:20:13.598880 kernel: No NUMA configuration found Oct 28 13:20:13.598888 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Oct 28 13:20:13.598898 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Oct 28 13:20:13.598906 kernel: Zone ranges: Oct 28 13:20:13.598914 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 28 13:20:13.598923 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Oct 28 13:20:13.598931 kernel: Normal empty Oct 28 13:20:13.598939 kernel: Device empty Oct 28 13:20:13.598947 kernel: Movable zone start for each node Oct 28 13:20:13.598955 kernel: Early memory node ranges Oct 28 13:20:13.598965 kernel: node 0: [mem 
0x0000000000001000-0x000000000009efff] Oct 28 13:20:13.598973 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Oct 28 13:20:13.598981 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Oct 28 13:20:13.598989 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 28 13:20:13.598997 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 28 13:20:13.599005 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Oct 28 13:20:13.599016 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 28 13:20:13.599027 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 28 13:20:13.599035 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 28 13:20:13.599043 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 28 13:20:13.599053 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 28 13:20:13.599061 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 28 13:20:13.599069 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 28 13:20:13.599078 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 28 13:20:13.599087 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 28 13:20:13.599096 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 28 13:20:13.599104 kernel: TSC deadline timer available Oct 28 13:20:13.599112 kernel: CPU topo: Max. logical packages: 1 Oct 28 13:20:13.599120 kernel: CPU topo: Max. logical dies: 1 Oct 28 13:20:13.599128 kernel: CPU topo: Max. dies per package: 1 Oct 28 13:20:13.599136 kernel: CPU topo: Max. threads per core: 1 Oct 28 13:20:13.599144 kernel: CPU topo: Num. cores per package: 4 Oct 28 13:20:13.599154 kernel: CPU topo: Num. threads per package: 4 Oct 28 13:20:13.599162 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Oct 28 13:20:13.599170 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 28 13:20:13.599178 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 28 13:20:13.599186 kernel: kvm-guest: setup PV sched yield Oct 28 13:20:13.599194 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Oct 28 13:20:13.599202 kernel: Booting paravirtualized kernel on KVM Oct 28 13:20:13.599212 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 28 13:20:13.599221 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 28 13:20:13.599252 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Oct 28 13:20:13.599260 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Oct 28 13:20:13.599268 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 28 13:20:13.599276 kernel: kvm-guest: PV spinlocks enabled Oct 28 13:20:13.599284 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 28 13:20:13.599295 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3b5773c335d9782dd41351ceb8da09cfd1ec290db8d35827245f7b6eed48895b Oct 28 13:20:13.599304 kernel: random: crng init done Oct 28 13:20:13.599312 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 28 13:20:13.599320 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 28 
13:20:13.599328 kernel: Fallback order for Node 0: 0 Oct 28 13:20:13.599343 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 Oct 28 13:20:13.599352 kernel: Policy zone: DMA32 Oct 28 13:20:13.599362 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 28 13:20:13.599370 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 28 13:20:13.599378 kernel: ftrace: allocating 40092 entries in 157 pages Oct 28 13:20:13.599386 kernel: ftrace: allocated 157 pages with 5 groups Oct 28 13:20:13.599394 kernel: Dynamic Preempt: voluntary Oct 28 13:20:13.599402 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 28 13:20:13.599415 kernel: rcu: RCU event tracing is enabled. Oct 28 13:20:13.599426 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 28 13:20:13.599434 kernel: Trampoline variant of Tasks RCU enabled. Oct 28 13:20:13.599445 kernel: Rude variant of Tasks RCU enabled. Oct 28 13:20:13.599453 kernel: Tracing variant of Tasks RCU enabled. Oct 28 13:20:13.599461 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 28 13:20:13.599469 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 28 13:20:13.599477 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 28 13:20:13.599486 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 28 13:20:13.599496 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 28 13:20:13.599504 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 28 13:20:13.599513 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 28 13:20:13.599527 kernel: Console: colour VGA+ 80x25 Oct 28 13:20:13.599538 kernel: printk: legacy console [ttyS0] enabled Oct 28 13:20:13.599546 kernel: ACPI: Core revision 20240827 Oct 28 13:20:13.599555 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 28 13:20:13.599563 kernel: APIC: Switch to symmetric I/O mode setup Oct 28 13:20:13.599571 kernel: x2apic enabled Oct 28 13:20:13.599582 kernel: APIC: Switched APIC routing to: physical x2apic Oct 28 13:20:13.599593 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 28 13:20:13.599602 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 28 13:20:13.599610 kernel: kvm-guest: setup PV IPIs Oct 28 13:20:13.599620 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 28 13:20:13.599629 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Oct 28 13:20:13.599638 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Oct 28 13:20:13.599646 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 28 13:20:13.599654 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 28 13:20:13.599663 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 28 13:20:13.599671 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 28 13:20:13.599682 kernel: Spectre V2 : Mitigation: Retpolines Oct 28 13:20:13.599690 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 28 13:20:13.599698 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 28 13:20:13.599707 kernel: active return thunk: retbleed_return_thunk Oct 28 13:20:13.599715 kernel: RETBleed: Mitigation: untrained return thunk Oct 28 13:20:13.599724 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 28 13:20:13.599732 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 28 13:20:13.599743 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 28 13:20:13.599752 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 28 13:20:13.599760 kernel: active return thunk: srso_return_thunk Oct 28 13:20:13.599769 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 28 13:20:13.599777 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 28 13:20:13.599786 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 28 13:20:13.599796 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 28 13:20:13.599805 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 28 13:20:13.599813 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 28 13:20:13.599822 kernel: Freeing SMP alternatives memory: 32K Oct 28 13:20:13.599830 kernel: pid_max: default: 32768 minimum: 301 Oct 28 13:20:13.599839 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 28 13:20:13.599847 kernel: landlock: Up and running. Oct 28 13:20:13.599857 kernel: SELinux: Initializing. Oct 28 13:20:13.599868 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 28 13:20:13.599877 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 28 13:20:13.599885 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 28 13:20:13.599894 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 28 13:20:13.599902 kernel: ... version: 0 Oct 28 13:20:13.599910 kernel: ... bit width: 48 Oct 28 13:20:13.599919 kernel: ... generic registers: 6 Oct 28 13:20:13.599929 kernel: ... value mask: 0000ffffffffffff Oct 28 13:20:13.599937 kernel: ... max period: 00007fffffffffff Oct 28 13:20:13.599945 kernel: ... fixed-purpose events: 0 Oct 28 13:20:13.599954 kernel: ... event mask: 000000000000003f Oct 28 13:20:13.599962 kernel: signal: max sigframe size: 1776 Oct 28 13:20:13.599970 kernel: rcu: Hierarchical SRCU implementation. Oct 28 13:20:13.599979 kernel: rcu: Max phase no-delay instances is 400. Oct 28 13:20:13.599989 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 28 13:20:13.599998 kernel: smp: Bringing up secondary CPUs ... 
Oct 28 13:20:13.600006 kernel: smpboot: x86: Booting SMP configuration: Oct 28 13:20:13.600014 kernel: .... node #0, CPUs: #1 #2 #3 Oct 28 13:20:13.600023 kernel: smp: Brought up 1 node, 4 CPUs Oct 28 13:20:13.600031 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 28 13:20:13.600040 kernel: Memory: 2451440K/2571752K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15960K init, 2084K bss, 114376K reserved, 0K cma-reserved) Oct 28 13:20:13.600050 kernel: devtmpfs: initialized Oct 28 13:20:13.600059 kernel: x86/mm: Memory block size: 128MB Oct 28 13:20:13.600067 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 28 13:20:13.600076 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 28 13:20:13.600084 kernel: pinctrl core: initialized pinctrl subsystem Oct 28 13:20:13.600093 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 28 13:20:13.600101 kernel: audit: initializing netlink subsys (disabled) Oct 28 13:20:13.600112 kernel: audit: type=2000 audit(1761657610.926:1): state=initialized audit_enabled=0 res=1 Oct 28 13:20:13.600120 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 28 13:20:13.600128 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 28 13:20:13.600137 kernel: cpuidle: using governor menu Oct 28 13:20:13.600145 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 28 13:20:13.600153 kernel: dca service started, version 1.12.1 Oct 28 13:20:13.600162 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Oct 28 13:20:13.600172 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Oct 28 13:20:13.600181 kernel: PCI: Using configuration type 1 for base access Oct 28 13:20:13.600189 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 28 13:20:13.600198 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 28 13:20:13.600206 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 28 13:20:13.600214 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 28 13:20:13.600223 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 28 13:20:13.600248 kernel: ACPI: Added _OSI(Module Device) Oct 28 13:20:13.600256 kernel: ACPI: Added _OSI(Processor Device) Oct 28 13:20:13.600265 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 28 13:20:13.600273 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 28 13:20:13.600284 kernel: ACPI: Interpreter enabled Oct 28 13:20:13.600292 kernel: ACPI: PM: (supports S0 S3 S5) Oct 28 13:20:13.600300 kernel: ACPI: Using IOAPIC for interrupt routing Oct 28 13:20:13.600311 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 28 13:20:13.600319 kernel: PCI: Using E820 reservations for host bridge windows Oct 28 13:20:13.600328 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 28 13:20:13.600342 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 28 13:20:13.600588 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 28 13:20:13.600770 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 28 13:20:13.600950 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 28 13:20:13.600962 kernel: PCI host bridge to bus 0000:00 Oct 28 13:20:13.601137 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 28 13:20:13.601316 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 28 13:20:13.601485 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 28 13:20:13.601652 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Oct 28 13:20:13.601815 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 28 13:20:13.601988 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Oct 28 13:20:13.602147 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 28 13:20:13.602362 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Oct 28 13:20:13.602550 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Oct 28 13:20:13.602728 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Oct 28 13:20:13.602905 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Oct 28 13:20:13.603075 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Oct 28 13:20:13.603261 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 28 13:20:13.603455 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Oct 28 13:20:13.603629 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Oct 28 13:20:13.603807 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Oct 28 13:20:13.603978 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Oct 28 13:20:13.604166 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Oct 28 13:20:13.604367 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Oct 28 13:20:13.604544 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Oct 28 13:20:13.604717 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Oct 28 13:20:13.604905 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Oct 28 13:20:13.605078 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Oct 28 13:20:13.605276 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Oct 28 13:20:13.605461 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Oct 28 13:20:13.605638 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Oct 28 13:20:13.605822 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Oct 28 13:20:13.605999 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 28 13:20:13.606185 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Oct 28 13:20:13.606393 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Oct 28 13:20:13.606571 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Oct 28 13:20:13.606751 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Oct 28 13:20:13.606928 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Oct 28 13:20:13.606940 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 28 13:20:13.606948 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 28 13:20:13.606957 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 28 13:20:13.606969 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 28 13:20:13.606977 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 28 13:20:13.606986 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 28 13:20:13.606997 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 28 13:20:13.607005 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 28 13:20:13.607013 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 28 13:20:13.607022 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 28 13:20:13.607030 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 28 13:20:13.607038 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 28 13:20:13.607047 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 28 13:20:13.607057 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 28 13:20:13.607065 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 28 13:20:13.607074 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 28 13:20:13.607082 kernel: iommu: Default domain type: Translated Oct 28 13:20:13.607090 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 28 13:20:13.607098 kernel: PCI: Using ACPI for IRQ routing Oct 28 13:20:13.607107 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 28 13:20:13.607117 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 28 13:20:13.607125 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Oct 28 13:20:13.607316 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 28 13:20:13.607500 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 28 13:20:13.607681 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 28 13:20:13.607692 kernel: vgaarb: loaded Oct 28 13:20:13.607701 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 28 13:20:13.607714 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 28 13:20:13.607722 kernel: clocksource: Switched to clocksource kvm-clock Oct 28 13:20:13.607730 kernel: VFS: Disk quotas dquot_6.6.0 Oct 28 
13:20:13.607739 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 28 13:20:13.607747 kernel: pnp: PnP ACPI init Oct 28 13:20:13.607933 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 28 13:20:13.607949 kernel: pnp: PnP ACPI: found 6 devices Oct 28 13:20:13.607957 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 28 13:20:13.607966 kernel: NET: Registered PF_INET protocol family Oct 28 13:20:13.607974 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 28 13:20:13.607982 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 28 13:20:13.607991 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 28 13:20:13.607999 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 28 13:20:13.608010 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 28 13:20:13.608018 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 28 13:20:13.608026 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 28 13:20:13.608035 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 28 13:20:13.608043 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 28 13:20:13.608051 kernel: NET: Registered PF_XDP protocol family Oct 28 13:20:13.608213 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 28 13:20:13.608403 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 28 13:20:13.608564 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 28 13:20:13.608722 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Oct 28 13:20:13.608885 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 28 13:20:13.609042 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Oct 28 13:20:13.609053 kernel: PCI: CLS 0 bytes, default 64 Oct 28 13:20:13.609062 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Oct 28 13:20:13.609075 kernel: Initialise system trusted keyrings Oct 28 13:20:13.609083 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 28 13:20:13.609091 kernel: Key type asymmetric registered Oct 28 13:20:13.609100 kernel: Asymmetric key parser 'x509' registered Oct 28 13:20:13.609108 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 28 13:20:13.609117 kernel: io scheduler mq-deadline registered Oct 28 13:20:13.609125 kernel: io scheduler kyber registered Oct 28 13:20:13.609135 kernel: io scheduler bfq registered Oct 28 13:20:13.609144 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 28 13:20:13.609152 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 28 13:20:13.609161 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 28 13:20:13.609169 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 28 13:20:13.609177 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 28 13:20:13.609186 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 28 13:20:13.609196 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 28 13:20:13.609205 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 28 13:20:13.609213 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 28 13:20:13.609418 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 28 
13:20:13.609432 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 28 13:20:13.609595 kernel: rtc_cmos 00:04: registered as rtc0 Oct 28 13:20:13.609764 kernel: rtc_cmos 00:04: setting system clock to 2025-10-28T13:20:11 UTC (1761657611) Oct 28 13:20:13.609928 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 28 13:20:13.609940 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 28 13:20:13.609948 kernel: NET: Registered PF_INET6 protocol family Oct 28 13:20:13.609956 kernel: Segment Routing with IPv6 Oct 28 13:20:13.609964 kernel: In-situ OAM (IOAM) with IPv6 Oct 28 13:20:13.609973 kernel: NET: Registered PF_PACKET protocol family Oct 28 13:20:13.609984 kernel: Key type dns_resolver registered Oct 28 13:20:13.609992 kernel: IPI shorthand broadcast: enabled Oct 28 13:20:13.610001 kernel: sched_clock: Marking stable (1334005299, 203758904)->(1596201237, -58437034) Oct 28 13:20:13.610009 kernel: registered taskstats version 1 Oct 28 13:20:13.610017 kernel: Loading compiled-in X.509 certificates Oct 28 13:20:13.610025 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: cdff28e8ecdc0a80eff4a5776c5a29d2ceff67c8' Oct 28 13:20:13.610034 kernel: Demotion targets for Node 0: null Oct 28 13:20:13.610045 kernel: Key type .fscrypt registered Oct 28 13:20:13.610053 kernel: Key type fscrypt-provisioning registered Oct 28 13:20:13.610061 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 28 13:20:13.610070 kernel: ima: Allocated hash algorithm: sha1 Oct 28 13:20:13.610078 kernel: ima: No architecture policies found Oct 28 13:20:13.610086 kernel: clk: Disabling unused clocks Oct 28 13:20:13.610095 kernel: Freeing unused kernel image (initmem) memory: 15960K Oct 28 13:20:13.610103 kernel: Write protecting the kernel read-only data: 40960k Oct 28 13:20:13.610117 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Oct 28 13:20:13.610126 kernel: Run /init as init process Oct 28 13:20:13.610134 kernel: with arguments: Oct 28 13:20:13.610142 kernel: /init Oct 28 13:20:13.610150 kernel: with environment: Oct 28 13:20:13.610158 kernel: HOME=/ Oct 28 13:20:13.610166 kernel: TERM=linux Oct 28 13:20:13.610177 kernel: SCSI subsystem initialized Oct 28 13:20:13.610185 kernel: libata version 3.00 loaded. 
Oct 28 13:20:13.610382 kernel: ahci 0000:00:1f.2: version 3.0 Oct 28 13:20:13.610412 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 28 13:20:13.610586 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 28 13:20:13.610759 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 28 13:20:13.610941 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 28 13:20:13.611135 kernel: scsi host0: ahci Oct 28 13:20:13.611352 kernel: scsi host1: ahci Oct 28 13:20:13.611540 kernel: scsi host2: ahci Oct 28 13:20:13.611725 kernel: scsi host3: ahci Oct 28 13:20:13.611916 kernel: scsi host4: ahci Oct 28 13:20:13.612100 kernel: scsi host5: ahci Oct 28 13:20:13.612113 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1 Oct 28 13:20:13.612123 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1 Oct 28 13:20:13.612131 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1 Oct 28 13:20:13.612140 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1 Oct 28 13:20:13.612149 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1 Oct 28 13:20:13.612161 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1 Oct 28 13:20:13.612169 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 28 13:20:13.612178 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 28 13:20:13.612187 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 28 13:20:13.612196 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 28 13:20:13.612204 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 28 13:20:13.612213 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 28 13:20:13.612237 kernel: ata3.00: LPM support broken, forcing max_power Oct 28 13:20:13.612246 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 28 13:20:13.612255 kernel: ata3.00: applying bridge limits Oct 28 13:20:13.612264 kernel: ata3.00: LPM support broken, forcing max_power Oct 28 13:20:13.612272 kernel: ata3.00: configured for UDMA/100 Oct 28 13:20:13.612492 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 28 13:20:13.612682 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 28 13:20:13.612859 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 28 13:20:13.612871 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 28 13:20:13.612880 kernel: GPT:16515071 != 27000831 Oct 28 13:20:13.612889 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 28 13:20:13.612898 kernel: GPT:16515071 != 27000831 Oct 28 13:20:13.612906 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 28 13:20:13.612918 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 28 13:20:13.612927 kernel: Invalid ELF header magic: != \u007fELF Oct 28 13:20:13.613178 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 28 13:20:13.613192 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 28 13:20:13.613410 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 28 13:20:13.613423 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 28 13:20:13.613436 kernel: device-mapper: uevent: version 1.0.3 Oct 28 13:20:13.613445 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 28 13:20:13.613454 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 28 13:20:13.613466 kernel: Invalid ELF header magic: != \u007fELF Oct 28 13:20:13.613474 kernel: Invalid ELF header magic: != \u007fELF Oct 28 13:20:13.613485 kernel: raid6: avx2x4 gen() 29469 MB/s Oct 28 13:20:13.613494 kernel: raid6: avx2x2 gen() 31166 MB/s Oct 28 13:20:13.613502 kernel: raid6: avx2x1 gen() 25883 MB/s Oct 28 13:20:13.613511 kernel: raid6: using algorithm avx2x2 gen() 31166 MB/s Oct 28 13:20:13.613520 kernel: raid6: .... xor() 19909 MB/s, rmw enabled Oct 28 13:20:13.613528 kernel: raid6: using avx2x2 recovery algorithm Oct 28 13:20:13.613537 kernel: Invalid ELF header magic: != \u007fELF Oct 28 13:20:13.613549 kernel: Invalid ELF header magic: != \u007fELF Oct 28 13:20:13.613558 kernel: Invalid ELF header magic: != \u007fELF Oct 28 13:20:13.613566 kernel: xor: automatically using best checksumming function avx Oct 28 13:20:13.613575 kernel: Invalid ELF header magic: != \u007fELF Oct 28 13:20:13.613583 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 28 13:20:13.613592 kernel: BTRFS: device fsid af35db37-e08e-4bd7-9f3a-b576d01d2613 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (176) Oct 28 13:20:13.613601 kernel: BTRFS info (device dm-0): first mount of filesystem af35db37-e08e-4bd7-9f3a-b576d01d2613 Oct 28 13:20:13.613610 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 28 13:20:13.613621 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 28 13:20:13.613630 kernel: BTRFS info (device dm-0): enabling free space tree Oct 28 13:20:13.613638 kernel: Invalid ELF header magic: != \u007fELF Oct 28 13:20:13.613647 kernel: loop: module loaded Oct 28 13:20:13.613656 kernel: loop0: detected capacity change from 0 to 100120 Oct 28 13:20:13.613664 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 28 13:20:13.613674 systemd[1]: Successfully made /usr/ read-only. Oct 28 13:20:13.613688 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 28 13:20:13.613698 systemd[1]: Detected virtualization kvm. Oct 28 13:20:13.613707 systemd[1]: Detected architecture x86-64. Oct 28 13:20:13.613716 systemd[1]: Running in initrd. Oct 28 13:20:13.613724 systemd[1]: No hostname configured, using default hostname. Oct 28 13:20:13.613736 systemd[1]: Hostname set to . Oct 28 13:20:13.613745 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 28 13:20:13.613754 systemd[1]: Queued start job for default target initrd.target. Oct 28 13:20:13.613763 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 28 13:20:13.613772 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 28 13:20:13.613782 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 28 13:20:13.613792 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Oct 28 13:20:13.613803 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 28 13:20:13.613813 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 28 13:20:13.613823 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 28 13:20:13.613832 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 28 13:20:13.613841 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 28 13:20:13.613851 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 28 13:20:13.613862 systemd[1]: Reached target paths.target - Path Units. Oct 28 13:20:13.613871 systemd[1]: Reached target slices.target - Slice Units. Oct 28 13:20:13.613880 systemd[1]: Reached target swap.target - Swaps. Oct 28 13:20:13.613889 systemd[1]: Reached target timers.target - Timer Units. Oct 28 13:20:13.613899 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 28 13:20:13.613908 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 28 13:20:13.613917 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 28 13:20:13.613929 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 28 13:20:13.613938 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 28 13:20:13.613947 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 28 13:20:13.613956 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 28 13:20:13.613966 systemd[1]: Reached target sockets.target - Socket Units. Oct 28 13:20:13.613976 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 28 13:20:13.613987 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 28 13:20:13.613996 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 28 13:20:13.614006 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 28 13:20:13.614015 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 28 13:20:13.614025 systemd[1]: Starting systemd-fsck-usr.service... Oct 28 13:20:13.614034 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 28 13:20:13.614043 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 28 13:20:13.614054 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 13:20:13.614064 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 28 13:20:13.614073 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 28 13:20:13.614083 systemd[1]: Finished systemd-fsck-usr.service. Oct 28 13:20:13.614095 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 28 13:20:13.614126 systemd-journald[313]: Collecting audit messages is disabled. Oct 28 13:20:13.614149 systemd-journald[313]: Journal started Oct 28 13:20:13.614175 systemd-journald[313]: Runtime Journal (/run/log/journal/efc729525a2840e2883a6c8c290033ac) is 6M, max 48.3M, 42.2M free. Oct 28 13:20:13.617391 systemd[1]: Started systemd-journald.service - Journal Service. 
Oct 28 13:20:13.619386 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 28 13:20:13.622702 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 28 13:20:13.628425 systemd-modules-load[314]: Inserted module 'br_netfilter' Oct 28 13:20:13.692351 kernel: Bridge firewalling registered Oct 28 13:20:13.690034 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 28 13:20:13.704525 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 13:20:13.708132 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 28 13:20:13.711700 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 28 13:20:13.713580 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 28 13:20:13.718995 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 28 13:20:13.729361 systemd-tmpfiles[329]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 28 13:20:13.731981 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 28 13:20:13.739318 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 28 13:20:13.743327 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 28 13:20:13.746919 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 28 13:20:13.749990 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 28 13:20:13.754362 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 28 13:20:13.782198 dracut-cmdline[352]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3b5773c335d9782dd41351ceb8da09cfd1ec290db8d35827245f7b6eed48895b Oct 28 13:20:13.809992 systemd-resolved[353]: Positive Trust Anchors: Oct 28 13:20:13.810007 systemd-resolved[353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 28 13:20:13.810011 systemd-resolved[353]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 28 13:20:13.810045 systemd-resolved[353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 28 13:20:13.827295 systemd-resolved[353]: Defaulting to hostname 'linux'. Oct 28 13:20:13.832188 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 28 13:20:13.832884 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Oct 28 13:20:13.916265 kernel: Loading iSCSI transport class v2.0-870. Oct 28 13:20:13.930270 kernel: iscsi: registered transport (tcp) Oct 28 13:20:13.953573 kernel: iscsi: registered transport (qla4xxx) Oct 28 13:20:13.953621 kernel: QLogic iSCSI HBA Driver Oct 28 13:20:13.980086 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 28 13:20:13.998767 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 28 13:20:14.004029 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 28 13:20:14.059496 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 28 13:20:14.061749 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 28 13:20:14.063971 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 28 13:20:14.107244 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 28 13:20:14.112561 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 28 13:20:14.150353 systemd-udevd[595]: Using default interface naming scheme 'v257'. Oct 28 13:20:14.165287 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 28 13:20:14.171136 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 28 13:20:14.196817 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 28 13:20:14.199639 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 28 13:20:14.207195 dracut-pre-trigger[670]: rd.md=0: removing MD RAID activation Oct 28 13:20:14.239522 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 28 13:20:14.243007 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 28 13:20:14.254436 systemd-networkd[704]: lo: Link UP Oct 28 13:20:14.254444 systemd-networkd[704]: lo: Gained carrier Oct 28 13:20:14.254988 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 28 13:20:14.256665 systemd[1]: Reached target network.target - Network. Oct 28 13:20:14.342429 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 28 13:20:14.348333 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 28 13:20:14.389831 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 28 13:20:14.410149 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 28 13:20:14.431027 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 28 13:20:14.437271 kernel: cryptd: max_cpu_qlen set to 1000 Oct 28 13:20:14.456672 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 28 13:20:14.457819 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Oct 28 13:20:14.463969 kernel: AES CTR mode by8 optimization enabled Oct 28 13:20:14.464269 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 28 13:20:14.468644 systemd-networkd[704]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 28 13:20:14.469395 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Oct 28 13:20:14.469508 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 13:20:14.469689 systemd-networkd[704]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 28 13:20:14.469974 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 13:20:14.470277 systemd-networkd[704]: eth0: Link UP Oct 28 13:20:14.471710 systemd-networkd[704]: eth0: Gained carrier Oct 28 13:20:14.471719 systemd-networkd[704]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 28 13:20:14.474080 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 13:20:14.499298 systemd-networkd[704]: eth0: DHCPv4 address 10.0.0.148/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 28 13:20:14.511205 disk-uuid[833]: Primary Header is updated. Oct 28 13:20:14.511205 disk-uuid[833]: Secondary Entries is updated. Oct 28 13:20:14.511205 disk-uuid[833]: Secondary Header is updated. Oct 28 13:20:14.592556 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 28 13:20:14.597863 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 13:20:14.599652 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 28 13:20:14.602202 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 28 13:20:14.605913 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 28 13:20:14.612879 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 28 13:20:14.645469 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 28 13:20:14.800997 systemd-resolved[353]: Detected conflict on linux IN A 10.0.0.148 Oct 28 13:20:14.801016 systemd-resolved[353]: Hostname conflict, changing published hostname from 'linux' to 'linux2'. Oct 28 13:20:15.551903 disk-uuid[836]: Warning: The kernel is still using the old partition table. Oct 28 13:20:15.551903 disk-uuid[836]: The new table will be used at the next reboot or after you Oct 28 13:20:15.551903 disk-uuid[836]: run partprobe(8) or kpartx(8) Oct 28 13:20:15.551903 disk-uuid[836]: The operation has completed successfully. Oct 28 13:20:15.569750 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 28 13:20:15.569895 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 28 13:20:15.574379 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 28 13:20:15.599383 systemd-networkd[704]: eth0: Gained IPv6LL Oct 28 13:20:15.715254 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (861) Oct 28 13:20:15.715340 kernel: BTRFS info (device vda6): first mount of filesystem 92fe034e-39d5-4cce-8f91-7653ce0986c3 Oct 28 13:20:15.715354 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 28 13:20:15.720509 kernel: BTRFS info (device vda6): turning on async discard Oct 28 13:20:15.720542 kernel: BTRFS info (device vda6): enabling free space tree Oct 28 13:20:15.729265 kernel: BTRFS info (device vda6): last unmount of filesystem 92fe034e-39d5-4cce-8f91-7653ce0986c3 Oct 28 13:20:15.730271 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 28 13:20:15.734560 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Oct 28 13:20:16.049332 ignition[880]: Ignition 2.22.0 Oct 28 13:20:16.049355 ignition[880]: Stage: fetch-offline Oct 28 13:20:16.049431 ignition[880]: no configs at "/usr/lib/ignition/base.d" Oct 28 13:20:16.049449 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 13:20:16.049600 ignition[880]: parsed url from cmdline: "" Oct 28 13:20:16.049609 ignition[880]: no config URL provided Oct 28 13:20:16.049615 ignition[880]: reading system config file "/usr/lib/ignition/user.ign" Oct 28 13:20:16.049635 ignition[880]: no config at "/usr/lib/ignition/user.ign" Oct 28 13:20:16.049711 ignition[880]: op(1): [started] loading QEMU firmware config module Oct 28 13:20:16.049716 ignition[880]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 28 13:20:16.068191 ignition[880]: op(1): [finished] loading QEMU firmware config module Oct 28 13:20:16.150272 ignition[880]: parsing config with SHA512: 5cca0c9e203575541e820a706a9d1f45459575f9a407fb48174eb77ce35a0a3b2a1f0b9dada14f1589ec9c391e458a35852f5225e618c2f5a92c2aefd3396435 Oct 28 13:20:16.154522 unknown[880]: fetched base config from "system" Oct 28 13:20:16.154537 unknown[880]: fetched user config from "qemu" Oct 28 13:20:16.154963 ignition[880]: fetch-offline: fetch-offline passed Oct 28 13:20:16.155036 ignition[880]: Ignition finished successfully Oct 28 13:20:16.161823 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 28 13:20:16.166014 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 28 13:20:16.169962 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 28 13:20:16.235554 ignition[890]: Ignition 2.22.0 Oct 28 13:20:16.235566 ignition[890]: Stage: kargs Oct 28 13:20:16.235730 ignition[890]: no configs at "/usr/lib/ignition/base.d" Oct 28 13:20:16.235740 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 13:20:16.236460 ignition[890]: kargs: kargs passed Oct 28 13:20:16.236510 ignition[890]: Ignition finished successfully Oct 28 13:20:16.246769 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 28 13:20:16.250114 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 28 13:20:16.311788 ignition[898]: Ignition 2.22.0 Oct 28 13:20:16.311803 ignition[898]: Stage: disks Oct 28 13:20:16.311960 ignition[898]: no configs at "/usr/lib/ignition/base.d" Oct 28 13:20:16.311971 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 13:20:16.317479 ignition[898]: disks: disks passed Oct 28 13:20:16.317538 ignition[898]: Ignition finished successfully Oct 28 13:20:16.322672 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 28 13:20:16.326133 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 28 13:20:16.326832 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 28 13:20:16.330302 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 28 13:20:16.330884 systemd[1]: Reached target sysinit.target - System Initialization. Oct 28 13:20:16.337022 systemd[1]: Reached target basic.target - Basic System. Oct 28 13:20:16.342135 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Oct 28 13:20:16.397208 systemd-fsck[908]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 28 13:20:16.406058 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 28 13:20:16.408887 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 28 13:20:16.537276 kernel: EXT4-fs (vda9): mounted filesystem 533620cd-204e-4567-a68e-d0b19b60f72c r/w with ordered data mode. Quota mode: none. Oct 28 13:20:16.538261 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 28 13:20:16.539674 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 28 13:20:16.543522 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 28 13:20:16.547510 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 28 13:20:16.549155 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 28 13:20:16.549248 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 28 13:20:16.549302 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 28 13:20:16.567654 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 28 13:20:16.572575 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (917) Oct 28 13:20:16.570374 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 28 13:20:16.581198 kernel: BTRFS info (device vda6): first mount of filesystem 92fe034e-39d5-4cce-8f91-7653ce0986c3 Oct 28 13:20:16.581222 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 28 13:20:16.581319 kernel: BTRFS info (device vda6): turning on async discard Oct 28 13:20:16.581331 kernel: BTRFS info (device vda6): enabling free space tree Oct 28 13:20:16.582495 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 28 13:20:16.629448 initrd-setup-root[941]: cut: /sysroot/etc/passwd: No such file or directory Oct 28 13:20:16.634126 initrd-setup-root[948]: cut: /sysroot/etc/group: No such file or directory Oct 28 13:20:16.640886 initrd-setup-root[955]: cut: /sysroot/etc/shadow: No such file or directory Oct 28 13:20:16.648011 initrd-setup-root[962]: cut: /sysroot/etc/gshadow: No such file or directory Oct 28 13:20:16.751020 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 28 13:20:16.753160 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 28 13:20:16.756028 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 28 13:20:16.797641 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 28 13:20:16.800464 kernel: BTRFS info (device vda6): last unmount of filesystem 92fe034e-39d5-4cce-8f91-7653ce0986c3 Oct 28 13:20:16.813381 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 28 13:20:16.840528 ignition[1031]: INFO : Ignition 2.22.0 Oct 28 13:20:16.840528 ignition[1031]: INFO : Stage: mount Oct 28 13:20:16.843196 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 28 13:20:16.843196 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 13:20:16.843196 ignition[1031]: INFO : mount: mount passed Oct 28 13:20:16.843196 ignition[1031]: INFO : Ignition finished successfully Oct 28 13:20:16.846272 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 28 13:20:16.848918 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Oct 28 13:20:16.874477 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 28 13:20:16.898267 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1043) Oct 28 13:20:16.901492 kernel: BTRFS info (device vda6): first mount of filesystem 92fe034e-39d5-4cce-8f91-7653ce0986c3 Oct 28 13:20:16.901527 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 28 13:20:16.905318 kernel: BTRFS info (device vda6): turning on async discard Oct 28 13:20:16.905357 kernel: BTRFS info (device vda6): enabling free space tree Oct 28 13:20:16.907199 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 28 13:20:16.956477 ignition[1060]: INFO : Ignition 2.22.0 Oct 28 13:20:16.956477 ignition[1060]: INFO : Stage: files Oct 28 13:20:16.958965 ignition[1060]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 28 13:20:16.958965 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 13:20:16.963129 ignition[1060]: DEBUG : files: compiled without relabeling support, skipping Oct 28 13:20:16.965352 ignition[1060]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 28 13:20:16.965352 ignition[1060]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 28 13:20:16.973619 ignition[1060]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 28 13:20:16.975901 ignition[1060]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 28 13:20:16.978223 ignition[1060]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 28 13:20:16.976629 unknown[1060]: wrote ssh authorized keys file for user: core Oct 28 13:20:16.982415 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 28 13:20:16.982415 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 28 13:20:17.025995 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 28 13:20:17.132517 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 28 13:20:17.132517 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 28 13:20:17.141570 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 28 13:20:17.141570 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 28 13:20:17.141570 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 28 13:20:17.141570 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 28 13:20:17.141570 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 28 13:20:17.141570 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 28 13:20:17.141570 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Oct 28 13:20:17.161081 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 28 13:20:17.161081 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 28 13:20:17.161081 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 28 13:20:17.161081 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 28 13:20:17.161081 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 28 13:20:17.161081 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Oct 28 13:20:17.613071 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 28 13:20:18.593799 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 28 13:20:18.593799 ignition[1060]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 28 13:20:18.600734 ignition[1060]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 28 13:20:18.732721 ignition[1060]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 28 13:20:18.732721 ignition[1060]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 28 13:20:18.732721 ignition[1060]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 28 13:20:18.741044 ignition[1060]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 28 13:20:18.741044 ignition[1060]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 28 13:20:18.741044 ignition[1060]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 28 13:20:18.741044 ignition[1060]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 28 13:20:18.766726 ignition[1060]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 28 13:20:18.773919 ignition[1060]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 28 13:20:18.776675 ignition[1060]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 28 13:20:18.776675 ignition[1060]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 28 13:20:18.781118 ignition[1060]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 28 13:20:18.781118 ignition[1060]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 28 13:20:18.781118 
ignition[1060]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 28 13:20:18.781118 ignition[1060]: INFO : files: files passed Oct 28 13:20:18.781118 ignition[1060]: INFO : Ignition finished successfully Oct 28 13:20:18.785110 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 28 13:20:18.788173 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 28 13:20:18.795114 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 28 13:20:18.850197 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 28 13:20:18.850378 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 28 13:20:18.860406 initrd-setup-root-after-ignition[1093]: grep: /sysroot/oem/oem-release: No such file or directory Oct 28 13:20:18.866351 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 28 13:20:18.868949 initrd-setup-root-after-ignition[1095]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 28 13:20:18.871403 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 28 13:20:18.876086 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 28 13:20:18.880253 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 28 13:20:18.883815 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 28 13:20:18.960880 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 28 13:20:18.962590 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 28 13:20:18.966942 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 28 13:20:18.970256 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 28 13:20:18.974196 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 28 13:20:18.977660 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 28 13:20:19.016369 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 28 13:20:19.019299 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 28 13:20:19.040959 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 28 13:20:19.041109 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 28 13:20:19.043805 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 28 13:20:19.044645 systemd[1]: Stopped target timers.target - Timer Units. Oct 28 13:20:19.050833 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 28 13:20:19.051025 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 28 13:20:19.056360 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 28 13:20:19.057244 systemd[1]: Stopped target basic.target - Basic System. Oct 28 13:20:19.061717 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 28 13:20:19.064679 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 28 13:20:19.067853 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Oct 28 13:20:19.071745 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 28 13:20:19.074830 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 28 13:20:19.078160 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 28 13:20:19.081279 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 28 13:20:19.085023 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 28 13:20:19.088179 systemd[1]: Stopped target swap.target - Swaps. Oct 28 13:20:19.091185 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 28 13:20:19.091419 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 28 13:20:19.096369 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 28 13:20:19.097192 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 28 13:20:19.102848 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 28 13:20:19.104754 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 28 13:20:19.105974 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 28 13:20:19.106122 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 28 13:20:19.113683 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 28 13:20:19.113849 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 28 13:20:19.115882 systemd[1]: Stopped target paths.target - Path Units. Oct 28 13:20:19.118839 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 28 13:20:19.125343 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 28 13:20:19.127559 systemd[1]: Stopped target slices.target - Slice Units. Oct 28 13:20:19.131296 systemd[1]: Stopped target sockets.target - Socket Units. Oct 28 13:20:19.132393 systemd[1]: iscsid.socket: Deactivated successfully. Oct 28 13:20:19.132603 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 28 13:20:19.136319 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 28 13:20:19.136489 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 28 13:20:19.139217 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 28 13:20:19.139470 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 28 13:20:19.142326 systemd[1]: ignition-files.service: Deactivated successfully. Oct 28 13:20:19.142540 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 28 13:20:19.152085 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 28 13:20:19.155902 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 28 13:20:19.161467 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 28 13:20:19.161699 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 28 13:20:19.164144 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 28 13:20:19.164287 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 28 13:20:19.165001 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 28 13:20:19.165105 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 28 13:20:19.180050 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Oct 28 13:20:19.180188 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 28 13:20:19.257779 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 28 13:20:19.267079 ignition[1119]: INFO : Ignition 2.22.0 Oct 28 13:20:19.268855 ignition[1119]: INFO : Stage: umount Oct 28 13:20:19.268855 ignition[1119]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 28 13:20:19.268855 ignition[1119]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 13:20:19.274452 ignition[1119]: INFO : umount: umount passed Oct 28 13:20:19.274452 ignition[1119]: INFO : Ignition finished successfully Oct 28 13:20:19.274639 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 28 13:20:19.274846 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 28 13:20:19.276067 systemd[1]: Stopped target network.target - Network. Oct 28 13:20:19.279796 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 28 13:20:19.279872 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 28 13:20:19.282832 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 28 13:20:19.282890 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 28 13:20:19.285617 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 28 13:20:19.285674 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 28 13:20:19.286309 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 28 13:20:19.286355 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 28 13:20:19.286925 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 28 13:20:19.294410 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 28 13:20:19.306486 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 28 13:20:19.306753 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 28 13:20:19.320701 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 28 13:20:19.320898 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 28 13:20:19.328908 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 28 13:20:19.331128 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 28 13:20:19.331195 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 28 13:20:19.335094 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 28 13:20:19.337947 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 28 13:20:19.338601 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 28 13:20:19.347837 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 28 13:20:19.347914 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 28 13:20:19.348659 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 28 13:20:19.348716 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 28 13:20:19.349272 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 28 13:20:19.358078 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 28 13:20:19.365406 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 28 13:20:19.366563 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 28 13:20:19.366668 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Oct 28 13:20:19.380121 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 28 13:20:19.380386 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 28 13:20:19.383007 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 28 13:20:19.383066 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 28 13:20:19.386943 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 28 13:20:19.386986 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 28 13:20:19.389865 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 28 13:20:19.389924 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 28 13:20:19.395701 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 28 13:20:19.395754 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 28 13:20:19.400101 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 28 13:20:19.400158 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 28 13:20:19.405832 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 28 13:20:19.407097 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 28 13:20:19.407168 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 28 13:20:19.410636 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 28 13:20:19.410703 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 28 13:20:19.414919 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 28 13:20:19.414974 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 28 13:20:19.418656 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 28 13:20:19.418709 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 28 13:20:19.421916 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 28 13:20:19.421971 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 13:20:19.430528 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 28 13:20:19.430644 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 28 13:20:19.466612 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 28 13:20:19.466740 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 28 13:20:19.471285 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 28 13:20:19.474844 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 28 13:20:19.504438 systemd[1]: Switching root. Oct 28 13:20:19.542935 systemd-journald[313]: Journal stopped Oct 28 13:20:20.869840 systemd-journald[313]: Received SIGTERM from PID 1 (systemd). 
Oct 28 13:20:20.869910 kernel: SELinux: policy capability network_peer_controls=1 Oct 28 13:20:20.869930 kernel: SELinux: policy capability open_perms=1 Oct 28 13:20:20.869946 kernel: SELinux: policy capability extended_socket_class=1 Oct 28 13:20:20.869962 kernel: SELinux: policy capability always_check_network=0 Oct 28 13:20:20.869975 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 28 13:20:20.869987 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 28 13:20:20.869999 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 28 13:20:20.870011 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 28 13:20:20.870023 kernel: SELinux: policy capability userspace_initial_context=0 Oct 28 13:20:20.870038 kernel: audit: type=1403 audit(1761657619.967:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 28 13:20:20.870051 systemd[1]: Successfully loaded SELinux policy in 63.742ms. Oct 28 13:20:20.870084 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.098ms. Oct 28 13:20:20.870099 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 28 13:20:20.870117 systemd[1]: Detected virtualization kvm. Oct 28 13:20:20.870130 systemd[1]: Detected architecture x86-64. Oct 28 13:20:20.870144 systemd[1]: Detected first boot. Oct 28 13:20:20.870159 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 28 13:20:20.870185 zram_generator::config[1165]: No configuration found. Oct 28 13:20:20.870202 kernel: Guest personality initialized and is inactive Oct 28 13:20:20.870218 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 28 13:20:20.870266 kernel: Initialized host personality Oct 28 13:20:20.870280 kernel: NET: Registered PF_VSOCK protocol family Oct 28 13:20:20.870292 systemd[1]: Populated /etc with preset unit settings. Oct 28 13:20:20.870309 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 28 13:20:20.870322 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 28 13:20:20.870336 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 28 13:20:20.870350 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 28 13:20:20.870363 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 28 13:20:20.870376 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 28 13:20:20.870393 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 28 13:20:20.870408 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 28 13:20:20.870422 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 28 13:20:20.870439 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 28 13:20:20.870451 systemd[1]: Created slice user.slice - User and Session Slice. Oct 28 13:20:20.870465 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 28 13:20:20.870479 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Oct 28 13:20:20.870492 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 28 13:20:20.870507 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 28 13:20:20.870520 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 28 13:20:20.870533 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 28 13:20:20.870547 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 28 13:20:20.870560 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 28 13:20:20.870573 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 28 13:20:20.870590 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 28 13:20:20.870604 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 28 13:20:20.870616 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 28 13:20:20.870639 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 28 13:20:20.870653 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 28 13:20:20.870666 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 28 13:20:20.870679 systemd[1]: Reached target slices.target - Slice Units. Oct 28 13:20:20.870694 systemd[1]: Reached target swap.target - Swaps. Oct 28 13:20:20.870707 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 28 13:20:20.870720 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 28 13:20:20.870733 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 28 13:20:20.870746 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 28 13:20:20.870759 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 28 13:20:20.870772 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 28 13:20:20.870788 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 28 13:20:20.870801 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 28 13:20:20.870814 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 28 13:20:20.870827 systemd[1]: Mounting media.mount - External Media Directory... Oct 28 13:20:20.870840 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 13:20:20.870853 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 28 13:20:20.870866 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 28 13:20:20.870881 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 28 13:20:20.870895 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 28 13:20:20.870908 systemd[1]: Reached target machines.target - Containers. Oct 28 13:20:20.870929 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 28 13:20:20.870942 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Oct 28 13:20:20.870957 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 28 13:20:20.870972 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 28 13:20:20.870988 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 28 13:20:20.871001 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 28 13:20:20.871015 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 28 13:20:20.871028 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 28 13:20:20.871042 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 28 13:20:20.871055 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 28 13:20:20.871070 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 28 13:20:20.871083 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 28 13:20:20.871096 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 28 13:20:20.871109 systemd[1]: Stopped systemd-fsck-usr.service. Oct 28 13:20:20.871122 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 13:20:20.871135 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 28 13:20:20.871148 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 28 13:20:20.871164 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 28 13:20:20.871187 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 28 13:20:20.871200 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 28 13:20:20.871212 kernel: ACPI: bus type drm_connector registered Oct 28 13:20:20.871247 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 28 13:20:20.871264 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 13:20:20.871278 kernel: fuse: init (API version 7.41) Oct 28 13:20:20.871290 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 28 13:20:20.871304 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 28 13:20:20.871317 systemd[1]: Mounted media.mount - External Media Directory. Oct 28 13:20:20.871330 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 28 13:20:20.871346 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 28 13:20:20.871386 systemd-journald[1241]: Collecting audit messages is disabled. Oct 28 13:20:20.871410 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 28 13:20:20.871425 systemd-journald[1241]: Journal started Oct 28 13:20:20.871448 systemd-journald[1241]: Runtime Journal (/run/log/journal/efc729525a2840e2883a6c8c290033ac) is 6M, max 48.3M, 42.2M free. Oct 28 13:20:20.545514 systemd[1]: Queued start job for default target multi-user.target. Oct 28 13:20:20.557708 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Oct 28 13:20:20.558378 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 28 13:20:20.876340 systemd[1]: Started systemd-journald.service - Journal Service. Oct 28 13:20:20.877930 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 28 13:20:20.880117 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 28 13:20:20.882388 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 28 13:20:20.882617 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 28 13:20:20.884756 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 28 13:20:20.884974 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 28 13:20:20.887055 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 28 13:20:20.887322 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 28 13:20:20.889632 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 28 13:20:20.889849 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 28 13:20:20.892031 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 28 13:20:20.892385 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 28 13:20:20.894415 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 28 13:20:20.894636 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 28 13:20:20.896780 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 28 13:20:20.899004 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 28 13:20:20.902341 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 28 13:20:20.904770 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 28 13:20:20.921899 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 28 13:20:20.924399 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 28 13:20:20.927710 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 28 13:20:20.930562 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 28 13:20:20.932330 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 28 13:20:20.932446 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 28 13:20:20.935116 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 28 13:20:20.937321 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 13:20:20.946363 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 28 13:20:20.949104 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 28 13:20:20.951062 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 28 13:20:20.952544 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 28 13:20:20.954380 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Oct 28 13:20:20.956513 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 28 13:20:20.961209 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 28 13:20:20.963120 systemd-journald[1241]: Time spent on flushing to /var/log/journal/efc729525a2840e2883a6c8c290033ac is 27.038ms for 975 entries. Oct 28 13:20:20.963120 systemd-journald[1241]: System Journal (/var/log/journal/efc729525a2840e2883a6c8c290033ac) is 8M, max 163.5M, 155.5M free. Oct 28 13:20:21.039970 systemd-journald[1241]: Received client request to flush runtime journal. Oct 28 13:20:21.040018 kernel: loop1: detected capacity change from 0 to 118328 Oct 28 13:20:20.964588 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 28 13:20:20.968300 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 28 13:20:20.971493 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 28 13:20:20.973511 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 28 13:20:21.024788 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 28 13:20:21.027727 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 28 13:20:21.031587 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 28 13:20:21.036394 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 28 13:20:21.039220 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Oct 28 13:20:21.039256 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Oct 28 13:20:21.047905 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 28 13:20:21.051010 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 28 13:20:21.055262 kernel: loop2: detected capacity change from 0 to 110984 Oct 28 13:20:21.057142 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 28 13:20:21.074464 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 28 13:20:21.099283 kernel: loop3: detected capacity change from 0 to 219144 Oct 28 13:20:21.104248 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 28 13:20:21.108684 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 28 13:20:21.111606 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 28 13:20:21.132134 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 28 13:20:21.132271 kernel: loop4: detected capacity change from 0 to 118328 Oct 28 13:20:21.145285 kernel: loop5: detected capacity change from 0 to 110984 Oct 28 13:20:21.152216 systemd-tmpfiles[1308]: ACLs are not supported, ignoring. Oct 28 13:20:21.152262 systemd-tmpfiles[1308]: ACLs are not supported, ignoring. Oct 28 13:20:21.225500 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 28 13:20:21.229258 kernel: loop6: detected capacity change from 0 to 219144 Oct 28 13:20:21.236306 (sd-merge)[1309]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Oct 28 13:20:21.240388 (sd-merge)[1309]: Merged extensions into '/usr'. 
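The (sd-merge) entries above show systemd-sysext overlaying 'containerd-flatcar.raw', 'docker-flatcar.raw' and 'kubernetes.raw' onto /usr; the kubernetes image is the one Ignition linked into /etc/extensions earlier in the files stage. As a rough sketch of the discovery step only (the real logic is inside systemd-sysext), the candidates are simply the *.raw images and directories found in the standard extension search paths:

# Illustrative only: enumerate extension image candidates the way sysext discovery
# conceptually works; this is not the systemd implementation.
from pathlib import Path

SEARCH_PATHS = [
    "/etc/extensions",      # where Ignition wrote the kubernetes.raw symlink above
    "/run/extensions",
    "/var/lib/extensions",
    "/usr/lib/extensions",
]

def candidate_extensions():
    for base in SEARCH_PATHS:
        p = Path(base)
        if not p.is_dir():
            continue
        for entry in sorted(p.iterdir()):
            if entry.suffix == ".raw" or entry.is_dir():
                yield entry

if __name__ == "__main__":
    for ext in candidate_extensions():
        print(ext)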
Oct 28 13:20:21.245313 systemd[1]: Reload requested from client PID 1285 ('systemd-sysext') (unit systemd-sysext.service)... Oct 28 13:20:21.245332 systemd[1]: Reloading... Oct 28 13:20:21.287280 zram_generator::config[1345]: No configuration found. Oct 28 13:20:21.421700 systemd-resolved[1307]: Positive Trust Anchors: Oct 28 13:20:21.422146 systemd-resolved[1307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 28 13:20:21.422165 systemd-resolved[1307]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 28 13:20:21.422197 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 28 13:20:21.428417 systemd-resolved[1307]: Defaulting to hostname 'linux'. Oct 28 13:20:21.571498 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 28 13:20:21.571607 systemd[1]: Reloading finished in 325 ms. Oct 28 13:20:21.613820 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 28 13:20:21.616171 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 28 13:20:21.618485 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 28 13:20:21.623597 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 28 13:20:21.644575 systemd[1]: Starting ensure-sysext.service... Oct 28 13:20:21.647775 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 28 13:20:21.671645 systemd[1]: Reload requested from client PID 1381 ('systemctl') (unit ensure-sysext.service)... Oct 28 13:20:21.671666 systemd[1]: Reloading... Oct 28 13:20:21.676569 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 28 13:20:21.676614 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 28 13:20:21.677008 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 28 13:20:21.677518 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 28 13:20:21.678595 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 28 13:20:21.678879 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Oct 28 13:20:21.678966 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Oct 28 13:20:21.685326 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot. Oct 28 13:20:21.685339 systemd-tmpfiles[1382]: Skipping /boot Oct 28 13:20:21.696965 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot. Oct 28 13:20:21.696982 systemd-tmpfiles[1382]: Skipping /boot Oct 28 13:20:21.729266 zram_generator::config[1412]: No configuration found. Oct 28 13:20:21.941840 systemd[1]: Reloading finished in 269 ms. 
Oct 28 13:20:21.963133 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 28 13:20:21.997977 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 28 13:20:22.009583 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 28 13:20:22.012857 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 28 13:20:22.015868 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 28 13:20:22.022591 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 28 13:20:22.028743 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 28 13:20:22.032453 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 28 13:20:22.037120 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 13:20:22.037647 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 28 13:20:22.046580 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 28 13:20:22.050545 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 28 13:20:22.055085 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 28 13:20:22.056916 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 13:20:22.057028 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 13:20:22.057123 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 13:20:22.063725 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 13:20:22.063939 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 28 13:20:22.064176 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 13:20:22.064523 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 13:20:22.064931 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 13:20:22.071537 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 28 13:20:22.075176 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 13:20:22.075869 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 28 13:20:22.082784 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Oct 28 13:20:22.086198 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 13:20:22.086374 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 13:20:22.086621 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 28 13:20:22.087544 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 28 13:20:22.088493 systemd-udevd[1455]: Using default interface naming scheme 'v257'. Oct 28 13:20:22.089390 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 28 13:20:22.092390 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 28 13:20:22.095066 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 28 13:20:22.095326 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 28 13:20:22.098248 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 28 13:20:22.104063 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 28 13:20:22.117032 systemd[1]: Finished ensure-sysext.service. Oct 28 13:20:22.118792 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 28 13:20:22.119085 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 28 13:20:22.122610 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 28 13:20:22.122715 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 28 13:20:22.124984 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 28 13:20:22.133399 augenrules[1489]: No rules Oct 28 13:20:22.132462 systemd[1]: audit-rules.service: Deactivated successfully. Oct 28 13:20:22.133448 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 28 13:20:22.138363 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 28 13:20:22.143577 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 28 13:20:22.151442 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 28 13:20:22.153147 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 28 13:20:22.312880 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 28 13:20:22.315969 systemd[1]: Reached target time-set.target - System Time Set. Oct 28 13:20:22.339993 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 28 13:20:22.356116 systemd-networkd[1501]: lo: Link UP Oct 28 13:20:22.356142 systemd-networkd[1501]: lo: Gained carrier Oct 28 13:20:22.360753 systemd-networkd[1501]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 28 13:20:22.360834 systemd-networkd[1501]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 28 13:20:22.360846 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 28 13:20:22.362762 systemd-networkd[1501]: eth0: Link UP Oct 28 13:20:22.363404 systemd-networkd[1501]: eth0: Gained carrier Oct 28 13:20:22.363473 systemd-networkd[1501]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 28 13:20:22.368925 systemd[1]: Reached target network.target - Network. Oct 28 13:20:22.374732 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 28 13:20:22.398675 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 28 13:20:22.408620 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 28 13:20:22.411060 systemd-networkd[1501]: eth0: DHCPv4 address 10.0.0.148/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 28 13:20:22.413831 systemd-timesyncd[1487]: Network configuration changed, trying to establish connection. Oct 28 13:20:23.253206 systemd-timesyncd[1487]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 28 13:20:23.253266 systemd-resolved[1307]: Clock change detected. Flushing caches. Oct 28 13:20:23.254172 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 28 13:20:23.256090 systemd-timesyncd[1487]: Initial clock synchronization to Tue 2025-10-28 13:20:23.253105 UTC. Oct 28 13:20:23.274037 kernel: mousedev: PS/2 mouse device common for all mice Oct 28 13:20:23.278165 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 28 13:20:23.288101 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 28 13:20:23.289647 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 28 13:20:23.293039 kernel: ACPI: button: Power Button [PWRF] Oct 28 13:20:23.303021 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 28 13:20:23.309036 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 28 13:20:23.456055 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 13:20:23.600770 kernel: kvm_amd: TSC scaling supported Oct 28 13:20:23.600907 kernel: kvm_amd: Nested Virtualization enabled Oct 28 13:20:23.600934 kernel: kvm_amd: Nested Paging enabled Oct 28 13:20:23.600951 kernel: kvm_amd: LBR virtualization supported Oct 28 13:20:23.604140 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 28 13:20:23.604722 kernel: kvm_amd: Virtual GIF supported Oct 28 13:20:23.638039 kernel: EDAC MC: Ver: 3.0.0 Oct 28 13:20:23.648193 ldconfig[1453]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 28 13:20:23.659205 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 28 13:20:23.699245 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 13:20:23.705554 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 28 13:20:23.733299 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 28 13:20:23.735831 systemd[1]: Reached target sysinit.target - System Initialization. Oct 28 13:20:23.737883 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
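Around the systemd-timesyncd entries above, the wall-clock timestamps jump from 13:20:22.41 to 13:20:23.25 and systemd-resolved reports "Clock change detected", i.e. timesyncd stepped the clock after contacting 10.0.0.1:123. A small sketch (timestamps copied from the log entries above) of measuring that apparent jump:

# Compare the two log timestamps that straddle the timesyncd clock step.
# The apparent jump mixes real elapsed time with the step applied by
# systemd-timesyncd, so this bounds the correction rather than isolating it.
from datetime import datetime

FMT = "%H:%M:%S.%f"
before = datetime.strptime("13:20:22.413831", FMT)  # last entry before the step
after = datetime.strptime("13:20:23.253206", FMT)   # first entry after the step

print(f"apparent jump: {(after - before).total_seconds():.3f} s")  # ~0.839 s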
Oct 28 13:20:23.740119 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 28 13:20:23.742331 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 28 13:20:23.744591 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 28 13:20:23.746596 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 28 13:20:23.748807 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 28 13:20:23.751047 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 28 13:20:23.751094 systemd[1]: Reached target paths.target - Path Units. Oct 28 13:20:23.752679 systemd[1]: Reached target timers.target - Timer Units. Oct 28 13:20:23.755546 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 28 13:20:23.759464 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 28 13:20:23.763385 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 28 13:20:23.765532 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 28 13:20:23.767498 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 28 13:20:23.775139 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 28 13:20:23.777277 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 28 13:20:23.779873 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 28 13:20:23.782479 systemd[1]: Reached target sockets.target - Socket Units. Oct 28 13:20:23.783991 systemd[1]: Reached target basic.target - Basic System. Oct 28 13:20:23.785505 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 28 13:20:23.785544 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 28 13:20:23.786875 systemd[1]: Starting containerd.service - containerd container runtime... Oct 28 13:20:23.789690 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 28 13:20:23.792370 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 28 13:20:23.795227 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 28 13:20:23.797952 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 28 13:20:23.799587 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 28 13:20:23.800723 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 28 13:20:23.803954 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 28 13:20:23.807470 jq[1567]: false Oct 28 13:20:23.809200 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 28 13:20:23.814335 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Oct 28 13:20:23.818170 oslogin_cache_refresh[1569]: Refreshing passwd entry cache Oct 28 13:20:23.819320 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing passwd entry cache Oct 28 13:20:23.818306 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 28 13:20:23.821659 extend-filesystems[1568]: Found /dev/vda6 Oct 28 13:20:23.824153 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 28 13:20:23.825825 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 28 13:20:23.826407 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 28 13:20:23.827669 systemd[1]: Starting update-engine.service - Update Engine... Oct 28 13:20:23.830530 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting users, quitting Oct 28 13:20:23.830530 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 28 13:20:23.830530 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing group entry cache Oct 28 13:20:23.830344 oslogin_cache_refresh[1569]: Failure getting users, quitting Oct 28 13:20:23.830375 oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 28 13:20:23.830444 oslogin_cache_refresh[1569]: Refreshing group entry cache Oct 28 13:20:23.831912 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 28 13:20:23.834156 extend-filesystems[1568]: Found /dev/vda9 Oct 28 13:20:23.840304 extend-filesystems[1568]: Checking size of /dev/vda9 Oct 28 13:20:23.841497 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 28 13:20:23.843867 oslogin_cache_refresh[1569]: Failure getting groups, quitting Oct 28 13:20:23.845380 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting groups, quitting Oct 28 13:20:23.845380 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 28 13:20:23.842915 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 28 13:20:23.843882 oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 28 13:20:23.843182 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 28 13:20:23.845660 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 28 13:20:23.845933 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 28 13:20:23.848760 systemd[1]: motdgen.service: Deactivated successfully. Oct 28 13:20:23.849375 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 28 13:20:23.851497 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 28 13:20:23.851770 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Oct 28 13:20:23.853765 jq[1586]: true Oct 28 13:20:23.884035 jq[1597]: true Oct 28 13:20:24.036218 update_engine[1582]: I20251028 13:20:24.036113 1582 main.cc:92] Flatcar Update Engine starting Oct 28 13:20:24.048228 tar[1592]: linux-amd64/LICENSE Oct 28 13:20:24.049623 tar[1592]: linux-amd64/helm Oct 28 13:20:24.052479 extend-filesystems[1568]: Resized partition /dev/vda9 Oct 28 13:20:24.056750 extend-filesystems[1621]: resize2fs 1.47.3 (8-Jul-2025) Oct 28 13:20:24.072022 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 28 13:20:24.085512 dbus-daemon[1565]: [system] SELinux support is enabled Oct 28 13:20:24.088191 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 28 13:20:24.105674 update_engine[1582]: I20251028 13:20:24.105602 1582 update_check_scheduler.cc:74] Next update check in 10m4s Oct 28 13:20:24.105674 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 28 13:20:24.105715 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 28 13:20:24.108519 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 28 13:20:24.108540 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 28 13:20:24.110903 systemd[1]: Started update-engine.service - Update Engine. Oct 28 13:20:24.114616 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 28 13:20:24.131073 systemd-logind[1580]: Watching system buttons on /dev/input/event2 (Power Button) Oct 28 13:20:24.131104 systemd-logind[1580]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 28 13:20:24.131467 systemd-logind[1580]: New seat seat0. Oct 28 13:20:24.135636 systemd[1]: Started systemd-logind.service - User Login Management. Oct 28 13:20:24.141448 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 28 13:20:24.152999 bash[1634]: Updated "/home/core/.ssh/authorized_keys" Oct 28 13:20:24.156670 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 28 13:20:24.159855 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 28 13:20:24.227430 extend-filesystems[1621]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 28 13:20:24.227430 extend-filesystems[1621]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 28 13:20:24.227430 extend-filesystems[1621]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 28 13:20:24.233173 extend-filesystems[1568]: Resized filesystem in /dev/vda9 Oct 28 13:20:24.232231 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 28 13:20:24.234685 sshd_keygen[1591]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 28 13:20:24.232530 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 28 13:20:24.242039 locksmithd[1635]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 28 13:20:24.262028 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 28 13:20:24.267639 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 28 13:20:24.290311 systemd[1]: issuegen.service: Deactivated successfully. 
Oct 28 13:20:24.290674 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 28 13:20:24.295197 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 28 13:20:24.369367 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 28 13:20:24.375282 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 28 13:20:24.379286 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 28 13:20:24.381206 systemd[1]: Reached target getty.target - Login Prompts. Oct 28 13:20:24.417607 containerd[1596]: time="2025-10-28T13:20:24Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 28 13:20:24.418022 containerd[1596]: time="2025-10-28T13:20:24.417949591Z" level=info msg="starting containerd" revision=cb1076646aa3740577fafbf3d914198b7fe8e3f7 version=v2.1.4 Oct 28 13:20:24.534034 containerd[1596]: time="2025-10-28T13:20:24.533931829Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="18.575µs" Oct 28 13:20:24.534034 containerd[1596]: time="2025-10-28T13:20:24.534021196Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 28 13:20:24.534175 containerd[1596]: time="2025-10-28T13:20:24.534112007Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 28 13:20:24.534175 containerd[1596]: time="2025-10-28T13:20:24.534131223Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 28 13:20:24.534394 containerd[1596]: time="2025-10-28T13:20:24.534369700Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 28 13:20:24.534443 containerd[1596]: time="2025-10-28T13:20:24.534393194Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 28 13:20:24.534505 containerd[1596]: time="2025-10-28T13:20:24.534477001Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 28 13:20:24.534505 containerd[1596]: time="2025-10-28T13:20:24.534493803Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 28 13:20:24.534902 containerd[1596]: time="2025-10-28T13:20:24.534850031Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 28 13:20:24.534902 containerd[1596]: time="2025-10-28T13:20:24.534890877Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 28 13:20:24.534941 containerd[1596]: time="2025-10-28T13:20:24.534902249Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 28 13:20:24.534941 containerd[1596]: time="2025-10-28T13:20:24.534910003Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Oct 28 13:20:24.535184 containerd[1596]: time="2025-10-28T13:20:24.535157858Z" level=info 
msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Oct 28 13:20:24.535184 containerd[1596]: time="2025-10-28T13:20:24.535174489Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 28 13:20:24.535342 containerd[1596]: time="2025-10-28T13:20:24.535318048Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 28 13:20:24.535640 containerd[1596]: time="2025-10-28T13:20:24.535614795Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 28 13:20:24.535674 containerd[1596]: time="2025-10-28T13:20:24.535655511Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 28 13:20:24.535674 containerd[1596]: time="2025-10-28T13:20:24.535668806Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 28 13:20:24.535751 containerd[1596]: time="2025-10-28T13:20:24.535734950Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 28 13:20:24.536220 containerd[1596]: time="2025-10-28T13:20:24.536197287Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 28 13:20:24.536372 containerd[1596]: time="2025-10-28T13:20:24.536355494Z" level=info msg="metadata content store policy set" policy=shared Oct 28 13:20:24.544074 containerd[1596]: time="2025-10-28T13:20:24.544023995Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 28 13:20:24.544173 containerd[1596]: time="2025-10-28T13:20:24.544158027Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Oct 28 13:20:24.544409 containerd[1596]: time="2025-10-28T13:20:24.544392346Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 28 13:20:24.544487 containerd[1596]: time="2025-10-28T13:20:24.544472627Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 28 13:20:24.544541 containerd[1596]: time="2025-10-28T13:20:24.544529042Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 28 13:20:24.544593 containerd[1596]: time="2025-10-28T13:20:24.544581090Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 28 13:20:24.544640 containerd[1596]: time="2025-10-28T13:20:24.544629070Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 28 13:20:24.544695 containerd[1596]: time="2025-10-28T13:20:24.544683682Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 28 13:20:24.544754 containerd[1596]: time="2025-10-28T13:20:24.544742252Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 28 13:20:24.544815 containerd[1596]: time="2025-10-28T13:20:24.544802676Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 28 13:20:24.544868 containerd[1596]: 
time="2025-10-28T13:20:24.544856276Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 28 13:20:24.544931 containerd[1596]: time="2025-10-28T13:20:24.544918733Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 28 13:20:24.544995 containerd[1596]: time="2025-10-28T13:20:24.544982573Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 28 13:20:24.545212 containerd[1596]: time="2025-10-28T13:20:24.545194320Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 28 13:20:24.545285 containerd[1596]: time="2025-10-28T13:20:24.545271755Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 28 13:20:24.545351 containerd[1596]: time="2025-10-28T13:20:24.545338180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 28 13:20:24.545400 containerd[1596]: time="2025-10-28T13:20:24.545388855Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 28 13:20:24.545466 containerd[1596]: time="2025-10-28T13:20:24.545452334Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 28 13:20:24.545516 containerd[1596]: time="2025-10-28T13:20:24.545504341Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 28 13:20:24.545573 containerd[1596]: time="2025-10-28T13:20:24.545560747Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 28 13:20:24.545629 containerd[1596]: time="2025-10-28T13:20:24.545616351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 28 13:20:24.545684 containerd[1596]: time="2025-10-28T13:20:24.545672577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 28 13:20:24.545734 containerd[1596]: time="2025-10-28T13:20:24.545722751Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 28 13:20:24.545783 containerd[1596]: time="2025-10-28T13:20:24.545771963Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 28 13:20:24.545857 containerd[1596]: time="2025-10-28T13:20:24.545844349Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 28 13:20:24.545992 containerd[1596]: time="2025-10-28T13:20:24.545976667Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 28 13:20:24.546086 containerd[1596]: time="2025-10-28T13:20:24.546070553Z" level=info msg="Start snapshots syncer" Oct 28 13:20:24.546153 containerd[1596]: time="2025-10-28T13:20:24.546140034Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 28 13:20:24.546553 containerd[1596]: time="2025-10-28T13:20:24.546510298Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 28 13:20:24.548024 containerd[1596]: time="2025-10-28T13:20:24.546753384Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 28 13:20:24.548024 containerd[1596]: time="2025-10-28T13:20:24.546884269Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 28 13:20:24.548024 containerd[1596]: time="2025-10-28T13:20:24.547019864Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 28 13:20:24.548024 containerd[1596]: time="2025-10-28T13:20:24.547040011Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 28 13:20:24.548024 containerd[1596]: time="2025-10-28T13:20:24.547051874Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 28 13:20:24.548024 containerd[1596]: time="2025-10-28T13:20:24.547061532Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 28 13:20:24.548024 containerd[1596]: time="2025-10-28T13:20:24.547079405Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 28 13:20:24.548024 containerd[1596]: time="2025-10-28T13:20:24.547089815Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 28 13:20:24.548024 containerd[1596]: time="2025-10-28T13:20:24.547102468Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 28 13:20:24.548024 containerd[1596]: time="2025-10-28T13:20:24.547114521Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 28 
13:20:24.548024 containerd[1596]: time="2025-10-28T13:20:24.547140700Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 28 13:20:24.548024 containerd[1596]: time="2025-10-28T13:20:24.547187127Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 28 13:20:24.548024 containerd[1596]: time="2025-10-28T13:20:24.547199841Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 28 13:20:24.548024 containerd[1596]: time="2025-10-28T13:20:24.547210651Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 28 13:20:24.548476 containerd[1596]: time="2025-10-28T13:20:24.547220480Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 28 13:20:24.548476 containerd[1596]: time="2025-10-28T13:20:24.547228405Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 28 13:20:24.548476 containerd[1596]: time="2025-10-28T13:20:24.547238193Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 28 13:20:24.548476 containerd[1596]: time="2025-10-28T13:20:24.547250897Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 28 13:20:24.548476 containerd[1596]: time="2025-10-28T13:20:24.547289619Z" level=info msg="runtime interface created" Oct 28 13:20:24.548476 containerd[1596]: time="2025-10-28T13:20:24.547294980Z" level=info msg="created NRI interface" Oct 28 13:20:24.548476 containerd[1596]: time="2025-10-28T13:20:24.547303075Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 28 13:20:24.548476 containerd[1596]: time="2025-10-28T13:20:24.547314977Z" level=info msg="Connect containerd service" Oct 28 13:20:24.548476 containerd[1596]: time="2025-10-28T13:20:24.547344512Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 28 13:20:24.548980 containerd[1596]: time="2025-10-28T13:20:24.548926389Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 28 13:20:24.672547 tar[1592]: linux-amd64/README.md Oct 28 13:20:24.693405 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Oct 28 13:20:24.711040 containerd[1596]: time="2025-10-28T13:20:24.709981735Z" level=info msg="Start subscribing containerd event" Oct 28 13:20:24.711040 containerd[1596]: time="2025-10-28T13:20:24.710091972Z" level=info msg="Start recovering state" Oct 28 13:20:24.711040 containerd[1596]: time="2025-10-28T13:20:24.710261179Z" level=info msg="Start event monitor" Oct 28 13:20:24.711040 containerd[1596]: time="2025-10-28T13:20:24.710276718Z" level=info msg="Start cni network conf syncer for default" Oct 28 13:20:24.711040 containerd[1596]: time="2025-10-28T13:20:24.710762029Z" level=info msg="Start streaming server" Oct 28 13:20:24.711040 containerd[1596]: time="2025-10-28T13:20:24.710775824Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 28 13:20:24.711040 containerd[1596]: time="2025-10-28T13:20:24.710787276Z" level=info msg="runtime interface starting up..." Oct 28 13:20:24.711040 containerd[1596]: time="2025-10-28T13:20:24.710793177Z" level=info msg="starting plugins..." Oct 28 13:20:24.711040 containerd[1596]: time="2025-10-28T13:20:24.710814377Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 28 13:20:24.711040 containerd[1596]: time="2025-10-28T13:20:24.710972593Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 28 13:20:24.711335 containerd[1596]: time="2025-10-28T13:20:24.711061771Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 28 13:20:24.711335 containerd[1596]: time="2025-10-28T13:20:24.711191324Z" level=info msg="containerd successfully booted in 0.294502s" Oct 28 13:20:24.711372 systemd[1]: Started containerd.service - containerd container runtime. Oct 28 13:20:25.013704 systemd-networkd[1501]: eth0: Gained IPv6LL Oct 28 13:20:25.023654 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 28 13:20:25.028493 systemd[1]: Reached target network-online.target - Network is Online. Oct 28 13:20:25.035194 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 28 13:20:25.040914 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 13:20:25.046487 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 28 13:20:25.090632 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 28 13:20:25.090931 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 28 13:20:25.093835 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 28 13:20:25.094476 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 28 13:20:25.735783 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 28 13:20:25.738865 systemd[1]: Started sshd@0-10.0.0.148:22-10.0.0.1:53936.service - OpenSSH per-connection server daemon (10.0.0.1:53936). Oct 28 13:20:25.817383 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 53936 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:20:25.819616 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:20:25.826728 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 28 13:20:25.829712 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 28 13:20:25.838119 systemd-logind[1580]: New session 1 of user core. Oct 28 13:20:25.860095 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Oct 28 13:20:25.865811 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 28 13:20:25.914091 (systemd)[1708]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 28 13:20:25.916770 systemd-logind[1580]: New session c1 of user core. Oct 28 13:20:26.072511 systemd[1708]: Queued start job for default target default.target. Oct 28 13:20:26.181754 systemd[1708]: Created slice app.slice - User Application Slice. Oct 28 13:20:26.181787 systemd[1708]: Reached target paths.target - Paths. Oct 28 13:20:26.181833 systemd[1708]: Reached target timers.target - Timers. Oct 28 13:20:26.183883 systemd[1708]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 28 13:20:26.197722 systemd[1708]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 28 13:20:26.197867 systemd[1708]: Reached target sockets.target - Sockets. Oct 28 13:20:26.197909 systemd[1708]: Reached target basic.target - Basic System. Oct 28 13:20:26.197951 systemd[1708]: Reached target default.target - Main User Target. Oct 28 13:20:26.197985 systemd[1708]: Startup finished in 271ms. Oct 28 13:20:26.198413 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 28 13:20:26.202421 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 28 13:20:26.268042 systemd[1]: Started sshd@1-10.0.0.148:22-10.0.0.1:41696.service - OpenSSH per-connection server daemon (10.0.0.1:41696). Oct 28 13:20:26.349758 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 41696 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:20:26.351344 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:20:26.355933 systemd-logind[1580]: New session 2 of user core. Oct 28 13:20:26.358248 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 28 13:20:26.370151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 13:20:26.372454 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 28 13:20:26.374376 systemd[1]: Startup finished in 2.788s (kernel) + 6.932s (initrd) + 5.631s (userspace) = 15.352s. Oct 28 13:20:26.374909 (kubelet)[1728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 28 13:20:26.415354 sshd[1726]: Connection closed by 10.0.0.1 port 41696 Oct 28 13:20:26.416532 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Oct 28 13:20:26.427345 systemd[1]: sshd@1-10.0.0.148:22-10.0.0.1:41696.service: Deactivated successfully. Oct 28 13:20:26.429107 systemd[1]: session-2.scope: Deactivated successfully. Oct 28 13:20:26.429810 systemd-logind[1580]: Session 2 logged out. Waiting for processes to exit. Oct 28 13:20:26.432498 systemd[1]: Started sshd@2-10.0.0.148:22-10.0.0.1:41708.service - OpenSSH per-connection server daemon (10.0.0.1:41708). Oct 28 13:20:26.433033 systemd-logind[1580]: Removed session 2. Oct 28 13:20:26.485777 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 41708 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:20:26.487579 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:20:26.492583 systemd-logind[1580]: New session 3 of user core. Oct 28 13:20:26.508122 systemd[1]: Started session-3.scope - Session 3 of User core. 
Oct 28 13:20:26.645281 sshd[1742]: Connection closed by 10.0.0.1 port 41708 Oct 28 13:20:26.646160 sshd-session[1734]: pam_unix(sshd:session): session closed for user core Oct 28 13:20:26.654674 systemd[1]: sshd@2-10.0.0.148:22-10.0.0.1:41708.service: Deactivated successfully. Oct 28 13:20:26.656557 systemd[1]: session-3.scope: Deactivated successfully. Oct 28 13:20:26.657436 systemd-logind[1580]: Session 3 logged out. Waiting for processes to exit. Oct 28 13:20:26.660803 systemd[1]: Started sshd@3-10.0.0.148:22-10.0.0.1:41716.service - OpenSSH per-connection server daemon (10.0.0.1:41716). Oct 28 13:20:26.661621 systemd-logind[1580]: Removed session 3. Oct 28 13:20:26.725724 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 41716 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:20:26.727060 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:20:26.732596 systemd-logind[1580]: New session 4 of user core. Oct 28 13:20:26.740135 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 28 13:20:26.839578 sshd[1754]: Connection closed by 10.0.0.1 port 41716 Oct 28 13:20:26.841656 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Oct 28 13:20:26.852608 systemd[1]: sshd@3-10.0.0.148:22-10.0.0.1:41716.service: Deactivated successfully. Oct 28 13:20:26.854443 systemd[1]: session-4.scope: Deactivated successfully. Oct 28 13:20:26.855813 systemd-logind[1580]: Session 4 logged out. Waiting for processes to exit. Oct 28 13:20:26.859644 systemd[1]: Started sshd@4-10.0.0.148:22-10.0.0.1:41722.service - OpenSSH per-connection server daemon (10.0.0.1:41722). Oct 28 13:20:26.860419 systemd-logind[1580]: Removed session 4. Oct 28 13:20:26.915246 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 41722 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:20:26.916960 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:20:26.921406 systemd-logind[1580]: New session 5 of user core. Oct 28 13:20:26.934192 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 28 13:20:26.960047 kubelet[1728]: E1028 13:20:26.959958 1728 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 28 13:20:26.964860 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 28 13:20:26.965075 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 28 13:20:26.965487 systemd[1]: kubelet.service: Consumed 1.743s CPU time, 256.8M memory peak. Oct 28 13:20:27.002334 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 28 13:20:27.002721 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 13:20:27.031291 sudo[1768]: pam_unix(sudo:session): session closed for user root Oct 28 13:20:27.033277 sshd[1766]: Connection closed by 10.0.0.1 port 41722 Oct 28 13:20:27.033752 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Oct 28 13:20:27.047884 systemd[1]: sshd@4-10.0.0.148:22-10.0.0.1:41722.service: Deactivated successfully. Oct 28 13:20:27.050132 systemd[1]: session-5.scope: Deactivated successfully. 
Oct 28 13:20:27.050918 systemd-logind[1580]: Session 5 logged out. Waiting for processes to exit. Oct 28 13:20:27.054397 systemd[1]: Started sshd@5-10.0.0.148:22-10.0.0.1:41728.service - OpenSSH per-connection server daemon (10.0.0.1:41728). Oct 28 13:20:27.054948 systemd-logind[1580]: Removed session 5. Oct 28 13:20:27.116654 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 41728 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:20:27.118343 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:20:27.123173 systemd-logind[1580]: New session 6 of user core. Oct 28 13:20:27.130119 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 28 13:20:27.185978 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 28 13:20:27.186307 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 13:20:27.194312 sudo[1779]: pam_unix(sudo:session): session closed for user root Oct 28 13:20:27.202323 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 28 13:20:27.202638 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 13:20:27.214176 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 28 13:20:27.263137 augenrules[1801]: No rules Oct 28 13:20:27.264699 systemd[1]: audit-rules.service: Deactivated successfully. Oct 28 13:20:27.264979 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 28 13:20:27.266112 sudo[1778]: pam_unix(sudo:session): session closed for user root Oct 28 13:20:27.267932 sshd[1777]: Connection closed by 10.0.0.1 port 41728 Oct 28 13:20:27.268405 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Oct 28 13:20:27.282085 systemd[1]: sshd@5-10.0.0.148:22-10.0.0.1:41728.service: Deactivated successfully. Oct 28 13:20:27.284516 systemd[1]: session-6.scope: Deactivated successfully. Oct 28 13:20:27.285483 systemd-logind[1580]: Session 6 logged out. Waiting for processes to exit. Oct 28 13:20:27.287626 systemd-logind[1580]: Removed session 6. Oct 28 13:20:27.289674 systemd[1]: Started sshd@6-10.0.0.148:22-10.0.0.1:41738.service - OpenSSH per-connection server daemon (10.0.0.1:41738). Oct 28 13:20:27.356673 sshd[1810]: Accepted publickey for core from 10.0.0.1 port 41738 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:20:27.358578 sshd-session[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:20:27.364284 systemd-logind[1580]: New session 7 of user core. Oct 28 13:20:27.374169 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 28 13:20:27.432771 sudo[1814]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 28 13:20:27.433236 sudo[1814]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 13:20:28.192881 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Oct 28 13:20:28.210350 (dockerd)[1835]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 28 13:20:28.810318 dockerd[1835]: time="2025-10-28T13:20:28.810206510Z" level=info msg="Starting up" Oct 28 13:20:28.811222 dockerd[1835]: time="2025-10-28T13:20:28.811164938Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 28 13:20:28.832507 dockerd[1835]: time="2025-10-28T13:20:28.832449885Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 28 13:20:29.323135 dockerd[1835]: time="2025-10-28T13:20:29.323077567Z" level=info msg="Loading containers: start." Oct 28 13:20:29.336038 kernel: Initializing XFRM netlink socket Oct 28 13:20:29.614079 systemd-networkd[1501]: docker0: Link UP Oct 28 13:20:29.620673 dockerd[1835]: time="2025-10-28T13:20:29.620637668Z" level=info msg="Loading containers: done." Oct 28 13:20:29.639165 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3372293856-merged.mount: Deactivated successfully. Oct 28 13:20:29.642186 dockerd[1835]: time="2025-10-28T13:20:29.642144812Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 28 13:20:29.642260 dockerd[1835]: time="2025-10-28T13:20:29.642242164Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 28 13:20:29.642364 dockerd[1835]: time="2025-10-28T13:20:29.642344726Z" level=info msg="Initializing buildkit" Oct 28 13:20:29.673582 dockerd[1835]: time="2025-10-28T13:20:29.673535821Z" level=info msg="Completed buildkit initialization" Oct 28 13:20:29.680270 dockerd[1835]: time="2025-10-28T13:20:29.680228913Z" level=info msg="Daemon has completed initialization" Oct 28 13:20:29.680391 dockerd[1835]: time="2025-10-28T13:20:29.680336044Z" level=info msg="API listen on /run/docker.sock" Oct 28 13:20:29.680492 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 28 13:20:30.665701 containerd[1596]: time="2025-10-28T13:20:30.665607850Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 28 13:20:31.534632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4054872159.mount: Deactivated successfully. 
Oct 28 13:20:32.836607 containerd[1596]: time="2025-10-28T13:20:32.836522580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:20:32.837493 containerd[1596]: time="2025-10-28T13:20:32.837432577Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=25393931" Oct 28 13:20:32.838875 containerd[1596]: time="2025-10-28T13:20:32.838805141Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:20:32.841547 containerd[1596]: time="2025-10-28T13:20:32.841518419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:20:32.842390 containerd[1596]: time="2025-10-28T13:20:32.842348856Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.17664671s" Oct 28 13:20:32.842390 containerd[1596]: time="2025-10-28T13:20:32.842387329Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Oct 28 13:20:32.843100 containerd[1596]: time="2025-10-28T13:20:32.843059820Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 28 13:20:33.934719 containerd[1596]: time="2025-10-28T13:20:33.934635594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:20:33.935678 containerd[1596]: time="2025-10-28T13:20:33.935599902Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=0" Oct 28 13:20:33.937139 containerd[1596]: time="2025-10-28T13:20:33.937068086Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:20:33.940383 containerd[1596]: time="2025-10-28T13:20:33.940346294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:20:33.941241 containerd[1596]: time="2025-10-28T13:20:33.941215283Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.098117522s" Oct 28 13:20:33.941310 containerd[1596]: time="2025-10-28T13:20:33.941245490Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Oct 28 13:20:33.942031 
containerd[1596]: time="2025-10-28T13:20:33.941840927Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 28 13:20:34.915770 containerd[1596]: time="2025-10-28T13:20:34.915673340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:20:34.916876 containerd[1596]: time="2025-10-28T13:20:34.916794342Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=0" Oct 28 13:20:34.918228 containerd[1596]: time="2025-10-28T13:20:34.918170673Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:20:34.920965 containerd[1596]: time="2025-10-28T13:20:34.920918577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:20:34.922100 containerd[1596]: time="2025-10-28T13:20:34.922059717Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 980.183474ms" Oct 28 13:20:34.922100 containerd[1596]: time="2025-10-28T13:20:34.922088931Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Oct 28 13:20:34.922836 containerd[1596]: time="2025-10-28T13:20:34.922598627Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 28 13:20:36.386035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3015312189.mount: Deactivated successfully. 
Oct 28 13:20:36.618720 containerd[1596]: time="2025-10-28T13:20:36.618638628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:20:36.619499 containerd[1596]: time="2025-10-28T13:20:36.619438698Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25960977" Oct 28 13:20:36.620574 containerd[1596]: time="2025-10-28T13:20:36.620538190Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:20:36.622530 containerd[1596]: time="2025-10-28T13:20:36.622492956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:20:36.622946 containerd[1596]: time="2025-10-28T13:20:36.622890902Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.7002573s" Oct 28 13:20:36.622988 containerd[1596]: time="2025-10-28T13:20:36.622944072Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Oct 28 13:20:36.623544 containerd[1596]: time="2025-10-28T13:20:36.623519361Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 28 13:20:37.218325 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 28 13:20:37.224489 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 13:20:37.227983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2382765003.mount: Deactivated successfully. Oct 28 13:20:37.823140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 13:20:37.837370 (kubelet)[2152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 28 13:20:38.048114 kubelet[2152]: E1028 13:20:38.047993 2152 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 28 13:20:38.055168 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 28 13:20:38.055549 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 28 13:20:38.056096 systemd[1]: kubelet.service: Consumed 510ms CPU time, 110.4M memory peak. 
Oct 28 13:20:38.583503 containerd[1596]: time="2025-10-28T13:20:38.583426476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:20:38.584404 containerd[1596]: time="2025-10-28T13:20:38.584353385Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22156298" Oct 28 13:20:38.585838 containerd[1596]: time="2025-10-28T13:20:38.585800939Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:20:38.588581 containerd[1596]: time="2025-10-28T13:20:38.588536109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:20:38.589668 containerd[1596]: time="2025-10-28T13:20:38.589605324Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.966052079s" Oct 28 13:20:38.589668 containerd[1596]: time="2025-10-28T13:20:38.589663252Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Oct 28 13:20:38.590233 containerd[1596]: time="2025-10-28T13:20:38.590197905Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 28 13:20:39.129425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2902482037.mount: Deactivated successfully. 
Oct 28 13:20:39.136171 containerd[1596]: time="2025-10-28T13:20:39.136107601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:20:39.137198 containerd[1596]: time="2025-10-28T13:20:39.137113387Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Oct 28 13:20:39.138285 containerd[1596]: time="2025-10-28T13:20:39.138258244Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:20:39.140490 containerd[1596]: time="2025-10-28T13:20:39.140457538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:20:39.141165 containerd[1596]: time="2025-10-28T13:20:39.141140880Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 550.914672ms" Oct 28 13:20:39.141212 containerd[1596]: time="2025-10-28T13:20:39.141167309Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Oct 28 13:20:39.141774 containerd[1596]: time="2025-10-28T13:20:39.141569343Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 28 13:20:42.457862 containerd[1596]: time="2025-10-28T13:20:42.457793227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:20:42.458998 containerd[1596]: time="2025-10-28T13:20:42.458937904Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73504964" Oct 28 13:20:42.460424 containerd[1596]: time="2025-10-28T13:20:42.460365892Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:20:42.463589 containerd[1596]: time="2025-10-28T13:20:42.463551726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:20:42.464821 containerd[1596]: time="2025-10-28T13:20:42.464774309Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.323178667s" Oct 28 13:20:42.464821 containerd[1596]: time="2025-10-28T13:20:42.464811078Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Oct 28 13:20:46.481718 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 13:20:46.481898 systemd[1]: kubelet.service: Consumed 510ms CPU time, 110.4M memory peak. 
Oct 28 13:20:46.485186 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 13:20:46.517833 systemd[1]: Reload requested from client PID 2278 ('systemctl') (unit session-7.scope)... Oct 28 13:20:46.517851 systemd[1]: Reloading... Oct 28 13:20:46.629117 zram_generator::config[2324]: No configuration found. Oct 28 13:20:47.056500 systemd[1]: Reloading finished in 538 ms. Oct 28 13:20:47.118770 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 28 13:20:47.118881 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 28 13:20:47.119238 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 13:20:47.119282 systemd[1]: kubelet.service: Consumed 173ms CPU time, 98.3M memory peak. Oct 28 13:20:47.120909 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 13:20:47.318263 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 13:20:47.337426 (kubelet)[2369]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 28 13:20:47.378102 kubelet[2369]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 28 13:20:47.378102 kubelet[2369]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 28 13:20:47.378642 kubelet[2369]: I1028 13:20:47.378165 2369 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 28 13:20:47.941689 kubelet[2369]: I1028 13:20:47.941642 2369 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 28 13:20:47.941689 kubelet[2369]: I1028 13:20:47.941678 2369 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 28 13:20:47.943887 kubelet[2369]: I1028 13:20:47.943865 2369 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 28 13:20:47.943887 kubelet[2369]: I1028 13:20:47.943880 2369 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 28 13:20:47.944144 kubelet[2369]: I1028 13:20:47.944127 2369 server.go:956] "Client rotation is on, will bootstrap in background" Oct 28 13:20:48.079296 kubelet[2369]: E1028 13:20:48.079252 2369 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 28 13:20:48.079460 kubelet[2369]: I1028 13:20:48.079400 2369 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 28 13:20:48.085582 kubelet[2369]: I1028 13:20:48.085556 2369 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 28 13:20:48.090781 kubelet[2369]: I1028 13:20:48.090736 2369 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 28 13:20:48.091071 kubelet[2369]: I1028 13:20:48.091021 2369 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 28 13:20:48.091216 kubelet[2369]: I1028 13:20:48.091047 2369 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 28 13:20:48.091351 kubelet[2369]: I1028 13:20:48.091223 2369 topology_manager.go:138] "Creating topology manager with none policy" Oct 28 13:20:48.091351 kubelet[2369]: I1028 13:20:48.091233 2369 container_manager_linux.go:306] "Creating device plugin manager" Oct 28 13:20:48.091399 kubelet[2369]: I1028 13:20:48.091354 2369 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 28 13:20:48.093674 kubelet[2369]: I1028 13:20:48.093652 2369 state_mem.go:36] "Initialized new in-memory state store" Oct 28 13:20:48.093898 kubelet[2369]: I1028 13:20:48.093870 2369 kubelet.go:475] "Attempting to sync node with API server" Oct 28 13:20:48.093898 kubelet[2369]: I1028 13:20:48.093890 2369 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 28 13:20:48.093958 kubelet[2369]: I1028 13:20:48.093926 2369 kubelet.go:387] "Adding apiserver pod source" Oct 28 13:20:48.093989 kubelet[2369]: I1028 13:20:48.093963 2369 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 28 13:20:48.094899 kubelet[2369]: E1028 13:20:48.094846 2369 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 28 13:20:48.094899 kubelet[2369]: E1028 13:20:48.094849 2369 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 28 13:20:48.097399 kubelet[2369]: I1028 13:20:48.097359 2369 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Oct 28 13:20:48.097876 kubelet[2369]: I1028 13:20:48.097850 2369 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 28 13:20:48.097876 kubelet[2369]: I1028 13:20:48.097878 2369 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 28 13:20:48.097995 kubelet[2369]: W1028 13:20:48.097943 2369 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 28 13:20:48.101784 kubelet[2369]: I1028 13:20:48.101744 2369 server.go:1262] "Started kubelet" Oct 28 13:20:48.101991 kubelet[2369]: I1028 13:20:48.101919 2369 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 28 13:20:48.102126 kubelet[2369]: I1028 13:20:48.102102 2369 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 28 13:20:48.102930 kubelet[2369]: I1028 13:20:48.102814 2369 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 28 13:20:48.103901 kubelet[2369]: I1028 13:20:48.103680 2369 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 28 13:20:48.103901 kubelet[2369]: I1028 13:20:48.103753 2369 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 28 13:20:48.106313 kubelet[2369]: E1028 13:20:48.105180 2369 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.148:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.148:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1872aa4c1af3e0c5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-28 13:20:48.101712069 +0000 UTC m=+0.759439791,LastTimestamp:2025-10-28 13:20:48.101712069 +0000 UTC m=+0.759439791,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 28 13:20:48.106641 kubelet[2369]: I1028 13:20:48.106624 2369 server.go:310] "Adding debug handlers to kubelet server" Oct 28 13:20:48.106823 kubelet[2369]: I1028 13:20:48.106796 2369 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 28 13:20:48.109895 kubelet[2369]: I1028 13:20:48.109849 2369 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 28 13:20:48.110404 kubelet[2369]: E1028 13:20:48.110115 2369 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 13:20:48.110404 kubelet[2369]: I1028 13:20:48.110252 2369 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 28 13:20:48.110404 kubelet[2369]: I1028 13:20:48.110318 2369 reconciler.go:29] "Reconciler: 
start to sync state" Oct 28 13:20:48.110814 kubelet[2369]: E1028 13:20:48.110769 2369 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 28 13:20:48.110897 kubelet[2369]: E1028 13:20:48.110847 2369 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="200ms" Oct 28 13:20:48.111246 kubelet[2369]: I1028 13:20:48.111203 2369 factory.go:223] Registration of the systemd container factory successfully Oct 28 13:20:48.111392 kubelet[2369]: I1028 13:20:48.111294 2369 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 28 13:20:48.113060 kubelet[2369]: I1028 13:20:48.112621 2369 factory.go:223] Registration of the containerd container factory successfully Oct 28 13:20:48.117913 kubelet[2369]: E1028 13:20:48.117859 2369 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 28 13:20:48.130803 kubelet[2369]: I1028 13:20:48.130741 2369 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 28 13:20:48.131134 kubelet[2369]: I1028 13:20:48.131112 2369 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 28 13:20:48.132097 kubelet[2369]: I1028 13:20:48.132081 2369 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 28 13:20:48.132230 kubelet[2369]: I1028 13:20:48.132208 2369 state_mem.go:36] "Initialized new in-memory state store" Oct 28 13:20:48.132695 kubelet[2369]: I1028 13:20:48.132207 2369 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 28 13:20:48.132695 kubelet[2369]: I1028 13:20:48.132697 2369 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 28 13:20:48.132802 kubelet[2369]: I1028 13:20:48.132779 2369 kubelet.go:2427] "Starting kubelet main sync loop" Oct 28 13:20:48.132877 kubelet[2369]: E1028 13:20:48.132851 2369 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 28 13:20:48.133403 kubelet[2369]: E1028 13:20:48.133362 2369 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 28 13:20:48.134940 kubelet[2369]: I1028 13:20:48.134923 2369 policy_none.go:49] "None policy: Start" Oct 28 13:20:48.134992 kubelet[2369]: I1028 13:20:48.134946 2369 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 28 13:20:48.134992 kubelet[2369]: I1028 13:20:48.134957 2369 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 28 13:20:48.137205 kubelet[2369]: I1028 13:20:48.137186 2369 policy_none.go:47] "Start" Oct 28 13:20:48.141688 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 28 13:20:48.160542 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 28 13:20:48.185817 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 28 13:20:48.187077 kubelet[2369]: E1028 13:20:48.187032 2369 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 28 13:20:48.187372 kubelet[2369]: I1028 13:20:48.187243 2369 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 28 13:20:48.187372 kubelet[2369]: I1028 13:20:48.187260 2369 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 28 13:20:48.187735 kubelet[2369]: I1028 13:20:48.187650 2369 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 28 13:20:48.188680 kubelet[2369]: E1028 13:20:48.188653 2369 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 28 13:20:48.188745 kubelet[2369]: E1028 13:20:48.188707 2369 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 28 13:20:48.245601 systemd[1]: Created slice kubepods-burstable-pod3f7612e19bd2801b8f5e428d9dfe74d1.slice - libcontainer container kubepods-burstable-pod3f7612e19bd2801b8f5e428d9dfe74d1.slice. Oct 28 13:20:48.271662 kubelet[2369]: E1028 13:20:48.271605 2369 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 13:20:48.274958 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. 
Oct 28 13:20:48.289201 kubelet[2369]: I1028 13:20:48.289127 2369 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 13:20:48.289550 kubelet[2369]: E1028 13:20:48.289512 2369 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Oct 28 13:20:48.290423 kubelet[2369]: E1028 13:20:48.290394 2369 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 13:20:48.292446 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Oct 28 13:20:48.294274 kubelet[2369]: E1028 13:20:48.294240 2369 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 13:20:48.310704 kubelet[2369]: I1028 13:20:48.310661 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 28 13:20:48.310771 kubelet[2369]: I1028 13:20:48.310715 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f7612e19bd2801b8f5e428d9dfe74d1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f7612e19bd2801b8f5e428d9dfe74d1\") " pod="kube-system/kube-apiserver-localhost" Oct 28 13:20:48.310771 kubelet[2369]: I1028 13:20:48.310745 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:20:48.310818 kubelet[2369]: I1028 13:20:48.310771 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:20:48.310818 kubelet[2369]: I1028 13:20:48.310798 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:20:48.310875 kubelet[2369]: I1028 13:20:48.310822 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f7612e19bd2801b8f5e428d9dfe74d1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f7612e19bd2801b8f5e428d9dfe74d1\") " pod="kube-system/kube-apiserver-localhost" Oct 28 13:20:48.310902 kubelet[2369]: I1028 13:20:48.310873 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f7612e19bd2801b8f5e428d9dfe74d1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3f7612e19bd2801b8f5e428d9dfe74d1\") " pod="kube-system/kube-apiserver-localhost" Oct 28 13:20:48.310902 kubelet[2369]: I1028 13:20:48.310889 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:20:48.310941 kubelet[2369]: I1028 13:20:48.310918 2369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:20:48.311241 kubelet[2369]: E1028 13:20:48.311214 2369 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="400ms" Oct 28 13:20:48.491760 kubelet[2369]: I1028 13:20:48.491708 2369 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 13:20:48.492310 kubelet[2369]: E1028 13:20:48.492263 2369 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Oct 28 13:20:48.576103 kubelet[2369]: E1028 13:20:48.575994 2369 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:20:48.576835 containerd[1596]: time="2025-10-28T13:20:48.576785281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3f7612e19bd2801b8f5e428d9dfe74d1,Namespace:kube-system,Attempt:0,}" Oct 28 13:20:48.593750 kubelet[2369]: E1028 13:20:48.593715 2369 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:20:48.594172 containerd[1596]: time="2025-10-28T13:20:48.594141831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 28 13:20:48.598746 kubelet[2369]: E1028 13:20:48.598719 2369 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:20:48.599261 containerd[1596]: time="2025-10-28T13:20:48.599215786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Oct 28 13:20:48.712675 kubelet[2369]: E1028 13:20:48.712603 2369 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.148:6443: connect: connection refused" interval="800ms" Oct 28 
13:20:48.894219 kubelet[2369]: I1028 13:20:48.894121 2369 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 13:20:48.894641 kubelet[2369]: E1028 13:20:48.894601 2369 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.148:6443/api/v1/nodes\": dial tcp 10.0.0.148:6443: connect: connection refused" node="localhost" Oct 28 13:20:49.021199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1147708196.mount: Deactivated successfully. Oct 28 13:20:49.027349 containerd[1596]: time="2025-10-28T13:20:49.027288956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 13:20:49.029182 containerd[1596]: time="2025-10-28T13:20:49.029121513Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 28 13:20:49.032432 containerd[1596]: time="2025-10-28T13:20:49.032369665Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 13:20:49.033557 containerd[1596]: time="2025-10-28T13:20:49.033520813Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 13:20:49.035607 containerd[1596]: time="2025-10-28T13:20:49.035359181Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 28 13:20:49.036405 containerd[1596]: time="2025-10-28T13:20:49.036350600Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 13:20:49.037465 containerd[1596]: time="2025-10-28T13:20:49.037421358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 13:20:49.038178 containerd[1596]: time="2025-10-28T13:20:49.038144094Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 448.408145ms" Oct 28 13:20:49.038362 containerd[1596]: time="2025-10-28T13:20:49.038328229Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 28 13:20:49.040689 containerd[1596]: time="2025-10-28T13:20:49.040659501Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 439.034967ms" Oct 28 13:20:49.043110 containerd[1596]: time="2025-10-28T13:20:49.043082615Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", 
repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 444.820618ms" Oct 28 13:20:49.072739 containerd[1596]: time="2025-10-28T13:20:49.072696482Z" level=info msg="connecting to shim 527394bdd02f1c8764f0e6cb6979d0f0d0eca62752454da77532930c530870a7" address="unix:///run/containerd/s/fc134d8313d39ee49d0cf62c152dbfca5fab6a957f9a887e7bd18d1c7d1aac21" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:20:49.077937 containerd[1596]: time="2025-10-28T13:20:49.077847432Z" level=info msg="connecting to shim abcad7b2a6d622ef1311705df36d774bfbdaa2ac8374180be44d104385d7c71f" address="unix:///run/containerd/s/1730b4dbfb8f2867868bc54ba32c28a6ade4f1161313717f69d3f4f0ff13c099" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:20:49.104630 containerd[1596]: time="2025-10-28T13:20:49.104281084Z" level=info msg="connecting to shim cdbfefeb15abe6f7727163a026219d51c639eb3d68fe871fe9d57d72a6e96abf" address="unix:///run/containerd/s/8e00c205a786f46e8e260d0c77cdfe8f4a564d1e2678113677bff5e8dd3d399b" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:20:49.122171 systemd[1]: Started cri-containerd-527394bdd02f1c8764f0e6cb6979d0f0d0eca62752454da77532930c530870a7.scope - libcontainer container 527394bdd02f1c8764f0e6cb6979d0f0d0eca62752454da77532930c530870a7. Oct 28 13:20:49.123930 systemd[1]: Started cri-containerd-abcad7b2a6d622ef1311705df36d774bfbdaa2ac8374180be44d104385d7c71f.scope - libcontainer container abcad7b2a6d622ef1311705df36d774bfbdaa2ac8374180be44d104385d7c71f. Oct 28 13:20:49.146147 systemd[1]: Started cri-containerd-cdbfefeb15abe6f7727163a026219d51c639eb3d68fe871fe9d57d72a6e96abf.scope - libcontainer container cdbfefeb15abe6f7727163a026219d51c639eb3d68fe871fe9d57d72a6e96abf. 
Oct 28 13:20:49.197453 containerd[1596]: time="2025-10-28T13:20:49.197396138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3f7612e19bd2801b8f5e428d9dfe74d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"abcad7b2a6d622ef1311705df36d774bfbdaa2ac8374180be44d104385d7c71f\"" Oct 28 13:20:49.198759 kubelet[2369]: E1028 13:20:49.198735 2369 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:20:49.202491 containerd[1596]: time="2025-10-28T13:20:49.202433595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"527394bdd02f1c8764f0e6cb6979d0f0d0eca62752454da77532930c530870a7\"" Oct 28 13:20:49.203489 kubelet[2369]: E1028 13:20:49.203443 2369 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:20:49.206192 containerd[1596]: time="2025-10-28T13:20:49.206158020Z" level=info msg="CreateContainer within sandbox \"abcad7b2a6d622ef1311705df36d774bfbdaa2ac8374180be44d104385d7c71f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 28 13:20:49.206376 containerd[1596]: time="2025-10-28T13:20:49.206345521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdbfefeb15abe6f7727163a026219d51c639eb3d68fe871fe9d57d72a6e96abf\"" Oct 28 13:20:49.207083 kubelet[2369]: E1028 13:20:49.207050 2369 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:20:49.207678 containerd[1596]: time="2025-10-28T13:20:49.207648204Z" level=info msg="CreateContainer within sandbox \"527394bdd02f1c8764f0e6cb6979d0f0d0eca62752454da77532930c530870a7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 28 13:20:49.211512 containerd[1596]: time="2025-10-28T13:20:49.211448932Z" level=info msg="CreateContainer within sandbox \"cdbfefeb15abe6f7727163a026219d51c639eb3d68fe871fe9d57d72a6e96abf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 28 13:20:49.217427 kubelet[2369]: E1028 13:20:49.217391 2369 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.148:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 28 13:20:49.223605 containerd[1596]: time="2025-10-28T13:20:49.223564273Z" level=info msg="Container 64abec03dd0cb7b8e642deaccfd8957e6324b87f30fbcd5efe4f4572dffa1c71: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:20:49.226567 containerd[1596]: time="2025-10-28T13:20:49.226487024Z" level=info msg="Container 863e0ffdfec7698f048c99be842f8877ca0486fcee6036b3bc7be27d4ed521e0: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:20:49.232319 containerd[1596]: time="2025-10-28T13:20:49.232276561Z" level=info msg="Container da0c99011c2c8437400b3d5db9ce40634cba411133a07cc84a63bb853c6fbe10: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:20:49.239773 
containerd[1596]: time="2025-10-28T13:20:49.239710062Z" level=info msg="CreateContainer within sandbox \"527394bdd02f1c8764f0e6cb6979d0f0d0eca62752454da77532930c530870a7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"863e0ffdfec7698f048c99be842f8877ca0486fcee6036b3bc7be27d4ed521e0\"" Oct 28 13:20:49.240550 containerd[1596]: time="2025-10-28T13:20:49.240517426Z" level=info msg="StartContainer for \"863e0ffdfec7698f048c99be842f8877ca0486fcee6036b3bc7be27d4ed521e0\"" Oct 28 13:20:49.241885 containerd[1596]: time="2025-10-28T13:20:49.241821081Z" level=info msg="connecting to shim 863e0ffdfec7698f048c99be842f8877ca0486fcee6036b3bc7be27d4ed521e0" address="unix:///run/containerd/s/fc134d8313d39ee49d0cf62c152dbfca5fab6a957f9a887e7bd18d1c7d1aac21" protocol=ttrpc version=3 Oct 28 13:20:49.246369 containerd[1596]: time="2025-10-28T13:20:49.246316942Z" level=info msg="CreateContainer within sandbox \"abcad7b2a6d622ef1311705df36d774bfbdaa2ac8374180be44d104385d7c71f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"64abec03dd0cb7b8e642deaccfd8957e6324b87f30fbcd5efe4f4572dffa1c71\"" Oct 28 13:20:49.247019 containerd[1596]: time="2025-10-28T13:20:49.246970989Z" level=info msg="StartContainer for \"64abec03dd0cb7b8e642deaccfd8957e6324b87f30fbcd5efe4f4572dffa1c71\"" Oct 28 13:20:49.248024 containerd[1596]: time="2025-10-28T13:20:49.247966566Z" level=info msg="CreateContainer within sandbox \"cdbfefeb15abe6f7727163a026219d51c639eb3d68fe871fe9d57d72a6e96abf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"da0c99011c2c8437400b3d5db9ce40634cba411133a07cc84a63bb853c6fbe10\"" Oct 28 13:20:49.248334 containerd[1596]: time="2025-10-28T13:20:49.248288339Z" level=info msg="connecting to shim 64abec03dd0cb7b8e642deaccfd8957e6324b87f30fbcd5efe4f4572dffa1c71" address="unix:///run/containerd/s/1730b4dbfb8f2867868bc54ba32c28a6ade4f1161313717f69d3f4f0ff13c099" protocol=ttrpc version=3 Oct 28 13:20:49.249019 containerd[1596]: time="2025-10-28T13:20:49.248978303Z" level=info msg="StartContainer for \"da0c99011c2c8437400b3d5db9ce40634cba411133a07cc84a63bb853c6fbe10\"" Oct 28 13:20:49.250183 containerd[1596]: time="2025-10-28T13:20:49.250156723Z" level=info msg="connecting to shim da0c99011c2c8437400b3d5db9ce40634cba411133a07cc84a63bb853c6fbe10" address="unix:///run/containerd/s/8e00c205a786f46e8e260d0c77cdfe8f4a564d1e2678113677bff5e8dd3d399b" protocol=ttrpc version=3 Oct 28 13:20:49.265214 systemd[1]: Started cri-containerd-863e0ffdfec7698f048c99be842f8877ca0486fcee6036b3bc7be27d4ed521e0.scope - libcontainer container 863e0ffdfec7698f048c99be842f8877ca0486fcee6036b3bc7be27d4ed521e0. Oct 28 13:20:49.274312 systemd[1]: Started cri-containerd-64abec03dd0cb7b8e642deaccfd8957e6324b87f30fbcd5efe4f4572dffa1c71.scope - libcontainer container 64abec03dd0cb7b8e642deaccfd8957e6324b87f30fbcd5efe4f4572dffa1c71. Oct 28 13:20:49.275781 systemd[1]: Started cri-containerd-da0c99011c2c8437400b3d5db9ce40634cba411133a07cc84a63bb853c6fbe10.scope - libcontainer container da0c99011c2c8437400b3d5db9ce40634cba411133a07cc84a63bb853c6fbe10. 
Oct 28 13:20:49.336483 containerd[1596]: time="2025-10-28T13:20:49.336420959Z" level=info msg="StartContainer for \"863e0ffdfec7698f048c99be842f8877ca0486fcee6036b3bc7be27d4ed521e0\" returns successfully" Oct 28 13:20:49.337583 containerd[1596]: time="2025-10-28T13:20:49.337539427Z" level=info msg="StartContainer for \"da0c99011c2c8437400b3d5db9ce40634cba411133a07cc84a63bb853c6fbe10\" returns successfully" Oct 28 13:20:49.340028 containerd[1596]: time="2025-10-28T13:20:49.339374107Z" level=info msg="StartContainer for \"64abec03dd0cb7b8e642deaccfd8957e6324b87f30fbcd5efe4f4572dffa1c71\" returns successfully" Oct 28 13:20:49.344195 kubelet[2369]: E1028 13:20:49.343228 2369 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.148:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 28 13:20:49.697555 kubelet[2369]: I1028 13:20:49.697498 2369 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 13:20:50.142521 kubelet[2369]: E1028 13:20:50.142369 2369 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 13:20:50.142703 kubelet[2369]: E1028 13:20:50.142679 2369 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:20:50.144136 kubelet[2369]: E1028 13:20:50.144113 2369 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 13:20:50.144240 kubelet[2369]: E1028 13:20:50.144218 2369 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:20:50.148630 kubelet[2369]: E1028 13:20:50.148604 2369 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 13:20:50.148748 kubelet[2369]: E1028 13:20:50.148727 2369 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:20:51.150512 kubelet[2369]: E1028 13:20:51.150468 2369 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 13:20:51.150926 kubelet[2369]: E1028 13:20:51.150628 2369 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:20:51.151179 kubelet[2369]: E1028 13:20:51.151161 2369 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 13:20:51.151273 kubelet[2369]: E1028 13:20:51.151249 2369 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:20:51.642253 kubelet[2369]: E1028 13:20:51.642109 2369 nodelease.go:49] "Failed to get node when trying to set owner ref to the node 
lease" err="nodes \"localhost\" not found" node="localhost" Oct 28 13:20:51.719269 kubelet[2369]: I1028 13:20:51.719215 2369 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 28 13:20:51.815510 kubelet[2369]: I1028 13:20:51.815450 2369 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 28 13:20:51.820131 kubelet[2369]: E1028 13:20:51.820078 2369 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 28 13:20:51.820131 kubelet[2369]: I1028 13:20:51.820104 2369 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 28 13:20:51.821443 kubelet[2369]: E1028 13:20:51.821406 2369 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 28 13:20:51.821443 kubelet[2369]: I1028 13:20:51.821443 2369 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 28 13:20:51.824896 kubelet[2369]: E1028 13:20:51.824863 2369 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 28 13:20:52.096398 kubelet[2369]: I1028 13:20:52.096222 2369 apiserver.go:52] "Watching apiserver" Oct 28 13:20:52.110749 kubelet[2369]: I1028 13:20:52.110697 2369 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 28 13:20:53.769063 systemd[1]: Reload requested from client PID 2657 ('systemctl') (unit session-7.scope)... Oct 28 13:20:53.769083 systemd[1]: Reloading... Oct 28 13:20:53.858048 zram_generator::config[2704]: No configuration found. Oct 28 13:20:54.093855 systemd[1]: Reloading finished in 324 ms. Oct 28 13:20:54.125508 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 13:20:54.154664 systemd[1]: kubelet.service: Deactivated successfully. Oct 28 13:20:54.155039 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 13:20:54.155104 systemd[1]: kubelet.service: Consumed 1.015s CPU time, 127.1M memory peak. Oct 28 13:20:54.157407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 13:20:54.393680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 13:20:54.411278 (kubelet)[2746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 28 13:20:54.703894 kubelet[2746]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 28 13:20:54.703894 kubelet[2746]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 28 13:20:54.703894 kubelet[2746]: I1028 13:20:54.703817 2746 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 28 13:20:54.712633 kubelet[2746]: I1028 13:20:54.712579 2746 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 28 13:20:54.712633 kubelet[2746]: I1028 13:20:54.712609 2746 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 28 13:20:54.712801 kubelet[2746]: I1028 13:20:54.712654 2746 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 28 13:20:54.712801 kubelet[2746]: I1028 13:20:54.712663 2746 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 28 13:20:54.712920 kubelet[2746]: I1028 13:20:54.712896 2746 server.go:956] "Client rotation is on, will bootstrap in background" Oct 28 13:20:54.714089 kubelet[2746]: I1028 13:20:54.714064 2746 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 28 13:20:54.716203 kubelet[2746]: I1028 13:20:54.716180 2746 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 28 13:20:54.720109 kubelet[2746]: I1028 13:20:54.720085 2746 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 28 13:20:54.726415 kubelet[2746]: I1028 13:20:54.726382 2746 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Oct 28 13:20:54.726657 kubelet[2746]: I1028 13:20:54.726630 2746 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 28 13:20:54.726819 kubelet[2746]: I1028 13:20:54.726655 2746 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 28 13:20:54.726896 kubelet[2746]: I1028 13:20:54.726831 2746 topology_manager.go:138] "Creating topology manager with none 
policy" Oct 28 13:20:54.726896 kubelet[2746]: I1028 13:20:54.726841 2746 container_manager_linux.go:306] "Creating device plugin manager" Oct 28 13:20:54.726896 kubelet[2746]: I1028 13:20:54.726864 2746 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 28 13:20:54.727895 kubelet[2746]: I1028 13:20:54.727869 2746 state_mem.go:36] "Initialized new in-memory state store" Oct 28 13:20:54.728109 kubelet[2746]: I1028 13:20:54.728090 2746 kubelet.go:475] "Attempting to sync node with API server" Oct 28 13:20:54.728143 kubelet[2746]: I1028 13:20:54.728127 2746 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 28 13:20:54.728165 kubelet[2746]: I1028 13:20:54.728154 2746 kubelet.go:387] "Adding apiserver pod source" Oct 28 13:20:54.728192 kubelet[2746]: I1028 13:20:54.728177 2746 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 28 13:20:54.731219 kubelet[2746]: I1028 13:20:54.730604 2746 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Oct 28 13:20:54.732035 kubelet[2746]: I1028 13:20:54.731988 2746 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 28 13:20:54.732093 kubelet[2746]: I1028 13:20:54.732042 2746 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 28 13:20:54.736346 kubelet[2746]: I1028 13:20:54.736313 2746 server.go:1262] "Started kubelet" Oct 28 13:20:54.736782 kubelet[2746]: I1028 13:20:54.736745 2746 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 28 13:20:54.736968 kubelet[2746]: I1028 13:20:54.736927 2746 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 28 13:20:54.737023 kubelet[2746]: I1028 13:20:54.736975 2746 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 28 13:20:54.737284 kubelet[2746]: I1028 13:20:54.737252 2746 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 28 13:20:54.739475 kubelet[2746]: I1028 13:20:54.739237 2746 server.go:310] "Adding debug handlers to kubelet server" Oct 28 13:20:54.740465 kubelet[2746]: I1028 13:20:54.740433 2746 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 28 13:20:54.741142 kubelet[2746]: E1028 13:20:54.741099 2746 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 28 13:20:54.741971 kubelet[2746]: I1028 13:20:54.741916 2746 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 28 13:20:54.744234 kubelet[2746]: E1028 13:20:54.744214 2746 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 13:20:54.744465 kubelet[2746]: I1028 13:20:54.744453 2746 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 28 13:20:54.744805 kubelet[2746]: I1028 13:20:54.744791 2746 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 28 13:20:54.745121 kubelet[2746]: I1028 13:20:54.745084 2746 reconciler.go:29] "Reconciler: start to sync state" Oct 28 13:20:54.745986 kubelet[2746]: I1028 13:20:54.745966 2746 factory.go:223] Registration of the systemd container factory successfully Oct 28 13:20:54.746202 kubelet[2746]: I1028 13:20:54.746179 2746 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 28 13:20:54.748509 kubelet[2746]: I1028 13:20:54.748478 2746 factory.go:223] Registration of the containerd container factory successfully Oct 28 13:20:54.757848 kubelet[2746]: I1028 13:20:54.757792 2746 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 28 13:20:54.759674 kubelet[2746]: I1028 13:20:54.759299 2746 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 28 13:20:54.759674 kubelet[2746]: I1028 13:20:54.759324 2746 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 28 13:20:54.759674 kubelet[2746]: I1028 13:20:54.759349 2746 kubelet.go:2427] "Starting kubelet main sync loop" Oct 28 13:20:54.759674 kubelet[2746]: E1028 13:20:54.759396 2746 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 28 13:20:54.797059 kubelet[2746]: I1028 13:20:54.796696 2746 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 28 13:20:54.797059 kubelet[2746]: I1028 13:20:54.796722 2746 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 28 13:20:54.797059 kubelet[2746]: I1028 13:20:54.796743 2746 state_mem.go:36] "Initialized new in-memory state store" Oct 28 13:20:54.797059 kubelet[2746]: I1028 13:20:54.796874 2746 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 28 13:20:54.797059 kubelet[2746]: I1028 13:20:54.796884 2746 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 28 13:20:54.797059 kubelet[2746]: I1028 13:20:54.796902 2746 policy_none.go:49] "None policy: Start" Oct 28 13:20:54.797059 kubelet[2746]: I1028 13:20:54.796911 2746 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 28 13:20:54.797059 kubelet[2746]: I1028 13:20:54.796921 2746 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 28 13:20:54.797359 kubelet[2746]: I1028 13:20:54.797234 2746 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 28 13:20:54.797359 kubelet[2746]: I1028 13:20:54.797245 2746 policy_none.go:47] "Start" Oct 28 13:20:54.802687 kubelet[2746]: E1028 13:20:54.802121 2746 manager.go:513] "Failed to read data from checkpoint" err="checkpoint 
is not found" checkpoint="kubelet_internal_checkpoint" Oct 28 13:20:54.802687 kubelet[2746]: I1028 13:20:54.802327 2746 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 28 13:20:54.802687 kubelet[2746]: I1028 13:20:54.802338 2746 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 28 13:20:54.802687 kubelet[2746]: I1028 13:20:54.802538 2746 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 28 13:20:54.803952 kubelet[2746]: E1028 13:20:54.803487 2746 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 28 13:20:54.861055 kubelet[2746]: I1028 13:20:54.860444 2746 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 28 13:20:54.861055 kubelet[2746]: I1028 13:20:54.860554 2746 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 28 13:20:54.861055 kubelet[2746]: I1028 13:20:54.860758 2746 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 28 13:20:54.906854 kubelet[2746]: I1028 13:20:54.906804 2746 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 13:20:54.915505 kubelet[2746]: I1028 13:20:54.915473 2746 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 28 13:20:54.915624 kubelet[2746]: I1028 13:20:54.915569 2746 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 28 13:20:54.946463 kubelet[2746]: I1028 13:20:54.946429 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:20:54.946463 kubelet[2746]: I1028 13:20:54.946459 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:20:54.946613 kubelet[2746]: I1028 13:20:54.946487 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f7612e19bd2801b8f5e428d9dfe74d1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f7612e19bd2801b8f5e428d9dfe74d1\") " pod="kube-system/kube-apiserver-localhost" Oct 28 13:20:54.946613 kubelet[2746]: I1028 13:20:54.946504 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:20:54.946613 kubelet[2746]: I1028 13:20:54.946519 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:20:54.946613 kubelet[2746]: I1028 13:20:54.946534 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 13:20:54.946613 kubelet[2746]: I1028 13:20:54.946549 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 28 13:20:54.946737 kubelet[2746]: I1028 13:20:54.946564 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f7612e19bd2801b8f5e428d9dfe74d1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f7612e19bd2801b8f5e428d9dfe74d1\") " pod="kube-system/kube-apiserver-localhost" Oct 28 13:20:54.946737 kubelet[2746]: I1028 13:20:54.946580 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f7612e19bd2801b8f5e428d9dfe74d1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3f7612e19bd2801b8f5e428d9dfe74d1\") " pod="kube-system/kube-apiserver-localhost" Oct 28 13:20:55.168256 kubelet[2746]: E1028 13:20:55.168194 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:20:55.168742 kubelet[2746]: E1028 13:20:55.168704 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:20:55.168968 kubelet[2746]: E1028 13:20:55.168935 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:20:55.729455 kubelet[2746]: I1028 13:20:55.729402 2746 apiserver.go:52] "Watching apiserver" Oct 28 13:20:55.745255 kubelet[2746]: I1028 13:20:55.745212 2746 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 28 13:20:55.777806 kubelet[2746]: I1028 13:20:55.777768 2746 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 28 13:20:55.778499 kubelet[2746]: E1028 13:20:55.778016 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:20:55.778499 kubelet[2746]: E1028 13:20:55.778362 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:20:55.783098 kubelet[2746]: E1028 13:20:55.783055 2746 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 28 13:20:55.783818 kubelet[2746]: E1028 13:20:55.783769 
2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:20:55.807443 kubelet[2746]: I1028 13:20:55.807381 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.80736549 podStartE2EDuration="1.80736549s" podCreationTimestamp="2025-10-28 13:20:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 13:20:55.800756436 +0000 UTC m=+1.385359645" watchObservedRunningTime="2025-10-28 13:20:55.80736549 +0000 UTC m=+1.391968699" Oct 28 13:20:55.813619 kubelet[2746]: I1028 13:20:55.813567 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.813546512 podStartE2EDuration="1.813546512s" podCreationTimestamp="2025-10-28 13:20:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 13:20:55.807650895 +0000 UTC m=+1.392254104" watchObservedRunningTime="2025-10-28 13:20:55.813546512 +0000 UTC m=+1.398149721" Oct 28 13:20:55.821557 kubelet[2746]: I1028 13:20:55.821493 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.8214737479999998 podStartE2EDuration="1.821473748s" podCreationTimestamp="2025-10-28 13:20:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 13:20:55.813701062 +0000 UTC m=+1.398304271" watchObservedRunningTime="2025-10-28 13:20:55.821473748 +0000 UTC m=+1.406076957" Oct 28 13:20:56.779432 kubelet[2746]: E1028 13:20:56.779394 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:20:56.779959 kubelet[2746]: E1028 13:20:56.779565 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:20:59.600623 kubelet[2746]: E1028 13:20:59.600568 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:20:59.784771 kubelet[2746]: E1028 13:20:59.784730 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:00.551360 kubelet[2746]: I1028 13:21:00.551322 2746 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 28 13:21:00.551779 containerd[1596]: time="2025-10-28T13:21:00.551717781Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
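
[Editor's note] The pod_startup_latency_tracker entries above report, for kube-apiserver-localhost, podCreationTimestamp 2025-10-28 13:20:54 +0000 UTC, observedRunningTime 2025-10-28 13:20:55.80736549 +0000 UTC, and podStartSLOduration 1.80736549s. The small check below confirms the reported duration is simply the difference of the two timestamps; both values are copied from the log.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matches Go's default time.Time string format used in the log entry.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-10-28 13:20:54 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-10-28 13:20:55.80736549 +0000 UTC")
	if err != nil {
		panic(err)
	}

	fmt.Println(running.Sub(created)) // 1.80736549s, the logged podStartSLOduration
}
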
Oct 28 13:21:00.552257 kubelet[2746]: I1028 13:21:00.551973 2746 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 28 13:21:00.786115 kubelet[2746]: E1028 13:21:00.786077 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:01.641839 systemd[1]: Created slice kubepods-besteffort-podaadb429e_f1ff_49f0_8203_dc04e1300d49.slice - libcontainer container kubepods-besteffort-podaadb429e_f1ff_49f0_8203_dc04e1300d49.slice. Oct 28 13:21:01.689925 kubelet[2746]: I1028 13:21:01.689878 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aadb429e-f1ff-49f0-8203-dc04e1300d49-xtables-lock\") pod \"kube-proxy-lv87d\" (UID: \"aadb429e-f1ff-49f0-8203-dc04e1300d49\") " pod="kube-system/kube-proxy-lv87d" Oct 28 13:21:01.689925 kubelet[2746]: I1028 13:21:01.689910 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aadb429e-f1ff-49f0-8203-dc04e1300d49-lib-modules\") pod \"kube-proxy-lv87d\" (UID: \"aadb429e-f1ff-49f0-8203-dc04e1300d49\") " pod="kube-system/kube-proxy-lv87d" Oct 28 13:21:01.689925 kubelet[2746]: I1028 13:21:01.689926 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aadb429e-f1ff-49f0-8203-dc04e1300d49-kube-proxy\") pod \"kube-proxy-lv87d\" (UID: \"aadb429e-f1ff-49f0-8203-dc04e1300d49\") " pod="kube-system/kube-proxy-lv87d" Oct 28 13:21:01.690124 kubelet[2746]: I1028 13:21:01.689947 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkqmd\" (UniqueName: \"kubernetes.io/projected/aadb429e-f1ff-49f0-8203-dc04e1300d49-kube-api-access-hkqmd\") pod \"kube-proxy-lv87d\" (UID: \"aadb429e-f1ff-49f0-8203-dc04e1300d49\") " pod="kube-system/kube-proxy-lv87d" Oct 28 13:21:01.749384 systemd[1]: Created slice kubepods-besteffort-podb5da7472_f4c0_463a_b63d_ec188e427e33.slice - libcontainer container kubepods-besteffort-podb5da7472_f4c0_463a_b63d_ec188e427e33.slice. 
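
[Editor's note] The slice names created here embed the pod UID with its dashes replaced by underscores: pod UID aadb429e-f1ff-49f0-8203-dc04e1300d49 (from the kube-proxy-lv87d volume entries) becomes kubepods-besteffort-podaadb429e_f1ff_49f0_8203_dc04e1300d49.slice. A short sketch reproducing the names visible in these entries; it is illustrative of the logged strings, not kubelet source.

package main

import (
	"fmt"
	"strings"
)

// podSliceName builds the per-pod systemd slice name as it appears in this
// log: kubepods-<qos>-pod<uid-with-underscores>.slice.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UID taken from the kube-proxy-lv87d reconciler entries above.
	fmt.Println(podSliceName("besteffort", "aadb429e-f1ff-49f0-8203-dc04e1300d49"))
	// prints: kubepods-besteffort-podaadb429e_f1ff_49f0_8203_dc04e1300d49.slice
}
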
Oct 28 13:21:01.791112 kubelet[2746]: I1028 13:21:01.791062 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg6dg\" (UniqueName: \"kubernetes.io/projected/b5da7472-f4c0-463a-b63d-ec188e427e33-kube-api-access-qg6dg\") pod \"tigera-operator-65cdcdfd6d-78pn4\" (UID: \"b5da7472-f4c0-463a-b63d-ec188e427e33\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-78pn4" Oct 28 13:21:01.791112 kubelet[2746]: I1028 13:21:01.791095 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b5da7472-f4c0-463a-b63d-ec188e427e33-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-78pn4\" (UID: \"b5da7472-f4c0-463a-b63d-ec188e427e33\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-78pn4" Oct 28 13:21:01.957151 kubelet[2746]: E1028 13:21:01.956999 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:01.957925 containerd[1596]: time="2025-10-28T13:21:01.957769491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lv87d,Uid:aadb429e-f1ff-49f0-8203-dc04e1300d49,Namespace:kube-system,Attempt:0,}" Oct 28 13:21:01.974653 containerd[1596]: time="2025-10-28T13:21:01.974606716Z" level=info msg="connecting to shim 3fa13eb5ea9c815603cf528ac9233da89c81c17f444568d26161d5964edc46cc" address="unix:///run/containerd/s/fbbfcc9f82422d5424e091fc2632375d1f74736f18d46ce3397bb2188023c3ac" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:21:02.009138 systemd[1]: Started cri-containerd-3fa13eb5ea9c815603cf528ac9233da89c81c17f444568d26161d5964edc46cc.scope - libcontainer container 3fa13eb5ea9c815603cf528ac9233da89c81c17f444568d26161d5964edc46cc. 
Oct 28 13:21:02.032709 containerd[1596]: time="2025-10-28T13:21:02.032654478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lv87d,Uid:aadb429e-f1ff-49f0-8203-dc04e1300d49,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fa13eb5ea9c815603cf528ac9233da89c81c17f444568d26161d5964edc46cc\"" Oct 28 13:21:02.033567 kubelet[2746]: E1028 13:21:02.033539 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:02.038606 containerd[1596]: time="2025-10-28T13:21:02.038574243Z" level=info msg="CreateContainer within sandbox \"3fa13eb5ea9c815603cf528ac9233da89c81c17f444568d26161d5964edc46cc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 28 13:21:02.051062 containerd[1596]: time="2025-10-28T13:21:02.051024950Z" level=info msg="Container 65d09e853484171bc1aeabe2720b5cf598abce6d9746efe476743842d888d799: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:21:02.056081 containerd[1596]: time="2025-10-28T13:21:02.056056043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-78pn4,Uid:b5da7472-f4c0-463a-b63d-ec188e427e33,Namespace:tigera-operator,Attempt:0,}" Oct 28 13:21:02.060056 containerd[1596]: time="2025-10-28T13:21:02.060018310Z" level=info msg="CreateContainer within sandbox \"3fa13eb5ea9c815603cf528ac9233da89c81c17f444568d26161d5964edc46cc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"65d09e853484171bc1aeabe2720b5cf598abce6d9746efe476743842d888d799\"" Oct 28 13:21:02.060607 containerd[1596]: time="2025-10-28T13:21:02.060572065Z" level=info msg="StartContainer for \"65d09e853484171bc1aeabe2720b5cf598abce6d9746efe476743842d888d799\"" Oct 28 13:21:02.061896 containerd[1596]: time="2025-10-28T13:21:02.061868064Z" level=info msg="connecting to shim 65d09e853484171bc1aeabe2720b5cf598abce6d9746efe476743842d888d799" address="unix:///run/containerd/s/fbbfcc9f82422d5424e091fc2632375d1f74736f18d46ce3397bb2188023c3ac" protocol=ttrpc version=3 Oct 28 13:21:02.084170 systemd[1]: Started cri-containerd-65d09e853484171bc1aeabe2720b5cf598abce6d9746efe476743842d888d799.scope - libcontainer container 65d09e853484171bc1aeabe2720b5cf598abce6d9746efe476743842d888d799. Oct 28 13:21:02.103134 containerd[1596]: time="2025-10-28T13:21:02.103078762Z" level=info msg="connecting to shim da7cb97885a601e9c630acbb12804fc9331317aa93f5d854964f63759198911c" address="unix:///run/containerd/s/aebcd47c39ad3620cabb7b89e0295e72180a2fba386780fa9307e2bec8acd524" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:21:02.135091 systemd[1]: Started cri-containerd-da7cb97885a601e9c630acbb12804fc9331317aa93f5d854964f63759198911c.scope - libcontainer container da7cb97885a601e9c630acbb12804fc9331317aa93f5d854964f63759198911c. 
Oct 28 13:21:02.159102 containerd[1596]: time="2025-10-28T13:21:02.158074498Z" level=info msg="StartContainer for \"65d09e853484171bc1aeabe2720b5cf598abce6d9746efe476743842d888d799\" returns successfully" Oct 28 13:21:02.195374 containerd[1596]: time="2025-10-28T13:21:02.195314392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-78pn4,Uid:b5da7472-f4c0-463a-b63d-ec188e427e33,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"da7cb97885a601e9c630acbb12804fc9331317aa93f5d854964f63759198911c\"" Oct 28 13:21:02.198234 containerd[1596]: time="2025-10-28T13:21:02.198186904Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 28 13:21:02.792785 kubelet[2746]: E1028 13:21:02.792744 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:02.802026 kubelet[2746]: I1028 13:21:02.801034 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lv87d" podStartSLOduration=1.800990626 podStartE2EDuration="1.800990626s" podCreationTimestamp="2025-10-28 13:21:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 13:21:02.800665797 +0000 UTC m=+8.385269006" watchObservedRunningTime="2025-10-28 13:21:02.800990626 +0000 UTC m=+8.385593835" Oct 28 13:21:02.801952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4084760119.mount: Deactivated successfully. Oct 28 13:21:03.322376 kubelet[2746]: E1028 13:21:03.322344 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:03.404576 kubelet[2746]: E1028 13:21:03.404533 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:03.794435 kubelet[2746]: E1028 13:21:03.794088 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:03.794435 kubelet[2746]: E1028 13:21:03.794288 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:04.076306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3112438301.mount: Deactivated successfully. 
Oct 28 13:21:04.796630 kubelet[2746]: E1028 13:21:04.796592 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:04.848840 containerd[1596]: time="2025-10-28T13:21:04.848775134Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:04.849620 containerd[1596]: time="2025-10-28T13:21:04.849571839Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23559564" Oct 28 13:21:04.850761 containerd[1596]: time="2025-10-28T13:21:04.850724901Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:04.852808 containerd[1596]: time="2025-10-28T13:21:04.852775399Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:04.853411 containerd[1596]: time="2025-10-28T13:21:04.853372114Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.655153779s" Oct 28 13:21:04.853411 containerd[1596]: time="2025-10-28T13:21:04.853404195Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 28 13:21:04.857978 containerd[1596]: time="2025-10-28T13:21:04.857894011Z" level=info msg="CreateContainer within sandbox \"da7cb97885a601e9c630acbb12804fc9331317aa93f5d854964f63759198911c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 28 13:21:04.866806 containerd[1596]: time="2025-10-28T13:21:04.866763915Z" level=info msg="Container 79a2831ddab587c839fe4f0c8b59f326dbb961a2e4cbce79f4f23f61a53308bf: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:21:04.872856 containerd[1596]: time="2025-10-28T13:21:04.872809910Z" level=info msg="CreateContainer within sandbox \"da7cb97885a601e9c630acbb12804fc9331317aa93f5d854964f63759198911c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"79a2831ddab587c839fe4f0c8b59f326dbb961a2e4cbce79f4f23f61a53308bf\"" Oct 28 13:21:04.873363 containerd[1596]: time="2025-10-28T13:21:04.873330710Z" level=info msg="StartContainer for \"79a2831ddab587c839fe4f0c8b59f326dbb961a2e4cbce79f4f23f61a53308bf\"" Oct 28 13:21:04.874229 containerd[1596]: time="2025-10-28T13:21:04.874193961Z" level=info msg="connecting to shim 79a2831ddab587c839fe4f0c8b59f326dbb961a2e4cbce79f4f23f61a53308bf" address="unix:///run/containerd/s/aebcd47c39ad3620cabb7b89e0295e72180a2fba386780fa9307e2bec8acd524" protocol=ttrpc version=3 Oct 28 13:21:04.895138 systemd[1]: Started cri-containerd-79a2831ddab587c839fe4f0c8b59f326dbb961a2e4cbce79f4f23f61a53308bf.scope - libcontainer container 79a2831ddab587c839fe4f0c8b59f326dbb961a2e4cbce79f4f23f61a53308bf. 
Oct 28 13:21:04.932513 containerd[1596]: time="2025-10-28T13:21:04.932454077Z" level=info msg="StartContainer for \"79a2831ddab587c839fe4f0c8b59f326dbb961a2e4cbce79f4f23f61a53308bf\" returns successfully" Oct 28 13:21:05.810178 kubelet[2746]: I1028 13:21:05.810094 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-78pn4" podStartSLOduration=2.152667934 podStartE2EDuration="4.810074692s" podCreationTimestamp="2025-10-28 13:21:01 +0000 UTC" firstStartedPulling="2025-10-28 13:21:02.196864245 +0000 UTC m=+7.781467454" lastFinishedPulling="2025-10-28 13:21:04.854271003 +0000 UTC m=+10.438874212" observedRunningTime="2025-10-28 13:21:05.809972999 +0000 UTC m=+11.394576208" watchObservedRunningTime="2025-10-28 13:21:05.810074692 +0000 UTC m=+11.394677901" Oct 28 13:21:09.024398 update_engine[1582]: I20251028 13:21:09.024283 1582 update_attempter.cc:509] Updating boot flags... Oct 28 13:21:09.944309 sudo[1814]: pam_unix(sudo:session): session closed for user root Oct 28 13:21:09.947971 sshd[1813]: Connection closed by 10.0.0.1 port 41738 Oct 28 13:21:09.948947 sshd-session[1810]: pam_unix(sshd:session): session closed for user core Oct 28 13:21:09.955464 systemd[1]: sshd@6-10.0.0.148:22-10.0.0.1:41738.service: Deactivated successfully. Oct 28 13:21:09.958837 systemd[1]: session-7.scope: Deactivated successfully. Oct 28 13:21:09.959612 systemd[1]: session-7.scope: Consumed 7.153s CPU time, 226.8M memory peak. Oct 28 13:21:09.961332 systemd-logind[1580]: Session 7 logged out. Waiting for processes to exit. Oct 28 13:21:09.964716 systemd-logind[1580]: Removed session 7. Oct 28 13:21:14.059310 systemd[1]: Created slice kubepods-besteffort-pod7a764046_712f_4bc3_9a26_183f9e213fa7.slice - libcontainer container kubepods-besteffort-pod7a764046_712f_4bc3_9a26_183f9e213fa7.slice. Oct 28 13:21:14.072276 kubelet[2746]: I1028 13:21:14.072208 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7a764046-712f-4bc3-9a26-183f9e213fa7-typha-certs\") pod \"calico-typha-8679499c6d-whm4m\" (UID: \"7a764046-712f-4bc3-9a26-183f9e213fa7\") " pod="calico-system/calico-typha-8679499c6d-whm4m" Oct 28 13:21:14.072276 kubelet[2746]: I1028 13:21:14.072260 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a764046-712f-4bc3-9a26-183f9e213fa7-tigera-ca-bundle\") pod \"calico-typha-8679499c6d-whm4m\" (UID: \"7a764046-712f-4bc3-9a26-183f9e213fa7\") " pod="calico-system/calico-typha-8679499c6d-whm4m" Oct 28 13:21:14.072789 kubelet[2746]: I1028 13:21:14.072303 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d4sp\" (UniqueName: \"kubernetes.io/projected/7a764046-712f-4bc3-9a26-183f9e213fa7-kube-api-access-6d4sp\") pod \"calico-typha-8679499c6d-whm4m\" (UID: \"7a764046-712f-4bc3-9a26-183f9e213fa7\") " pod="calico-system/calico-typha-8679499c6d-whm4m" Oct 28 13:21:14.109586 systemd[1]: Created slice kubepods-besteffort-pod90f300b6_d8b1_456a_90e1_a004fa423520.slice - libcontainer container kubepods-besteffort-pod90f300b6_d8b1_456a_90e1_a004fa423520.slice. 
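Editor's note: the pod_startup_latency_tracker entry for tigera-operator-65cdcdfd6d-78pn4 above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pulling window (lastFinishedPulling minus firstStartedPulling). A small, self-contained Go check of that arithmetic using the timestamps copied from the log; the program is an editorial sketch, not kubelet's tracker:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		// Layout matches Go's time.Time.String() format used in the log fields.
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2025-10-28 13:21:01 +0000 UTC")
	firstPull := parse("2025-10-28 13:21:02.196864245 +0000 UTC")
	lastPull := parse("2025-10-28 13:21:04.854271003 +0000 UTC")
	observed := parse("2025-10-28 13:21:05.810074692 +0000 UTC")

	e2e := observed.Sub(created)         // matches podStartE2EDuration=4.810074692s
	slo := e2e - lastPull.Sub(firstPull) // matches podStartSLOduration=2.152667934s
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}
```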
Oct 28 13:21:14.172930 kubelet[2746]: I1028 13:21:14.172879 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/90f300b6-d8b1-456a-90e1-a004fa423520-node-certs\") pod \"calico-node-llxwh\" (UID: \"90f300b6-d8b1-456a-90e1-a004fa423520\") " pod="calico-system/calico-node-llxwh" Oct 28 13:21:14.172930 kubelet[2746]: I1028 13:21:14.172925 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90f300b6-d8b1-456a-90e1-a004fa423520-xtables-lock\") pod \"calico-node-llxwh\" (UID: \"90f300b6-d8b1-456a-90e1-a004fa423520\") " pod="calico-system/calico-node-llxwh" Oct 28 13:21:14.172930 kubelet[2746]: I1028 13:21:14.172940 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90f300b6-d8b1-456a-90e1-a004fa423520-lib-modules\") pod \"calico-node-llxwh\" (UID: \"90f300b6-d8b1-456a-90e1-a004fa423520\") " pod="calico-system/calico-node-llxwh" Oct 28 13:21:14.173164 kubelet[2746]: I1028 13:21:14.172955 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90f300b6-d8b1-456a-90e1-a004fa423520-tigera-ca-bundle\") pod \"calico-node-llxwh\" (UID: \"90f300b6-d8b1-456a-90e1-a004fa423520\") " pod="calico-system/calico-node-llxwh" Oct 28 13:21:14.173164 kubelet[2746]: I1028 13:21:14.173035 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/90f300b6-d8b1-456a-90e1-a004fa423520-cni-bin-dir\") pod \"calico-node-llxwh\" (UID: \"90f300b6-d8b1-456a-90e1-a004fa423520\") " pod="calico-system/calico-node-llxwh" Oct 28 13:21:14.173164 kubelet[2746]: I1028 13:21:14.173077 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/90f300b6-d8b1-456a-90e1-a004fa423520-var-run-calico\") pod \"calico-node-llxwh\" (UID: \"90f300b6-d8b1-456a-90e1-a004fa423520\") " pod="calico-system/calico-node-llxwh" Oct 28 13:21:14.173164 kubelet[2746]: I1028 13:21:14.173117 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/90f300b6-d8b1-456a-90e1-a004fa423520-flexvol-driver-host\") pod \"calico-node-llxwh\" (UID: \"90f300b6-d8b1-456a-90e1-a004fa423520\") " pod="calico-system/calico-node-llxwh" Oct 28 13:21:14.173164 kubelet[2746]: I1028 13:21:14.173132 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/90f300b6-d8b1-456a-90e1-a004fa423520-var-lib-calico\") pod \"calico-node-llxwh\" (UID: \"90f300b6-d8b1-456a-90e1-a004fa423520\") " pod="calico-system/calico-node-llxwh" Oct 28 13:21:14.173334 kubelet[2746]: I1028 13:21:14.173172 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86wzn\" (UniqueName: \"kubernetes.io/projected/90f300b6-d8b1-456a-90e1-a004fa423520-kube-api-access-86wzn\") pod \"calico-node-llxwh\" (UID: \"90f300b6-d8b1-456a-90e1-a004fa423520\") " pod="calico-system/calico-node-llxwh" Oct 28 13:21:14.173334 kubelet[2746]: I1028 13:21:14.173189 2746 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/90f300b6-d8b1-456a-90e1-a004fa423520-cni-net-dir\") pod \"calico-node-llxwh\" (UID: \"90f300b6-d8b1-456a-90e1-a004fa423520\") " pod="calico-system/calico-node-llxwh" Oct 28 13:21:14.173334 kubelet[2746]: I1028 13:21:14.173243 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/90f300b6-d8b1-456a-90e1-a004fa423520-policysync\") pod \"calico-node-llxwh\" (UID: \"90f300b6-d8b1-456a-90e1-a004fa423520\") " pod="calico-system/calico-node-llxwh" Oct 28 13:21:14.173334 kubelet[2746]: I1028 13:21:14.173262 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/90f300b6-d8b1-456a-90e1-a004fa423520-cni-log-dir\") pod \"calico-node-llxwh\" (UID: \"90f300b6-d8b1-456a-90e1-a004fa423520\") " pod="calico-system/calico-node-llxwh" Oct 28 13:21:14.243764 kubelet[2746]: E1028 13:21:14.243695 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6nhgp" podUID="fce65538-7c69-4e45-8cae-c289c79a1bdb" Oct 28 13:21:14.274029 kubelet[2746]: I1028 13:21:14.273968 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpdvz\" (UniqueName: \"kubernetes.io/projected/fce65538-7c69-4e45-8cae-c289c79a1bdb-kube-api-access-jpdvz\") pod \"csi-node-driver-6nhgp\" (UID: \"fce65538-7c69-4e45-8cae-c289c79a1bdb\") " pod="calico-system/csi-node-driver-6nhgp" Oct 28 13:21:14.274282 kubelet[2746]: I1028 13:21:14.274171 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fce65538-7c69-4e45-8cae-c289c79a1bdb-socket-dir\") pod \"csi-node-driver-6nhgp\" (UID: \"fce65538-7c69-4e45-8cae-c289c79a1bdb\") " pod="calico-system/csi-node-driver-6nhgp" Oct 28 13:21:14.274282 kubelet[2746]: I1028 13:21:14.274230 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fce65538-7c69-4e45-8cae-c289c79a1bdb-varrun\") pod \"csi-node-driver-6nhgp\" (UID: \"fce65538-7c69-4e45-8cae-c289c79a1bdb\") " pod="calico-system/csi-node-driver-6nhgp" Oct 28 13:21:14.274454 kubelet[2746]: I1028 13:21:14.274393 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fce65538-7c69-4e45-8cae-c289c79a1bdb-kubelet-dir\") pod \"csi-node-driver-6nhgp\" (UID: \"fce65538-7c69-4e45-8cae-c289c79a1bdb\") " pod="calico-system/csi-node-driver-6nhgp" Oct 28 13:21:14.274454 kubelet[2746]: I1028 13:21:14.274434 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fce65538-7c69-4e45-8cae-c289c79a1bdb-registration-dir\") pod \"csi-node-driver-6nhgp\" (UID: \"fce65538-7c69-4e45-8cae-c289c79a1bdb\") " pod="calico-system/csi-node-driver-6nhgp" Oct 28 13:21:14.275839 kubelet[2746]: E1028 13:21:14.275806 2746 driver-call.go:262] Failed to unmarshal output for command: init, 
output: "", error: unexpected end of JSON input Oct 28 13:21:14.275839 kubelet[2746]: W1028 13:21:14.275828 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.275920 kubelet[2746]: E1028 13:21:14.275852 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:14.276291 kubelet[2746]: E1028 13:21:14.276260 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.276291 kubelet[2746]: W1028 13:21:14.276277 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.276291 kubelet[2746]: E1028 13:21:14.276287 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:14.277946 kubelet[2746]: E1028 13:21:14.277910 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.277946 kubelet[2746]: W1028 13:21:14.277933 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.279544 kubelet[2746]: E1028 13:21:14.277952 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:14.282813 kubelet[2746]: E1028 13:21:14.282793 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.282813 kubelet[2746]: W1028 13:21:14.282809 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.282901 kubelet[2746]: E1028 13:21:14.282820 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:14.368841 kubelet[2746]: E1028 13:21:14.368633 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:14.369607 containerd[1596]: time="2025-10-28T13:21:14.369570834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8679499c6d-whm4m,Uid:7a764046-712f-4bc3-9a26-183f9e213fa7,Namespace:calico-system,Attempt:0,}" Oct 28 13:21:14.375642 kubelet[2746]: E1028 13:21:14.375603 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.375642 kubelet[2746]: W1028 13:21:14.375637 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.375751 kubelet[2746]: E1028 13:21:14.375670 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:14.376091 kubelet[2746]: E1028 13:21:14.376044 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.376091 kubelet[2746]: W1028 13:21:14.376060 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.376091 kubelet[2746]: E1028 13:21:14.376072 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:14.376454 kubelet[2746]: E1028 13:21:14.376420 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.376454 kubelet[2746]: W1028 13:21:14.376449 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.376519 kubelet[2746]: E1028 13:21:14.376478 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:14.376792 kubelet[2746]: E1028 13:21:14.376774 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.376792 kubelet[2746]: W1028 13:21:14.376786 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.376848 kubelet[2746]: E1028 13:21:14.376797 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:14.377047 kubelet[2746]: E1028 13:21:14.377030 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.377047 kubelet[2746]: W1028 13:21:14.377041 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.377047 kubelet[2746]: E1028 13:21:14.377050 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:14.377272 kubelet[2746]: E1028 13:21:14.377256 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.377272 kubelet[2746]: W1028 13:21:14.377266 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.377332 kubelet[2746]: E1028 13:21:14.377276 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:14.377477 kubelet[2746]: E1028 13:21:14.377448 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.377477 kubelet[2746]: W1028 13:21:14.377460 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.377477 kubelet[2746]: E1028 13:21:14.377468 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:14.377701 kubelet[2746]: E1028 13:21:14.377684 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.377701 kubelet[2746]: W1028 13:21:14.377696 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.377755 kubelet[2746]: E1028 13:21:14.377708 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:14.377966 kubelet[2746]: E1028 13:21:14.377942 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.377966 kubelet[2746]: W1028 13:21:14.377957 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.377966 kubelet[2746]: E1028 13:21:14.377969 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:14.378220 kubelet[2746]: E1028 13:21:14.378199 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.378220 kubelet[2746]: W1028 13:21:14.378213 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.378220 kubelet[2746]: E1028 13:21:14.378222 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:14.378478 kubelet[2746]: E1028 13:21:14.378460 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.378478 kubelet[2746]: W1028 13:21:14.378472 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.378523 kubelet[2746]: E1028 13:21:14.378481 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:14.378683 kubelet[2746]: E1028 13:21:14.378667 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.378683 kubelet[2746]: W1028 13:21:14.378679 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.378741 kubelet[2746]: E1028 13:21:14.378690 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:14.378912 kubelet[2746]: E1028 13:21:14.378884 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.378912 kubelet[2746]: W1028 13:21:14.378895 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.378912 kubelet[2746]: E1028 13:21:14.378903 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:14.379101 kubelet[2746]: E1028 13:21:14.379085 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.379101 kubelet[2746]: W1028 13:21:14.379096 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.379157 kubelet[2746]: E1028 13:21:14.379105 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:14.379333 kubelet[2746]: E1028 13:21:14.379317 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.379333 kubelet[2746]: W1028 13:21:14.379328 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.379387 kubelet[2746]: E1028 13:21:14.379336 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:14.379637 kubelet[2746]: E1028 13:21:14.379611 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.379637 kubelet[2746]: W1028 13:21:14.379622 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.379637 kubelet[2746]: E1028 13:21:14.379631 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:14.379837 kubelet[2746]: E1028 13:21:14.379821 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.379837 kubelet[2746]: W1028 13:21:14.379832 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.379879 kubelet[2746]: E1028 13:21:14.379840 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:14.380065 kubelet[2746]: E1028 13:21:14.380031 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.380065 kubelet[2746]: W1028 13:21:14.380042 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.380065 kubelet[2746]: E1028 13:21:14.380050 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:14.380254 kubelet[2746]: E1028 13:21:14.380238 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.380254 kubelet[2746]: W1028 13:21:14.380249 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.380297 kubelet[2746]: E1028 13:21:14.380258 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:14.380449 kubelet[2746]: E1028 13:21:14.380433 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.380449 kubelet[2746]: W1028 13:21:14.380446 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.380500 kubelet[2746]: E1028 13:21:14.380455 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:14.380664 kubelet[2746]: E1028 13:21:14.380648 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.380664 kubelet[2746]: W1028 13:21:14.380659 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.380713 kubelet[2746]: E1028 13:21:14.380667 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:14.381019 kubelet[2746]: E1028 13:21:14.380968 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.381019 kubelet[2746]: W1028 13:21:14.380990 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.381179 kubelet[2746]: E1028 13:21:14.381057 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:14.381338 kubelet[2746]: E1028 13:21:14.381319 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.381338 kubelet[2746]: W1028 13:21:14.381335 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.381409 kubelet[2746]: E1028 13:21:14.381346 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:14.381610 kubelet[2746]: E1028 13:21:14.381591 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.381610 kubelet[2746]: W1028 13:21:14.381606 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.381694 kubelet[2746]: E1028 13:21:14.381619 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:14.381857 kubelet[2746]: E1028 13:21:14.381838 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.381857 kubelet[2746]: W1028 13:21:14.381851 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.381857 kubelet[2746]: E1028 13:21:14.381860 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:14.388624 kubelet[2746]: E1028 13:21:14.388574 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:14.388624 kubelet[2746]: W1028 13:21:14.388589 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:14.388624 kubelet[2746]: E1028 13:21:14.388601 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:14.393053 containerd[1596]: time="2025-10-28T13:21:14.393011561Z" level=info msg="connecting to shim 7ff05fc3f596b7db63a550521b28e8cf06318d4c573811f90dcc819f5d9b6faa" address="unix:///run/containerd/s/f61212b987091d415e515506a70164bbce1bf3fdc63fc158ebed0a6654c56210" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:21:14.415467 kubelet[2746]: E1028 13:21:14.415400 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:14.415849 containerd[1596]: time="2025-10-28T13:21:14.415815095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-llxwh,Uid:90f300b6-d8b1-456a-90e1-a004fa423520,Namespace:calico-system,Attempt:0,}" Oct 28 13:21:14.418487 systemd[1]: Started cri-containerd-7ff05fc3f596b7db63a550521b28e8cf06318d4c573811f90dcc819f5d9b6faa.scope - libcontainer container 7ff05fc3f596b7db63a550521b28e8cf06318d4c573811f90dcc819f5d9b6faa. Oct 28 13:21:14.436809 containerd[1596]: time="2025-10-28T13:21:14.436768775Z" level=info msg="connecting to shim f19af96750e2e5eb9359a95a8168123d47d02962f759dd42bba6255c762e1bea" address="unix:///run/containerd/s/26675ea7c41064c3824858db2c7ad20af87e2adbeb044fe3f9d6df0cc83f0555" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:21:14.459215 systemd[1]: Started cri-containerd-f19af96750e2e5eb9359a95a8168123d47d02962f759dd42bba6255c762e1bea.scope - libcontainer container f19af96750e2e5eb9359a95a8168123d47d02962f759dd42bba6255c762e1bea. 
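Editor's note: the long run of FlexVolume failures above is kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds before anything has installed it; the exec fails ("executable file not found in $PATH"), the driver output is empty, and decoding an empty string is what yields "unexpected end of JSON input". A minimal Go reproduction of that pair of errors, under the assumption that the probe simply feeds the empty output to encoding/json; the status type is illustrative, not kubelet's:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus stands in for the JSON status a FlexVolume driver would print.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message"`
}

func main() {
	// Calling a driver binary that is not installed fails much as the log shows.
	out, err := exec.Command("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init").Output()
	fmt.Println("exec error:", err) // e.g. file-not-found from the missing binary

	// With no output, decoding produces the "unexpected end of JSON input" seen above.
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("unmarshal error:", err)
	}
}
```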
Oct 28 13:21:14.474532 containerd[1596]: time="2025-10-28T13:21:14.474490860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8679499c6d-whm4m,Uid:7a764046-712f-4bc3-9a26-183f9e213fa7,Namespace:calico-system,Attempt:0,} returns sandbox id \"7ff05fc3f596b7db63a550521b28e8cf06318d4c573811f90dcc819f5d9b6faa\"" Oct 28 13:21:14.475362 kubelet[2746]: E1028 13:21:14.475340 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:14.476895 containerd[1596]: time="2025-10-28T13:21:14.476844826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 28 13:21:14.493560 containerd[1596]: time="2025-10-28T13:21:14.493503814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-llxwh,Uid:90f300b6-d8b1-456a-90e1-a004fa423520,Namespace:calico-system,Attempt:0,} returns sandbox id \"f19af96750e2e5eb9359a95a8168123d47d02962f759dd42bba6255c762e1bea\"" Oct 28 13:21:14.494209 kubelet[2746]: E1028 13:21:14.494174 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:15.760758 kubelet[2746]: E1028 13:21:15.760690 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6nhgp" podUID="fce65538-7c69-4e45-8cae-c289c79a1bdb" Oct 28 13:21:15.991307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount639122210.mount: Deactivated successfully. 
Oct 28 13:21:17.246064 containerd[1596]: time="2025-10-28T13:21:17.245989629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:17.246808 containerd[1596]: time="2025-10-28T13:21:17.246751676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Oct 28 13:21:17.247882 containerd[1596]: time="2025-10-28T13:21:17.247851532Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:17.249840 containerd[1596]: time="2025-10-28T13:21:17.249792343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:17.250553 containerd[1596]: time="2025-10-28T13:21:17.250515367Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.773377317s" Oct 28 13:21:17.250553 containerd[1596]: time="2025-10-28T13:21:17.250545434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 28 13:21:17.255809 containerd[1596]: time="2025-10-28T13:21:17.255758929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 28 13:21:17.270500 containerd[1596]: time="2025-10-28T13:21:17.270466496Z" level=info msg="CreateContainer within sandbox \"7ff05fc3f596b7db63a550521b28e8cf06318d4c573811f90dcc819f5d9b6faa\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 28 13:21:17.277861 containerd[1596]: time="2025-10-28T13:21:17.277807394Z" level=info msg="Container 5b69310c226992643b2f7a189a367a83f3b67f35953f28d8c249f62d331e39c3: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:21:17.285098 containerd[1596]: time="2025-10-28T13:21:17.285057782Z" level=info msg="CreateContainer within sandbox \"7ff05fc3f596b7db63a550521b28e8cf06318d4c573811f90dcc819f5d9b6faa\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5b69310c226992643b2f7a189a367a83f3b67f35953f28d8c249f62d331e39c3\"" Oct 28 13:21:17.285488 containerd[1596]: time="2025-10-28T13:21:17.285457437Z" level=info msg="StartContainer for \"5b69310c226992643b2f7a189a367a83f3b67f35953f28d8c249f62d331e39c3\"" Oct 28 13:21:17.286863 containerd[1596]: time="2025-10-28T13:21:17.286501567Z" level=info msg="connecting to shim 5b69310c226992643b2f7a189a367a83f3b67f35953f28d8c249f62d331e39c3" address="unix:///run/containerd/s/f61212b987091d415e515506a70164bbce1bf3fdc63fc158ebed0a6654c56210" protocol=ttrpc version=3 Oct 28 13:21:17.307151 systemd[1]: Started cri-containerd-5b69310c226992643b2f7a189a367a83f3b67f35953f28d8c249f62d331e39c3.scope - libcontainer container 5b69310c226992643b2f7a189a367a83f3b67f35953f28d8c249f62d331e39c3. 
Oct 28 13:21:17.357681 containerd[1596]: time="2025-10-28T13:21:17.357635354Z" level=info msg="StartContainer for \"5b69310c226992643b2f7a189a367a83f3b67f35953f28d8c249f62d331e39c3\" returns successfully" Oct 28 13:21:17.763292 kubelet[2746]: E1028 13:21:17.763233 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6nhgp" podUID="fce65538-7c69-4e45-8cae-c289c79a1bdb" Oct 28 13:21:17.829941 kubelet[2746]: E1028 13:21:17.829899 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:17.844703 kubelet[2746]: I1028 13:21:17.844382 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8679499c6d-whm4m" podStartSLOduration=2.066606705 podStartE2EDuration="4.844365138s" podCreationTimestamp="2025-10-28 13:21:13 +0000 UTC" firstStartedPulling="2025-10-28 13:21:14.476467594 +0000 UTC m=+20.061070803" lastFinishedPulling="2025-10-28 13:21:17.254226027 +0000 UTC m=+22.838829236" observedRunningTime="2025-10-28 13:21:17.843578855 +0000 UTC m=+23.428182064" watchObservedRunningTime="2025-10-28 13:21:17.844365138 +0000 UTC m=+23.428968347" Oct 28 13:21:17.876472 kubelet[2746]: E1028 13:21:17.876425 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.876472 kubelet[2746]: W1028 13:21:17.876451 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.876472 kubelet[2746]: E1028 13:21:17.876473 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.876747 kubelet[2746]: E1028 13:21:17.876724 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.876747 kubelet[2746]: W1028 13:21:17.876736 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.876747 kubelet[2746]: E1028 13:21:17.876745 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.876923 kubelet[2746]: E1028 13:21:17.876903 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.876923 kubelet[2746]: W1028 13:21:17.876914 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.876923 kubelet[2746]: E1028 13:21:17.876922 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:17.877217 kubelet[2746]: E1028 13:21:17.877193 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.877217 kubelet[2746]: W1028 13:21:17.877205 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.877217 kubelet[2746]: E1028 13:21:17.877213 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.877416 kubelet[2746]: E1028 13:21:17.877380 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.877416 kubelet[2746]: W1028 13:21:17.877387 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.877416 kubelet[2746]: E1028 13:21:17.877395 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.877580 kubelet[2746]: E1028 13:21:17.877558 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.877580 kubelet[2746]: W1028 13:21:17.877569 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.877580 kubelet[2746]: E1028 13:21:17.877577 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.877744 kubelet[2746]: E1028 13:21:17.877730 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.877744 kubelet[2746]: W1028 13:21:17.877740 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.877789 kubelet[2746]: E1028 13:21:17.877748 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.877923 kubelet[2746]: E1028 13:21:17.877902 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.877923 kubelet[2746]: W1028 13:21:17.877913 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.877923 kubelet[2746]: E1028 13:21:17.877921 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:17.878128 kubelet[2746]: E1028 13:21:17.878113 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.878128 kubelet[2746]: W1028 13:21:17.878124 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.878185 kubelet[2746]: E1028 13:21:17.878133 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.878309 kubelet[2746]: E1028 13:21:17.878295 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.878309 kubelet[2746]: W1028 13:21:17.878305 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.878365 kubelet[2746]: E1028 13:21:17.878313 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.878480 kubelet[2746]: E1028 13:21:17.878465 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.878480 kubelet[2746]: W1028 13:21:17.878475 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.878520 kubelet[2746]: E1028 13:21:17.878484 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.878659 kubelet[2746]: E1028 13:21:17.878643 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.878659 kubelet[2746]: W1028 13:21:17.878654 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.878716 kubelet[2746]: E1028 13:21:17.878662 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.878831 kubelet[2746]: E1028 13:21:17.878817 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.878831 kubelet[2746]: W1028 13:21:17.878827 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.878871 kubelet[2746]: E1028 13:21:17.878836 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:17.879010 kubelet[2746]: E1028 13:21:17.878986 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.879010 kubelet[2746]: W1028 13:21:17.878996 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.879064 kubelet[2746]: E1028 13:21:17.879019 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.879195 kubelet[2746]: E1028 13:21:17.879181 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.879195 kubelet[2746]: W1028 13:21:17.879191 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.879243 kubelet[2746]: E1028 13:21:17.879199 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.902389 kubelet[2746]: E1028 13:21:17.902361 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.902389 kubelet[2746]: W1028 13:21:17.902378 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.902389 kubelet[2746]: E1028 13:21:17.902397 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.903147 kubelet[2746]: E1028 13:21:17.902700 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.903147 kubelet[2746]: W1028 13:21:17.902728 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.903147 kubelet[2746]: E1028 13:21:17.902753 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.903147 kubelet[2746]: E1028 13:21:17.903058 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.903147 kubelet[2746]: W1028 13:21:17.903069 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.903147 kubelet[2746]: E1028 13:21:17.903082 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:17.903385 kubelet[2746]: E1028 13:21:17.903349 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.903385 kubelet[2746]: W1028 13:21:17.903368 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.903385 kubelet[2746]: E1028 13:21:17.903380 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.903597 kubelet[2746]: E1028 13:21:17.903550 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.903597 kubelet[2746]: W1028 13:21:17.903558 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.903597 kubelet[2746]: E1028 13:21:17.903567 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.903791 kubelet[2746]: E1028 13:21:17.903774 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.903791 kubelet[2746]: W1028 13:21:17.903786 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.903858 kubelet[2746]: E1028 13:21:17.903794 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.904054 kubelet[2746]: E1028 13:21:17.904037 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.904054 kubelet[2746]: W1028 13:21:17.904050 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.904115 kubelet[2746]: E1028 13:21:17.904060 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.904410 kubelet[2746]: E1028 13:21:17.904391 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.904410 kubelet[2746]: W1028 13:21:17.904406 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.904473 kubelet[2746]: E1028 13:21:17.904416 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:17.904643 kubelet[2746]: E1028 13:21:17.904611 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.904643 kubelet[2746]: W1028 13:21:17.904625 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.904643 kubelet[2746]: E1028 13:21:17.904640 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.904878 kubelet[2746]: E1028 13:21:17.904863 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.904909 kubelet[2746]: W1028 13:21:17.904884 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.904909 kubelet[2746]: E1028 13:21:17.904895 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.905141 kubelet[2746]: E1028 13:21:17.905122 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.905141 kubelet[2746]: W1028 13:21:17.905136 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.905216 kubelet[2746]: E1028 13:21:17.905148 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.905435 kubelet[2746]: E1028 13:21:17.905414 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.905435 kubelet[2746]: W1028 13:21:17.905427 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.905435 kubelet[2746]: E1028 13:21:17.905437 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.905642 kubelet[2746]: E1028 13:21:17.905627 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.905642 kubelet[2746]: W1028 13:21:17.905637 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.905692 kubelet[2746]: E1028 13:21:17.905646 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:17.906066 kubelet[2746]: E1028 13:21:17.906037 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.906112 kubelet[2746]: W1028 13:21:17.906062 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.906112 kubelet[2746]: E1028 13:21:17.906086 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.906300 kubelet[2746]: E1028 13:21:17.906284 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.906300 kubelet[2746]: W1028 13:21:17.906294 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.906343 kubelet[2746]: E1028 13:21:17.906303 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.906535 kubelet[2746]: E1028 13:21:17.906518 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.906535 kubelet[2746]: W1028 13:21:17.906530 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.906583 kubelet[2746]: E1028 13:21:17.906538 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.906824 kubelet[2746]: E1028 13:21:17.906805 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.906824 kubelet[2746]: W1028 13:21:17.906819 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.906880 kubelet[2746]: E1028 13:21:17.906829 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:17.907037 kubelet[2746]: E1028 13:21:17.907020 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:17.907037 kubelet[2746]: W1028 13:21:17.907031 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:17.907080 kubelet[2746]: E1028 13:21:17.907039 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:18.831937 kubelet[2746]: I1028 13:21:18.831885 2746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 28 13:21:18.832385 kubelet[2746]: E1028 13:21:18.832255 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:18.886143 kubelet[2746]: E1028 13:21:18.886089 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.886143 kubelet[2746]: W1028 13:21:18.886114 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.886143 kubelet[2746]: E1028 13:21:18.886148 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.886408 kubelet[2746]: E1028 13:21:18.886382 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.886408 kubelet[2746]: W1028 13:21:18.886394 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.886408 kubelet[2746]: E1028 13:21:18.886402 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.886597 kubelet[2746]: E1028 13:21:18.886571 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.886597 kubelet[2746]: W1028 13:21:18.886582 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.886597 kubelet[2746]: E1028 13:21:18.886590 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.886784 kubelet[2746]: E1028 13:21:18.886758 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.886784 kubelet[2746]: W1028 13:21:18.886770 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.886784 kubelet[2746]: E1028 13:21:18.886778 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:18.886967 kubelet[2746]: E1028 13:21:18.886941 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.886967 kubelet[2746]: W1028 13:21:18.886953 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.886967 kubelet[2746]: E1028 13:21:18.886961 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.887178 kubelet[2746]: E1028 13:21:18.887156 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.887178 kubelet[2746]: W1028 13:21:18.887167 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.887178 kubelet[2746]: E1028 13:21:18.887175 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.887360 kubelet[2746]: E1028 13:21:18.887341 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.887360 kubelet[2746]: W1028 13:21:18.887352 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.887360 kubelet[2746]: E1028 13:21:18.887361 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.887536 kubelet[2746]: E1028 13:21:18.887519 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.887536 kubelet[2746]: W1028 13:21:18.887529 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.887536 kubelet[2746]: E1028 13:21:18.887537 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.887782 kubelet[2746]: E1028 13:21:18.887765 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.887782 kubelet[2746]: W1028 13:21:18.887776 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.887833 kubelet[2746]: E1028 13:21:18.887784 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:18.887952 kubelet[2746]: E1028 13:21:18.887937 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.887952 kubelet[2746]: W1028 13:21:18.887947 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.887993 kubelet[2746]: E1028 13:21:18.887955 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.888173 kubelet[2746]: E1028 13:21:18.888156 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.888173 kubelet[2746]: W1028 13:21:18.888166 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.888173 kubelet[2746]: E1028 13:21:18.888175 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.888365 kubelet[2746]: E1028 13:21:18.888350 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.888365 kubelet[2746]: W1028 13:21:18.888361 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.888407 kubelet[2746]: E1028 13:21:18.888369 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.888560 kubelet[2746]: E1028 13:21:18.888543 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.888560 kubelet[2746]: W1028 13:21:18.888554 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.888611 kubelet[2746]: E1028 13:21:18.888562 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.888735 kubelet[2746]: E1028 13:21:18.888720 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.888735 kubelet[2746]: W1028 13:21:18.888731 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.888776 kubelet[2746]: E1028 13:21:18.888739 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:18.888918 kubelet[2746]: E1028 13:21:18.888904 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.888918 kubelet[2746]: W1028 13:21:18.888914 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.888966 kubelet[2746]: E1028 13:21:18.888922 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.910934 kubelet[2746]: E1028 13:21:18.910906 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.910934 kubelet[2746]: W1028 13:21:18.910920 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.910934 kubelet[2746]: E1028 13:21:18.910930 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.911180 kubelet[2746]: E1028 13:21:18.911159 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.911180 kubelet[2746]: W1028 13:21:18.911171 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.911242 kubelet[2746]: E1028 13:21:18.911180 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.911519 kubelet[2746]: E1028 13:21:18.911494 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.911519 kubelet[2746]: W1028 13:21:18.911514 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.911572 kubelet[2746]: E1028 13:21:18.911532 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.911775 kubelet[2746]: E1028 13:21:18.911752 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.911775 kubelet[2746]: W1028 13:21:18.911763 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.911775 kubelet[2746]: E1028 13:21:18.911772 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:18.911969 kubelet[2746]: E1028 13:21:18.911953 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.911969 kubelet[2746]: W1028 13:21:18.911964 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.912060 kubelet[2746]: E1028 13:21:18.911972 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.912214 kubelet[2746]: E1028 13:21:18.912197 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.912214 kubelet[2746]: W1028 13:21:18.912208 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.912259 kubelet[2746]: E1028 13:21:18.912217 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.912522 kubelet[2746]: E1028 13:21:18.912496 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.912522 kubelet[2746]: W1028 13:21:18.912511 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.912522 kubelet[2746]: E1028 13:21:18.912521 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.912867 kubelet[2746]: E1028 13:21:18.912834 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.912867 kubelet[2746]: W1028 13:21:18.912858 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.912912 kubelet[2746]: E1028 13:21:18.912880 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.913127 kubelet[2746]: E1028 13:21:18.913099 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.913127 kubelet[2746]: W1028 13:21:18.913112 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.913127 kubelet[2746]: E1028 13:21:18.913122 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:18.913357 kubelet[2746]: E1028 13:21:18.913331 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.913357 kubelet[2746]: W1028 13:21:18.913348 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.913357 kubelet[2746]: E1028 13:21:18.913357 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.913604 kubelet[2746]: E1028 13:21:18.913581 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.913604 kubelet[2746]: W1028 13:21:18.913595 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.913648 kubelet[2746]: E1028 13:21:18.913606 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.913882 kubelet[2746]: E1028 13:21:18.913865 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.913882 kubelet[2746]: W1028 13:21:18.913878 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.913925 kubelet[2746]: E1028 13:21:18.913887 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.914174 kubelet[2746]: E1028 13:21:18.914154 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.914174 kubelet[2746]: W1028 13:21:18.914169 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.914228 kubelet[2746]: E1028 13:21:18.914181 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.914414 kubelet[2746]: E1028 13:21:18.914383 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.914414 kubelet[2746]: W1028 13:21:18.914398 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.914414 kubelet[2746]: E1028 13:21:18.914408 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:18.914695 kubelet[2746]: E1028 13:21:18.914658 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.914695 kubelet[2746]: W1028 13:21:18.914685 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.914755 kubelet[2746]: E1028 13:21:18.914714 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.915023 kubelet[2746]: E1028 13:21:18.914987 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.915023 kubelet[2746]: W1028 13:21:18.914999 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.915078 kubelet[2746]: E1028 13:21:18.915033 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.915448 kubelet[2746]: E1028 13:21:18.915430 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.915448 kubelet[2746]: W1028 13:21:18.915442 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.915527 kubelet[2746]: E1028 13:21:18.915452 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 13:21:18.915732 kubelet[2746]: E1028 13:21:18.915710 2746 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 13:21:18.915806 kubelet[2746]: W1028 13:21:18.915735 2746 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 13:21:18.915806 kubelet[2746]: E1028 13:21:18.915747 2746 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 13:21:19.442634 containerd[1596]: time="2025-10-28T13:21:19.442550082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:19.443607 containerd[1596]: time="2025-10-28T13:21:19.443574524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4442579" Oct 28 13:21:19.444820 containerd[1596]: time="2025-10-28T13:21:19.444789935Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:19.446906 containerd[1596]: time="2025-10-28T13:21:19.446871910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:19.447483 containerd[1596]: time="2025-10-28T13:21:19.447441904Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.191631127s" Oct 28 13:21:19.447519 containerd[1596]: time="2025-10-28T13:21:19.447480888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 28 13:21:19.451217 containerd[1596]: time="2025-10-28T13:21:19.451189519Z" level=info msg="CreateContainer within sandbox \"f19af96750e2e5eb9359a95a8168123d47d02962f759dd42bba6255c762e1bea\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 28 13:21:19.462138 containerd[1596]: time="2025-10-28T13:21:19.462095349Z" level=info msg="Container 0cdf7fa24d9ff90e280d63e4dbf10500408df72cde6a37a812532c08819420d3: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:21:19.471031 containerd[1596]: time="2025-10-28T13:21:19.470984096Z" level=info msg="CreateContainer within sandbox \"f19af96750e2e5eb9359a95a8168123d47d02962f759dd42bba6255c762e1bea\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0cdf7fa24d9ff90e280d63e4dbf10500408df72cde6a37a812532c08819420d3\"" Oct 28 13:21:19.471627 containerd[1596]: time="2025-10-28T13:21:19.471561785Z" level=info msg="StartContainer for \"0cdf7fa24d9ff90e280d63e4dbf10500408df72cde6a37a812532c08819420d3\"" Oct 28 13:21:19.472984 containerd[1596]: time="2025-10-28T13:21:19.472956655Z" level=info msg="connecting to shim 0cdf7fa24d9ff90e280d63e4dbf10500408df72cde6a37a812532c08819420d3" address="unix:///run/containerd/s/26675ea7c41064c3824858db2c7ad20af87e2adbeb044fe3f9d6df0cc83f0555" protocol=ttrpc version=3 Oct 28 13:21:19.500147 systemd[1]: Started cri-containerd-0cdf7fa24d9ff90e280d63e4dbf10500408df72cde6a37a812532c08819420d3.scope - libcontainer container 0cdf7fa24d9ff90e280d63e4dbf10500408df72cde6a37a812532c08819420d3. 
Oct 28 13:21:19.546122 containerd[1596]: time="2025-10-28T13:21:19.546079756Z" level=info msg="StartContainer for \"0cdf7fa24d9ff90e280d63e4dbf10500408df72cde6a37a812532c08819420d3\" returns successfully" Oct 28 13:21:19.559710 systemd[1]: cri-containerd-0cdf7fa24d9ff90e280d63e4dbf10500408df72cde6a37a812532c08819420d3.scope: Deactivated successfully. Oct 28 13:21:19.560303 systemd[1]: cri-containerd-0cdf7fa24d9ff90e280d63e4dbf10500408df72cde6a37a812532c08819420d3.scope: Consumed 42ms CPU time, 6.5M memory peak, 4.3M written to disk. Oct 28 13:21:19.561553 containerd[1596]: time="2025-10-28T13:21:19.561514764Z" level=info msg="received exit event container_id:\"0cdf7fa24d9ff90e280d63e4dbf10500408df72cde6a37a812532c08819420d3\" id:\"0cdf7fa24d9ff90e280d63e4dbf10500408df72cde6a37a812532c08819420d3\" pid:3459 exited_at:{seconds:1761657679 nanos:561055076}" Oct 28 13:21:19.583631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0cdf7fa24d9ff90e280d63e4dbf10500408df72cde6a37a812532c08819420d3-rootfs.mount: Deactivated successfully. Oct 28 13:21:19.761357 kubelet[2746]: E1028 13:21:19.761207 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6nhgp" podUID="fce65538-7c69-4e45-8cae-c289c79a1bdb" Oct 28 13:21:19.834921 kubelet[2746]: E1028 13:21:19.834845 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:20.838817 kubelet[2746]: E1028 13:21:20.838760 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:20.840029 containerd[1596]: time="2025-10-28T13:21:20.839753998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 28 13:21:21.760415 kubelet[2746]: E1028 13:21:21.760350 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6nhgp" podUID="fce65538-7c69-4e45-8cae-c289c79a1bdb" Oct 28 13:21:23.759764 kubelet[2746]: E1028 13:21:23.759703 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6nhgp" podUID="fce65538-7c69-4e45-8cae-c289c79a1bdb" Oct 28 13:21:24.636952 containerd[1596]: time="2025-10-28T13:21:24.636890698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:24.637751 containerd[1596]: time="2025-10-28T13:21:24.637703849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Oct 28 13:21:24.638854 containerd[1596]: time="2025-10-28T13:21:24.638819109Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:24.640835 containerd[1596]: 
time="2025-10-28T13:21:24.640801190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:24.641433 containerd[1596]: time="2025-10-28T13:21:24.641402462Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.801599141s" Oct 28 13:21:24.641433 containerd[1596]: time="2025-10-28T13:21:24.641430685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 28 13:21:24.645138 containerd[1596]: time="2025-10-28T13:21:24.645104080Z" level=info msg="CreateContainer within sandbox \"f19af96750e2e5eb9359a95a8168123d47d02962f759dd42bba6255c762e1bea\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 28 13:21:24.653784 containerd[1596]: time="2025-10-28T13:21:24.653728375Z" level=info msg="Container 244ebe83ecb9d3c6647d67eb904df10b73fc79b3a3f70e984343552baf08f6ba: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:21:24.663719 containerd[1596]: time="2025-10-28T13:21:24.663674097Z" level=info msg="CreateContainer within sandbox \"f19af96750e2e5eb9359a95a8168123d47d02962f759dd42bba6255c762e1bea\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"244ebe83ecb9d3c6647d67eb904df10b73fc79b3a3f70e984343552baf08f6ba\"" Oct 28 13:21:24.664207 containerd[1596]: time="2025-10-28T13:21:24.664143422Z" level=info msg="StartContainer for \"244ebe83ecb9d3c6647d67eb904df10b73fc79b3a3f70e984343552baf08f6ba\"" Oct 28 13:21:24.665561 containerd[1596]: time="2025-10-28T13:21:24.665468796Z" level=info msg="connecting to shim 244ebe83ecb9d3c6647d67eb904df10b73fc79b3a3f70e984343552baf08f6ba" address="unix:///run/containerd/s/26675ea7c41064c3824858db2c7ad20af87e2adbeb044fe3f9d6df0cc83f0555" protocol=ttrpc version=3 Oct 28 13:21:24.690154 systemd[1]: Started cri-containerd-244ebe83ecb9d3c6647d67eb904df10b73fc79b3a3f70e984343552baf08f6ba.scope - libcontainer container 244ebe83ecb9d3c6647d67eb904df10b73fc79b3a3f70e984343552baf08f6ba. Oct 28 13:21:24.730669 containerd[1596]: time="2025-10-28T13:21:24.730622727Z" level=info msg="StartContainer for \"244ebe83ecb9d3c6647d67eb904df10b73fc79b3a3f70e984343552baf08f6ba\" returns successfully" Oct 28 13:21:24.849816 kubelet[2746]: E1028 13:21:24.849772 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:25.637048 systemd[1]: cri-containerd-244ebe83ecb9d3c6647d67eb904df10b73fc79b3a3f70e984343552baf08f6ba.scope: Deactivated successfully. Oct 28 13:21:25.637439 systemd[1]: cri-containerd-244ebe83ecb9d3c6647d67eb904df10b73fc79b3a3f70e984343552baf08f6ba.scope: Consumed 664ms CPU time, 175.4M memory peak, 3.4M read from disk, 171.3M written to disk. 
Oct 28 13:21:25.638099 containerd[1596]: time="2025-10-28T13:21:25.638045896Z" level=info msg="received exit event container_id:\"244ebe83ecb9d3c6647d67eb904df10b73fc79b3a3f70e984343552baf08f6ba\" id:\"244ebe83ecb9d3c6647d67eb904df10b73fc79b3a3f70e984343552baf08f6ba\" pid:3518 exited_at:{seconds:1761657685 nanos:637689174}" Oct 28 13:21:25.664774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-244ebe83ecb9d3c6647d67eb904df10b73fc79b3a3f70e984343552baf08f6ba-rootfs.mount: Deactivated successfully. Oct 28 13:21:25.708280 kubelet[2746]: I1028 13:21:25.708220 2746 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 28 13:21:25.765981 systemd[1]: Created slice kubepods-besteffort-podfce65538_7c69_4e45_8cae_c289c79a1bdb.slice - libcontainer container kubepods-besteffort-podfce65538_7c69_4e45_8cae_c289c79a1bdb.slice. Oct 28 13:21:25.851406 kubelet[2746]: E1028 13:21:25.851366 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:25.898649 containerd[1596]: time="2025-10-28T13:21:25.897585888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6nhgp,Uid:fce65538-7c69-4e45-8cae-c289c79a1bdb,Namespace:calico-system,Attempt:0,}" Oct 28 13:21:25.928234 systemd[1]: Created slice kubepods-burstable-podd4a0ac67_171d_4762_ae49_9d6bac6655ce.slice - libcontainer container kubepods-burstable-podd4a0ac67_171d_4762_ae49_9d6bac6655ce.slice. Oct 28 13:21:25.936558 systemd[1]: Created slice kubepods-besteffort-pod865860d9_b904_4b8f_8efa_543ed6829f69.slice - libcontainer container kubepods-besteffort-pod865860d9_b904_4b8f_8efa_543ed6829f69.slice. Oct 28 13:21:25.945350 systemd[1]: Created slice kubepods-besteffort-pod8bf88aab_609b_4cc2_80ec_8ab913048df5.slice - libcontainer container kubepods-besteffort-pod8bf88aab_609b_4cc2_80ec_8ab913048df5.slice. Oct 28 13:21:25.954898 systemd[1]: Created slice kubepods-besteffort-podd71c8120_c049_44a0_909a_b94149145773.slice - libcontainer container kubepods-besteffort-podd71c8120_c049_44a0_909a_b94149145773.slice. 
Oct 28 13:21:25.963051 kubelet[2746]: I1028 13:21:25.962553 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v4xp\" (UniqueName: \"kubernetes.io/projected/d71c8120-c049-44a0-909a-b94149145773-kube-api-access-9v4xp\") pod \"goldmane-7c778bb748-dgsqm\" (UID: \"d71c8120-c049-44a0-909a-b94149145773\") " pod="calico-system/goldmane-7c778bb748-dgsqm" Oct 28 13:21:25.963051 kubelet[2746]: I1028 13:21:25.962608 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4a0ac67-171d-4762-ae49-9d6bac6655ce-config-volume\") pod \"coredns-66bc5c9577-g4t76\" (UID: \"d4a0ac67-171d-4762-ae49-9d6bac6655ce\") " pod="kube-system/coredns-66bc5c9577-g4t76" Oct 28 13:21:25.963051 kubelet[2746]: I1028 13:21:25.962623 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kssbh\" (UniqueName: \"kubernetes.io/projected/d4a0ac67-171d-4762-ae49-9d6bac6655ce-kube-api-access-kssbh\") pod \"coredns-66bc5c9577-g4t76\" (UID: \"d4a0ac67-171d-4762-ae49-9d6bac6655ce\") " pod="kube-system/coredns-66bc5c9577-g4t76" Oct 28 13:21:25.963051 kubelet[2746]: I1028 13:21:25.962636 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d71c8120-c049-44a0-909a-b94149145773-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-dgsqm\" (UID: \"d71c8120-c049-44a0-909a-b94149145773\") " pod="calico-system/goldmane-7c778bb748-dgsqm" Oct 28 13:21:25.963051 kubelet[2746]: I1028 13:21:25.962650 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwd7n\" (UniqueName: \"kubernetes.io/projected/145c1556-f100-4aa7-99f2-63762d4d1688-kube-api-access-jwd7n\") pod \"coredns-66bc5c9577-lkswt\" (UID: \"145c1556-f100-4aa7-99f2-63762d4d1688\") " pod="kube-system/coredns-66bc5c9577-lkswt" Oct 28 13:21:25.963313 kubelet[2746]: I1028 13:21:25.962666 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/865860d9-b904-4b8f-8efa-543ed6829f69-calico-apiserver-certs\") pod \"calico-apiserver-786d85db64-rzgz4\" (UID: \"865860d9-b904-4b8f-8efa-543ed6829f69\") " pod="calico-apiserver/calico-apiserver-786d85db64-rzgz4" Oct 28 13:21:25.963313 kubelet[2746]: I1028 13:21:25.962687 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d71c8120-c049-44a0-909a-b94149145773-goldmane-key-pair\") pod \"goldmane-7c778bb748-dgsqm\" (UID: \"d71c8120-c049-44a0-909a-b94149145773\") " pod="calico-system/goldmane-7c778bb748-dgsqm" Oct 28 13:21:25.963313 kubelet[2746]: I1028 13:21:25.962703 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjp9r\" (UniqueName: \"kubernetes.io/projected/f136d0fd-8c7a-4899-8498-1ed4f8ad5125-kube-api-access-gjp9r\") pod \"calico-kube-controllers-5879d58c5c-4p8l2\" (UID: \"f136d0fd-8c7a-4899-8498-1ed4f8ad5125\") " pod="calico-system/calico-kube-controllers-5879d58c5c-4p8l2" Oct 28 13:21:25.963313 kubelet[2746]: I1028 13:21:25.962719 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/c5c99d49-9556-4a61-843e-22095b38496c-whisker-backend-key-pair\") pod \"whisker-7c99d7c965-hhw84\" (UID: \"c5c99d49-9556-4a61-843e-22095b38496c\") " pod="calico-system/whisker-7c99d7c965-hhw84" Oct 28 13:21:25.963313 kubelet[2746]: I1028 13:21:25.962733 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8bf88aab-609b-4cc2-80ec-8ab913048df5-calico-apiserver-certs\") pod \"calico-apiserver-786d85db64-xgd4x\" (UID: \"8bf88aab-609b-4cc2-80ec-8ab913048df5\") " pod="calico-apiserver/calico-apiserver-786d85db64-xgd4x" Oct 28 13:21:25.963434 kubelet[2746]: I1028 13:21:25.962746 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d71c8120-c049-44a0-909a-b94149145773-config\") pod \"goldmane-7c778bb748-dgsqm\" (UID: \"d71c8120-c049-44a0-909a-b94149145773\") " pod="calico-system/goldmane-7c778bb748-dgsqm" Oct 28 13:21:25.963434 kubelet[2746]: I1028 13:21:25.962768 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5prr6\" (UniqueName: \"kubernetes.io/projected/c5c99d49-9556-4a61-843e-22095b38496c-kube-api-access-5prr6\") pod \"whisker-7c99d7c965-hhw84\" (UID: \"c5c99d49-9556-4a61-843e-22095b38496c\") " pod="calico-system/whisker-7c99d7c965-hhw84" Oct 28 13:21:25.963434 kubelet[2746]: I1028 13:21:25.962783 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfvms\" (UniqueName: \"kubernetes.io/projected/865860d9-b904-4b8f-8efa-543ed6829f69-kube-api-access-xfvms\") pod \"calico-apiserver-786d85db64-rzgz4\" (UID: \"865860d9-b904-4b8f-8efa-543ed6829f69\") " pod="calico-apiserver/calico-apiserver-786d85db64-rzgz4" Oct 28 13:21:25.963434 kubelet[2746]: I1028 13:21:25.962798 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2l8s\" (UniqueName: \"kubernetes.io/projected/8bf88aab-609b-4cc2-80ec-8ab913048df5-kube-api-access-g2l8s\") pod \"calico-apiserver-786d85db64-xgd4x\" (UID: \"8bf88aab-609b-4cc2-80ec-8ab913048df5\") " pod="calico-apiserver/calico-apiserver-786d85db64-xgd4x" Oct 28 13:21:25.963434 kubelet[2746]: I1028 13:21:25.962815 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5c99d49-9556-4a61-843e-22095b38496c-whisker-ca-bundle\") pod \"whisker-7c99d7c965-hhw84\" (UID: \"c5c99d49-9556-4a61-843e-22095b38496c\") " pod="calico-system/whisker-7c99d7c965-hhw84" Oct 28 13:21:25.963549 kubelet[2746]: I1028 13:21:25.962828 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/145c1556-f100-4aa7-99f2-63762d4d1688-config-volume\") pod \"coredns-66bc5c9577-lkswt\" (UID: \"145c1556-f100-4aa7-99f2-63762d4d1688\") " pod="kube-system/coredns-66bc5c9577-lkswt" Oct 28 13:21:25.963549 kubelet[2746]: I1028 13:21:25.962842 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f136d0fd-8c7a-4899-8498-1ed4f8ad5125-tigera-ca-bundle\") pod \"calico-kube-controllers-5879d58c5c-4p8l2\" (UID: \"f136d0fd-8c7a-4899-8498-1ed4f8ad5125\") " 
pod="calico-system/calico-kube-controllers-5879d58c5c-4p8l2" Oct 28 13:21:25.967422 systemd[1]: Created slice kubepods-besteffort-podf136d0fd_8c7a_4899_8498_1ed4f8ad5125.slice - libcontainer container kubepods-besteffort-podf136d0fd_8c7a_4899_8498_1ed4f8ad5125.slice. Oct 28 13:21:25.981028 systemd[1]: Created slice kubepods-besteffort-podc5c99d49_9556_4a61_843e_22095b38496c.slice - libcontainer container kubepods-besteffort-podc5c99d49_9556_4a61_843e_22095b38496c.slice. Oct 28 13:21:25.989477 systemd[1]: Created slice kubepods-burstable-pod145c1556_f100_4aa7_99f2_63762d4d1688.slice - libcontainer container kubepods-burstable-pod145c1556_f100_4aa7_99f2_63762d4d1688.slice. Oct 28 13:21:26.034428 containerd[1596]: time="2025-10-28T13:21:26.034339899Z" level=error msg="Failed to destroy network for sandbox \"6eaacc89ba495898d41145ee6a8df829d2536aa8df43d999d1e234e7f3024538\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:26.036494 systemd[1]: run-netns-cni\x2d7d072a72\x2dc7d6\x2de4d1\x2d997b\x2d452f2ff8b7c0.mount: Deactivated successfully. Oct 28 13:21:26.037667 containerd[1596]: time="2025-10-28T13:21:26.037601767Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6nhgp,Uid:fce65538-7c69-4e45-8cae-c289c79a1bdb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eaacc89ba495898d41145ee6a8df829d2536aa8df43d999d1e234e7f3024538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:26.064254 kubelet[2746]: E1028 13:21:26.064190 2746 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eaacc89ba495898d41145ee6a8df829d2536aa8df43d999d1e234e7f3024538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:26.064411 kubelet[2746]: E1028 13:21:26.064263 2746 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eaacc89ba495898d41145ee6a8df829d2536aa8df43d999d1e234e7f3024538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6nhgp" Oct 28 13:21:26.064411 kubelet[2746]: E1028 13:21:26.064290 2746 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eaacc89ba495898d41145ee6a8df829d2536aa8df43d999d1e234e7f3024538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6nhgp" Oct 28 13:21:26.064411 kubelet[2746]: E1028 13:21:26.064375 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6nhgp_calico-system(fce65538-7c69-4e45-8cae-c289c79a1bdb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-6nhgp_calico-system(fce65538-7c69-4e45-8cae-c289c79a1bdb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6eaacc89ba495898d41145ee6a8df829d2536aa8df43d999d1e234e7f3024538\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6nhgp" podUID="fce65538-7c69-4e45-8cae-c289c79a1bdb" Oct 28 13:21:26.236162 kubelet[2746]: E1028 13:21:26.236036 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:26.243235 containerd[1596]: time="2025-10-28T13:21:26.243174244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786d85db64-rzgz4,Uid:865860d9-b904-4b8f-8efa-543ed6829f69,Namespace:calico-apiserver,Attempt:0,}" Oct 28 13:21:26.255461 containerd[1596]: time="2025-10-28T13:21:26.255423192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g4t76,Uid:d4a0ac67-171d-4762-ae49-9d6bac6655ce,Namespace:kube-system,Attempt:0,}" Oct 28 13:21:26.260246 containerd[1596]: time="2025-10-28T13:21:26.260207895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786d85db64-xgd4x,Uid:8bf88aab-609b-4cc2-80ec-8ab913048df5,Namespace:calico-apiserver,Attempt:0,}" Oct 28 13:21:26.267394 containerd[1596]: time="2025-10-28T13:21:26.267358500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dgsqm,Uid:d71c8120-c049-44a0-909a-b94149145773,Namespace:calico-system,Attempt:0,}" Oct 28 13:21:26.274523 containerd[1596]: time="2025-10-28T13:21:26.274474228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5879d58c5c-4p8l2,Uid:f136d0fd-8c7a-4899-8498-1ed4f8ad5125,Namespace:calico-system,Attempt:0,}" Oct 28 13:21:26.292822 containerd[1596]: time="2025-10-28T13:21:26.292780474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c99d7c965-hhw84,Uid:c5c99d49-9556-4a61-843e-22095b38496c,Namespace:calico-system,Attempt:0,}" Oct 28 13:21:26.296047 kubelet[2746]: E1028 13:21:26.295789 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:26.296397 containerd[1596]: time="2025-10-28T13:21:26.296329111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lkswt,Uid:145c1556-f100-4aa7-99f2-63762d4d1688,Namespace:kube-system,Attempt:0,}" Oct 28 13:21:26.303074 containerd[1596]: time="2025-10-28T13:21:26.303015162Z" level=error msg="Failed to destroy network for sandbox \"75dc62b81a86c36c94016c3a2101e930caf882dd56aecc3e50a87f4588ed0536\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:26.309757 containerd[1596]: time="2025-10-28T13:21:26.309633054Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786d85db64-rzgz4,Uid:865860d9-b904-4b8f-8efa-543ed6829f69,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"75dc62b81a86c36c94016c3a2101e930caf882dd56aecc3e50a87f4588ed0536\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:26.311391 kubelet[2746]: E1028 13:21:26.310255 2746 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75dc62b81a86c36c94016c3a2101e930caf882dd56aecc3e50a87f4588ed0536\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:26.311391 kubelet[2746]: E1028 13:21:26.310331 2746 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75dc62b81a86c36c94016c3a2101e930caf882dd56aecc3e50a87f4588ed0536\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-786d85db64-rzgz4" Oct 28 13:21:26.311391 kubelet[2746]: E1028 13:21:26.310352 2746 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75dc62b81a86c36c94016c3a2101e930caf882dd56aecc3e50a87f4588ed0536\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-786d85db64-rzgz4" Oct 28 13:21:26.311536 kubelet[2746]: E1028 13:21:26.310431 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-786d85db64-rzgz4_calico-apiserver(865860d9-b904-4b8f-8efa-543ed6829f69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-786d85db64-rzgz4_calico-apiserver(865860d9-b904-4b8f-8efa-543ed6829f69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75dc62b81a86c36c94016c3a2101e930caf882dd56aecc3e50a87f4588ed0536\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-786d85db64-rzgz4" podUID="865860d9-b904-4b8f-8efa-543ed6829f69" Oct 28 13:21:26.346899 containerd[1596]: time="2025-10-28T13:21:26.346843640Z" level=error msg="Failed to destroy network for sandbox \"57c316dabfbaf30007b0bc330b1929dc89f4e5db0af593d4a6124e6d2d8815f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:26.355183 containerd[1596]: time="2025-10-28T13:21:26.355139049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g4t76,Uid:d4a0ac67-171d-4762-ae49-9d6bac6655ce,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"57c316dabfbaf30007b0bc330b1929dc89f4e5db0af593d4a6124e6d2d8815f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:26.356040 kubelet[2746]: E1028 13:21:26.355737 2746 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"57c316dabfbaf30007b0bc330b1929dc89f4e5db0af593d4a6124e6d2d8815f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:26.356040 kubelet[2746]: E1028 13:21:26.355810 2746 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57c316dabfbaf30007b0bc330b1929dc89f4e5db0af593d4a6124e6d2d8815f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-g4t76" Oct 28 13:21:26.356040 kubelet[2746]: E1028 13:21:26.355834 2746 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57c316dabfbaf30007b0bc330b1929dc89f4e5db0af593d4a6124e6d2d8815f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-g4t76" Oct 28 13:21:26.356171 kubelet[2746]: E1028 13:21:26.355890 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-g4t76_kube-system(d4a0ac67-171d-4762-ae49-9d6bac6655ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-g4t76_kube-system(d4a0ac67-171d-4762-ae49-9d6bac6655ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57c316dabfbaf30007b0bc330b1929dc89f4e5db0af593d4a6124e6d2d8815f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-g4t76" podUID="d4a0ac67-171d-4762-ae49-9d6bac6655ce" Oct 28 13:21:26.359386 containerd[1596]: time="2025-10-28T13:21:26.359359630Z" level=error msg="Failed to destroy network for sandbox \"86b191cf731a03ae6c4a2ac1d15afe6363c8b274b78a9df6c9ca032f7ad33a43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:26.364229 containerd[1596]: time="2025-10-28T13:21:26.364199307Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786d85db64-xgd4x,Uid:8bf88aab-609b-4cc2-80ec-8ab913048df5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"86b191cf731a03ae6c4a2ac1d15afe6363c8b274b78a9df6c9ca032f7ad33a43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:26.364730 kubelet[2746]: E1028 13:21:26.364603 2746 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86b191cf731a03ae6c4a2ac1d15afe6363c8b274b78a9df6c9ca032f7ad33a43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:26.364730 kubelet[2746]: E1028 13:21:26.364670 2746 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86b191cf731a03ae6c4a2ac1d15afe6363c8b274b78a9df6c9ca032f7ad33a43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-786d85db64-xgd4x" Oct 28 13:21:26.364730 kubelet[2746]: E1028 13:21:26.364690 2746 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86b191cf731a03ae6c4a2ac1d15afe6363c8b274b78a9df6c9ca032f7ad33a43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-786d85db64-xgd4x" Oct 28 13:21:26.366282 kubelet[2746]: E1028 13:21:26.365983 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-786d85db64-xgd4x_calico-apiserver(8bf88aab-609b-4cc2-80ec-8ab913048df5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-786d85db64-xgd4x_calico-apiserver(8bf88aab-609b-4cc2-80ec-8ab913048df5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86b191cf731a03ae6c4a2ac1d15afe6363c8b274b78a9df6c9ca032f7ad33a43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-786d85db64-xgd4x" podUID="8bf88aab-609b-4cc2-80ec-8ab913048df5" Oct 28 13:21:26.369932 containerd[1596]: time="2025-10-28T13:21:26.369781841Z" level=error msg="Failed to destroy network for sandbox \"b0abb37c873fa4b7ed1632e54dc87d8f61936a5cbb3d2fa59b03ea46ed06da17\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:26.372883 containerd[1596]: time="2025-10-28T13:21:26.372808697Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dgsqm,Uid:d71c8120-c049-44a0-909a-b94149145773,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0abb37c873fa4b7ed1632e54dc87d8f61936a5cbb3d2fa59b03ea46ed06da17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:26.373177 kubelet[2746]: E1028 13:21:26.373147 2746 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0abb37c873fa4b7ed1632e54dc87d8f61936a5cbb3d2fa59b03ea46ed06da17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:26.373236 kubelet[2746]: E1028 13:21:26.373192 2746 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0abb37c873fa4b7ed1632e54dc87d8f61936a5cbb3d2fa59b03ea46ed06da17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-dgsqm" Oct 28 13:21:26.373236 kubelet[2746]: E1028 13:21:26.373209 2746 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0abb37c873fa4b7ed1632e54dc87d8f61936a5cbb3d2fa59b03ea46ed06da17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-dgsqm" Oct 28 13:21:26.373369 kubelet[2746]: E1028 13:21:26.373254 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-dgsqm_calico-system(d71c8120-c049-44a0-909a-b94149145773)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-dgsqm_calico-system(d71c8120-c049-44a0-909a-b94149145773)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0abb37c873fa4b7ed1632e54dc87d8f61936a5cbb3d2fa59b03ea46ed06da17\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-dgsqm" podUID="d71c8120-c049-44a0-909a-b94149145773" Oct 28 13:21:26.389619 containerd[1596]: time="2025-10-28T13:21:26.389567550Z" level=error msg="Failed to destroy network for sandbox \"1119632ef093796b244b821a58616e281ee3a835439040d521f04f23edeb68c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:26.392239 containerd[1596]: time="2025-10-28T13:21:26.392195705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5879d58c5c-4p8l2,Uid:f136d0fd-8c7a-4899-8498-1ed4f8ad5125,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1119632ef093796b244b821a58616e281ee3a835439040d521f04f23edeb68c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:26.392520 kubelet[2746]: E1028 13:21:26.392469 2746 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1119632ef093796b244b821a58616e281ee3a835439040d521f04f23edeb68c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:26.392563 kubelet[2746]: E1028 13:21:26.392540 2746 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1119632ef093796b244b821a58616e281ee3a835439040d521f04f23edeb68c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5879d58c5c-4p8l2" Oct 28 13:21:26.392598 kubelet[2746]: E1028 13:21:26.392566 2746 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"1119632ef093796b244b821a58616e281ee3a835439040d521f04f23edeb68c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5879d58c5c-4p8l2" Oct 28 13:21:26.392662 kubelet[2746]: E1028 13:21:26.392633 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5879d58c5c-4p8l2_calico-system(f136d0fd-8c7a-4899-8498-1ed4f8ad5125)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5879d58c5c-4p8l2_calico-system(f136d0fd-8c7a-4899-8498-1ed4f8ad5125)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1119632ef093796b244b821a58616e281ee3a835439040d521f04f23edeb68c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5879d58c5c-4p8l2" podUID="f136d0fd-8c7a-4899-8498-1ed4f8ad5125" Oct 28 13:21:26.407832 containerd[1596]: time="2025-10-28T13:21:26.407782704Z" level=error msg="Failed to destroy network for sandbox \"85744dbb93f675a51fddab5eb6e25a05fdaf3ef1e08410edde497934df9f35b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:26.410441 containerd[1596]: time="2025-10-28T13:21:26.410245728Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c99d7c965-hhw84,Uid:c5c99d49-9556-4a61-843e-22095b38496c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"85744dbb93f675a51fddab5eb6e25a05fdaf3ef1e08410edde497934df9f35b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:26.410522 kubelet[2746]: E1028 13:21:26.410423 2746 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85744dbb93f675a51fddab5eb6e25a05fdaf3ef1e08410edde497934df9f35b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:26.410522 kubelet[2746]: E1028 13:21:26.410458 2746 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85744dbb93f675a51fddab5eb6e25a05fdaf3ef1e08410edde497934df9f35b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c99d7c965-hhw84" Oct 28 13:21:26.410522 kubelet[2746]: E1028 13:21:26.410475 2746 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85744dbb93f675a51fddab5eb6e25a05fdaf3ef1e08410edde497934df9f35b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-7c99d7c965-hhw84" Oct 28 13:21:26.410590 kubelet[2746]: E1028 13:21:26.410524 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7c99d7c965-hhw84_calico-system(c5c99d49-9556-4a61-843e-22095b38496c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7c99d7c965-hhw84_calico-system(c5c99d49-9556-4a61-843e-22095b38496c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85744dbb93f675a51fddab5eb6e25a05fdaf3ef1e08410edde497934df9f35b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7c99d7c965-hhw84" podUID="c5c99d49-9556-4a61-843e-22095b38496c" Oct 28 13:21:26.422985 containerd[1596]: time="2025-10-28T13:21:26.422931428Z" level=error msg="Failed to destroy network for sandbox \"42c6f4d5672a97926c7072a8f576dc20265885245d0c81b802a1df9a463a089d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:26.425453 containerd[1596]: time="2025-10-28T13:21:26.425383792Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lkswt,Uid:145c1556-f100-4aa7-99f2-63762d4d1688,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"42c6f4d5672a97926c7072a8f576dc20265885245d0c81b802a1df9a463a089d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:26.425816 kubelet[2746]: E1028 13:21:26.425762 2746 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42c6f4d5672a97926c7072a8f576dc20265885245d0c81b802a1df9a463a089d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 13:21:26.425958 kubelet[2746]: E1028 13:21:26.425832 2746 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42c6f4d5672a97926c7072a8f576dc20265885245d0c81b802a1df9a463a089d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-lkswt" Oct 28 13:21:26.425958 kubelet[2746]: E1028 13:21:26.425855 2746 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42c6f4d5672a97926c7072a8f576dc20265885245d0c81b802a1df9a463a089d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-lkswt" Oct 28 13:21:26.426219 kubelet[2746]: E1028 13:21:26.425958 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-lkswt_kube-system(145c1556-f100-4aa7-99f2-63762d4d1688)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-66bc5c9577-lkswt_kube-system(145c1556-f100-4aa7-99f2-63762d4d1688)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42c6f4d5672a97926c7072a8f576dc20265885245d0c81b802a1df9a463a089d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-lkswt" podUID="145c1556-f100-4aa7-99f2-63762d4d1688" Oct 28 13:21:26.856674 kubelet[2746]: E1028 13:21:26.856637 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:26.859194 containerd[1596]: time="2025-10-28T13:21:26.859149281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 28 13:21:30.735043 systemd[1]: Started sshd@7-10.0.0.148:22-10.0.0.1:45022.service - OpenSSH per-connection server daemon (10.0.0.1:45022). Oct 28 13:21:30.796225 sshd[3824]: Accepted publickey for core from 10.0.0.1 port 45022 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:21:30.798361 sshd-session[3824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:21:30.803452 systemd-logind[1580]: New session 8 of user core. Oct 28 13:21:30.812143 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 28 13:21:30.934555 sshd[3827]: Connection closed by 10.0.0.1 port 45022 Oct 28 13:21:30.934853 sshd-session[3824]: pam_unix(sshd:session): session closed for user core Oct 28 13:21:30.939955 systemd[1]: sshd@7-10.0.0.148:22-10.0.0.1:45022.service: Deactivated successfully. Oct 28 13:21:30.942063 systemd[1]: session-8.scope: Deactivated successfully. Oct 28 13:21:30.942787 systemd-logind[1580]: Session 8 logged out. Waiting for processes to exit. Oct 28 13:21:30.943889 systemd-logind[1580]: Removed session 8. Oct 28 13:21:34.363391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1997987330.mount: Deactivated successfully. 
Oct 28 13:21:35.169551 containerd[1596]: time="2025-10-28T13:21:35.169487056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:35.170417 containerd[1596]: time="2025-10-28T13:21:35.170371337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Oct 28 13:21:35.171630 containerd[1596]: time="2025-10-28T13:21:35.171598112Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:35.173548 containerd[1596]: time="2025-10-28T13:21:35.173515094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 13:21:35.174057 containerd[1596]: time="2025-10-28T13:21:35.173997139Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.314805068s" Oct 28 13:21:35.174057 containerd[1596]: time="2025-10-28T13:21:35.174054277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 28 13:21:35.186700 containerd[1596]: time="2025-10-28T13:21:35.186669749Z" level=info msg="CreateContainer within sandbox \"f19af96750e2e5eb9359a95a8168123d47d02962f759dd42bba6255c762e1bea\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 28 13:21:35.198268 containerd[1596]: time="2025-10-28T13:21:35.198232744Z" level=info msg="Container d8ac7568de9d1a7d9cdecddc21fb58bed361504e93f66e106dfea51ad84cdbcf: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:21:35.206804 containerd[1596]: time="2025-10-28T13:21:35.206753033Z" level=info msg="CreateContainer within sandbox \"f19af96750e2e5eb9359a95a8168123d47d02962f759dd42bba6255c762e1bea\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d8ac7568de9d1a7d9cdecddc21fb58bed361504e93f66e106dfea51ad84cdbcf\"" Oct 28 13:21:35.207505 containerd[1596]: time="2025-10-28T13:21:35.207463237Z" level=info msg="StartContainer for \"d8ac7568de9d1a7d9cdecddc21fb58bed361504e93f66e106dfea51ad84cdbcf\"" Oct 28 13:21:35.208927 containerd[1596]: time="2025-10-28T13:21:35.208900007Z" level=info msg="connecting to shim d8ac7568de9d1a7d9cdecddc21fb58bed361504e93f66e106dfea51ad84cdbcf" address="unix:///run/containerd/s/26675ea7c41064c3824858db2c7ad20af87e2adbeb044fe3f9d6df0cc83f0555" protocol=ttrpc version=3 Oct 28 13:21:35.235140 systemd[1]: Started cri-containerd-d8ac7568de9d1a7d9cdecddc21fb58bed361504e93f66e106dfea51ad84cdbcf.scope - libcontainer container d8ac7568de9d1a7d9cdecddc21fb58bed361504e93f66e106dfea51ad84cdbcf. Oct 28 13:21:35.401467 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 28 13:21:35.403508 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Oct 28 13:21:35.467977 containerd[1596]: time="2025-10-28T13:21:35.467464904Z" level=info msg="StartContainer for \"d8ac7568de9d1a7d9cdecddc21fb58bed361504e93f66e106dfea51ad84cdbcf\" returns successfully" Oct 28 13:21:35.621127 kubelet[2746]: I1028 13:21:35.621074 2746 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c5c99d49-9556-4a61-843e-22095b38496c-whisker-backend-key-pair\") pod \"c5c99d49-9556-4a61-843e-22095b38496c\" (UID: \"c5c99d49-9556-4a61-843e-22095b38496c\") " Oct 28 13:21:35.621127 kubelet[2746]: I1028 13:21:35.621139 2746 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5c99d49-9556-4a61-843e-22095b38496c-whisker-ca-bundle\") pod \"c5c99d49-9556-4a61-843e-22095b38496c\" (UID: \"c5c99d49-9556-4a61-843e-22095b38496c\") " Oct 28 13:21:35.621667 kubelet[2746]: I1028 13:21:35.621163 2746 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5prr6\" (UniqueName: \"kubernetes.io/projected/c5c99d49-9556-4a61-843e-22095b38496c-kube-api-access-5prr6\") pod \"c5c99d49-9556-4a61-843e-22095b38496c\" (UID: \"c5c99d49-9556-4a61-843e-22095b38496c\") " Oct 28 13:21:35.622618 kubelet[2746]: I1028 13:21:35.622574 2746 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5c99d49-9556-4a61-843e-22095b38496c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c5c99d49-9556-4a61-843e-22095b38496c" (UID: "c5c99d49-9556-4a61-843e-22095b38496c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 28 13:21:35.626548 kubelet[2746]: I1028 13:21:35.626520 2746 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5c99d49-9556-4a61-843e-22095b38496c-kube-api-access-5prr6" (OuterVolumeSpecName: "kube-api-access-5prr6") pod "c5c99d49-9556-4a61-843e-22095b38496c" (UID: "c5c99d49-9556-4a61-843e-22095b38496c"). InnerVolumeSpecName "kube-api-access-5prr6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 28 13:21:35.629640 systemd[1]: var-lib-kubelet-pods-c5c99d49\x2d9556\x2d4a61\x2d843e\x2d22095b38496c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5prr6.mount: Deactivated successfully. Oct 28 13:21:35.632151 kubelet[2746]: I1028 13:21:35.631801 2746 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c99d49-9556-4a61-843e-22095b38496c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c5c99d49-9556-4a61-843e-22095b38496c" (UID: "c5c99d49-9556-4a61-843e-22095b38496c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 28 13:21:35.633816 systemd[1]: var-lib-kubelet-pods-c5c99d49\x2d9556\x2d4a61\x2d843e\x2d22095b38496c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Oct 28 13:21:35.721863 kubelet[2746]: I1028 13:21:35.721750 2746 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5c99d49-9556-4a61-843e-22095b38496c-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 28 13:21:35.721863 kubelet[2746]: I1028 13:21:35.721777 2746 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5prr6\" (UniqueName: \"kubernetes.io/projected/c5c99d49-9556-4a61-843e-22095b38496c-kube-api-access-5prr6\") on node \"localhost\" DevicePath \"\"" Oct 28 13:21:35.721863 kubelet[2746]: I1028 13:21:35.721786 2746 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c5c99d49-9556-4a61-843e-22095b38496c-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 28 13:21:35.878836 kubelet[2746]: E1028 13:21:35.878803 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:35.885300 systemd[1]: Removed slice kubepods-besteffort-podc5c99d49_9556_4a61_843e_22095b38496c.slice - libcontainer container kubepods-besteffort-podc5c99d49_9556_4a61_843e_22095b38496c.slice. Oct 28 13:21:35.904039 kubelet[2746]: I1028 13:21:35.903963 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-llxwh" podStartSLOduration=1.224534002 podStartE2EDuration="21.903940859s" podCreationTimestamp="2025-10-28 13:21:14 +0000 UTC" firstStartedPulling="2025-10-28 13:21:14.495186383 +0000 UTC m=+20.079789592" lastFinishedPulling="2025-10-28 13:21:35.17459324 +0000 UTC m=+40.759196449" observedRunningTime="2025-10-28 13:21:35.89514283 +0000 UTC m=+41.479746049" watchObservedRunningTime="2025-10-28 13:21:35.903940859 +0000 UTC m=+41.488544058" Oct 28 13:21:35.952175 systemd[1]: Started sshd@8-10.0.0.148:22-10.0.0.1:45034.service - OpenSSH per-connection server daemon (10.0.0.1:45034). Oct 28 13:21:35.962362 systemd[1]: Created slice kubepods-besteffort-pod086c3970_45c7_4eae_a705_114504249cb8.slice - libcontainer container kubepods-besteffort-pod086c3970_45c7_4eae_a705_114504249cb8.slice. 
Oct 28 13:21:36.024696 kubelet[2746]: I1028 13:21:36.024322 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64dbm\" (UniqueName: \"kubernetes.io/projected/086c3970-45c7-4eae-a705-114504249cb8-kube-api-access-64dbm\") pod \"whisker-7d487b5797-djvbp\" (UID: \"086c3970-45c7-4eae-a705-114504249cb8\") " pod="calico-system/whisker-7d487b5797-djvbp" Oct 28 13:21:36.024696 kubelet[2746]: I1028 13:21:36.024380 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/086c3970-45c7-4eae-a705-114504249cb8-whisker-backend-key-pair\") pod \"whisker-7d487b5797-djvbp\" (UID: \"086c3970-45c7-4eae-a705-114504249cb8\") " pod="calico-system/whisker-7d487b5797-djvbp" Oct 28 13:21:36.024696 kubelet[2746]: I1028 13:21:36.024405 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/086c3970-45c7-4eae-a705-114504249cb8-whisker-ca-bundle\") pod \"whisker-7d487b5797-djvbp\" (UID: \"086c3970-45c7-4eae-a705-114504249cb8\") " pod="calico-system/whisker-7d487b5797-djvbp" Oct 28 13:21:36.025185 sshd[3918]: Accepted publickey for core from 10.0.0.1 port 45034 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:21:36.026751 sshd-session[3918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:21:36.033251 systemd-logind[1580]: New session 9 of user core. Oct 28 13:21:36.044194 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 28 13:21:36.180079 sshd[3947]: Connection closed by 10.0.0.1 port 45034 Oct 28 13:21:36.180411 sshd-session[3918]: pam_unix(sshd:session): session closed for user core Oct 28 13:21:36.184574 systemd[1]: sshd@8-10.0.0.148:22-10.0.0.1:45034.service: Deactivated successfully. Oct 28 13:21:36.186851 systemd[1]: session-9.scope: Deactivated successfully. Oct 28 13:21:36.188914 systemd-logind[1580]: Session 9 logged out. Waiting for processes to exit. Oct 28 13:21:36.190543 systemd-logind[1580]: Removed session 9. 
Oct 28 13:21:36.268137 containerd[1596]: time="2025-10-28T13:21:36.268079666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d487b5797-djvbp,Uid:086c3970-45c7-4eae-a705-114504249cb8,Namespace:calico-system,Attempt:0,}" Oct 28 13:21:36.411557 systemd-networkd[1501]: calic4055062cac: Link UP Oct 28 13:21:36.411774 systemd-networkd[1501]: calic4055062cac: Gained carrier Oct 28 13:21:36.426241 containerd[1596]: 2025-10-28 13:21:36.291 [INFO][3962] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 13:21:36.426241 containerd[1596]: 2025-10-28 13:21:36.308 [INFO][3962] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7d487b5797--djvbp-eth0 whisker-7d487b5797- calico-system 086c3970-45c7-4eae-a705-114504249cb8 970 0 2025-10-28 13:21:35 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7d487b5797 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7d487b5797-djvbp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic4055062cac [] [] }} ContainerID="ad159e26e41578e0f67f745d4cd61218c8c33eb5458001174965eb3c0eb216b7" Namespace="calico-system" Pod="whisker-7d487b5797-djvbp" WorkloadEndpoint="localhost-k8s-whisker--7d487b5797--djvbp-" Oct 28 13:21:36.426241 containerd[1596]: 2025-10-28 13:21:36.308 [INFO][3962] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad159e26e41578e0f67f745d4cd61218c8c33eb5458001174965eb3c0eb216b7" Namespace="calico-system" Pod="whisker-7d487b5797-djvbp" WorkloadEndpoint="localhost-k8s-whisker--7d487b5797--djvbp-eth0" Oct 28 13:21:36.426241 containerd[1596]: 2025-10-28 13:21:36.366 [INFO][3977] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad159e26e41578e0f67f745d4cd61218c8c33eb5458001174965eb3c0eb216b7" HandleID="k8s-pod-network.ad159e26e41578e0f67f745d4cd61218c8c33eb5458001174965eb3c0eb216b7" Workload="localhost-k8s-whisker--7d487b5797--djvbp-eth0" Oct 28 13:21:36.426537 containerd[1596]: 2025-10-28 13:21:36.367 [INFO][3977] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ad159e26e41578e0f67f745d4cd61218c8c33eb5458001174965eb3c0eb216b7" HandleID="k8s-pod-network.ad159e26e41578e0f67f745d4cd61218c8c33eb5458001174965eb3c0eb216b7" Workload="localhost-k8s-whisker--7d487b5797--djvbp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00052b1b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7d487b5797-djvbp", "timestamp":"2025-10-28 13:21:36.366607334 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 13:21:36.426537 containerd[1596]: 2025-10-28 13:21:36.367 [INFO][3977] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 13:21:36.426537 containerd[1596]: 2025-10-28 13:21:36.367 [INFO][3977] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 13:21:36.426537 containerd[1596]: 2025-10-28 13:21:36.367 [INFO][3977] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 13:21:36.426537 containerd[1596]: 2025-10-28 13:21:36.375 [INFO][3977] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ad159e26e41578e0f67f745d4cd61218c8c33eb5458001174965eb3c0eb216b7" host="localhost" Oct 28 13:21:36.426537 containerd[1596]: 2025-10-28 13:21:36.382 [INFO][3977] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 13:21:36.426537 containerd[1596]: 2025-10-28 13:21:36.385 [INFO][3977] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 13:21:36.426537 containerd[1596]: 2025-10-28 13:21:36.387 [INFO][3977] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 13:21:36.426537 containerd[1596]: 2025-10-28 13:21:36.389 [INFO][3977] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 13:21:36.426537 containerd[1596]: 2025-10-28 13:21:36.389 [INFO][3977] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ad159e26e41578e0f67f745d4cd61218c8c33eb5458001174965eb3c0eb216b7" host="localhost" Oct 28 13:21:36.426778 containerd[1596]: 2025-10-28 13:21:36.390 [INFO][3977] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ad159e26e41578e0f67f745d4cd61218c8c33eb5458001174965eb3c0eb216b7 Oct 28 13:21:36.426778 containerd[1596]: 2025-10-28 13:21:36.393 [INFO][3977] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ad159e26e41578e0f67f745d4cd61218c8c33eb5458001174965eb3c0eb216b7" host="localhost" Oct 28 13:21:36.426778 containerd[1596]: 2025-10-28 13:21:36.398 [INFO][3977] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ad159e26e41578e0f67f745d4cd61218c8c33eb5458001174965eb3c0eb216b7" host="localhost" Oct 28 13:21:36.426778 containerd[1596]: 2025-10-28 13:21:36.398 [INFO][3977] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ad159e26e41578e0f67f745d4cd61218c8c33eb5458001174965eb3c0eb216b7" host="localhost" Oct 28 13:21:36.426778 containerd[1596]: 2025-10-28 13:21:36.398 [INFO][3977] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 28 13:21:36.426778 containerd[1596]: 2025-10-28 13:21:36.398 [INFO][3977] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ad159e26e41578e0f67f745d4cd61218c8c33eb5458001174965eb3c0eb216b7" HandleID="k8s-pod-network.ad159e26e41578e0f67f745d4cd61218c8c33eb5458001174965eb3c0eb216b7" Workload="localhost-k8s-whisker--7d487b5797--djvbp-eth0" Oct 28 13:21:36.426920 containerd[1596]: 2025-10-28 13:21:36.403 [INFO][3962] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad159e26e41578e0f67f745d4cd61218c8c33eb5458001174965eb3c0eb216b7" Namespace="calico-system" Pod="whisker-7d487b5797-djvbp" WorkloadEndpoint="localhost-k8s-whisker--7d487b5797--djvbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7d487b5797--djvbp-eth0", GenerateName:"whisker-7d487b5797-", Namespace:"calico-system", SelfLink:"", UID:"086c3970-45c7-4eae-a705-114504249cb8", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d487b5797", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7d487b5797-djvbp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic4055062cac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:21:36.426920 containerd[1596]: 2025-10-28 13:21:36.403 [INFO][3962] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ad159e26e41578e0f67f745d4cd61218c8c33eb5458001174965eb3c0eb216b7" Namespace="calico-system" Pod="whisker-7d487b5797-djvbp" WorkloadEndpoint="localhost-k8s-whisker--7d487b5797--djvbp-eth0" Oct 28 13:21:36.426998 containerd[1596]: 2025-10-28 13:21:36.403 [INFO][3962] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4055062cac ContainerID="ad159e26e41578e0f67f745d4cd61218c8c33eb5458001174965eb3c0eb216b7" Namespace="calico-system" Pod="whisker-7d487b5797-djvbp" WorkloadEndpoint="localhost-k8s-whisker--7d487b5797--djvbp-eth0" Oct 28 13:21:36.426998 containerd[1596]: 2025-10-28 13:21:36.411 [INFO][3962] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad159e26e41578e0f67f745d4cd61218c8c33eb5458001174965eb3c0eb216b7" Namespace="calico-system" Pod="whisker-7d487b5797-djvbp" WorkloadEndpoint="localhost-k8s-whisker--7d487b5797--djvbp-eth0" Oct 28 13:21:36.427075 containerd[1596]: 2025-10-28 13:21:36.411 [INFO][3962] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ad159e26e41578e0f67f745d4cd61218c8c33eb5458001174965eb3c0eb216b7" Namespace="calico-system" Pod="whisker-7d487b5797-djvbp" WorkloadEndpoint="localhost-k8s-whisker--7d487b5797--djvbp-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7d487b5797--djvbp-eth0", GenerateName:"whisker-7d487b5797-", Namespace:"calico-system", SelfLink:"", UID:"086c3970-45c7-4eae-a705-114504249cb8", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d487b5797", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad159e26e41578e0f67f745d4cd61218c8c33eb5458001174965eb3c0eb216b7", Pod:"whisker-7d487b5797-djvbp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic4055062cac", MAC:"ea:2a:77:3f:c5:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:21:36.427128 containerd[1596]: 2025-10-28 13:21:36.422 [INFO][3962] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad159e26e41578e0f67f745d4cd61218c8c33eb5458001174965eb3c0eb216b7" Namespace="calico-system" Pod="whisker-7d487b5797-djvbp" WorkloadEndpoint="localhost-k8s-whisker--7d487b5797--djvbp-eth0" Oct 28 13:21:36.490292 containerd[1596]: time="2025-10-28T13:21:36.490231258Z" level=info msg="connecting to shim ad159e26e41578e0f67f745d4cd61218c8c33eb5458001174965eb3c0eb216b7" address="unix:///run/containerd/s/a0603c5d7fa2db6cbb9788630712acba9791cd48caadfe4726a56081d3b50472" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:21:36.519142 systemd[1]: Started cri-containerd-ad159e26e41578e0f67f745d4cd61218c8c33eb5458001174965eb3c0eb216b7.scope - libcontainer container ad159e26e41578e0f67f745d4cd61218c8c33eb5458001174965eb3c0eb216b7. 
Oct 28 13:21:36.532552 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 13:21:36.562226 containerd[1596]: time="2025-10-28T13:21:36.562173853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d487b5797-djvbp,Uid:086c3970-45c7-4eae-a705-114504249cb8,Namespace:calico-system,Attempt:0,} returns sandbox id \"ad159e26e41578e0f67f745d4cd61218c8c33eb5458001174965eb3c0eb216b7\"" Oct 28 13:21:36.564052 containerd[1596]: time="2025-10-28T13:21:36.563988021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 28 13:21:36.763922 kubelet[2746]: I1028 13:21:36.763766 2746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5c99d49-9556-4a61-843e-22095b38496c" path="/var/lib/kubelet/pods/c5c99d49-9556-4a61-843e-22095b38496c/volumes" Oct 28 13:21:36.881407 kubelet[2746]: E1028 13:21:36.881367 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:36.926176 containerd[1596]: time="2025-10-28T13:21:36.926129553Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:21:36.929805 containerd[1596]: time="2025-10-28T13:21:36.929756617Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 28 13:21:36.929905 containerd[1596]: time="2025-10-28T13:21:36.929880520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Oct 28 13:21:36.930822 kubelet[2746]: E1028 13:21:36.930154 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 13:21:36.930822 kubelet[2746]: E1028 13:21:36.930203 2746 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 13:21:36.930822 kubelet[2746]: E1028 13:21:36.930282 2746 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7d487b5797-djvbp_calico-system(086c3970-45c7-4eae-a705-114504249cb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 28 13:21:36.935345 containerd[1596]: time="2025-10-28T13:21:36.935300392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 28 13:21:37.415916 containerd[1596]: time="2025-10-28T13:21:37.415859954Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:21:37.416996 containerd[1596]: time="2025-10-28T13:21:37.416930505Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 28 13:21:37.417185 containerd[1596]: time="2025-10-28T13:21:37.416988394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Oct 28 13:21:37.417215 kubelet[2746]: E1028 13:21:37.417175 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 13:21:37.417269 kubelet[2746]: E1028 13:21:37.417217 2746 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 13:21:37.417329 kubelet[2746]: E1028 13:21:37.417304 2746 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7d487b5797-djvbp_calico-system(086c3970-45c7-4eae-a705-114504249cb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 28 13:21:37.417453 kubelet[2746]: E1028 13:21:37.417415 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d487b5797-djvbp" podUID="086c3970-45c7-4eae-a705-114504249cb8" Oct 28 13:21:37.827114 kubelet[2746]: E1028 13:21:37.826949 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:37.827749 containerd[1596]: time="2025-10-28T13:21:37.827575785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lkswt,Uid:145c1556-f100-4aa7-99f2-63762d4d1688,Namespace:kube-system,Attempt:0,}" Oct 28 13:21:37.888401 kubelet[2746]: E1028 13:21:37.888275 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d487b5797-djvbp" podUID="086c3970-45c7-4eae-a705-114504249cb8" Oct 28 13:21:37.980857 systemd-networkd[1501]: calid7360146036: Link UP Oct 28 13:21:37.981104 systemd-networkd[1501]: calid7360146036: Gained carrier Oct 28 13:21:37.996621 containerd[1596]: 2025-10-28 13:21:37.863 [INFO][4172] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 13:21:37.996621 containerd[1596]: 2025-10-28 13:21:37.876 [INFO][4172] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--lkswt-eth0 coredns-66bc5c9577- kube-system 145c1556-f100-4aa7-99f2-63762d4d1688 841 0 2025-10-28 13:21:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-lkswt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid7360146036 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff" Namespace="kube-system" Pod="coredns-66bc5c9577-lkswt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lkswt-" Oct 28 13:21:37.996621 containerd[1596]: 2025-10-28 13:21:37.876 [INFO][4172] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff" Namespace="kube-system" Pod="coredns-66bc5c9577-lkswt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lkswt-eth0" Oct 28 13:21:37.996621 containerd[1596]: 2025-10-28 13:21:37.927 [INFO][4195] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff" HandleID="k8s-pod-network.a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff" Workload="localhost-k8s-coredns--66bc5c9577--lkswt-eth0" Oct 28 13:21:37.996900 containerd[1596]: 2025-10-28 13:21:37.928 [INFO][4195] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff" HandleID="k8s-pod-network.a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff" Workload="localhost-k8s-coredns--66bc5c9577--lkswt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000527400), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-lkswt", "timestamp":"2025-10-28 13:21:37.927902362 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 13:21:37.996900 containerd[1596]: 2025-10-28 13:21:37.928 [INFO][4195] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 13:21:37.996900 containerd[1596]: 2025-10-28 13:21:37.928 [INFO][4195] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 13:21:37.996900 containerd[1596]: 2025-10-28 13:21:37.928 [INFO][4195] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 13:21:37.996900 containerd[1596]: 2025-10-28 13:21:37.934 [INFO][4195] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff" host="localhost" Oct 28 13:21:37.996900 containerd[1596]: 2025-10-28 13:21:37.957 [INFO][4195] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 13:21:37.996900 containerd[1596]: 2025-10-28 13:21:37.962 [INFO][4195] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 13:21:37.996900 containerd[1596]: 2025-10-28 13:21:37.964 [INFO][4195] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 13:21:37.996900 containerd[1596]: 2025-10-28 13:21:37.966 [INFO][4195] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 13:21:37.996900 containerd[1596]: 2025-10-28 13:21:37.966 [INFO][4195] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff" host="localhost" Oct 28 13:21:37.997172 containerd[1596]: 2025-10-28 13:21:37.967 [INFO][4195] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff Oct 28 13:21:37.997172 containerd[1596]: 2025-10-28 13:21:37.970 [INFO][4195] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff" host="localhost" Oct 28 13:21:37.997172 containerd[1596]: 2025-10-28 13:21:37.975 [INFO][4195] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff" host="localhost" Oct 28 13:21:37.997172 containerd[1596]: 2025-10-28 13:21:37.975 [INFO][4195] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff" host="localhost" Oct 28 13:21:37.997172 containerd[1596]: 2025-10-28 13:21:37.975 [INFO][4195] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
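[Editor's note] Each sandbox in this log gets its address from the same affine block, 192.168.88.128/26, and this run claims 192.168.88.130 for coredns-66bc5c9577-lkswt. A /26 spans 64 addresses (192.168.88.128 through .191), so a single block covers every pod scheduled on this node. A small standard-library check of the containment and block size (illustrative only, not Calico's IPAM code):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // the affine block from the log
	ip := netip.MustParseAddr("192.168.88.130")         // the address claimed above

	fmt.Println("block contains IP:", block.Contains(ip)) // true

	// Size of the block: 2^(32-26) = 64 addresses.
	size := 1 << (32 - block.Bits())
	fmt.Printf("block %s holds %d addresses, starting at %s\n",
		block, size, block.Masked().Addr())
}
```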
Oct 28 13:21:37.997172 containerd[1596]: 2025-10-28 13:21:37.975 [INFO][4195] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff" HandleID="k8s-pod-network.a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff" Workload="localhost-k8s-coredns--66bc5c9577--lkswt-eth0" Oct 28 13:21:37.997300 containerd[1596]: 2025-10-28 13:21:37.978 [INFO][4172] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff" Namespace="kube-system" Pod="coredns-66bc5c9577-lkswt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lkswt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--lkswt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"145c1556-f100-4aa7-99f2-63762d4d1688", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-lkswt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7360146036", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:21:37.997300 containerd[1596]: 2025-10-28 13:21:37.978 [INFO][4172] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff" Namespace="kube-system" Pod="coredns-66bc5c9577-lkswt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lkswt-eth0" Oct 28 13:21:37.997300 containerd[1596]: 2025-10-28 13:21:37.978 [INFO][4172] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7360146036 ContainerID="a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff" Namespace="kube-system" Pod="coredns-66bc5c9577-lkswt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lkswt-eth0" Oct 28 13:21:37.997300 containerd[1596]: 2025-10-28 13:21:37.981 
[INFO][4172] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff" Namespace="kube-system" Pod="coredns-66bc5c9577-lkswt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lkswt-eth0" Oct 28 13:21:37.997300 containerd[1596]: 2025-10-28 13:21:37.981 [INFO][4172] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff" Namespace="kube-system" Pod="coredns-66bc5c9577-lkswt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lkswt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--lkswt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"145c1556-f100-4aa7-99f2-63762d4d1688", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff", Pod:"coredns-66bc5c9577-lkswt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7360146036", MAC:"e6:8f:0e:de:b8:9d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:21:37.997300 containerd[1596]: 2025-10-28 13:21:37.992 [INFO][4172] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff" Namespace="kube-system" Pod="coredns-66bc5c9577-lkswt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lkswt-eth0" Oct 28 13:21:38.039217 systemd-networkd[1501]: calic4055062cac: Gained IPv6LL Oct 28 13:21:38.053866 containerd[1596]: time="2025-10-28T13:21:38.053795794Z" level=info msg="connecting to shim a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff" address="unix:///run/containerd/s/d3b7ad6f19a5e3acb5b8dd4ea9351d503c46adf315cec340149ea77ba0f642b7" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:21:38.086128 systemd[1]: 
Started cri-containerd-a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff.scope - libcontainer container a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff. Oct 28 13:21:38.099650 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 13:21:38.132080 containerd[1596]: time="2025-10-28T13:21:38.131952771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lkswt,Uid:145c1556-f100-4aa7-99f2-63762d4d1688,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff\"" Oct 28 13:21:38.133181 kubelet[2746]: E1028 13:21:38.132881 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:38.137382 containerd[1596]: time="2025-10-28T13:21:38.137349147Z" level=info msg="CreateContainer within sandbox \"a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 28 13:21:38.154914 containerd[1596]: time="2025-10-28T13:21:38.153682185Z" level=info msg="Container b3ccae30c6699afedc7e9da7e73cc364df53790a3f7627d81f8b4d8be0eefb16: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:21:38.161382 containerd[1596]: time="2025-10-28T13:21:38.161333245Z" level=info msg="CreateContainer within sandbox \"a8f47ab5e9429e422d3da2aa677ecf899e51aee22f6e2e9e2456876289c19dff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b3ccae30c6699afedc7e9da7e73cc364df53790a3f7627d81f8b4d8be0eefb16\"" Oct 28 13:21:38.162901 containerd[1596]: time="2025-10-28T13:21:38.161873059Z" level=info msg="StartContainer for \"b3ccae30c6699afedc7e9da7e73cc364df53790a3f7627d81f8b4d8be0eefb16\"" Oct 28 13:21:38.162901 containerd[1596]: time="2025-10-28T13:21:38.162668934Z" level=info msg="connecting to shim b3ccae30c6699afedc7e9da7e73cc364df53790a3f7627d81f8b4d8be0eefb16" address="unix:///run/containerd/s/d3b7ad6f19a5e3acb5b8dd4ea9351d503c46adf315cec340149ea77ba0f642b7" protocol=ttrpc version=3 Oct 28 13:21:38.185143 systemd[1]: Started cri-containerd-b3ccae30c6699afedc7e9da7e73cc364df53790a3f7627d81f8b4d8be0eefb16.scope - libcontainer container b3ccae30c6699afedc7e9da7e73cc364df53790a3f7627d81f8b4d8be0eefb16. 
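[Editor's note] The recurring kubelet dns.go errors ("Nameserver limits exceeded") mean the node's resolv.conf lists more nameservers than kubelet will propagate into a pod: the resolver and kubelet only honour three, so kubelet keeps the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) and drops the rest. A hedged sketch of that truncation, assuming a plain list of nameservers; kubelet's real logic lives in its DNS configurer:

```go
package main

import (
	"fmt"
	"strings"
)

// capNameservers keeps at most limit nameservers. Exceeding the limit is what
// produces the "Nameserver limits exceeded ... some nameservers have been
// omitted" message above; the applied line ends up with exactly three entries.
func capNameservers(all []string, limit int) (applied, omitted []string) {
	if len(all) <= limit {
		return all, nil
	}
	return all[:limit], all[limit:]
}

func main() {
	// Hypothetical resolv.conf content; the log only shows the three survivors.
	resolvConf := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}

	applied, omitted := capNameservers(resolvConf, 3)
	fmt.Println("applied nameserver line:", strings.Join(applied, " "))
	fmt.Println("omitted:", omitted)
}
```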
Oct 28 13:21:38.217641 containerd[1596]: time="2025-10-28T13:21:38.217593947Z" level=info msg="StartContainer for \"b3ccae30c6699afedc7e9da7e73cc364df53790a3f7627d81f8b4d8be0eefb16\" returns successfully" Oct 28 13:21:38.763302 containerd[1596]: time="2025-10-28T13:21:38.763240812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786d85db64-xgd4x,Uid:8bf88aab-609b-4cc2-80ec-8ab913048df5,Namespace:calico-apiserver,Attempt:0,}" Oct 28 13:21:38.861170 systemd-networkd[1501]: calia42c8b4585f: Link UP Oct 28 13:21:38.864181 systemd-networkd[1501]: calia42c8b4585f: Gained carrier Oct 28 13:21:38.877962 containerd[1596]: 2025-10-28 13:21:38.789 [INFO][4297] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 13:21:38.877962 containerd[1596]: 2025-10-28 13:21:38.799 [INFO][4297] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--786d85db64--xgd4x-eth0 calico-apiserver-786d85db64- calico-apiserver 8bf88aab-609b-4cc2-80ec-8ab913048df5 836 0 2025-10-28 13:21:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:786d85db64 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-786d85db64-xgd4x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia42c8b4585f [] [] }} ContainerID="08c1cd596110f6a3a52ab8ad68589c673d7984f50d99280eb16a8e5860169203" Namespace="calico-apiserver" Pod="calico-apiserver-786d85db64-xgd4x" WorkloadEndpoint="localhost-k8s-calico--apiserver--786d85db64--xgd4x-" Oct 28 13:21:38.877962 containerd[1596]: 2025-10-28 13:21:38.800 [INFO][4297] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="08c1cd596110f6a3a52ab8ad68589c673d7984f50d99280eb16a8e5860169203" Namespace="calico-apiserver" Pod="calico-apiserver-786d85db64-xgd4x" WorkloadEndpoint="localhost-k8s-calico--apiserver--786d85db64--xgd4x-eth0" Oct 28 13:21:38.877962 containerd[1596]: 2025-10-28 13:21:38.828 [INFO][4312] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08c1cd596110f6a3a52ab8ad68589c673d7984f50d99280eb16a8e5860169203" HandleID="k8s-pod-network.08c1cd596110f6a3a52ab8ad68589c673d7984f50d99280eb16a8e5860169203" Workload="localhost-k8s-calico--apiserver--786d85db64--xgd4x-eth0" Oct 28 13:21:38.877962 containerd[1596]: 2025-10-28 13:21:38.828 [INFO][4312] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="08c1cd596110f6a3a52ab8ad68589c673d7984f50d99280eb16a8e5860169203" HandleID="k8s-pod-network.08c1cd596110f6a3a52ab8ad68589c673d7984f50d99280eb16a8e5860169203" Workload="localhost-k8s-calico--apiserver--786d85db64--xgd4x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bb350), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-786d85db64-xgd4x", "timestamp":"2025-10-28 13:21:38.828281224 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 13:21:38.877962 containerd[1596]: 2025-10-28 13:21:38.828 [INFO][4312] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 28 13:21:38.877962 containerd[1596]: 2025-10-28 13:21:38.828 [INFO][4312] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 28 13:21:38.877962 containerd[1596]: 2025-10-28 13:21:38.828 [INFO][4312] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 13:21:38.877962 containerd[1596]: 2025-10-28 13:21:38.835 [INFO][4312] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.08c1cd596110f6a3a52ab8ad68589c673d7984f50d99280eb16a8e5860169203" host="localhost" Oct 28 13:21:38.877962 containerd[1596]: 2025-10-28 13:21:38.839 [INFO][4312] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 13:21:38.877962 containerd[1596]: 2025-10-28 13:21:38.843 [INFO][4312] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 13:21:38.877962 containerd[1596]: 2025-10-28 13:21:38.845 [INFO][4312] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 13:21:38.877962 containerd[1596]: 2025-10-28 13:21:38.846 [INFO][4312] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 13:21:38.877962 containerd[1596]: 2025-10-28 13:21:38.846 [INFO][4312] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.08c1cd596110f6a3a52ab8ad68589c673d7984f50d99280eb16a8e5860169203" host="localhost" Oct 28 13:21:38.877962 containerd[1596]: 2025-10-28 13:21:38.847 [INFO][4312] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.08c1cd596110f6a3a52ab8ad68589c673d7984f50d99280eb16a8e5860169203 Oct 28 13:21:38.877962 containerd[1596]: 2025-10-28 13:21:38.851 [INFO][4312] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.08c1cd596110f6a3a52ab8ad68589c673d7984f50d99280eb16a8e5860169203" host="localhost" Oct 28 13:21:38.877962 containerd[1596]: 2025-10-28 13:21:38.855 [INFO][4312] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.08c1cd596110f6a3a52ab8ad68589c673d7984f50d99280eb16a8e5860169203" host="localhost" Oct 28 13:21:38.877962 containerd[1596]: 2025-10-28 13:21:38.855 [INFO][4312] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.08c1cd596110f6a3a52ab8ad68589c673d7984f50d99280eb16a8e5860169203" host="localhost" Oct 28 13:21:38.877962 containerd[1596]: 2025-10-28 13:21:38.855 [INFO][4312] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
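[Editor's note] The coredns WorkloadEndpoint dumps earlier print ports as Go hex literals: Port:0x35 for the dns entries and Port:0x23c1, 0x1f90, 0x1ff5 for metrics, liveness-probe and readiness-probe. Decoded, these are the usual CoreDNS ports. A quick conversion:

```go
package main

import "fmt"

func main() {
	// Hex values copied from the WorkloadEndpointPort dumps above,
	// paired with the port names Calico logged next to them.
	ports := []struct {
		name string
		hex  uint16
	}{
		{"dns", 0x35},
		{"dns-tcp", 0x35},
		{"metrics", 0x23c1},
		{"liveness-probe", 0x1f90},
		{"readiness-probe", 0x1ff5},
	}
	for _, p := range ports {
		fmt.Printf("%-16s 0x%04x = %d\n", p.name, p.hex, p.hex)
	}
	// Prints 53, 53, 9153, 8080 and 8181 respectively.
}
```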
Oct 28 13:21:38.877962 containerd[1596]: 2025-10-28 13:21:38.855 [INFO][4312] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="08c1cd596110f6a3a52ab8ad68589c673d7984f50d99280eb16a8e5860169203" HandleID="k8s-pod-network.08c1cd596110f6a3a52ab8ad68589c673d7984f50d99280eb16a8e5860169203" Workload="localhost-k8s-calico--apiserver--786d85db64--xgd4x-eth0" Oct 28 13:21:38.878699 containerd[1596]: 2025-10-28 13:21:38.859 [INFO][4297] cni-plugin/k8s.go 418: Populated endpoint ContainerID="08c1cd596110f6a3a52ab8ad68589c673d7984f50d99280eb16a8e5860169203" Namespace="calico-apiserver" Pod="calico-apiserver-786d85db64-xgd4x" WorkloadEndpoint="localhost-k8s-calico--apiserver--786d85db64--xgd4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--786d85db64--xgd4x-eth0", GenerateName:"calico-apiserver-786d85db64-", Namespace:"calico-apiserver", SelfLink:"", UID:"8bf88aab-609b-4cc2-80ec-8ab913048df5", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"786d85db64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-786d85db64-xgd4x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia42c8b4585f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:21:38.878699 containerd[1596]: 2025-10-28 13:21:38.859 [INFO][4297] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="08c1cd596110f6a3a52ab8ad68589c673d7984f50d99280eb16a8e5860169203" Namespace="calico-apiserver" Pod="calico-apiserver-786d85db64-xgd4x" WorkloadEndpoint="localhost-k8s-calico--apiserver--786d85db64--xgd4x-eth0" Oct 28 13:21:38.878699 containerd[1596]: 2025-10-28 13:21:38.859 [INFO][4297] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia42c8b4585f ContainerID="08c1cd596110f6a3a52ab8ad68589c673d7984f50d99280eb16a8e5860169203" Namespace="calico-apiserver" Pod="calico-apiserver-786d85db64-xgd4x" WorkloadEndpoint="localhost-k8s-calico--apiserver--786d85db64--xgd4x-eth0" Oct 28 13:21:38.878699 containerd[1596]: 2025-10-28 13:21:38.862 [INFO][4297] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08c1cd596110f6a3a52ab8ad68589c673d7984f50d99280eb16a8e5860169203" Namespace="calico-apiserver" Pod="calico-apiserver-786d85db64-xgd4x" WorkloadEndpoint="localhost-k8s-calico--apiserver--786d85db64--xgd4x-eth0" Oct 28 13:21:38.878699 containerd[1596]: 2025-10-28 13:21:38.864 [INFO][4297] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="08c1cd596110f6a3a52ab8ad68589c673d7984f50d99280eb16a8e5860169203" Namespace="calico-apiserver" Pod="calico-apiserver-786d85db64-xgd4x" WorkloadEndpoint="localhost-k8s-calico--apiserver--786d85db64--xgd4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--786d85db64--xgd4x-eth0", GenerateName:"calico-apiserver-786d85db64-", Namespace:"calico-apiserver", SelfLink:"", UID:"8bf88aab-609b-4cc2-80ec-8ab913048df5", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"786d85db64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"08c1cd596110f6a3a52ab8ad68589c673d7984f50d99280eb16a8e5860169203", Pod:"calico-apiserver-786d85db64-xgd4x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia42c8b4585f", MAC:"9a:e5:02:03:1c:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:21:38.878699 containerd[1596]: 2025-10-28 13:21:38.873 [INFO][4297] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="08c1cd596110f6a3a52ab8ad68589c673d7984f50d99280eb16a8e5860169203" Namespace="calico-apiserver" Pod="calico-apiserver-786d85db64-xgd4x" WorkloadEndpoint="localhost-k8s-calico--apiserver--786d85db64--xgd4x-eth0" Oct 28 13:21:38.888483 kubelet[2746]: E1028 13:21:38.888389 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:38.999134 systemd-networkd[1501]: calid7360146036: Gained IPv6LL Oct 28 13:21:39.060308 kubelet[2746]: I1028 13:21:39.059521 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lkswt" podStartSLOduration=38.059502062 podStartE2EDuration="38.059502062s" podCreationTimestamp="2025-10-28 13:21:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 13:21:39.058277692 +0000 UTC m=+44.642880901" watchObservedRunningTime="2025-10-28 13:21:39.059502062 +0000 UTC m=+44.644105271" Oct 28 13:21:39.084616 containerd[1596]: time="2025-10-28T13:21:39.084518945Z" level=info msg="connecting to shim 08c1cd596110f6a3a52ab8ad68589c673d7984f50d99280eb16a8e5860169203" address="unix:///run/containerd/s/3944fc2acd7807dc913958fd669e81cde705f5045f3a51f7db1528ae5cfa40f7" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:21:39.117221 systemd[1]: Started cri-containerd-08c1cd596110f6a3a52ab8ad68589c673d7984f50d99280eb16a8e5860169203.scope - libcontainer container 
08c1cd596110f6a3a52ab8ad68589c673d7984f50d99280eb16a8e5860169203. Oct 28 13:21:39.129281 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 13:21:39.161251 containerd[1596]: time="2025-10-28T13:21:39.161206138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786d85db64-xgd4x,Uid:8bf88aab-609b-4cc2-80ec-8ab913048df5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"08c1cd596110f6a3a52ab8ad68589c673d7984f50d99280eb16a8e5860169203\"" Oct 28 13:21:39.162690 containerd[1596]: time="2025-10-28T13:21:39.162662403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 13:21:39.574715 containerd[1596]: time="2025-10-28T13:21:39.574648558Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:21:39.575977 containerd[1596]: time="2025-10-28T13:21:39.575940073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 13:21:39.576104 containerd[1596]: time="2025-10-28T13:21:39.576032897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Oct 28 13:21:39.576258 kubelet[2746]: E1028 13:21:39.576195 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 13:21:39.576258 kubelet[2746]: E1028 13:21:39.576244 2746 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 13:21:39.576361 kubelet[2746]: E1028 13:21:39.576336 2746 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-786d85db64-xgd4x_calico-apiserver(8bf88aab-609b-4cc2-80ec-8ab913048df5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 13:21:39.576392 kubelet[2746]: E1028 13:21:39.576373 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786d85db64-xgd4x" podUID="8bf88aab-609b-4cc2-80ec-8ab913048df5" Oct 28 13:21:39.763647 containerd[1596]: time="2025-10-28T13:21:39.763596052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6nhgp,Uid:fce65538-7c69-4e45-8cae-c289c79a1bdb,Namespace:calico-system,Attempt:0,}" Oct 28 13:21:39.765496 containerd[1596]: time="2025-10-28T13:21:39.765436318Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5879d58c5c-4p8l2,Uid:f136d0fd-8c7a-4899-8498-1ed4f8ad5125,Namespace:calico-system,Attempt:0,}" Oct 28 13:21:39.895643 kubelet[2746]: E1028 13:21:39.895597 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:39.896412 kubelet[2746]: E1028 13:21:39.896375 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786d85db64-xgd4x" podUID="8bf88aab-609b-4cc2-80ec-8ab913048df5" Oct 28 13:21:39.908494 systemd-networkd[1501]: cali90330730eb5: Link UP Oct 28 13:21:39.909183 systemd-networkd[1501]: cali90330730eb5: Gained carrier Oct 28 13:21:40.057654 containerd[1596]: 2025-10-28 13:21:39.797 [INFO][4400] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 13:21:40.057654 containerd[1596]: 2025-10-28 13:21:39.809 [INFO][4400] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6nhgp-eth0 csi-node-driver- calico-system fce65538-7c69-4e45-8cae-c289c79a1bdb 719 0 2025-10-28 13:21:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6nhgp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali90330730eb5 [] [] }} ContainerID="b8f30fd12d67daebae586590c7a263a2a902b8316aa8a5a18a55a85b3abbfad4" Namespace="calico-system" Pod="csi-node-driver-6nhgp" WorkloadEndpoint="localhost-k8s-csi--node--driver--6nhgp-" Oct 28 13:21:40.057654 containerd[1596]: 2025-10-28 13:21:39.809 [INFO][4400] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b8f30fd12d67daebae586590c7a263a2a902b8316aa8a5a18a55a85b3abbfad4" Namespace="calico-system" Pod="csi-node-driver-6nhgp" WorkloadEndpoint="localhost-k8s-csi--node--driver--6nhgp-eth0" Oct 28 13:21:40.057654 containerd[1596]: 2025-10-28 13:21:39.839 [INFO][4431] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b8f30fd12d67daebae586590c7a263a2a902b8316aa8a5a18a55a85b3abbfad4" HandleID="k8s-pod-network.b8f30fd12d67daebae586590c7a263a2a902b8316aa8a5a18a55a85b3abbfad4" Workload="localhost-k8s-csi--node--driver--6nhgp-eth0" Oct 28 13:21:40.057654 containerd[1596]: 2025-10-28 13:21:39.839 [INFO][4431] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b8f30fd12d67daebae586590c7a263a2a902b8316aa8a5a18a55a85b3abbfad4" HandleID="k8s-pod-network.b8f30fd12d67daebae586590c7a263a2a902b8316aa8a5a18a55a85b3abbfad4" Workload="localhost-k8s-csi--node--driver--6nhgp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004c1800), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6nhgp", "timestamp":"2025-10-28 13:21:39.839069867 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 13:21:40.057654 containerd[1596]: 2025-10-28 13:21:39.839 [INFO][4431] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 13:21:40.057654 containerd[1596]: 2025-10-28 13:21:39.839 [INFO][4431] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 28 13:21:40.057654 containerd[1596]: 2025-10-28 13:21:39.841 [INFO][4431] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 13:21:40.057654 containerd[1596]: 2025-10-28 13:21:39.846 [INFO][4431] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b8f30fd12d67daebae586590c7a263a2a902b8316aa8a5a18a55a85b3abbfad4" host="localhost" Oct 28 13:21:40.057654 containerd[1596]: 2025-10-28 13:21:39.850 [INFO][4431] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 13:21:40.057654 containerd[1596]: 2025-10-28 13:21:39.854 [INFO][4431] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 13:21:40.057654 containerd[1596]: 2025-10-28 13:21:39.855 [INFO][4431] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 13:21:40.057654 containerd[1596]: 2025-10-28 13:21:39.857 [INFO][4431] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 13:21:40.057654 containerd[1596]: 2025-10-28 13:21:39.857 [INFO][4431] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b8f30fd12d67daebae586590c7a263a2a902b8316aa8a5a18a55a85b3abbfad4" host="localhost" Oct 28 13:21:40.057654 containerd[1596]: 2025-10-28 13:21:39.858 [INFO][4431] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b8f30fd12d67daebae586590c7a263a2a902b8316aa8a5a18a55a85b3abbfad4 Oct 28 13:21:40.057654 containerd[1596]: 2025-10-28 13:21:39.892 [INFO][4431] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b8f30fd12d67daebae586590c7a263a2a902b8316aa8a5a18a55a85b3abbfad4" host="localhost" Oct 28 13:21:40.057654 containerd[1596]: 2025-10-28 13:21:39.902 [INFO][4431] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.b8f30fd12d67daebae586590c7a263a2a902b8316aa8a5a18a55a85b3abbfad4" host="localhost" Oct 28 13:21:40.057654 containerd[1596]: 2025-10-28 13:21:39.902 [INFO][4431] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b8f30fd12d67daebae586590c7a263a2a902b8316aa8a5a18a55a85b3abbfad4" host="localhost" Oct 28 13:21:40.057654 containerd[1596]: 2025-10-28 13:21:39.902 [INFO][4431] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
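[Editor's note] The calico/apiserver pull above failed with a plain 404 from ghcr.io ("fetch failed after status: 404 Not Found"), i.e. the v3.30.4 tag is simply absent under ghcr.io/flatcar/calico/apiserver, and containerd surfaces that as NotFound/ErrImagePull. One way to reproduce the check by hand is to probe the OCI distribution manifest endpoint. This is only a sketch: ghcr.io additionally requires a (possibly anonymous) bearer token, which is omitted here, so without one you will see 401 rather than the 404 containerd got through its authenticated client:

```go
package main

import (
	"fmt"
	"net/http"
)

// tagStatus asks a registry's OCI distribution API whether a manifest exists
// for repo:tag. Hypothetical helper for illustration; authentication is omitted.
func tagStatus(host, repo, tag string) (int, error) {
	url := fmt.Sprintf("https://%s/v2/%s/manifests/%s", host, repo, tag)
	req, err := http.NewRequest(http.MethodHead, url, nil)
	if err != nil {
		return 0, err
	}
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	return resp.StatusCode, nil
}

func main() {
	status, err := tagStatus("ghcr.io", "flatcar/calico/apiserver", "v3.30.4")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	// 200 = tag exists, 404 = missing (what containerd hit), 401 = token needed.
	fmt.Println("HTTP status:", status)
}
```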
Oct 28 13:21:40.057654 containerd[1596]: 2025-10-28 13:21:39.902 [INFO][4431] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b8f30fd12d67daebae586590c7a263a2a902b8316aa8a5a18a55a85b3abbfad4" HandleID="k8s-pod-network.b8f30fd12d67daebae586590c7a263a2a902b8316aa8a5a18a55a85b3abbfad4" Workload="localhost-k8s-csi--node--driver--6nhgp-eth0" Oct 28 13:21:40.058434 containerd[1596]: 2025-10-28 13:21:39.906 [INFO][4400] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b8f30fd12d67daebae586590c7a263a2a902b8316aa8a5a18a55a85b3abbfad4" Namespace="calico-system" Pod="csi-node-driver-6nhgp" WorkloadEndpoint="localhost-k8s-csi--node--driver--6nhgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6nhgp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fce65538-7c69-4e45-8cae-c289c79a1bdb", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6nhgp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali90330730eb5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:21:40.058434 containerd[1596]: 2025-10-28 13:21:39.906 [INFO][4400] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b8f30fd12d67daebae586590c7a263a2a902b8316aa8a5a18a55a85b3abbfad4" Namespace="calico-system" Pod="csi-node-driver-6nhgp" WorkloadEndpoint="localhost-k8s-csi--node--driver--6nhgp-eth0" Oct 28 13:21:40.058434 containerd[1596]: 2025-10-28 13:21:39.906 [INFO][4400] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90330730eb5 ContainerID="b8f30fd12d67daebae586590c7a263a2a902b8316aa8a5a18a55a85b3abbfad4" Namespace="calico-system" Pod="csi-node-driver-6nhgp" WorkloadEndpoint="localhost-k8s-csi--node--driver--6nhgp-eth0" Oct 28 13:21:40.058434 containerd[1596]: 2025-10-28 13:21:39.908 [INFO][4400] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b8f30fd12d67daebae586590c7a263a2a902b8316aa8a5a18a55a85b3abbfad4" Namespace="calico-system" Pod="csi-node-driver-6nhgp" WorkloadEndpoint="localhost-k8s-csi--node--driver--6nhgp-eth0" Oct 28 13:21:40.058434 containerd[1596]: 2025-10-28 13:21:39.909 [INFO][4400] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b8f30fd12d67daebae586590c7a263a2a902b8316aa8a5a18a55a85b3abbfad4" Namespace="calico-system" Pod="csi-node-driver-6nhgp" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--6nhgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6nhgp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fce65538-7c69-4e45-8cae-c289c79a1bdb", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b8f30fd12d67daebae586590c7a263a2a902b8316aa8a5a18a55a85b3abbfad4", Pod:"csi-node-driver-6nhgp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali90330730eb5", MAC:"92:71:92:bb:b5:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:21:40.058434 containerd[1596]: 2025-10-28 13:21:40.040 [INFO][4400] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b8f30fd12d67daebae586590c7a263a2a902b8316aa8a5a18a55a85b3abbfad4" Namespace="calico-system" Pod="csi-node-driver-6nhgp" WorkloadEndpoint="localhost-k8s-csi--node--driver--6nhgp-eth0" Oct 28 13:21:40.230207 systemd-networkd[1501]: cali69d1de4c103: Link UP Oct 28 13:21:40.230443 systemd-networkd[1501]: cali69d1de4c103: Gained carrier Oct 28 13:21:40.245695 containerd[1596]: time="2025-10-28T13:21:40.245424404Z" level=info msg="connecting to shim b8f30fd12d67daebae586590c7a263a2a902b8316aa8a5a18a55a85b3abbfad4" address="unix:///run/containerd/s/af0a4cfa2b821dd046d8e6f5a2f638888d074d585246fb6b2b19bb8eae0c5db4" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:21:40.267818 containerd[1596]: 2025-10-28 13:21:39.797 [INFO][4405] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 13:21:40.267818 containerd[1596]: 2025-10-28 13:21:39.808 [INFO][4405] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5879d58c5c--4p8l2-eth0 calico-kube-controllers-5879d58c5c- calico-system f136d0fd-8c7a-4899-8498-1ed4f8ad5125 840 0 2025-10-28 13:21:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5879d58c5c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5879d58c5c-4p8l2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali69d1de4c103 [] [] }} ContainerID="397b1e085f2f6a8fdf37e7c8fbbeed1a75bed53c07cd936106e5ee1e50462512" Namespace="calico-system" Pod="calico-kube-controllers-5879d58c5c-4p8l2" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5879d58c5c--4p8l2-" Oct 28 13:21:40.267818 containerd[1596]: 2025-10-28 13:21:39.808 [INFO][4405] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="397b1e085f2f6a8fdf37e7c8fbbeed1a75bed53c07cd936106e5ee1e50462512" Namespace="calico-system" Pod="calico-kube-controllers-5879d58c5c-4p8l2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5879d58c5c--4p8l2-eth0" Oct 28 13:21:40.267818 containerd[1596]: 2025-10-28 13:21:39.841 [INFO][4429] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="397b1e085f2f6a8fdf37e7c8fbbeed1a75bed53c07cd936106e5ee1e50462512" HandleID="k8s-pod-network.397b1e085f2f6a8fdf37e7c8fbbeed1a75bed53c07cd936106e5ee1e50462512" Workload="localhost-k8s-calico--kube--controllers--5879d58c5c--4p8l2-eth0" Oct 28 13:21:40.267818 containerd[1596]: 2025-10-28 13:21:39.841 [INFO][4429] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="397b1e085f2f6a8fdf37e7c8fbbeed1a75bed53c07cd936106e5ee1e50462512" HandleID="k8s-pod-network.397b1e085f2f6a8fdf37e7c8fbbeed1a75bed53c07cd936106e5ee1e50462512" Workload="localhost-k8s-calico--kube--controllers--5879d58c5c--4p8l2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7890), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5879d58c5c-4p8l2", "timestamp":"2025-10-28 13:21:39.8417338 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 13:21:40.267818 containerd[1596]: 2025-10-28 13:21:39.841 [INFO][4429] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 13:21:40.267818 containerd[1596]: 2025-10-28 13:21:39.902 [INFO][4429] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 13:21:40.267818 containerd[1596]: 2025-10-28 13:21:39.902 [INFO][4429] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 13:21:40.267818 containerd[1596]: 2025-10-28 13:21:40.038 [INFO][4429] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.397b1e085f2f6a8fdf37e7c8fbbeed1a75bed53c07cd936106e5ee1e50462512" host="localhost" Oct 28 13:21:40.267818 containerd[1596]: 2025-10-28 13:21:40.101 [INFO][4429] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 13:21:40.267818 containerd[1596]: 2025-10-28 13:21:40.106 [INFO][4429] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 13:21:40.267818 containerd[1596]: 2025-10-28 13:21:40.108 [INFO][4429] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 13:21:40.267818 containerd[1596]: 2025-10-28 13:21:40.110 [INFO][4429] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 13:21:40.267818 containerd[1596]: 2025-10-28 13:21:40.110 [INFO][4429] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.397b1e085f2f6a8fdf37e7c8fbbeed1a75bed53c07cd936106e5ee1e50462512" host="localhost" Oct 28 13:21:40.267818 containerd[1596]: 2025-10-28 13:21:40.113 [INFO][4429] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.397b1e085f2f6a8fdf37e7c8fbbeed1a75bed53c07cd936106e5ee1e50462512 Oct 28 13:21:40.267818 containerd[1596]: 2025-10-28 13:21:40.192 [INFO][4429] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.397b1e085f2f6a8fdf37e7c8fbbeed1a75bed53c07cd936106e5ee1e50462512" host="localhost" Oct 28 13:21:40.267818 containerd[1596]: 2025-10-28 13:21:40.213 [INFO][4429] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.397b1e085f2f6a8fdf37e7c8fbbeed1a75bed53c07cd936106e5ee1e50462512" host="localhost" Oct 28 13:21:40.267818 containerd[1596]: 2025-10-28 13:21:40.213 [INFO][4429] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.397b1e085f2f6a8fdf37e7c8fbbeed1a75bed53c07cd936106e5ee1e50462512" host="localhost" Oct 28 13:21:40.267818 containerd[1596]: 2025-10-28 13:21:40.213 [INFO][4429] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
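[Editor's note] Note how the host-wide IPAM lock serialises concurrent CNI ADDs: handler [4429] logged "About to acquire host-wide IPAM lock" at 13:21:39.841 but only "Acquired" at 13:21:39.902, the moment handler [4431] (the csi-node-driver assignment) released it. That serialisation is why the five pods here receive strictly consecutive addresses, .130 through .134, with no duplicates. A toy Go sketch of the same pattern, a mutex guarding a next-free-address counter, not Calico's datastore-backed implementation:

```go
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// allocator hands out consecutive addresses from a block while holding a lock,
// so concurrent "CNI ADD" goroutines can never claim the same IP. Which pod
// gets which address depends on scheduling; the lock only prevents duplicates.
type allocator struct {
	mu   sync.Mutex // stands in for the host-wide IPAM lock in the log
	next netip.Addr
}

func (a *allocator) assign(pod string) netip.Addr {
	a.mu.Lock()
	defer a.mu.Unlock()
	ip := a.next
	a.next = a.next.Next()
	fmt.Printf("assigned %s to %s\n", ip, pod)
	return ip
}

func main() {
	a := &allocator{next: netip.MustParseAddr("192.168.88.130")}
	pods := []string{"coredns-lkswt", "apiserver-xgd4x", "csi-node-driver-6nhgp",
		"kube-controllers-4p8l2", "coredns-g4t76"}

	var wg sync.WaitGroup
	for _, p := range pods {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			a.assign(pod)
		}(p)
	}
	wg.Wait()
}
```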
Oct 28 13:21:40.267818 containerd[1596]: 2025-10-28 13:21:40.213 [INFO][4429] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="397b1e085f2f6a8fdf37e7c8fbbeed1a75bed53c07cd936106e5ee1e50462512" HandleID="k8s-pod-network.397b1e085f2f6a8fdf37e7c8fbbeed1a75bed53c07cd936106e5ee1e50462512" Workload="localhost-k8s-calico--kube--controllers--5879d58c5c--4p8l2-eth0" Oct 28 13:21:40.268399 containerd[1596]: 2025-10-28 13:21:40.224 [INFO][4405] cni-plugin/k8s.go 418: Populated endpoint ContainerID="397b1e085f2f6a8fdf37e7c8fbbeed1a75bed53c07cd936106e5ee1e50462512" Namespace="calico-system" Pod="calico-kube-controllers-5879d58c5c-4p8l2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5879d58c5c--4p8l2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5879d58c5c--4p8l2-eth0", GenerateName:"calico-kube-controllers-5879d58c5c-", Namespace:"calico-system", SelfLink:"", UID:"f136d0fd-8c7a-4899-8498-1ed4f8ad5125", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5879d58c5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5879d58c5c-4p8l2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali69d1de4c103", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:21:40.268399 containerd[1596]: 2025-10-28 13:21:40.224 [INFO][4405] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="397b1e085f2f6a8fdf37e7c8fbbeed1a75bed53c07cd936106e5ee1e50462512" Namespace="calico-system" Pod="calico-kube-controllers-5879d58c5c-4p8l2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5879d58c5c--4p8l2-eth0" Oct 28 13:21:40.268399 containerd[1596]: 2025-10-28 13:21:40.224 [INFO][4405] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali69d1de4c103 ContainerID="397b1e085f2f6a8fdf37e7c8fbbeed1a75bed53c07cd936106e5ee1e50462512" Namespace="calico-system" Pod="calico-kube-controllers-5879d58c5c-4p8l2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5879d58c5c--4p8l2-eth0" Oct 28 13:21:40.268399 containerd[1596]: 2025-10-28 13:21:40.230 [INFO][4405] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="397b1e085f2f6a8fdf37e7c8fbbeed1a75bed53c07cd936106e5ee1e50462512" Namespace="calico-system" Pod="calico-kube-controllers-5879d58c5c-4p8l2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5879d58c5c--4p8l2-eth0" Oct 28 13:21:40.268399 containerd[1596]: 2025-10-28 13:21:40.234 [INFO][4405] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="397b1e085f2f6a8fdf37e7c8fbbeed1a75bed53c07cd936106e5ee1e50462512" Namespace="calico-system" Pod="calico-kube-controllers-5879d58c5c-4p8l2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5879d58c5c--4p8l2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5879d58c5c--4p8l2-eth0", GenerateName:"calico-kube-controllers-5879d58c5c-", Namespace:"calico-system", SelfLink:"", UID:"f136d0fd-8c7a-4899-8498-1ed4f8ad5125", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5879d58c5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"397b1e085f2f6a8fdf37e7c8fbbeed1a75bed53c07cd936106e5ee1e50462512", Pod:"calico-kube-controllers-5879d58c5c-4p8l2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali69d1de4c103", MAC:"12:22:d9:bf:41:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:21:40.268399 containerd[1596]: 2025-10-28 13:21:40.250 [INFO][4405] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="397b1e085f2f6a8fdf37e7c8fbbeed1a75bed53c07cd936106e5ee1e50462512" Namespace="calico-system" Pod="calico-kube-controllers-5879d58c5c-4p8l2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5879d58c5c--4p8l2-eth0" Oct 28 13:21:40.286182 systemd[1]: Started cri-containerd-b8f30fd12d67daebae586590c7a263a2a902b8316aa8a5a18a55a85b3abbfad4.scope - libcontainer container b8f30fd12d67daebae586590c7a263a2a902b8316aa8a5a18a55a85b3abbfad4. 
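[Editor's note] The pod_startup_latency_tracker entry earlier reports podStartSLOduration=38.059502062s for coredns-66bc5c9577-lkswt. Numerically that is the watch-observed running time (2025-10-28 13:21:39.059502062 UTC) minus the pod's creation timestamp (13:21:01), with nothing subtracted for image pulling because both pulling timestamps are the zero time (the image was already present). The same arithmetic in Go:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the timestamps printed in the log entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-10-28 13:21:01 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-10-28 13:21:39.059502062 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// No image pull happened, so the SLO duration is just running minus creation.
	fmt.Println(running.Sub(created)) // 38.059502062s
}
```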
Oct 28 13:21:40.299653 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 13:21:40.300500 containerd[1596]: time="2025-10-28T13:21:40.300428987Z" level=info msg="connecting to shim 397b1e085f2f6a8fdf37e7c8fbbeed1a75bed53c07cd936106e5ee1e50462512" address="unix:///run/containerd/s/d0582a78096ea1c069882013876123ac72d8a94ab0b47060c0c9c552feafcfaa" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:21:40.320766 containerd[1596]: time="2025-10-28T13:21:40.320620434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6nhgp,Uid:fce65538-7c69-4e45-8cae-c289c79a1bdb,Namespace:calico-system,Attempt:0,} returns sandbox id \"b8f30fd12d67daebae586590c7a263a2a902b8316aa8a5a18a55a85b3abbfad4\"" Oct 28 13:21:40.322979 containerd[1596]: time="2025-10-28T13:21:40.322950921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 28 13:21:40.330155 systemd[1]: Started cri-containerd-397b1e085f2f6a8fdf37e7c8fbbeed1a75bed53c07cd936106e5ee1e50462512.scope - libcontainer container 397b1e085f2f6a8fdf37e7c8fbbeed1a75bed53c07cd936106e5ee1e50462512. Oct 28 13:21:40.345057 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 13:21:40.378372 containerd[1596]: time="2025-10-28T13:21:40.378322403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5879d58c5c-4p8l2,Uid:f136d0fd-8c7a-4899-8498-1ed4f8ad5125,Namespace:calico-system,Attempt:0,} returns sandbox id \"397b1e085f2f6a8fdf37e7c8fbbeed1a75bed53c07cd936106e5ee1e50462512\"" Oct 28 13:21:40.469202 systemd-networkd[1501]: calia42c8b4585f: Gained IPv6LL Oct 28 13:21:40.764033 kubelet[2746]: E1028 13:21:40.763838 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:40.764908 containerd[1596]: time="2025-10-28T13:21:40.764806028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g4t76,Uid:d4a0ac67-171d-4762-ae49-9d6bac6655ce,Namespace:kube-system,Attempt:0,}" Oct 28 13:21:40.766460 containerd[1596]: time="2025-10-28T13:21:40.766418165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786d85db64-rzgz4,Uid:865860d9-b904-4b8f-8efa-543ed6829f69,Namespace:calico-apiserver,Attempt:0,}" Oct 28 13:21:40.769105 containerd[1596]: time="2025-10-28T13:21:40.769074834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dgsqm,Uid:d71c8120-c049-44a0-909a-b94149145773,Namespace:calico-system,Attempt:0,}" Oct 28 13:21:40.893887 systemd-networkd[1501]: cali938fdb8bbb7: Link UP Oct 28 13:21:40.894199 systemd-networkd[1501]: cali938fdb8bbb7: Gained carrier Oct 28 13:21:40.902726 kubelet[2746]: E1028 13:21:40.902626 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:40.904813 kubelet[2746]: E1028 13:21:40.904776 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: 
not found\"" pod="calico-apiserver/calico-apiserver-786d85db64-xgd4x" podUID="8bf88aab-609b-4cc2-80ec-8ab913048df5" Oct 28 13:21:40.907447 containerd[1596]: 2025-10-28 13:21:40.799 [INFO][4580] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 13:21:40.907447 containerd[1596]: 2025-10-28 13:21:40.812 [INFO][4580] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--g4t76-eth0 coredns-66bc5c9577- kube-system d4a0ac67-171d-4762-ae49-9d6bac6655ce 829 0 2025-10-28 13:21:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-g4t76 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali938fdb8bbb7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143" Namespace="kube-system" Pod="coredns-66bc5c9577-g4t76" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g4t76-" Oct 28 13:21:40.907447 containerd[1596]: 2025-10-28 13:21:40.812 [INFO][4580] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143" Namespace="kube-system" Pod="coredns-66bc5c9577-g4t76" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g4t76-eth0" Oct 28 13:21:40.907447 containerd[1596]: 2025-10-28 13:21:40.852 [INFO][4624] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143" HandleID="k8s-pod-network.c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143" Workload="localhost-k8s-coredns--66bc5c9577--g4t76-eth0" Oct 28 13:21:40.907447 containerd[1596]: 2025-10-28 13:21:40.853 [INFO][4624] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143" HandleID="k8s-pod-network.c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143" Workload="localhost-k8s-coredns--66bc5c9577--g4t76-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033b840), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-g4t76", "timestamp":"2025-10-28 13:21:40.852585839 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 13:21:40.907447 containerd[1596]: 2025-10-28 13:21:40.853 [INFO][4624] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 13:21:40.907447 containerd[1596]: 2025-10-28 13:21:40.853 [INFO][4624] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 13:21:40.907447 containerd[1596]: 2025-10-28 13:21:40.853 [INFO][4624] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 13:21:40.907447 containerd[1596]: 2025-10-28 13:21:40.860 [INFO][4624] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143" host="localhost" Oct 28 13:21:40.907447 containerd[1596]: 2025-10-28 13:21:40.864 [INFO][4624] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 13:21:40.907447 containerd[1596]: 2025-10-28 13:21:40.868 [INFO][4624] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 13:21:40.907447 containerd[1596]: 2025-10-28 13:21:40.870 [INFO][4624] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 13:21:40.907447 containerd[1596]: 2025-10-28 13:21:40.871 [INFO][4624] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 13:21:40.907447 containerd[1596]: 2025-10-28 13:21:40.872 [INFO][4624] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143" host="localhost" Oct 28 13:21:40.907447 containerd[1596]: 2025-10-28 13:21:40.873 [INFO][4624] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143 Oct 28 13:21:40.907447 containerd[1596]: 2025-10-28 13:21:40.876 [INFO][4624] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143" host="localhost" Oct 28 13:21:40.907447 containerd[1596]: 2025-10-28 13:21:40.881 [INFO][4624] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143" host="localhost" Oct 28 13:21:40.907447 containerd[1596]: 2025-10-28 13:21:40.881 [INFO][4624] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143" host="localhost" Oct 28 13:21:40.907447 containerd[1596]: 2025-10-28 13:21:40.881 [INFO][4624] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
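Editor's note: the containerd/CNI entries above trace Calico IPAM's block-affinity assignment for the coredns pod: take the host-wide IPAM lock, confirm this host's affinity for the 192.168.88.128/26 block, load the block, claim the next free address under a per-allocation handle, write the block back, and release the lock; the summary entry that follows reports the claimed address. The Go sketch below is a deliberately simplified, hypothetical illustration of that block-based allocation, not Calico's implementation; the block CIDR and the resulting address are taken from the log, and the assumption that ordinals 0 through 5 (.128 to .133) are already in use is consistent with the log claiming .134, .135, and .136 in sequence.

package main

import (
	"fmt"
	"net"
)

// block is a toy model of an affine IPAM block: a CIDR owned by this host
// plus a record of which ordinals inside it are already allocated.
type block struct {
	cidr      *net.IPNet     // the affine block, e.g. 192.168.88.128/26
	allocated map[int]string // ordinal within the block -> allocation handle
}

// autoAssign claims the lowest free ordinal in the block for the given
// handle and returns the resulting address, mirroring the
// "Attempting to assign 1 addresses from block" step in the log above.
func (b *block) autoAssign(handle string) (net.IP, error) {
	ones, bits := b.cidr.Mask.Size() // 26, 32 for a /26
	size := 1 << (bits - ones)       // 64 addresses in the block
	base := b.cidr.IP.To4()
	for ord := 0; ord < size; ord++ {
		if _, taken := b.allocated[ord]; taken {
			continue
		}
		b.allocated[ord] = handle
		return net.IPv4(base[0], base[1], base[2], base[3]+byte(ord)), nil
	}
	return nil, fmt.Errorf("block %s is full", b.cidr)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	// Assumed pre-existing allocations for .128-.133 on this host.
	b := &block{cidr: cidr, allocated: map[int]string{
		0: "in-use", 1: "in-use", 2: "in-use", 3: "in-use", 4: "in-use", 5: "in-use",
	}}
	ip, err := b.autoAssign("example-handle") // real handles look like k8s-pod-network.<container id>
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 192.168.88.134, matching the address claimed in the log
}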
Oct 28 13:21:40.907447 containerd[1596]: 2025-10-28 13:21:40.881 [INFO][4624] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143" HandleID="k8s-pod-network.c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143" Workload="localhost-k8s-coredns--66bc5c9577--g4t76-eth0" Oct 28 13:21:40.908040 containerd[1596]: 2025-10-28 13:21:40.888 [INFO][4580] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143" Namespace="kube-system" Pod="coredns-66bc5c9577-g4t76" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g4t76-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--g4t76-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d4a0ac67-171d-4762-ae49-9d6bac6655ce", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-g4t76", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali938fdb8bbb7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:21:40.908040 containerd[1596]: 2025-10-28 13:21:40.891 [INFO][4580] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143" Namespace="kube-system" Pod="coredns-66bc5c9577-g4t76" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g4t76-eth0" Oct 28 13:21:40.908040 containerd[1596]: 2025-10-28 13:21:40.891 [INFO][4580] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali938fdb8bbb7 ContainerID="c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143" Namespace="kube-system" Pod="coredns-66bc5c9577-g4t76" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g4t76-eth0" Oct 28 13:21:40.908040 containerd[1596]: 2025-10-28 13:21:40.894 
[INFO][4580] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143" Namespace="kube-system" Pod="coredns-66bc5c9577-g4t76" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g4t76-eth0" Oct 28 13:21:40.908040 containerd[1596]: 2025-10-28 13:21:40.894 [INFO][4580] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143" Namespace="kube-system" Pod="coredns-66bc5c9577-g4t76" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g4t76-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--g4t76-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d4a0ac67-171d-4762-ae49-9d6bac6655ce", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143", Pod:"coredns-66bc5c9577-g4t76", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali938fdb8bbb7", MAC:"26:ec:79:bb:b1:e1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:21:40.908040 containerd[1596]: 2025-10-28 13:21:40.900 [INFO][4580] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143" Namespace="kube-system" Pod="coredns-66bc5c9577-g4t76" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--g4t76-eth0" Oct 28 13:21:40.928641 containerd[1596]: time="2025-10-28T13:21:40.928595937Z" level=info msg="connecting to shim c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143" address="unix:///run/containerd/s/e73099bf662e8d05fa8c69f7cd9739c31615cd967b03c7c2b5e4e5e605f20a0e" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:21:40.956160 systemd[1]: Started 
cri-containerd-c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143.scope - libcontainer container c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143. Oct 28 13:21:40.975532 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 13:21:40.981143 systemd-networkd[1501]: cali90330730eb5: Gained IPv6LL Oct 28 13:21:40.993450 systemd-networkd[1501]: calia7f6d8efe79: Link UP Oct 28 13:21:40.996948 systemd-networkd[1501]: calia7f6d8efe79: Gained carrier Oct 28 13:21:41.013588 containerd[1596]: 2025-10-28 13:21:40.808 [INFO][4600] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 13:21:41.013588 containerd[1596]: 2025-10-28 13:21:40.822 [INFO][4600] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--dgsqm-eth0 goldmane-7c778bb748- calico-system d71c8120-c049-44a0-909a-b94149145773 837 0 2025-10-28 13:21:12 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-dgsqm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia7f6d8efe79 [] [] }} ContainerID="bec0c6a74789e75c0816d3ba25a00caaf8e2da1ff788e418c7d72522cb6f16b7" Namespace="calico-system" Pod="goldmane-7c778bb748-dgsqm" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dgsqm-" Oct 28 13:21:41.013588 containerd[1596]: 2025-10-28 13:21:40.822 [INFO][4600] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bec0c6a74789e75c0816d3ba25a00caaf8e2da1ff788e418c7d72522cb6f16b7" Namespace="calico-system" Pod="goldmane-7c778bb748-dgsqm" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dgsqm-eth0" Oct 28 13:21:41.013588 containerd[1596]: 2025-10-28 13:21:40.856 [INFO][4636] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bec0c6a74789e75c0816d3ba25a00caaf8e2da1ff788e418c7d72522cb6f16b7" HandleID="k8s-pod-network.bec0c6a74789e75c0816d3ba25a00caaf8e2da1ff788e418c7d72522cb6f16b7" Workload="localhost-k8s-goldmane--7c778bb748--dgsqm-eth0" Oct 28 13:21:41.013588 containerd[1596]: 2025-10-28 13:21:40.857 [INFO][4636] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bec0c6a74789e75c0816d3ba25a00caaf8e2da1ff788e418c7d72522cb6f16b7" HandleID="k8s-pod-network.bec0c6a74789e75c0816d3ba25a00caaf8e2da1ff788e418c7d72522cb6f16b7" Workload="localhost-k8s-goldmane--7c778bb748--dgsqm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df870), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-dgsqm", "timestamp":"2025-10-28 13:21:40.856765868 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 13:21:41.013588 containerd[1596]: 2025-10-28 13:21:40.857 [INFO][4636] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 13:21:41.013588 containerd[1596]: 2025-10-28 13:21:40.882 [INFO][4636] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 13:21:41.013588 containerd[1596]: 2025-10-28 13:21:40.883 [INFO][4636] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 13:21:41.013588 containerd[1596]: 2025-10-28 13:21:40.961 [INFO][4636] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bec0c6a74789e75c0816d3ba25a00caaf8e2da1ff788e418c7d72522cb6f16b7" host="localhost" Oct 28 13:21:41.013588 containerd[1596]: 2025-10-28 13:21:40.968 [INFO][4636] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 13:21:41.013588 containerd[1596]: 2025-10-28 13:21:40.972 [INFO][4636] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 13:21:41.013588 containerd[1596]: 2025-10-28 13:21:40.973 [INFO][4636] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 13:21:41.013588 containerd[1596]: 2025-10-28 13:21:40.976 [INFO][4636] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 13:21:41.013588 containerd[1596]: 2025-10-28 13:21:40.976 [INFO][4636] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bec0c6a74789e75c0816d3ba25a00caaf8e2da1ff788e418c7d72522cb6f16b7" host="localhost" Oct 28 13:21:41.013588 containerd[1596]: 2025-10-28 13:21:40.977 [INFO][4636] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bec0c6a74789e75c0816d3ba25a00caaf8e2da1ff788e418c7d72522cb6f16b7 Oct 28 13:21:41.013588 containerd[1596]: 2025-10-28 13:21:40.980 [INFO][4636] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bec0c6a74789e75c0816d3ba25a00caaf8e2da1ff788e418c7d72522cb6f16b7" host="localhost" Oct 28 13:21:41.013588 containerd[1596]: 2025-10-28 13:21:40.987 [INFO][4636] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.bec0c6a74789e75c0816d3ba25a00caaf8e2da1ff788e418c7d72522cb6f16b7" host="localhost" Oct 28 13:21:41.013588 containerd[1596]: 2025-10-28 13:21:40.987 [INFO][4636] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.bec0c6a74789e75c0816d3ba25a00caaf8e2da1ff788e418c7d72522cb6f16b7" host="localhost" Oct 28 13:21:41.013588 containerd[1596]: 2025-10-28 13:21:40.987 [INFO][4636] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 28 13:21:41.013588 containerd[1596]: 2025-10-28 13:21:40.987 [INFO][4636] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="bec0c6a74789e75c0816d3ba25a00caaf8e2da1ff788e418c7d72522cb6f16b7" HandleID="k8s-pod-network.bec0c6a74789e75c0816d3ba25a00caaf8e2da1ff788e418c7d72522cb6f16b7" Workload="localhost-k8s-goldmane--7c778bb748--dgsqm-eth0" Oct 28 13:21:41.014356 containerd[1596]: 2025-10-28 13:21:40.990 [INFO][4600] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bec0c6a74789e75c0816d3ba25a00caaf8e2da1ff788e418c7d72522cb6f16b7" Namespace="calico-system" Pod="goldmane-7c778bb748-dgsqm" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dgsqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--dgsqm-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"d71c8120-c049-44a0-909a-b94149145773", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-dgsqm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia7f6d8efe79", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:21:41.014356 containerd[1596]: 2025-10-28 13:21:40.990 [INFO][4600] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="bec0c6a74789e75c0816d3ba25a00caaf8e2da1ff788e418c7d72522cb6f16b7" Namespace="calico-system" Pod="goldmane-7c778bb748-dgsqm" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dgsqm-eth0" Oct 28 13:21:41.014356 containerd[1596]: 2025-10-28 13:21:40.990 [INFO][4600] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia7f6d8efe79 ContainerID="bec0c6a74789e75c0816d3ba25a00caaf8e2da1ff788e418c7d72522cb6f16b7" Namespace="calico-system" Pod="goldmane-7c778bb748-dgsqm" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dgsqm-eth0" Oct 28 13:21:41.014356 containerd[1596]: 2025-10-28 13:21:40.996 [INFO][4600] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bec0c6a74789e75c0816d3ba25a00caaf8e2da1ff788e418c7d72522cb6f16b7" Namespace="calico-system" Pod="goldmane-7c778bb748-dgsqm" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dgsqm-eth0" Oct 28 13:21:41.014356 containerd[1596]: 2025-10-28 13:21:40.999 [INFO][4600] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bec0c6a74789e75c0816d3ba25a00caaf8e2da1ff788e418c7d72522cb6f16b7" Namespace="calico-system" Pod="goldmane-7c778bb748-dgsqm" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dgsqm-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--dgsqm-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"d71c8120-c049-44a0-909a-b94149145773", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bec0c6a74789e75c0816d3ba25a00caaf8e2da1ff788e418c7d72522cb6f16b7", Pod:"goldmane-7c778bb748-dgsqm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia7f6d8efe79", MAC:"ee:d0:3c:c9:cd:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:21:41.014356 containerd[1596]: 2025-10-28 13:21:41.010 [INFO][4600] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bec0c6a74789e75c0816d3ba25a00caaf8e2da1ff788e418c7d72522cb6f16b7" Namespace="calico-system" Pod="goldmane-7c778bb748-dgsqm" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dgsqm-eth0" Oct 28 13:21:41.014356 containerd[1596]: time="2025-10-28T13:21:41.014180213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g4t76,Uid:d4a0ac67-171d-4762-ae49-9d6bac6655ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143\"" Oct 28 13:21:41.015168 kubelet[2746]: E1028 13:21:41.015122 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:41.026677 containerd[1596]: time="2025-10-28T13:21:41.026627986Z" level=info msg="CreateContainer within sandbox \"c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 28 13:21:41.038304 containerd[1596]: time="2025-10-28T13:21:41.038252384Z" level=info msg="Container c4ea0aec0c9e9663f0c0b4b0eaccf0d78b1d6208955818aed482192d9a2fe11f: CDI devices from CRI Config.CDIDevices: []" Oct 28 13:21:41.044036 containerd[1596]: time="2025-10-28T13:21:41.043983566Z" level=info msg="CreateContainer within sandbox \"c2e28968821cffb36c8c3934a06e81f0374f267b7ee246dbed736ef0905be143\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c4ea0aec0c9e9663f0c0b4b0eaccf0d78b1d6208955818aed482192d9a2fe11f\"" Oct 28 13:21:41.044542 containerd[1596]: time="2025-10-28T13:21:41.044510014Z" level=info msg="StartContainer for \"c4ea0aec0c9e9663f0c0b4b0eaccf0d78b1d6208955818aed482192d9a2fe11f\"" Oct 28 13:21:41.045937 containerd[1596]: time="2025-10-28T13:21:41.045907758Z" level=info msg="connecting to shim c4ea0aec0c9e9663f0c0b4b0eaccf0d78b1d6208955818aed482192d9a2fe11f" 
address="unix:///run/containerd/s/e73099bf662e8d05fa8c69f7cd9739c31615cd967b03c7c2b5e4e5e605f20a0e" protocol=ttrpc version=3 Oct 28 13:21:41.049214 containerd[1596]: time="2025-10-28T13:21:41.049179572Z" level=info msg="connecting to shim bec0c6a74789e75c0816d3ba25a00caaf8e2da1ff788e418c7d72522cb6f16b7" address="unix:///run/containerd/s/8f8c62d1aac1e3aa2aa53e9cf0cc8c3ace259daed2b095cec3b966d939584741" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:21:41.067462 systemd[1]: Started cri-containerd-c4ea0aec0c9e9663f0c0b4b0eaccf0d78b1d6208955818aed482192d9a2fe11f.scope - libcontainer container c4ea0aec0c9e9663f0c0b4b0eaccf0d78b1d6208955818aed482192d9a2fe11f. Oct 28 13:21:41.077144 containerd[1596]: time="2025-10-28T13:21:41.077053002Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:21:41.078573 containerd[1596]: time="2025-10-28T13:21:41.078275356Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 28 13:21:41.078573 containerd[1596]: time="2025-10-28T13:21:41.078362530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Oct 28 13:21:41.078668 kubelet[2746]: E1028 13:21:41.078535 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 13:21:41.078709 kubelet[2746]: E1028 13:21:41.078667 2746 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 13:21:41.080025 kubelet[2746]: E1028 13:21:41.078922 2746 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-6nhgp_calico-system(fce65538-7c69-4e45-8cae-c289c79a1bdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 28 13:21:41.080075 containerd[1596]: time="2025-10-28T13:21:41.079600856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 28 13:21:41.080066 systemd[1]: Started cri-containerd-bec0c6a74789e75c0816d3ba25a00caaf8e2da1ff788e418c7d72522cb6f16b7.scope - libcontainer container bec0c6a74789e75c0816d3ba25a00caaf8e2da1ff788e418c7d72522cb6f16b7. 
Oct 28 13:21:41.101233 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 13:21:41.111285 systemd-networkd[1501]: cali1a5346d3772: Link UP Oct 28 13:21:41.112603 systemd-networkd[1501]: cali1a5346d3772: Gained carrier Oct 28 13:21:41.112770 containerd[1596]: time="2025-10-28T13:21:41.112652487Z" level=info msg="StartContainer for \"c4ea0aec0c9e9663f0c0b4b0eaccf0d78b1d6208955818aed482192d9a2fe11f\" returns successfully" Oct 28 13:21:41.136394 containerd[1596]: 2025-10-28 13:21:40.802 [INFO][4586] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 13:21:41.136394 containerd[1596]: 2025-10-28 13:21:40.815 [INFO][4586] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--786d85db64--rzgz4-eth0 calico-apiserver-786d85db64- calico-apiserver 865860d9-b904-4b8f-8efa-543ed6829f69 835 0 2025-10-28 13:21:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:786d85db64 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-786d85db64-rzgz4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1a5346d3772 [] [] }} ContainerID="28c6d5aa680f4da1894cbf6d97d4799824a5291d40c05ff5b33a7ad01b387c6a" Namespace="calico-apiserver" Pod="calico-apiserver-786d85db64-rzgz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--786d85db64--rzgz4-" Oct 28 13:21:41.136394 containerd[1596]: 2025-10-28 13:21:40.816 [INFO][4586] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28c6d5aa680f4da1894cbf6d97d4799824a5291d40c05ff5b33a7ad01b387c6a" Namespace="calico-apiserver" Pod="calico-apiserver-786d85db64-rzgz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--786d85db64--rzgz4-eth0" Oct 28 13:21:41.136394 containerd[1596]: 2025-10-28 13:21:40.861 [INFO][4630] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28c6d5aa680f4da1894cbf6d97d4799824a5291d40c05ff5b33a7ad01b387c6a" HandleID="k8s-pod-network.28c6d5aa680f4da1894cbf6d97d4799824a5291d40c05ff5b33a7ad01b387c6a" Workload="localhost-k8s-calico--apiserver--786d85db64--rzgz4-eth0" Oct 28 13:21:41.136394 containerd[1596]: 2025-10-28 13:21:40.861 [INFO][4630] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="28c6d5aa680f4da1894cbf6d97d4799824a5291d40c05ff5b33a7ad01b387c6a" HandleID="k8s-pod-network.28c6d5aa680f4da1894cbf6d97d4799824a5291d40c05ff5b33a7ad01b387c6a" Workload="localhost-k8s-calico--apiserver--786d85db64--rzgz4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e570), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-786d85db64-rzgz4", "timestamp":"2025-10-28 13:21:40.861591731 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 13:21:41.136394 containerd[1596]: 2025-10-28 13:21:40.861 [INFO][4630] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 13:21:41.136394 containerd[1596]: 2025-10-28 13:21:40.987 [INFO][4630] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 13:21:41.136394 containerd[1596]: 2025-10-28 13:21:40.987 [INFO][4630] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 13:21:41.136394 containerd[1596]: 2025-10-28 13:21:41.063 [INFO][4630] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.28c6d5aa680f4da1894cbf6d97d4799824a5291d40c05ff5b33a7ad01b387c6a" host="localhost" Oct 28 13:21:41.136394 containerd[1596]: 2025-10-28 13:21:41.070 [INFO][4630] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 13:21:41.136394 containerd[1596]: 2025-10-28 13:21:41.077 [INFO][4630] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 13:21:41.136394 containerd[1596]: 2025-10-28 13:21:41.081 [INFO][4630] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 13:21:41.136394 containerd[1596]: 2025-10-28 13:21:41.087 [INFO][4630] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 13:21:41.136394 containerd[1596]: 2025-10-28 13:21:41.087 [INFO][4630] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.28c6d5aa680f4da1894cbf6d97d4799824a5291d40c05ff5b33a7ad01b387c6a" host="localhost" Oct 28 13:21:41.136394 containerd[1596]: 2025-10-28 13:21:41.090 [INFO][4630] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.28c6d5aa680f4da1894cbf6d97d4799824a5291d40c05ff5b33a7ad01b387c6a Oct 28 13:21:41.136394 containerd[1596]: 2025-10-28 13:21:41.094 [INFO][4630] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.28c6d5aa680f4da1894cbf6d97d4799824a5291d40c05ff5b33a7ad01b387c6a" host="localhost" Oct 28 13:21:41.136394 containerd[1596]: 2025-10-28 13:21:41.102 [INFO][4630] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.28c6d5aa680f4da1894cbf6d97d4799824a5291d40c05ff5b33a7ad01b387c6a" host="localhost" Oct 28 13:21:41.136394 containerd[1596]: 2025-10-28 13:21:41.102 [INFO][4630] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.28c6d5aa680f4da1894cbf6d97d4799824a5291d40c05ff5b33a7ad01b387c6a" host="localhost" Oct 28 13:21:41.136394 containerd[1596]: 2025-10-28 13:21:41.102 [INFO][4630] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 28 13:21:41.136394 containerd[1596]: 2025-10-28 13:21:41.102 [INFO][4630] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="28c6d5aa680f4da1894cbf6d97d4799824a5291d40c05ff5b33a7ad01b387c6a" HandleID="k8s-pod-network.28c6d5aa680f4da1894cbf6d97d4799824a5291d40c05ff5b33a7ad01b387c6a" Workload="localhost-k8s-calico--apiserver--786d85db64--rzgz4-eth0" Oct 28 13:21:41.136963 containerd[1596]: 2025-10-28 13:21:41.106 [INFO][4586] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28c6d5aa680f4da1894cbf6d97d4799824a5291d40c05ff5b33a7ad01b387c6a" Namespace="calico-apiserver" Pod="calico-apiserver-786d85db64-rzgz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--786d85db64--rzgz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--786d85db64--rzgz4-eth0", GenerateName:"calico-apiserver-786d85db64-", Namespace:"calico-apiserver", SelfLink:"", UID:"865860d9-b904-4b8f-8efa-543ed6829f69", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"786d85db64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-786d85db64-rzgz4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a5346d3772", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:21:41.136963 containerd[1596]: 2025-10-28 13:21:41.106 [INFO][4586] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="28c6d5aa680f4da1894cbf6d97d4799824a5291d40c05ff5b33a7ad01b387c6a" Namespace="calico-apiserver" Pod="calico-apiserver-786d85db64-rzgz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--786d85db64--rzgz4-eth0" Oct 28 13:21:41.136963 containerd[1596]: 2025-10-28 13:21:41.106 [INFO][4586] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a5346d3772 ContainerID="28c6d5aa680f4da1894cbf6d97d4799824a5291d40c05ff5b33a7ad01b387c6a" Namespace="calico-apiserver" Pod="calico-apiserver-786d85db64-rzgz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--786d85db64--rzgz4-eth0" Oct 28 13:21:41.136963 containerd[1596]: 2025-10-28 13:21:41.114 [INFO][4586] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28c6d5aa680f4da1894cbf6d97d4799824a5291d40c05ff5b33a7ad01b387c6a" Namespace="calico-apiserver" Pod="calico-apiserver-786d85db64-rzgz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--786d85db64--rzgz4-eth0" Oct 28 13:21:41.136963 containerd[1596]: 2025-10-28 13:21:41.116 [INFO][4586] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="28c6d5aa680f4da1894cbf6d97d4799824a5291d40c05ff5b33a7ad01b387c6a" Namespace="calico-apiserver" Pod="calico-apiserver-786d85db64-rzgz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--786d85db64--rzgz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--786d85db64--rzgz4-eth0", GenerateName:"calico-apiserver-786d85db64-", Namespace:"calico-apiserver", SelfLink:"", UID:"865860d9-b904-4b8f-8efa-543ed6829f69", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 13, 21, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"786d85db64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28c6d5aa680f4da1894cbf6d97d4799824a5291d40c05ff5b33a7ad01b387c6a", Pod:"calico-apiserver-786d85db64-rzgz4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a5346d3772", MAC:"de:5b:26:d0:50:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 13:21:41.136963 containerd[1596]: 2025-10-28 13:21:41.126 [INFO][4586] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28c6d5aa680f4da1894cbf6d97d4799824a5291d40c05ff5b33a7ad01b387c6a" Namespace="calico-apiserver" Pod="calico-apiserver-786d85db64-rzgz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--786d85db64--rzgz4-eth0" Oct 28 13:21:41.142461 containerd[1596]: time="2025-10-28T13:21:41.142418089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dgsqm,Uid:d71c8120-c049-44a0-909a-b94149145773,Namespace:calico-system,Attempt:0,} returns sandbox id \"bec0c6a74789e75c0816d3ba25a00caaf8e2da1ff788e418c7d72522cb6f16b7\"" Oct 28 13:21:41.162106 containerd[1596]: time="2025-10-28T13:21:41.162042068Z" level=info msg="connecting to shim 28c6d5aa680f4da1894cbf6d97d4799824a5291d40c05ff5b33a7ad01b387c6a" address="unix:///run/containerd/s/f415a207558aee7d6caa21eaf5a99e2ce3f13bc6cba2eeacc9bd9acb7ccadc0e" namespace=k8s.io protocol=ttrpc version=3 Oct 28 13:21:41.186165 systemd[1]: Started cri-containerd-28c6d5aa680f4da1894cbf6d97d4799824a5291d40c05ff5b33a7ad01b387c6a.scope - libcontainer container 28c6d5aa680f4da1894cbf6d97d4799824a5291d40c05ff5b33a7ad01b387c6a. Oct 28 13:21:41.194160 systemd[1]: Started sshd@9-10.0.0.148:22-10.0.0.1:45408.service - OpenSSH per-connection server daemon (10.0.0.1:45408). 
Oct 28 13:21:41.210270 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 13:21:41.255536 containerd[1596]: time="2025-10-28T13:21:41.255479498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786d85db64-rzgz4,Uid:865860d9-b904-4b8f-8efa-543ed6829f69,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"28c6d5aa680f4da1894cbf6d97d4799824a5291d40c05ff5b33a7ad01b387c6a\"" Oct 28 13:21:41.266977 sshd[4838]: Accepted publickey for core from 10.0.0.1 port 45408 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:21:41.269691 sshd-session[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:21:41.276293 systemd-logind[1580]: New session 10 of user core. Oct 28 13:21:41.283154 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 28 13:21:41.444833 sshd[4862]: Connection closed by 10.0.0.1 port 45408 Oct 28 13:21:41.445174 sshd-session[4838]: pam_unix(sshd:session): session closed for user core Oct 28 13:21:41.453788 systemd[1]: sshd@9-10.0.0.148:22-10.0.0.1:45408.service: Deactivated successfully. Oct 28 13:21:41.455878 systemd[1]: session-10.scope: Deactivated successfully. Oct 28 13:21:41.456787 systemd-logind[1580]: Session 10 logged out. Waiting for processes to exit. Oct 28 13:21:41.460193 systemd[1]: Started sshd@10-10.0.0.148:22-10.0.0.1:45412.service - OpenSSH per-connection server daemon (10.0.0.1:45412). Oct 28 13:21:41.460835 systemd-logind[1580]: Removed session 10. Oct 28 13:21:41.521348 sshd[4885]: Accepted publickey for core from 10.0.0.1 port 45412 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:21:41.522693 sshd-session[4885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:21:41.527344 systemd-logind[1580]: New session 11 of user core. Oct 28 13:21:41.539146 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 28 13:21:41.576409 containerd[1596]: time="2025-10-28T13:21:41.576360998Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:21:41.577740 containerd[1596]: time="2025-10-28T13:21:41.577692189Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 28 13:21:41.577830 containerd[1596]: time="2025-10-28T13:21:41.577778731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Oct 28 13:21:41.578045 kubelet[2746]: E1028 13:21:41.577964 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 13:21:41.578045 kubelet[2746]: E1028 13:21:41.578039 2746 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 13:21:41.578349 kubelet[2746]: E1028 13:21:41.578274 2746 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5879d58c5c-4p8l2_calico-system(f136d0fd-8c7a-4899-8498-1ed4f8ad5125): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 28 13:21:41.578349 kubelet[2746]: E1028 13:21:41.578333 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5879d58c5c-4p8l2" podUID="f136d0fd-8c7a-4899-8498-1ed4f8ad5125" Oct 28 13:21:41.578470 containerd[1596]: time="2025-10-28T13:21:41.578378757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 28 13:21:41.680267 sshd[4888]: Connection closed by 10.0.0.1 port 45412 Oct 28 13:21:41.681397 sshd-session[4885]: pam_unix(sshd:session): session closed for user core Oct 28 13:21:41.689172 systemd[1]: sshd@10-10.0.0.148:22-10.0.0.1:45412.service: Deactivated successfully. Oct 28 13:21:41.691487 systemd[1]: session-11.scope: Deactivated successfully. Oct 28 13:21:41.694452 systemd-logind[1580]: Session 11 logged out. Waiting for processes to exit. Oct 28 13:21:41.697374 systemd-logind[1580]: Removed session 11. Oct 28 13:21:41.701349 systemd[1]: Started sshd@11-10.0.0.148:22-10.0.0.1:45426.service - OpenSSH per-connection server daemon (10.0.0.1:45426). 
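Editor's note: the repeated "fetch failed after status: 404 Not Found" and ErrImagePull entries above (csi, kube-controllers, and later apiserver, goldmane, and whisker, all at tag v3.30.4) show the image references failing to resolve at ghcr.io. A minimal way to reproduce the resolution failure outside kubelet is to ask containerd to pull the same reference directly, as in the hedged Go sketch below. Assumptions not taken from the log: the default containerd socket path /run/containerd/containerd.sock and the 1.x containerd Go client module path; the k8s.io namespace and the image reference are as they appear in the entries above.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the node's containerd over its default socket (assumption).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Same namespace and image reference as in the log entries above.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	ref := "ghcr.io/flatcar/calico/kube-controllers:v3.30.4"

	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		// Expected here given the log: a "not found" resolution error.
		log.Fatalf("pull %s failed: %v", ref, err)
	}
	fmt.Println("pulled", img.Name())
}

If this standalone pull fails with the same "not found" error, the problem is at the registry (the tag does not resolve), not in kubelet's CRI or credential configuration.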
Oct 28 13:21:41.750325 sshd[4899]: Accepted publickey for core from 10.0.0.1 port 45426 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:21:41.751639 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:21:41.756244 systemd-logind[1580]: New session 12 of user core. Oct 28 13:21:41.771141 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 28 13:21:41.889993 sshd[4904]: Connection closed by 10.0.0.1 port 45426 Oct 28 13:21:41.890370 sshd-session[4899]: pam_unix(sshd:session): session closed for user core Oct 28 13:21:41.895385 systemd[1]: sshd@11-10.0.0.148:22-10.0.0.1:45426.service: Deactivated successfully. Oct 28 13:21:41.897640 systemd[1]: session-12.scope: Deactivated successfully. Oct 28 13:21:41.899375 systemd-logind[1580]: Session 12 logged out. Waiting for processes to exit. Oct 28 13:21:41.900285 systemd-logind[1580]: Removed session 12. Oct 28 13:21:41.909041 kubelet[2746]: E1028 13:21:41.907868 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:41.909928 kubelet[2746]: E1028 13:21:41.909868 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5879d58c5c-4p8l2" podUID="f136d0fd-8c7a-4899-8498-1ed4f8ad5125" Oct 28 13:21:41.917795 kubelet[2746]: I1028 13:21:41.917540 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-g4t76" podStartSLOduration=40.917526397 podStartE2EDuration="40.917526397s" podCreationTimestamp="2025-10-28 13:21:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 13:21:41.91693723 +0000 UTC m=+47.501540439" watchObservedRunningTime="2025-10-28 13:21:41.917526397 +0000 UTC m=+47.502129606" Oct 28 13:21:41.955095 containerd[1596]: time="2025-10-28T13:21:41.955036871Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:21:41.956587 containerd[1596]: time="2025-10-28T13:21:41.956553147Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 28 13:21:41.956702 containerd[1596]: time="2025-10-28T13:21:41.956646223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Oct 28 13:21:41.956863 kubelet[2746]: E1028 13:21:41.956809 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 13:21:41.956912 kubelet[2746]: E1028 
13:21:41.956869 2746 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 13:21:41.957257 kubelet[2746]: E1028 13:21:41.957137 2746 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-6nhgp_calico-system(fce65538-7c69-4e45-8cae-c289c79a1bdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 28 13:21:41.957257 kubelet[2746]: E1028 13:21:41.957203 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6nhgp" podUID="fce65538-7c69-4e45-8cae-c289c79a1bdb" Oct 28 13:21:41.957351 containerd[1596]: time="2025-10-28T13:21:41.957334495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 28 13:21:42.197243 systemd-networkd[1501]: cali69d1de4c103: Gained IPv6LL Oct 28 13:21:42.324116 containerd[1596]: time="2025-10-28T13:21:42.324045262Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:21:42.422567 containerd[1596]: time="2025-10-28T13:21:42.422421602Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 28 13:21:42.422567 containerd[1596]: time="2025-10-28T13:21:42.422469101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Oct 28 13:21:42.423029 kubelet[2746]: E1028 13:21:42.422925 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 13:21:42.423258 kubelet[2746]: E1028 13:21:42.423236 2746 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 13:21:42.423559 kubelet[2746]: E1028 13:21:42.423525 2746 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-dgsqm_calico-system(d71c8120-c049-44a0-909a-b94149145773): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 28 13:21:42.423610 kubelet[2746]: E1028 13:21:42.423581 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgsqm" podUID="d71c8120-c049-44a0-909a-b94149145773" Oct 28 13:21:42.424430 containerd[1596]: time="2025-10-28T13:21:42.424353760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 13:21:42.517187 systemd-networkd[1501]: cali1a5346d3772: Gained IPv6LL Oct 28 13:21:42.795941 containerd[1596]: time="2025-10-28T13:21:42.795848963Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:21:42.798294 containerd[1596]: time="2025-10-28T13:21:42.798246986Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 13:21:42.798385 containerd[1596]: time="2025-10-28T13:21:42.798289767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Oct 28 13:21:42.798462 kubelet[2746]: E1028 13:21:42.798427 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 13:21:42.798513 kubelet[2746]: E1028 13:21:42.798472 2746 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 13:21:42.798580 kubelet[2746]: E1028 13:21:42.798559 2746 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-786d85db64-rzgz4_calico-apiserver(865860d9-b904-4b8f-8efa-543ed6829f69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 13:21:42.798624 kubelet[2746]: E1028 13:21:42.798593 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786d85db64-rzgz4" podUID="865860d9-b904-4b8f-8efa-543ed6829f69" Oct 28 13:21:42.837166 systemd-networkd[1501]: cali938fdb8bbb7: Gained IPv6LL Oct 28 13:21:42.901105 systemd-networkd[1501]: calia7f6d8efe79: Gained IPv6LL Oct 28 13:21:42.911761 kubelet[2746]: E1028 13:21:42.911697 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:42.912747 kubelet[2746]: E1028 13:21:42.912709 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgsqm" podUID="d71c8120-c049-44a0-909a-b94149145773" Oct 28 13:21:42.913335 kubelet[2746]: E1028 13:21:42.913296 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6nhgp" podUID="fce65538-7c69-4e45-8cae-c289c79a1bdb" Oct 28 13:21:42.913464 kubelet[2746]: E1028 13:21:42.913395 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786d85db64-rzgz4" podUID="865860d9-b904-4b8f-8efa-543ed6829f69" Oct 28 13:21:44.099691 kubelet[2746]: I1028 13:21:44.099610 2746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 28 13:21:44.100857 kubelet[2746]: E1028 13:21:44.100657 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:44.923650 kubelet[2746]: E1028 13:21:44.923171 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:21:45.529061 systemd-networkd[1501]: vxlan.calico: Link UP Oct 28 13:21:45.529074 systemd-networkd[1501]: vxlan.calico: Gained carrier Oct 28 13:21:46.912654 systemd[1]: Started sshd@12-10.0.0.148:22-10.0.0.1:34194.service - OpenSSH per-connection server daemon (10.0.0.1:34194). Oct 28 13:21:46.979380 sshd[5123]: Accepted publickey for core from 10.0.0.1 port 34194 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:21:46.981145 sshd-session[5123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:21:46.986329 systemd-logind[1580]: New session 13 of user core. Oct 28 13:21:46.995150 systemd[1]: Started session-13.scope - Session 13 of User core. 
Oct 28 13:21:47.128855 sshd[5126]: Connection closed by 10.0.0.1 port 34194 Oct 28 13:21:47.129198 sshd-session[5123]: pam_unix(sshd:session): session closed for user core Oct 28 13:21:47.134363 systemd[1]: sshd@12-10.0.0.148:22-10.0.0.1:34194.service: Deactivated successfully. Oct 28 13:21:47.136569 systemd[1]: session-13.scope: Deactivated successfully. Oct 28 13:21:47.137345 systemd-logind[1580]: Session 13 logged out. Waiting for processes to exit. Oct 28 13:21:47.138719 systemd-logind[1580]: Removed session 13. Oct 28 13:21:47.446195 systemd-networkd[1501]: vxlan.calico: Gained IPv6LL Oct 28 13:21:52.146209 systemd[1]: Started sshd@13-10.0.0.148:22-10.0.0.1:34208.service - OpenSSH per-connection server daemon (10.0.0.1:34208). Oct 28 13:21:52.198023 sshd[5148]: Accepted publickey for core from 10.0.0.1 port 34208 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:21:52.199275 sshd-session[5148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:21:52.203348 systemd-logind[1580]: New session 14 of user core. Oct 28 13:21:52.207171 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 28 13:21:52.331843 sshd[5151]: Connection closed by 10.0.0.1 port 34208 Oct 28 13:21:52.332180 sshd-session[5148]: pam_unix(sshd:session): session closed for user core Oct 28 13:21:52.337128 systemd[1]: sshd@13-10.0.0.148:22-10.0.0.1:34208.service: Deactivated successfully. Oct 28 13:21:52.339375 systemd[1]: session-14.scope: Deactivated successfully. Oct 28 13:21:52.340222 systemd-logind[1580]: Session 14 logged out. Waiting for processes to exit. Oct 28 13:21:52.341575 systemd-logind[1580]: Removed session 14. Oct 28 13:21:52.762788 containerd[1596]: time="2025-10-28T13:21:52.762715646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 28 13:21:53.191629 containerd[1596]: time="2025-10-28T13:21:53.191561021Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:21:53.192929 containerd[1596]: time="2025-10-28T13:21:53.192867082Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 28 13:21:53.193017 containerd[1596]: time="2025-10-28T13:21:53.192953685Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Oct 28 13:21:53.193179 kubelet[2746]: E1028 13:21:53.193113 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 13:21:53.193592 kubelet[2746]: E1028 13:21:53.193180 2746 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 13:21:53.193592 kubelet[2746]: E1028 13:21:53.193286 2746 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7d487b5797-djvbp_calico-system(086c3970-45c7-4eae-a705-114504249cb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 28 13:21:53.194401 containerd[1596]: time="2025-10-28T13:21:53.194351717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 28 13:21:53.556303 containerd[1596]: time="2025-10-28T13:21:53.556154603Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:21:53.570455 containerd[1596]: time="2025-10-28T13:21:53.557552587Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 28 13:21:53.570646 containerd[1596]: time="2025-10-28T13:21:53.557612549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Oct 28 13:21:53.570766 kubelet[2746]: E1028 13:21:53.570714 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 13:21:53.570821 kubelet[2746]: E1028 13:21:53.570771 2746 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 13:21:53.571148 kubelet[2746]: E1028 13:21:53.570937 2746 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7d487b5797-djvbp_calico-system(086c3970-45c7-4eae-a705-114504249cb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 28 13:21:53.571148 kubelet[2746]: E1028 13:21:53.571027 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d487b5797-djvbp" podUID="086c3970-45c7-4eae-a705-114504249cb8" Oct 28 13:21:54.765034 containerd[1596]: time="2025-10-28T13:21:54.764924079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 13:21:55.064727 containerd[1596]: time="2025-10-28T13:21:55.064600935Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:21:55.065824 containerd[1596]: time="2025-10-28T13:21:55.065790006Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 13:21:55.065937 containerd[1596]: time="2025-10-28T13:21:55.065855518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Oct 28 13:21:55.066053 kubelet[2746]: E1028 13:21:55.065968 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 13:21:55.066359 kubelet[2746]: E1028 13:21:55.066058 2746 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 13:21:55.066359 kubelet[2746]: E1028 13:21:55.066142 2746 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-786d85db64-xgd4x_calico-apiserver(8bf88aab-609b-4cc2-80ec-8ab913048df5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 13:21:55.066359 kubelet[2746]: E1028 13:21:55.066174 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786d85db64-xgd4x" podUID="8bf88aab-609b-4cc2-80ec-8ab913048df5" Oct 28 13:21:55.761594 containerd[1596]: time="2025-10-28T13:21:55.761539784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 28 13:21:56.164089 containerd[1596]: time="2025-10-28T13:21:56.163993606Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:21:56.165230 containerd[1596]: time="2025-10-28T13:21:56.165181094Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 28 13:21:56.165445 containerd[1596]: time="2025-10-28T13:21:56.165264290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Oct 28 13:21:56.165552 kubelet[2746]: E1028 13:21:56.165488 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 13:21:56.165552 kubelet[2746]: E1028 13:21:56.165543 2746 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" 
Oct 28 13:21:56.166068 kubelet[2746]: E1028 13:21:56.165635 2746 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-dgsqm_calico-system(d71c8120-c049-44a0-909a-b94149145773): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 28 13:21:56.166068 kubelet[2746]: E1028 13:21:56.165670 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgsqm" podUID="d71c8120-c049-44a0-909a-b94149145773" Oct 28 13:21:57.350023 systemd[1]: Started sshd@14-10.0.0.148:22-10.0.0.1:44908.service - OpenSSH per-connection server daemon (10.0.0.1:44908). Oct 28 13:21:57.409738 sshd[5174]: Accepted publickey for core from 10.0.0.1 port 44908 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:21:57.411119 sshd-session[5174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:21:57.415985 systemd-logind[1580]: New session 15 of user core. Oct 28 13:21:57.430183 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 28 13:21:57.552057 sshd[5177]: Connection closed by 10.0.0.1 port 44908 Oct 28 13:21:57.552403 sshd-session[5174]: pam_unix(sshd:session): session closed for user core Oct 28 13:21:57.557710 systemd[1]: sshd@14-10.0.0.148:22-10.0.0.1:44908.service: Deactivated successfully. Oct 28 13:21:57.559981 systemd[1]: session-15.scope: Deactivated successfully. Oct 28 13:21:57.560964 systemd-logind[1580]: Session 15 logged out. Waiting for processes to exit. Oct 28 13:21:57.562143 systemd-logind[1580]: Removed session 15. 
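Every pull attempt in this log fails the same way: containerd receives a 404 from ghcr.io while resolving ghcr.io/flatcar/calico/<component>:v3.30.4, and the kubelet surfaces it as ErrImagePull and later ImagePullBackOff. A quick way to distinguish "tag absent from the registry" from a node-side problem is to resolve the manifest directly. The sketch below uses the Docker Registry v2 API; the anonymous token endpoint and service value for ghcr.io are assumptions about its token service, not something recorded in this log.

```python
# Sketch: check whether a tag resolves on ghcr.io before blaming the node.
# Assumes ghcr.io issues anonymous pull tokens from /token for public
# repositories (Docker Registry v2 token flow); an HTTP 404 on the
# manifest corresponds to the "failed to resolve image ... not found"
# errors containerd reports above.
import json
import urllib.error
import urllib.request

def manifest_exists(repo: str, tag: str, registry: str = "ghcr.io") -> bool:
    token_url = f"https://{registry}/token?scope=repository:{repo}:pull&service={registry}"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        f"https://{registry}/v2/{repo}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
    )
    try:
        urllib.request.urlopen(req)
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

if __name__ == "__main__":
    # Mirrors one of the failing references from the log.
    print(manifest_exists("flatcar/calico/apiserver", "v3.30.4"))
```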
Oct 28 13:21:57.762084 containerd[1596]: time="2025-10-28T13:21:57.762035855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 28 13:21:58.081667 containerd[1596]: time="2025-10-28T13:21:58.081532071Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:21:58.082782 containerd[1596]: time="2025-10-28T13:21:58.082716293Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 28 13:21:58.082782 containerd[1596]: time="2025-10-28T13:21:58.082748273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Oct 28 13:21:58.082928 kubelet[2746]: E1028 13:21:58.082887 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 13:21:58.083374 kubelet[2746]: E1028 13:21:58.082928 2746 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 13:21:58.083374 kubelet[2746]: E1028 13:21:58.083130 2746 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5879d58c5c-4p8l2_calico-system(f136d0fd-8c7a-4899-8498-1ed4f8ad5125): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 28 13:21:58.083374 kubelet[2746]: E1028 13:21:58.083180 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5879d58c5c-4p8l2" podUID="f136d0fd-8c7a-4899-8498-1ed4f8ad5125" Oct 28 13:21:58.083467 containerd[1596]: time="2025-10-28T13:21:58.083289448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 28 13:21:58.460346 containerd[1596]: time="2025-10-28T13:21:58.460284797Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:21:58.461524 containerd[1596]: time="2025-10-28T13:21:58.461480511Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 28 13:21:58.461586 containerd[1596]: time="2025-10-28T13:21:58.461557014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Oct 28 13:21:58.461750 kubelet[2746]: E1028 13:21:58.461692 2746 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 13:21:58.461830 kubelet[2746]: E1028 13:21:58.461759 2746 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 13:21:58.461987 kubelet[2746]: E1028 13:21:58.461864 2746 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-6nhgp_calico-system(fce65538-7c69-4e45-8cae-c289c79a1bdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 28 13:21:58.463034 containerd[1596]: time="2025-10-28T13:21:58.462782053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 28 13:21:58.872631 containerd[1596]: time="2025-10-28T13:21:58.872579184Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:21:58.873872 containerd[1596]: time="2025-10-28T13:21:58.873827516Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 28 13:21:58.873953 containerd[1596]: time="2025-10-28T13:21:58.873909279Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Oct 28 13:21:58.874118 kubelet[2746]: E1028 13:21:58.874070 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 13:21:58.874172 kubelet[2746]: E1028 13:21:58.874121 2746 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 13:21:58.874376 kubelet[2746]: E1028 13:21:58.874340 2746 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-6nhgp_calico-system(fce65538-7c69-4e45-8cae-c289c79a1bdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 28 13:21:58.874512 kubelet[2746]: E1028 13:21:58.874399 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6nhgp" podUID="fce65538-7c69-4e45-8cae-c289c79a1bdb" Oct 28 13:21:58.874604 containerd[1596]: time="2025-10-28T13:21:58.874422683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 13:21:59.235404 containerd[1596]: time="2025-10-28T13:21:59.235270084Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:21:59.236548 containerd[1596]: time="2025-10-28T13:21:59.236503118Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 13:21:59.236593 containerd[1596]: time="2025-10-28T13:21:59.236547962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Oct 28 13:21:59.236750 kubelet[2746]: E1028 13:21:59.236698 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 13:21:59.237081 kubelet[2746]: E1028 13:21:59.236754 2746 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 13:21:59.237081 kubelet[2746]: E1028 13:21:59.236834 2746 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-786d85db64-rzgz4_calico-apiserver(865860d9-b904-4b8f-8efa-543ed6829f69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 13:21:59.237081 kubelet[2746]: E1028 13:21:59.236863 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786d85db64-rzgz4" podUID="865860d9-b904-4b8f-8efa-543ed6829f69" Oct 28 13:22:02.569409 systemd[1]: Started sshd@15-10.0.0.148:22-10.0.0.1:44924.service - OpenSSH per-connection server daemon (10.0.0.1:44924). Oct 28 13:22:02.631142 sshd[5193]: Accepted publickey for core from 10.0.0.1 port 44924 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:22:02.632423 sshd-session[5193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:22:02.636886 systemd-logind[1580]: New session 16 of user core. 
Oct 28 13:22:02.645135 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 28 13:22:02.761209 sshd[5196]: Connection closed by 10.0.0.1 port 44924 Oct 28 13:22:02.762232 sshd-session[5193]: pam_unix(sshd:session): session closed for user core Oct 28 13:22:02.767762 systemd[1]: sshd@15-10.0.0.148:22-10.0.0.1:44924.service: Deactivated successfully. Oct 28 13:22:02.770043 systemd[1]: session-16.scope: Deactivated successfully. Oct 28 13:22:02.770817 systemd-logind[1580]: Session 16 logged out. Waiting for processes to exit. Oct 28 13:22:02.772579 systemd-logind[1580]: Removed session 16. Oct 28 13:22:04.762193 kubelet[2746]: E1028 13:22:04.762123 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d487b5797-djvbp" podUID="086c3970-45c7-4eae-a705-114504249cb8" Oct 28 13:22:06.966231 kubelet[2746]: E1028 13:22:06.966194 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:22:07.760863 kubelet[2746]: E1028 13:22:07.760821 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786d85db64-xgd4x" podUID="8bf88aab-609b-4cc2-80ec-8ab913048df5" Oct 28 13:22:07.776577 systemd[1]: Started sshd@16-10.0.0.148:22-10.0.0.1:48098.service - OpenSSH per-connection server daemon (10.0.0.1:48098). Oct 28 13:22:07.840699 sshd[5244]: Accepted publickey for core from 10.0.0.1 port 48098 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:22:07.842143 sshd-session[5244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:22:07.846559 systemd-logind[1580]: New session 17 of user core. Oct 28 13:22:07.856137 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 28 13:22:07.979716 sshd[5247]: Connection closed by 10.0.0.1 port 48098 Oct 28 13:22:07.980104 sshd-session[5244]: pam_unix(sshd:session): session closed for user core Oct 28 13:22:07.989885 systemd[1]: sshd@16-10.0.0.148:22-10.0.0.1:48098.service: Deactivated successfully. Oct 28 13:22:07.992050 systemd[1]: session-17.scope: Deactivated successfully. Oct 28 13:22:07.992970 systemd-logind[1580]: Session 17 logged out. Waiting for processes to exit. Oct 28 13:22:07.995933 systemd[1]: Started sshd@17-10.0.0.148:22-10.0.0.1:48100.service - OpenSSH per-connection server daemon (10.0.0.1:48100). 
Oct 28 13:22:07.997120 systemd-logind[1580]: Removed session 17. Oct 28 13:22:08.057807 sshd[5260]: Accepted publickey for core from 10.0.0.1 port 48100 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:22:08.059582 sshd-session[5260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:22:08.063984 systemd-logind[1580]: New session 18 of user core. Oct 28 13:22:08.072136 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 28 13:22:08.276817 sshd[5263]: Connection closed by 10.0.0.1 port 48100 Oct 28 13:22:08.277258 sshd-session[5260]: pam_unix(sshd:session): session closed for user core Oct 28 13:22:08.286813 systemd[1]: sshd@17-10.0.0.148:22-10.0.0.1:48100.service: Deactivated successfully. Oct 28 13:22:08.288977 systemd[1]: session-18.scope: Deactivated successfully. Oct 28 13:22:08.289881 systemd-logind[1580]: Session 18 logged out. Waiting for processes to exit. Oct 28 13:22:08.292971 systemd[1]: Started sshd@18-10.0.0.148:22-10.0.0.1:48116.service - OpenSSH per-connection server daemon (10.0.0.1:48116). Oct 28 13:22:08.293794 systemd-logind[1580]: Removed session 18. Oct 28 13:22:08.347266 sshd[5275]: Accepted publickey for core from 10.0.0.1 port 48116 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:22:08.348555 sshd-session[5275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:22:08.353131 systemd-logind[1580]: New session 19 of user core. Oct 28 13:22:08.363152 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 28 13:22:08.763439 kubelet[2746]: E1028 13:22:08.763315 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgsqm" podUID="d71c8120-c049-44a0-909a-b94149145773" Oct 28 13:22:08.970053 sshd[5278]: Connection closed by 10.0.0.1 port 48116 Oct 28 13:22:08.968317 sshd-session[5275]: pam_unix(sshd:session): session closed for user core Oct 28 13:22:08.981793 systemd[1]: sshd@18-10.0.0.148:22-10.0.0.1:48116.service: Deactivated successfully. Oct 28 13:22:08.985215 systemd[1]: session-19.scope: Deactivated successfully. Oct 28 13:22:08.990162 systemd-logind[1580]: Session 19 logged out. Waiting for processes to exit. Oct 28 13:22:08.998233 systemd[1]: Started sshd@19-10.0.0.148:22-10.0.0.1:48132.service - OpenSSH per-connection server daemon (10.0.0.1:48132). Oct 28 13:22:09.001370 systemd-logind[1580]: Removed session 19. Oct 28 13:22:09.066729 sshd[5296]: Accepted publickey for core from 10.0.0.1 port 48132 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:22:09.068053 sshd-session[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:22:09.072780 systemd-logind[1580]: New session 20 of user core. Oct 28 13:22:09.082176 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 28 13:22:09.297350 sshd[5299]: Connection closed by 10.0.0.1 port 48132 Oct 28 13:22:09.299027 sshd-session[5296]: pam_unix(sshd:session): session closed for user core Oct 28 13:22:09.309987 systemd[1]: sshd@19-10.0.0.148:22-10.0.0.1:48132.service: Deactivated successfully. 
Oct 28 13:22:09.312394 systemd[1]: session-20.scope: Deactivated successfully. Oct 28 13:22:09.313369 systemd-logind[1580]: Session 20 logged out. Waiting for processes to exit. Oct 28 13:22:09.316645 systemd[1]: Started sshd@20-10.0.0.148:22-10.0.0.1:48134.service - OpenSSH per-connection server daemon (10.0.0.1:48134). Oct 28 13:22:09.317368 systemd-logind[1580]: Removed session 20. Oct 28 13:22:09.372101 sshd[5311]: Accepted publickey for core from 10.0.0.1 port 48134 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:22:09.373992 sshd-session[5311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:22:09.379621 systemd-logind[1580]: New session 21 of user core. Oct 28 13:22:09.385161 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 28 13:22:09.494583 sshd[5314]: Connection closed by 10.0.0.1 port 48134 Oct 28 13:22:09.494942 sshd-session[5311]: pam_unix(sshd:session): session closed for user core Oct 28 13:22:09.499863 systemd[1]: sshd@20-10.0.0.148:22-10.0.0.1:48134.service: Deactivated successfully. Oct 28 13:22:09.502202 systemd[1]: session-21.scope: Deactivated successfully. Oct 28 13:22:09.503308 systemd-logind[1580]: Session 21 logged out. Waiting for processes to exit. Oct 28 13:22:09.504599 systemd-logind[1580]: Removed session 21. Oct 28 13:22:10.764157 kubelet[2746]: E1028 13:22:10.764071 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6nhgp" podUID="fce65538-7c69-4e45-8cae-c289c79a1bdb" Oct 28 13:22:11.761046 kubelet[2746]: E1028 13:22:11.760976 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:22:11.761890 kubelet[2746]: E1028 13:22:11.761811 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5879d58c5c-4p8l2" podUID="f136d0fd-8c7a-4899-8498-1ed4f8ad5125" Oct 28 13:22:13.761109 kubelet[2746]: E1028 13:22:13.761045 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786d85db64-rzgz4" podUID="865860d9-b904-4b8f-8efa-543ed6829f69" Oct 28 13:22:14.521454 systemd[1]: Started sshd@21-10.0.0.148:22-10.0.0.1:48138.service - OpenSSH per-connection server daemon (10.0.0.1:48138). Oct 28 13:22:14.601769 sshd[5332]: Accepted publickey for core from 10.0.0.1 port 48138 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:22:14.602375 sshd-session[5332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:22:14.607639 systemd-logind[1580]: New session 22 of user core. Oct 28 13:22:14.617237 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 28 13:22:14.736218 sshd[5335]: Connection closed by 10.0.0.1 port 48138 Oct 28 13:22:14.736725 sshd-session[5332]: pam_unix(sshd:session): session closed for user core Oct 28 13:22:14.744119 systemd[1]: sshd@21-10.0.0.148:22-10.0.0.1:48138.service: Deactivated successfully. Oct 28 13:22:14.746560 systemd[1]: session-22.scope: Deactivated successfully. Oct 28 13:22:14.747456 systemd-logind[1580]: Session 22 logged out. Waiting for processes to exit. Oct 28 13:22:14.748983 systemd-logind[1580]: Removed session 22. Oct 28 13:22:16.760781 kubelet[2746]: E1028 13:22:16.760736 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:22:16.761428 kubelet[2746]: E1028 13:22:16.760816 2746 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 13:22:17.761413 containerd[1596]: time="2025-10-28T13:22:17.761351020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 28 13:22:18.161356 containerd[1596]: time="2025-10-28T13:22:18.161284140Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:18.162893 containerd[1596]: time="2025-10-28T13:22:18.162773412Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 28 13:22:18.162893 containerd[1596]: time="2025-10-28T13:22:18.162844989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:18.163212 kubelet[2746]: E1028 13:22:18.163139 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 13:22:18.163212 kubelet[2746]: E1028 13:22:18.163204 2746 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 13:22:18.163817 kubelet[2746]: E1028 13:22:18.163306 2746 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7d487b5797-djvbp_calico-system(086c3970-45c7-4eae-a705-114504249cb8): ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 28 13:22:18.165025 containerd[1596]: time="2025-10-28T13:22:18.164627230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 28 13:22:18.647276 containerd[1596]: time="2025-10-28T13:22:18.647208867Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:18.648709 containerd[1596]: time="2025-10-28T13:22:18.648670746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:18.648763 containerd[1596]: time="2025-10-28T13:22:18.648673832Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 28 13:22:18.649026 kubelet[2746]: E1028 13:22:18.648967 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 13:22:18.649084 kubelet[2746]: E1028 13:22:18.649046 2746 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 13:22:18.649158 kubelet[2746]: E1028 13:22:18.649134 2746 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7d487b5797-djvbp_calico-system(086c3970-45c7-4eae-a705-114504249cb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 28 13:22:18.649227 kubelet[2746]: E1028 13:22:18.649194 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d487b5797-djvbp" podUID="086c3970-45c7-4eae-a705-114504249cb8" Oct 28 13:22:19.749715 systemd[1]: Started sshd@22-10.0.0.148:22-10.0.0.1:35558.service - OpenSSH per-connection server daemon (10.0.0.1:35558). Oct 28 13:22:19.807201 sshd[5348]: Accepted publickey for core from 10.0.0.1 port 35558 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:22:19.808581 sshd-session[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:22:19.813119 systemd-logind[1580]: New session 23 of user core. 
Oct 28 13:22:19.824149 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 28 13:22:19.947036 sshd[5351]: Connection closed by 10.0.0.1 port 35558 Oct 28 13:22:19.947347 sshd-session[5348]: pam_unix(sshd:session): session closed for user core Oct 28 13:22:19.951558 systemd[1]: sshd@22-10.0.0.148:22-10.0.0.1:35558.service: Deactivated successfully. Oct 28 13:22:19.953826 systemd[1]: session-23.scope: Deactivated successfully. Oct 28 13:22:19.955576 systemd-logind[1580]: Session 23 logged out. Waiting for processes to exit. Oct 28 13:22:19.957326 systemd-logind[1580]: Removed session 23. Oct 28 13:22:21.761581 containerd[1596]: time="2025-10-28T13:22:21.761519027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 28 13:22:22.124248 containerd[1596]: time="2025-10-28T13:22:22.124192946Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:22.125850 containerd[1596]: time="2025-10-28T13:22:22.125732049Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 28 13:22:22.125850 containerd[1596]: time="2025-10-28T13:22:22.125775641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:22.126155 kubelet[2746]: E1028 13:22:22.126061 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 13:22:22.126155 kubelet[2746]: E1028 13:22:22.126125 2746 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 13:22:22.127414 kubelet[2746]: E1028 13:22:22.126220 2746 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-dgsqm_calico-system(d71c8120-c049-44a0-909a-b94149145773): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 28 13:22:22.127414 kubelet[2746]: E1028 13:22:22.126256 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dgsqm" podUID="d71c8120-c049-44a0-909a-b94149145773" Oct 28 13:22:22.762079 containerd[1596]: time="2025-10-28T13:22:22.762026749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 13:22:23.078812 containerd[1596]: time="2025-10-28T13:22:23.078663498Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:23.085905 containerd[1596]: time="2025-10-28T13:22:23.085829935Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 13:22:23.085969 containerd[1596]: time="2025-10-28T13:22:23.085913064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:23.086336 kubelet[2746]: E1028 13:22:23.086237 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 13:22:23.086336 kubelet[2746]: E1028 13:22:23.086303 2746 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 13:22:23.086548 kubelet[2746]: E1028 13:22:23.086503 2746 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-786d85db64-xgd4x_calico-apiserver(8bf88aab-609b-4cc2-80ec-8ab913048df5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 13:22:23.086597 kubelet[2746]: E1028 13:22:23.086564 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786d85db64-xgd4x" podUID="8bf88aab-609b-4cc2-80ec-8ab913048df5" Oct 28 13:22:23.086859 containerd[1596]: time="2025-10-28T13:22:23.086819460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 28 13:22:23.613834 containerd[1596]: time="2025-10-28T13:22:23.613748497Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:23.615256 containerd[1596]: time="2025-10-28T13:22:23.615224779Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 28 13:22:23.615357 containerd[1596]: time="2025-10-28T13:22:23.615311304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:23.615525 kubelet[2746]: E1028 13:22:23.615483 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 13:22:23.615895 kubelet[2746]: E1028 13:22:23.615536 2746 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 13:22:23.615895 kubelet[2746]: E1028 13:22:23.615640 2746 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-6nhgp_calico-system(fce65538-7c69-4e45-8cae-c289c79a1bdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 28 13:22:23.617071 containerd[1596]: time="2025-10-28T13:22:23.617026250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 28 13:22:24.502401 containerd[1596]: time="2025-10-28T13:22:24.502317980Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:24.503873 containerd[1596]: time="2025-10-28T13:22:24.503837543Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 28 13:22:24.503937 containerd[1596]: time="2025-10-28T13:22:24.503882880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:24.504180 kubelet[2746]: E1028 13:22:24.504123 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 13:22:24.504180 kubelet[2746]: E1028 13:22:24.504174 2746 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 13:22:24.504418 kubelet[2746]: E1028 13:22:24.504260 2746 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-6nhgp_calico-system(fce65538-7c69-4e45-8cae-c289c79a1bdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 28 13:22:24.504418 kubelet[2746]: E1028 13:22:24.504301 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6nhgp" podUID="fce65538-7c69-4e45-8cae-c289c79a1bdb" Oct 28 13:22:24.762353 containerd[1596]: time="2025-10-28T13:22:24.761783076Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 28 13:22:24.959775 systemd[1]: Started sshd@23-10.0.0.148:22-10.0.0.1:35564.service - OpenSSH per-connection server daemon (10.0.0.1:35564). Oct 28 13:22:25.029748 sshd[5366]: Accepted publickey for core from 10.0.0.1 port 35564 ssh2: RSA SHA256:7agSn2MrwuqfnOxDCr6f4heAf/pJNgMDdwmEg1eP9yI Oct 28 13:22:25.031167 sshd-session[5366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 13:22:25.036092 systemd-logind[1580]: New session 24 of user core. Oct 28 13:22:25.042594 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 28 13:22:25.174646 sshd[5369]: Connection closed by 10.0.0.1 port 35564 Oct 28 13:22:25.177756 sshd-session[5366]: pam_unix(sshd:session): session closed for user core Oct 28 13:22:25.178936 containerd[1596]: time="2025-10-28T13:22:25.178757738Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 13:22:25.184437 containerd[1596]: time="2025-10-28T13:22:25.184365139Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 28 13:22:25.184544 containerd[1596]: time="2025-10-28T13:22:25.184423010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Oct 28 13:22:25.184730 kubelet[2746]: E1028 13:22:25.184678 2746 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 13:22:25.184730 kubelet[2746]: E1028 13:22:25.184732 2746 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 13:22:25.185265 kubelet[2746]: E1028 13:22:25.184813 2746 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5879d58c5c-4p8l2_calico-system(f136d0fd-8c7a-4899-8498-1ed4f8ad5125): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 28 13:22:25.185265 kubelet[2746]: E1028 13:22:25.184844 2746 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5879d58c5c-4p8l2" podUID="f136d0fd-8c7a-4899-8498-1ed4f8ad5125" Oct 28 13:22:25.186715 systemd[1]: sshd@23-10.0.0.148:22-10.0.0.1:35564.service: Deactivated successfully. Oct 28 13:22:25.189579 systemd[1]: session-24.scope: Deactivated successfully. 
Oct 28 13:22:25.191264 systemd-logind[1580]: Session 24 logged out. Waiting for processes to exit. Oct 28 13:22:25.193106 systemd-logind[1580]: Removed session 24.
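The retry spacing visible in the timestamps (pulls retried within seconds at first, then minutes apart once ImagePullBackOff is reported) is consistent with the kubelet's exponential image-pull backoff, which roughly doubles the wait after each consecutive failure up to a cap. The schedule below is only a sketch of that doubling behaviour; the initial delay and cap (10 s and 300 s) are assumed illustrative defaults, not values read from this node's configuration.

```python
# Sketch: exponential backoff of the kind that produces the increasingly
# spaced "Back-off pulling image ..." entries above.
# INITIAL and CAP are assumptions for illustration, not node settings.
INITIAL = 10.0   # seconds before the first retry
CAP = 300.0      # maximum wait between retries

def backoff_schedule(failures: int) -> list[float]:
    """Wait (in seconds) applied after each consecutive pull failure."""
    return [min(INITIAL * (2 ** i), CAP) for i in range(failures)]

if __name__ == "__main__":
    # e.g. 10s, 20s, 40s, 80s, 160s, 300s, 300s
    print(backoff_schedule(7))
```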