Nov 5 04:46:56.520626 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 03:01:50 -00 2025
Nov 5 04:46:56.520648 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9a076e14dca937d9663502c090e1ff4931f585a3752c3aa4c87feb67d6e5a465
Nov 5 04:46:56.520660 kernel: BIOS-provided physical RAM map:
Nov 5 04:46:56.520667 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 5 04:46:56.520674 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 5 04:46:56.520681 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 5 04:46:56.520689 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 5 04:46:56.520696 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 5 04:46:56.520706 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 5 04:46:56.520712 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 5 04:46:56.520722 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 5 04:46:56.520728 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 5 04:46:56.520762 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 5 04:46:56.520770 kernel: NX (Execute Disable) protection: active
Nov 5 04:46:56.520778 kernel: APIC: Static calls initialized
Nov 5 04:46:56.520789 kernel: SMBIOS 2.8 present.
Nov 5 04:46:56.520799 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 5 04:46:56.520807 kernel: DMI: Memory slots populated: 1/1
Nov 5 04:46:56.520814 kernel: Hypervisor detected: KVM
Nov 5 04:46:56.520821 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 5 04:46:56.520829 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 5 04:46:56.520836 kernel: kvm-clock: using sched offset of 3912263471 cycles
Nov 5 04:46:56.520844 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 5 04:46:56.520852 kernel: tsc: Detected 2794.748 MHz processor
Nov 5 04:46:56.520862 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 5 04:46:56.520871 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 5 04:46:56.520879 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 5 04:46:56.520887 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 5 04:46:56.520895 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 5 04:46:56.520902 kernel: Using GB pages for direct mapping
Nov 5 04:46:56.520910 kernel: ACPI: Early table checksum verification disabled
Nov 5 04:46:56.520918 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 5 04:46:56.520928 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 04:46:56.520936 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 04:46:56.520944 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 04:46:56.520952 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 5 04:46:56.520960 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 04:46:56.520969 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 04:46:56.520978 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 04:46:56.520990 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 04:46:56.521001 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Nov 5 04:46:56.521009 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Nov 5 04:46:56.521017 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 5 04:46:56.521025 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Nov 5 04:46:56.521035 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Nov 5 04:46:56.521043 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Nov 5 04:46:56.521051 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Nov 5 04:46:56.521059 kernel: No NUMA configuration found
Nov 5 04:46:56.521067 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 5 04:46:56.521075 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Nov 5 04:46:56.521093 kernel: Zone ranges:
Nov 5 04:46:56.521101 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 5 04:46:56.521109 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 5 04:46:56.521117 kernel: Normal empty
Nov 5 04:46:56.521125 kernel: Device empty
Nov 5 04:46:56.521133 kernel: Movable zone start for each node
Nov 5 04:46:56.521141 kernel: Early memory node ranges
Nov 5 04:46:56.521149 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 5 04:46:56.521159 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 5 04:46:56.521167 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 5 04:46:56.521175 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 5 04:46:56.521183 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 5 04:46:56.521191 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 5 04:46:56.521201 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 5 04:46:56.521209 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 5 04:46:56.521220 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 5 04:46:56.521228 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 5 04:46:56.521238 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 5 04:46:56.521246 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 5 04:46:56.521254 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 5 04:46:56.521262 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 5 04:46:56.521270 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 5 04:46:56.521280 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 5 04:46:56.521287 kernel: TSC deadline timer available
Nov 5 04:46:56.521295 kernel: CPU topo: Max. logical packages: 1
Nov 5 04:46:56.521303 kernel: CPU topo: Max. logical dies: 1
Nov 5 04:46:56.521311 kernel: CPU topo: Max. dies per package: 1
Nov 5 04:46:56.521319 kernel: CPU topo: Max. threads per core: 1
Nov 5 04:46:56.521327 kernel: CPU topo: Num. cores per package: 4
Nov 5 04:46:56.521335 kernel: CPU topo: Num. threads per package: 4
Nov 5 04:46:56.521344 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 5 04:46:56.521352 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 5 04:46:56.521360 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 5 04:46:56.521368 kernel: kvm-guest: setup PV sched yield
Nov 5 04:46:56.521376 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 5 04:46:56.521384 kernel: Booting paravirtualized kernel on KVM
Nov 5 04:46:56.521392 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 5 04:46:56.521402 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 5 04:46:56.521410 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 5 04:46:56.521418 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 5 04:46:56.521426 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 5 04:46:56.521434 kernel: kvm-guest: PV spinlocks enabled
Nov 5 04:46:56.521442 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 5 04:46:56.521451 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9a076e14dca937d9663502c090e1ff4931f585a3752c3aa4c87feb67d6e5a465
Nov 5 04:46:56.521461 kernel: random: crng init done
Nov 5 04:46:56.521469 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 5 04:46:56.521477 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 5 04:46:56.521485 kernel: Fallback order for Node 0: 0
Nov 5 04:46:56.521493 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Nov 5 04:46:56.521501 kernel: Policy zone: DMA32
Nov 5 04:46:56.521509 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 5 04:46:56.521519 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 5 04:46:56.521527 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 5 04:46:56.521535 kernel: ftrace: allocated 157 pages with 5 groups
Nov 5 04:46:56.521543 kernel: Dynamic Preempt: voluntary
Nov 5 04:46:56.521551 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 5 04:46:56.521560 kernel: rcu: RCU event tracing is enabled.
Nov 5 04:46:56.521568 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 5 04:46:56.521576 kernel: Trampoline variant of Tasks RCU enabled.
Nov 5 04:46:56.521588 kernel: Rude variant of Tasks RCU enabled.
Nov 5 04:46:56.521596 kernel: Tracing variant of Tasks RCU enabled.
Nov 5 04:46:56.521604 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 5 04:46:56.521613 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 5 04:46:56.521620 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 04:46:56.521628 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 04:46:56.521637 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 04:46:56.521647 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 5 04:46:56.521655 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 5 04:46:56.521669 kernel: Console: colour VGA+ 80x25
Nov 5 04:46:56.521679 kernel: printk: legacy console [ttyS0] enabled
Nov 5 04:46:56.521688 kernel: ACPI: Core revision 20240827
Nov 5 04:46:56.521696 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 5 04:46:56.521705 kernel: APIC: Switch to symmetric I/O mode setup
Nov 5 04:46:56.521713 kernel: x2apic enabled
Nov 5 04:46:56.521721 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 5 04:46:56.521734 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 5 04:46:56.521761 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 5 04:46:56.521770 kernel: kvm-guest: setup PV IPIs
Nov 5 04:46:56.521778 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 5 04:46:56.521789 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 5 04:46:56.521798 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 5 04:46:56.521806 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 5 04:46:56.521814 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 5 04:46:56.521823 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 5 04:46:56.521831 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 5 04:46:56.521839 kernel: Spectre V2 : Mitigation: Retpolines
Nov 5 04:46:56.521850 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 5 04:46:56.521858 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 5 04:46:56.521866 kernel: active return thunk: retbleed_return_thunk
Nov 5 04:46:56.521875 kernel: RETBleed: Mitigation: untrained return thunk
Nov 5 04:46:56.521883 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 5 04:46:56.521892 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 5 04:46:56.521900 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 5 04:46:56.521911 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 5 04:46:56.521919 kernel: active return thunk: srso_return_thunk
Nov 5 04:46:56.521928 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 5 04:46:56.521936 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 5 04:46:56.521944 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 5 04:46:56.521953 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 5 04:46:56.521961 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 5 04:46:56.521971 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 5 04:46:56.521979 kernel: Freeing SMP alternatives memory: 32K
Nov 5 04:46:56.521988 kernel: pid_max: default: 32768 minimum: 301
Nov 5 04:46:56.521996 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 5 04:46:56.522004 kernel: landlock: Up and running.
Nov 5 04:46:56.522012 kernel: SELinux: Initializing.
Nov 5 04:46:56.522023 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 04:46:56.522033 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 04:46:56.522042 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 5 04:46:56.522050 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 5 04:46:56.522059 kernel: ... version: 0
Nov 5 04:46:56.522067 kernel: ... bit width: 48
Nov 5 04:46:56.522075 kernel: ... generic registers: 6
Nov 5 04:46:56.522090 kernel: ... value mask: 0000ffffffffffff
Nov 5 04:46:56.522101 kernel: ... max period: 00007fffffffffff
Nov 5 04:46:56.522109 kernel: ... fixed-purpose events: 0
Nov 5 04:46:56.522118 kernel: ... event mask: 000000000000003f
Nov 5 04:46:56.522126 kernel: signal: max sigframe size: 1776
Nov 5 04:46:56.522135 kernel: rcu: Hierarchical SRCU implementation.
Nov 5 04:46:56.522143 kernel: rcu: Max phase no-delay instances is 400.
Nov 5 04:46:56.522151 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 5 04:46:56.522162 kernel: smp: Bringing up secondary CPUs ...
Nov 5 04:46:56.522170 kernel: smpboot: x86: Booting SMP configuration:
Nov 5 04:46:56.522178 kernel: .... node #0, CPUs: #1 #2 #3
Nov 5 04:46:56.522186 kernel: smp: Brought up 1 node, 4 CPUs
Nov 5 04:46:56.522194 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 5 04:46:56.522203 kernel: Memory: 2447340K/2571752K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15348K init, 2696K bss, 118472K reserved, 0K cma-reserved)
Nov 5 04:46:56.522211 kernel: devtmpfs: initialized
Nov 5 04:46:56.522220 kernel: x86/mm: Memory block size: 128MB
Nov 5 04:46:56.522230 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 5 04:46:56.522239 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 5 04:46:56.522247 kernel: pinctrl core: initialized pinctrl subsystem
Nov 5 04:46:56.522255 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 5 04:46:56.522263 kernel: audit: initializing netlink subsys (disabled)
Nov 5 04:46:56.522272 kernel: audit: type=2000 audit(1762318013.789:1): state=initialized audit_enabled=0 res=1
Nov 5 04:46:56.522280 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 5 04:46:56.522290 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 5 04:46:56.522298 kernel: cpuidle: using governor menu
Nov 5 04:46:56.522307 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 5 04:46:56.522315 kernel: dca service started, version 1.12.1
Nov 5 04:46:56.522323 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 5 04:46:56.522332 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 5 04:46:56.522340 kernel: PCI: Using configuration type 1 for base access
Nov 5 04:46:56.522350 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 5 04:46:56.522358 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 5 04:46:56.522367 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 5 04:46:56.522375 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 5 04:46:56.522383 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 5 04:46:56.522392 kernel: ACPI: Added _OSI(Module Device)
Nov 5 04:46:56.522400 kernel: ACPI: Added _OSI(Processor Device)
Nov 5 04:46:56.522410 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 5 04:46:56.522418 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 5 04:46:56.522427 kernel: ACPI: Interpreter enabled
Nov 5 04:46:56.522435 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 5 04:46:56.522443 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 5 04:46:56.522451 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 5 04:46:56.522460 kernel: PCI: Using E820 reservations for host bridge windows
Nov 5 04:46:56.522470 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 5 04:46:56.522478 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 5 04:46:56.522750 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 5 04:46:56.522940 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 5 04:46:56.523126 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 5 04:46:56.523137 kernel: PCI host bridge to bus 0000:00
Nov 5 04:46:56.523336 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 5 04:46:56.523501 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 5 04:46:56.523660 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 5 04:46:56.523848 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 5 04:46:56.524009 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 5 04:46:56.524176 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 5 04:46:56.524341 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 5 04:46:56.524536 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 5 04:46:56.524723 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 5 04:46:56.524919 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Nov 5 04:46:56.525122 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Nov 5 04:46:56.525300 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Nov 5 04:46:56.525490 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 5 04:46:56.525678 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 5 04:46:56.525890 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Nov 5 04:46:56.526067 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Nov 5 04:46:56.526249 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 5 04:46:56.526438 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 5 04:46:56.526613 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Nov 5 04:46:56.526806 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Nov 5 04:46:56.526983 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 5 04:46:56.527175 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 5 04:46:56.527354 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Nov 5 04:46:56.527527 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Nov 5 04:46:56.527698 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 5 04:46:56.527905 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Nov 5 04:46:56.528097 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 5 04:46:56.528272 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 5 04:46:56.528456 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 5 04:46:56.528628 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Nov 5 04:46:56.528818 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Nov 5 04:46:56.529002 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 5 04:46:56.529182 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Nov 5 04:46:56.529218 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 5 04:46:56.529238 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 5 04:46:56.529256 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 5 04:46:56.529275 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 5 04:46:56.529294 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 5 04:46:56.529320 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 5 04:46:56.529329 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 5 04:46:56.529341 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 5 04:46:56.529360 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 5 04:46:56.529368 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 5 04:46:56.529394 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 5 04:46:56.529412 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 5 04:46:56.529421 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 5 04:46:56.529429 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 5 04:46:56.529441 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 5 04:46:56.529450 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 5 04:46:56.529458 kernel: iommu: Default domain type: Translated
Nov 5 04:46:56.529466 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 5 04:46:56.529475 kernel: PCI: Using ACPI for IRQ routing
Nov 5 04:46:56.529483 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 5 04:46:56.529491 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 5 04:46:56.529500 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 5 04:46:56.529681 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 5 04:46:56.529874 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 5 04:46:56.530046 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 5 04:46:56.530057 kernel: vgaarb: loaded
Nov 5 04:46:56.530065 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 5 04:46:56.530074 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 5 04:46:56.530094 kernel: clocksource: Switched to clocksource kvm-clock
Nov 5 04:46:56.530103 kernel: VFS: Disk quotas dquot_6.6.0
Nov 5 04:46:56.530112 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 5 04:46:56.530121 kernel: pnp: PnP ACPI init
Nov 5 04:46:56.530305 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 5 04:46:56.530318 kernel: pnp: PnP ACPI: found 6 devices
Nov 5 04:46:56.530327 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 5 04:46:56.530339 kernel: NET: Registered PF_INET protocol family
Nov 5 04:46:56.530348 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 5 04:46:56.530356 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 5 04:46:56.530365 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 5 04:46:56.530373 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 5 04:46:56.530382 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 5 04:46:56.530390 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 5 04:46:56.530401 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 04:46:56.530409 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 04:46:56.530418 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 5 04:46:56.530426 kernel: NET: Registered PF_XDP protocol family
Nov 5 04:46:56.530587 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 5 04:46:56.530764 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 5 04:46:56.530926 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 5 04:46:56.531098 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 5 04:46:56.531258 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 5 04:46:56.531416 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 5 04:46:56.531427 kernel: PCI: CLS 0 bytes, default 64
Nov 5 04:46:56.531436 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 5 04:46:56.531445 kernel: Initialise system trusted keyrings
Nov 5 04:46:56.531457 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 5 04:46:56.531465 kernel: Key type asymmetric registered
Nov 5 04:46:56.531473 kernel: Asymmetric key parser 'x509' registered
Nov 5 04:46:56.531482 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 5 04:46:56.531490 kernel: io scheduler mq-deadline registered
Nov 5 04:46:56.531499 kernel: io scheduler kyber registered
Nov 5 04:46:56.531507 kernel: io scheduler bfq registered
Nov 5 04:46:56.531515 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 5 04:46:56.531526 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 5 04:46:56.531535 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 5 04:46:56.531543 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 5 04:46:56.531551 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 5 04:46:56.531560 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 5 04:46:56.531568 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 5 04:46:56.531577 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 5 04:46:56.531587 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 5 04:46:56.531786 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 5 04:46:56.531799 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 5 04:46:56.531967 kernel: rtc_cmos 00:04: registered as rtc0
Nov 5 04:46:56.532141 kernel: rtc_cmos 00:04: setting system clock to 2025-11-05T04:46:54 UTC (1762318014)
Nov 5 04:46:56.532305 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 5 04:46:56.532320 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 5 04:46:56.532329 kernel: NET: Registered PF_INET6 protocol family
Nov 5 04:46:56.532337 kernel: Segment Routing with IPv6
Nov 5 04:46:56.532345 kernel: In-situ OAM (IOAM) with IPv6
Nov 5 04:46:56.532354 kernel: NET: Registered PF_PACKET protocol family
Nov 5 04:46:56.532362 kernel: Key type dns_resolver registered
Nov 5 04:46:56.532370 kernel: IPI shorthand broadcast: enabled
Nov 5 04:46:56.532381 kernel: sched_clock: Marking stable (1700003406, 214478707)->(2042727564, -128245451)
Nov 5 04:46:56.532389 kernel: registered taskstats version 1
Nov 5 04:46:56.532397 kernel: Loading compiled-in X.509 certificates
Nov 5 04:46:56.532406 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: cfd469c5acf75e2b7be33dd554bbf88cbfe73c93'
Nov 5 04:46:56.532414 kernel: Demotion targets for Node 0: null
Nov 5 04:46:56.532423 kernel: Key type .fscrypt registered
Nov 5 04:46:56.532431 kernel: Key type fscrypt-provisioning registered
Nov 5 04:46:56.532441 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 5 04:46:56.532449 kernel: ima: Allocated hash algorithm: sha1
Nov 5 04:46:56.532458 kernel: ima: No architecture policies found
Nov 5 04:46:56.532466 kernel: clk: Disabling unused clocks
Nov 5 04:46:56.532474 kernel: Freeing unused kernel image (initmem) memory: 15348K
Nov 5 04:46:56.532482 kernel: Write protecting the kernel read-only data: 45056k
Nov 5 04:46:56.532491 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K
Nov 5 04:46:56.532501 kernel: Run /init as init process
Nov 5 04:46:56.532509 kernel: with arguments:
Nov 5 04:46:56.532517 kernel: /init
Nov 5 04:46:56.532525 kernel: with environment:
Nov 5 04:46:56.532533 kernel: HOME=/
Nov 5 04:46:56.532541 kernel: TERM=linux
Nov 5 04:46:56.532550 kernel: SCSI subsystem initialized
Nov 5 04:46:56.532558 kernel: libata version 3.00 loaded.
Nov 5 04:46:56.532750 kernel: ahci 0000:00:1f.2: version 3.0
Nov 5 04:46:56.532779 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 5 04:46:56.533355 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 5 04:46:56.533712 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 5 04:46:56.536916 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 5 04:46:56.537128 kernel: scsi host0: ahci
Nov 5 04:46:56.537321 kernel: scsi host1: ahci
Nov 5 04:46:56.537507 kernel: scsi host2: ahci
Nov 5 04:46:56.537715 kernel: scsi host3: ahci
Nov 5 04:46:56.537964 kernel: scsi host4: ahci
Nov 5 04:46:56.538173 kernel: scsi host5: ahci
Nov 5 04:46:56.538191 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Nov 5 04:46:56.538200 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Nov 5 04:46:56.538209 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Nov 5 04:46:56.538218 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Nov 5 04:46:56.538227 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Nov 5 04:46:56.538236 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Nov 5 04:46:56.538247 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 5 04:46:56.538256 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 5 04:46:56.538265 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 5 04:46:56.538273 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 5 04:46:56.538283 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 5 04:46:56.538291 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 5 04:46:56.538300 kernel: ata3.00: LPM support broken, forcing max_power
Nov 5 04:46:56.538311 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 5 04:46:56.538320 kernel: ata3.00: applying bridge limits
Nov 5 04:46:56.538328 kernel: ata3.00: LPM support broken, forcing max_power
Nov 5 04:46:56.538337 kernel: ata3.00: configured for UDMA/100
Nov 5 04:46:56.538543 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 5 04:46:56.538760 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 5 04:46:56.538941 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Nov 5 04:46:56.538957 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 5 04:46:56.538966 kernel: GPT:16515071 != 27000831
Nov 5 04:46:56.538975 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 5 04:46:56.538984 kernel: GPT:16515071 != 27000831
Nov 5 04:46:56.538992 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 5 04:46:56.539001 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 5 04:46:56.539205 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 5 04:46:56.539217 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 5 04:46:56.539409 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 5 04:46:56.539421 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 5 04:46:56.539433 kernel: device-mapper: uevent: version 1.0.3
Nov 5 04:46:56.539442 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 5 04:46:56.539451 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 5 04:46:56.539462 kernel: raid6: avx2x4 gen() 30675 MB/s
Nov 5 04:46:56.539471 kernel: raid6: avx2x2 gen() 31145 MB/s
Nov 5 04:46:56.539479 kernel: raid6: avx2x1 gen() 25568 MB/s
Nov 5 04:46:56.539488 kernel: raid6: using algorithm avx2x2 gen() 31145 MB/s
Nov 5 04:46:56.539496 kernel: raid6: .... xor() 20011 MB/s, rmw enabled
Nov 5 04:46:56.539507 kernel: raid6: using avx2x2 recovery algorithm
Nov 5 04:46:56.539516 kernel: xor: automatically using best checksumming function avx
Nov 5 04:46:56.539525 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 5 04:46:56.539534 kernel: BTRFS: device fsid 8119ddf0-7fda-4d84-ad78-3566733896c1 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (181)
Nov 5 04:46:56.539543 kernel: BTRFS info (device dm-0): first mount of filesystem 8119ddf0-7fda-4d84-ad78-3566733896c1
Nov 5 04:46:56.539551 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 5 04:46:56.539560 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 5 04:46:56.539571 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 5 04:46:56.539579 kernel: loop: module loaded
Nov 5 04:46:56.539588 kernel: loop0: detected capacity change from 0 to 100136
Nov 5 04:46:56.539596 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 5 04:46:56.539606 systemd[1]: Successfully made /usr/ read-only.
Nov 5 04:46:56.539618 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 04:46:56.539630 systemd[1]: Detected virtualization kvm.
Nov 5 04:46:56.539639 systemd[1]: Detected architecture x86-64.
Nov 5 04:46:56.539648 systemd[1]: Running in initrd.
Nov 5 04:46:56.539657 systemd[1]: No hostname configured, using default hostname.
Nov 5 04:46:56.539667 systemd[1]: Hostname set to .
Nov 5 04:46:56.539676 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 5 04:46:56.539687 systemd[1]: Queued start job for default target initrd.target.
Nov 5 04:46:56.539696 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 04:46:56.539705 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 04:46:56.539714 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 04:46:56.539724 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 5 04:46:56.539733 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 04:46:56.539761 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 5 04:46:56.539771 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 5 04:46:56.539780 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 04:46:56.539791 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 04:46:56.539800 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 04:46:56.539810 systemd[1]: Reached target paths.target - Path Units.
Nov 5 04:46:56.539821 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 04:46:56.539830 systemd[1]: Reached target swap.target - Swaps.
Nov 5 04:46:56.539839 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 04:46:56.539848 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 04:46:56.539857 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 04:46:56.539867 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 5 04:46:56.539876 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 5 04:46:56.539887 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 04:46:56.539897 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 04:46:56.539906 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 04:46:56.539915 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 04:46:56.539924 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 5 04:46:56.539934 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 5 04:46:56.539943 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 04:46:56.539954 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 5 04:46:56.539964 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 5 04:46:56.539973 systemd[1]: Starting systemd-fsck-usr.service...
Nov 5 04:46:56.539982 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 04:46:56.539991 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 04:46:56.540001 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 04:46:56.540015 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 5 04:46:56.540025 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 04:46:56.540035 systemd[1]: Finished systemd-fsck-usr.service.
Nov 5 04:46:56.540045 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 5 04:46:56.540096 systemd-journald[317]: Collecting audit messages is disabled.
Nov 5 04:46:56.540118 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 04:46:56.540128 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 5 04:46:56.540140 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 04:46:56.540151 systemd-journald[317]: Journal started
Nov 5 04:46:56.540171 systemd-journald[317]: Runtime Journal (/run/log/journal/33d7e9fa2a864774957383c7cc9ea4c2) is 6M, max 48.2M, 42.2M free.
Nov 5 04:46:56.545297 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 04:46:56.549780 kernel: Bridge firewalling registered
Nov 5 04:46:56.549344 systemd-modules-load[319]: Inserted module 'br_netfilter'
Nov 5 04:46:56.559189 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 04:46:56.619043 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 04:46:56.634072 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 04:46:56.638090 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 04:46:56.641603 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 5 04:46:56.642452 systemd-tmpfiles[337]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 5 04:46:56.644725 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 04:46:56.656978 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 04:46:56.671256 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 04:46:56.673818 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 04:46:56.688887 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 04:46:56.691117 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 5 04:46:56.712214 dracut-cmdline[361]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9a076e14dca937d9663502c090e1ff4931f585a3752c3aa4c87feb67d6e5a465
Nov 5 04:46:56.745005 systemd-resolved[357]: Positive Trust Anchors:
Nov 5 04:46:56.745020 systemd-resolved[357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 04:46:56.745024 systemd-resolved[357]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 04:46:56.745055 systemd-resolved[357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 04:46:56.760779 systemd-resolved[357]: Defaulting to hostname 'linux'.
Nov 5 04:46:56.765812 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 04:46:56.766665 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 04:46:56.850787 kernel: Loading iSCSI transport class v2.0-870.
Nov 5 04:46:56.864773 kernel: iscsi: registered transport (tcp)
Nov 5 04:46:56.888053 kernel: iscsi: registered transport (qla4xxx)
Nov 5 04:46:56.888128 kernel: QLogic iSCSI HBA Driver
Nov 5 04:46:56.915926 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 04:46:56.948497 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 04:46:56.949512 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 04:46:57.012559 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 5 04:46:57.015866 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 5 04:46:57.018114 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 5 04:46:57.052935 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 04:46:57.056534 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 04:46:57.095360 systemd-udevd[599]: Using default interface naming scheme 'v257'.
Nov 5 04:46:57.109274 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 04:46:57.114976 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 5 04:46:57.142088 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 04:46:57.145898 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 04:46:57.149894 dracut-pre-trigger[674]: rd.md=0: removing MD RAID activation
Nov 5 04:46:57.183003 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 04:46:57.186292 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 04:46:57.197002 systemd-networkd[710]: lo: Link UP
Nov 5 04:46:57.197011 systemd-networkd[710]: lo: Gained carrier
Nov 5 04:46:57.199914 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 04:46:57.201897 systemd[1]: Reached target network.target - Network.
Nov 5 04:46:57.275690 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 04:46:57.281599 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 5 04:46:57.340027 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 5 04:46:57.359087 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 5 04:46:57.369757 kernel: cryptd: max_cpu_qlen set to 1000
Nov 5 04:46:57.372170 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 5 04:46:57.398768 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 5 04:46:57.401312 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 5 04:46:57.404380 systemd-networkd[710]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 04:46:57.404385 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 04:46:57.405612 systemd-networkd[710]: eth0: Link UP
Nov 5 04:46:57.418871 kernel: AES CTR mode by8 optimization enabled
Nov 5 04:46:57.405846 systemd-networkd[710]: eth0: Gained carrier
Nov 5 04:46:57.405856 systemd-networkd[710]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 04:46:57.409432 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 5 04:46:57.412049 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 04:46:57.412174 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 04:46:57.415111 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 04:46:57.425615 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 04:46:57.429954 systemd-networkd[710]: eth0: DHCPv4 address 10.0.0.41/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 5 04:46:57.443971 disk-uuid[818]: Primary Header is updated.
Nov 5 04:46:57.443971 disk-uuid[818]: Secondary Entries is updated.
Nov 5 04:46:57.443971 disk-uuid[818]: Secondary Header is updated.
Nov 5 04:46:57.535257 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 5 04:46:57.538641 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 04:46:57.541709 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 04:46:57.544104 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 04:46:57.545986 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 04:46:57.550564 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 5 04:46:57.577512 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 04:46:58.483870 disk-uuid[837]: Warning: The kernel is still using the old partition table.
Nov 5 04:46:58.483870 disk-uuid[837]: The new table will be used at the next reboot or after you
Nov 5 04:46:58.483870 disk-uuid[837]: run partprobe(8) or kpartx(8)
Nov 5 04:46:58.483870 disk-uuid[837]: The operation has completed successfully.
Nov 5 04:46:58.500179 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 5 04:46:58.500333 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 5 04:46:58.504819 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 5 04:46:58.554770 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (866)
Nov 5 04:46:58.558038 kernel: BTRFS info (device vda6): first mount of filesystem e7137982-ac37-41c2-8fd6-d0cf0728ebd4
Nov 5 04:46:58.558060 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 5 04:46:58.561830 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 04:46:58.561850 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 04:46:58.569766 kernel: BTRFS info (device vda6): last unmount of filesystem e7137982-ac37-41c2-8fd6-d0cf0728ebd4
Nov 5 04:46:58.570838 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 5 04:46:58.574126 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 5 04:46:59.210791 systemd-networkd[710]: eth0: Gained IPv6LL
Nov 5 04:46:59.225711 ignition[885]: Ignition 2.22.0
Nov 5 04:46:59.225726 ignition[885]: Stage: fetch-offline
Nov 5 04:46:59.225818 ignition[885]: no configs at "/usr/lib/ignition/base.d"
Nov 5 04:46:59.225833 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 04:46:59.226948 ignition[885]: parsed url from cmdline: ""
Nov 5 04:46:59.226954 ignition[885]: no config URL provided
Nov 5 04:46:59.226961 ignition[885]: reading system config file "/usr/lib/ignition/user.ign"
Nov 5 04:46:59.227842 ignition[885]: no config at "/usr/lib/ignition/user.ign"
Nov 5 04:46:59.227900 ignition[885]: op(1): [started] loading QEMU firmware config module
Nov 5 04:46:59.227907 ignition[885]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 5 04:46:59.241492 ignition[885]: op(1): [finished] loading QEMU firmware config module
Nov 5 04:46:59.322191 ignition[885]: parsing config with SHA512: f7ba7eb9a8ef22d51b2df6a0b9aecf48fd1ac726cc4bf244a0348e234ce97b26a928614b7df5a0cd4a740378be7fa1e228e11bb64506e9abc0c72cf552e764e6
Nov 5 04:46:59.327598 unknown[885]: fetched base config from "system"
Nov 5 04:46:59.327621 unknown[885]: fetched user config from "qemu"
Nov 5 04:46:59.328196 ignition[885]: fetch-offline: fetch-offline passed
Nov 5 04:46:59.328288 ignition[885]: Ignition finished successfully
Nov 5 04:46:59.335731 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 04:46:59.336638 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 5 04:46:59.337854 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 5 04:46:59.402718 ignition[895]: Ignition 2.22.0
Nov 5 04:46:59.402733 ignition[895]: Stage: kargs
Nov 5 04:46:59.402883 ignition[895]: no configs at "/usr/lib/ignition/base.d"
Nov 5 04:46:59.402908 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 04:46:59.408217 ignition[895]: kargs: kargs passed
Nov 5 04:46:59.408270 ignition[895]: Ignition finished successfully
Nov 5 04:46:59.414471 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 5 04:46:59.418664 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 5 04:46:59.490643 ignition[903]: Ignition 2.22.0
Nov 5 04:46:59.490659 ignition[903]: Stage: disks
Nov 5 04:46:59.490854 ignition[903]: no configs at "/usr/lib/ignition/base.d"
Nov 5 04:46:59.490868 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 04:46:59.497411 ignition[903]: disks: disks passed
Nov 5 04:46:59.498564 ignition[903]: Ignition finished successfully
Nov 5 04:46:59.503165 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 5 04:46:59.506494 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 5 04:46:59.507166 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 5 04:46:59.510669 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 04:46:59.514468 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 04:46:59.517503 systemd[1]: Reached target basic.target - Basic System.
Nov 5 04:46:59.524617 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 5 04:46:59.574598 systemd-fsck[913]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Nov 5 04:46:59.583102 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 5 04:46:59.585467 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 5 04:46:59.704852 kernel: EXT4-fs (vda9): mounted filesystem d6ba737d-b2ad-4de6-9309-ffb105e40987 r/w with ordered data mode. Quota mode: none.
Nov 5 04:46:59.705834 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 5 04:46:59.709762 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 5 04:46:59.715340 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 04:46:59.719061 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 5 04:46:59.722054 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 5 04:46:59.722099 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 5 04:46:59.722127 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 04:46:59.735458 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 5 04:46:59.740360 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 5 04:46:59.747456 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (921)
Nov 5 04:46:59.747484 kernel: BTRFS info (device vda6): first mount of filesystem e7137982-ac37-41c2-8fd6-d0cf0728ebd4
Nov 5 04:46:59.747501 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 5 04:46:59.751379 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 04:46:59.751446 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 04:46:59.753366 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 04:46:59.797287 initrd-setup-root[945]: cut: /sysroot/etc/passwd: No such file or directory
Nov 5 04:46:59.803691 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory
Nov 5 04:46:59.809884 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory
Nov 5 04:46:59.814714 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 5 04:46:59.923690 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 5 04:46:59.926577 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 5 04:46:59.931650 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 5 04:46:59.950341 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 5 04:46:59.953011 kernel: BTRFS info (device vda6): last unmount of filesystem e7137982-ac37-41c2-8fd6-d0cf0728ebd4
Nov 5 04:46:59.966927 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 5 04:47:00.098110 ignition[1035]: INFO : Ignition 2.22.0
Nov 5 04:47:00.098110 ignition[1035]: INFO : Stage: mount
Nov 5 04:47:00.101059 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 04:47:00.101059 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 04:47:00.101059 ignition[1035]: INFO : mount: mount passed
Nov 5 04:47:00.101059 ignition[1035]: INFO : Ignition finished successfully
Nov 5 04:47:00.102687 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 5 04:47:00.107449 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 5 04:47:00.707803 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 04:47:00.736764 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1047)
Nov 5 04:47:00.739920 kernel: BTRFS info (device vda6): first mount of filesystem e7137982-ac37-41c2-8fd6-d0cf0728ebd4
Nov 5 04:47:00.739934 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 5 04:47:00.743593 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 04:47:00.743610 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 04:47:00.745470 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 04:47:00.809604 ignition[1064]: INFO : Ignition 2.22.0
Nov 5 04:47:00.809604 ignition[1064]: INFO : Stage: files
Nov 5 04:47:00.812393 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 04:47:00.812393 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 04:47:00.812393 ignition[1064]: DEBUG : files: compiled without relabeling support, skipping
Nov 5 04:47:00.812393 ignition[1064]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 5 04:47:00.812393 ignition[1064]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 5 04:47:00.822595 ignition[1064]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 5 04:47:00.822595 ignition[1064]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 5 04:47:00.822595 ignition[1064]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 5 04:47:00.822595 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 5 04:47:00.822595 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 5 04:47:00.815914 unknown[1064]: wrote ssh authorized keys file for user: core
Nov 5 04:47:00.972260 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 5 04:47:01.032871 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 5 04:47:01.036201 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 5 04:47:01.036201 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 5 04:47:01.036201 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 04:47:01.036201 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 04:47:01.036201 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 04:47:01.036201 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 04:47:01.036201 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 04:47:01.036201 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 04:47:01.058882 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 04:47:01.058882 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 04:47:01.058882 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 5 04:47:01.058882 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 5 04:47:01.058882 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 5 04:47:01.058882 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Nov 5 04:47:01.572112 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 5 04:47:02.658111 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 5 04:47:02.658111 ignition[1064]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 5 04:47:02.664610 ignition[1064]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 04:47:02.664610 ignition[1064]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 04:47:02.664610 ignition[1064]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 5 04:47:02.664610 ignition[1064]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 5 04:47:02.664610 ignition[1064]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 5 04:47:02.664610 ignition[1064]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 5 04:47:02.664610 ignition[1064]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 5 04:47:02.664610 ignition[1064]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 5 04:47:02.687678 ignition[1064]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 5 04:47:02.691097 ignition[1064]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 5 04:47:02.693866 ignition[1064]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 5 04:47:02.693866 ignition[1064]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 5 04:47:02.693866 ignition[1064]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 5 04:47:02.693866 ignition[1064]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 04:47:02.693866 ignition[1064]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 04:47:02.693866 ignition[1064]: INFO : files: files passed
Nov 5 04:47:02.693866 ignition[1064]: INFO : Ignition finished successfully
Nov 5 04:47:02.695233 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 5 04:47:02.701444 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 5 04:47:02.704073 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 5 04:47:02.727567 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 5 04:47:02.727710 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 5 04:47:02.735925 initrd-setup-root-after-ignition[1095]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 5 04:47:02.741184 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 04:47:02.741184 initrd-setup-root-after-ignition[1097]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 04:47:02.748307 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 04:47:02.743482 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 04:47:02.745116 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 5 04:47:02.750138 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 5 04:47:02.809444 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 5 04:47:02.809582 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 5 04:47:02.810892 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 5 04:47:02.815654 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 5 04:47:02.821785 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 5 04:47:02.823026 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 5 04:47:02.844269 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 04:47:02.846954 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 5 04:47:02.872498 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 04:47:02.872803 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 5 04:47:02.873692 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 04:47:02.874271 systemd[1]: Stopped target timers.target - Timer Units.
Nov 5 04:47:02.882209 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 5 04:47:02.882343 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 04:47:02.887703 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 5 04:47:02.888588 systemd[1]: Stopped target basic.target - Basic System.
Nov 5 04:47:02.893349 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 5 04:47:02.895866 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 04:47:02.896364 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 5 04:47:02.897219 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 04:47:02.905844 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 5 04:47:02.909205 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 04:47:02.912283 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 5 04:47:02.916312 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 5 04:47:02.919405 systemd[1]: Stopped target swap.target - Swaps.
Nov 5 04:47:02.922196 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 5 04:47:02.922315 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 04:47:02.927612 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 5 04:47:02.928524 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 04:47:02.933272 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 5 04:47:02.936462 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 04:47:02.940096 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 5 04:47:02.940223 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 5 04:47:02.945511 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 5 04:47:02.945640 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 04:47:02.948990 systemd[1]: Stopped target paths.target - Path Units.
Nov 5 04:47:02.950213 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 5 04:47:02.956930 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 04:47:02.961488 systemd[1]: Stopped target slices.target - Slice Units.
Nov 5 04:47:02.962232 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 5 04:47:02.962798 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 5 04:47:02.962926 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 04:47:02.967768 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 5 04:47:02.967857 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 04:47:02.970520 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 5 04:47:02.970653 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 04:47:02.973490 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 5 04:47:02.973605 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 5 04:47:02.981112 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 5 04:47:02.981722 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 5 04:47:02.981890 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 04:47:03.002562 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 5 04:47:03.003270 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 5 04:47:03.003419 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 04:47:03.006421 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 5 04:47:03.006563 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 04:47:03.009645 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 5 04:47:03.009794 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 04:47:03.023141 ignition[1121]: INFO : Ignition 2.22.0
Nov 5 04:47:03.023141 ignition[1121]: INFO : Stage: umount
Nov 5 04:47:03.026110 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 04:47:03.026110 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 04:47:03.026110 ignition[1121]: INFO : umount: umount passed
Nov 5 04:47:03.026110 ignition[1121]: INFO : Ignition finished successfully
Nov 5 04:47:03.027234 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 5 04:47:03.034703 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 5 04:47:03.039315 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 5 04:47:03.039447 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 5 04:47:03.044538 systemd[1]: Stopped target network.target - Network.
Nov 5 04:47:03.045274 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 5 04:47:03.045367 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 5 04:47:03.045836 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 5 04:47:03.045893 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 5 04:47:03.046371 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 5 04:47:03.046432 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 5 04:47:03.046949 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 5 04:47:03.047004 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 5 04:47:03.047954 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 5 04:47:03.049285 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 5 04:47:03.073350 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 5 04:47:03.074035 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 5 04:47:03.074164 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 5 04:47:03.079539 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 5 04:47:03.079695 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 5 04:47:03.087603 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 5 04:47:03.088677 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 5 04:47:03.088761 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 04:47:03.096571 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 5 04:47:03.097234 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 5 04:47:03.097301 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 04:47:03.100260 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 5 04:47:03.100315 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 5 04:47:03.100764 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 5 04:47:03.100817 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 5 04:47:03.101343 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 04:47:03.131672 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 5 04:47:03.131907 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 04:47:03.135839 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 5 04:47:03.135889 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 5 04:47:03.138360 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 5 04:47:03.138401 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 04:47:03.147209 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 5 04:47:03.147300 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 04:47:03.153504 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 5 04:47:03.153592 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 5 04:47:03.158197 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 5 04:47:03.158274 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 04:47:03.164809 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 5 04:47:03.165442 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 5 04:47:03.165504 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 04:47:03.169255 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 5 04:47:03.169327 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 04:47:03.172507 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 04:47:03.172564 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 04:47:03.174015 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 5 04:47:03.174134 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 5 04:47:03.196496 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 5 04:47:03.197692 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 5 04:47:03.202719 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 5 04:47:03.202914 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 5 04:47:03.204162 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 5 04:47:03.204309 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 5 04:47:03.212618 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 5 04:47:03.214732 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 5 04:47:03.235681 systemd[1]: Switching root.
Nov 5 04:47:03.279244 systemd-journald[317]: Journal stopped
Nov 5 04:47:04.948557 systemd-journald[317]: Received SIGTERM from PID 1 (systemd).
Nov 5 04:47:04.948646 kernel: SELinux: policy capability network_peer_controls=1
Nov 5 04:47:04.948669 kernel: SELinux: policy capability open_perms=1
Nov 5 04:47:04.948691 kernel: SELinux: policy capability extended_socket_class=1
Nov 5 04:47:04.948703 kernel: SELinux: policy capability always_check_network=0
Nov 5 04:47:04.948721 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 5 04:47:04.948735 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 5 04:47:04.948761 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 5 04:47:04.948778 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 5 04:47:04.948790 kernel: SELinux: policy capability userspace_initial_context=0
Nov 5 04:47:04.948810 kernel: audit: type=1403 audit(1762318023.678:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 5 04:47:04.948831 systemd[1]: Successfully loaded SELinux policy in 73.353ms.
Nov 5 04:47:04.948864 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.469ms.
Nov 5 04:47:04.948879 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 04:47:04.948892 systemd[1]: Detected virtualization kvm.
Nov 5 04:47:04.948910 systemd[1]: Detected architecture x86-64.
Nov 5 04:47:04.948926 systemd[1]: Detected first boot.
Nov 5 04:47:04.948946 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 5 04:47:04.948960 zram_generator::config[1166]: No configuration found.
Nov 5 04:47:04.949318 kernel: Guest personality initialized and is inactive
Nov 5 04:47:04.949331 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 5 04:47:04.949343 kernel: Initialized host personality
Nov 5 04:47:04.949355 kernel: NET: Registered PF_VSOCK protocol family
Nov 5 04:47:04.949368 systemd[1]: Populated /etc with preset unit settings.
Nov 5 04:47:04.949390 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 5 04:47:04.949405 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 5 04:47:04.949418 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 5 04:47:04.949431 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 5 04:47:04.949445 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 5 04:47:04.949458 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 5 04:47:04.949475 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 5 04:47:04.949488 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 5 04:47:04.949501 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 5 04:47:04.949516 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 5 04:47:04.949529 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 5 04:47:04.949544 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 04:47:04.949559 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 04:47:04.949577 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 5 04:47:04.949590 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 5 04:47:04.949603 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 5 04:47:04.949621 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 04:47:04.949634 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 5 04:47:04.949647 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 04:47:04.949669 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 04:47:04.949682 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 5 04:47:04.949696 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 5 04:47:04.949709 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 5 04:47:04.949725 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 5 04:47:04.949831 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 04:47:04.949856 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 04:47:04.949872 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 04:47:04.949893 systemd[1]: Reached target swap.target - Swaps.
Nov 5 04:47:04.949908 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 5 04:47:04.949922 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 5 04:47:04.949937 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 5 04:47:04.949949 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 04:47:04.949962 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 04:47:04.949978 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 04:47:04.949997 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 5 04:47:04.950010 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 5 04:47:04.950023 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 5 04:47:04.950035 systemd[1]: Mounting media.mount - External Media Directory...
Nov 5 04:47:04.950051 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 04:47:04.950064 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 5 04:47:04.950077 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 5 04:47:04.950094 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 5 04:47:04.950109 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 5 04:47:04.950128 systemd[1]: Reached target machines.target - Containers.
Nov 5 04:47:04.950141 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 5 04:47:04.950155 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 04:47:04.950167 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 04:47:04.950185 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 5 04:47:04.950198 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 04:47:04.950214 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 04:47:04.950233 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 04:47:04.950248 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 5 04:47:04.950267 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 04:47:04.950280 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 5 04:47:04.950297 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 5 04:47:04.950311 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 5 04:47:04.950323 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 5 04:47:04.950338 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 5 04:47:04.950354 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 04:47:04.950367 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 04:47:04.950380 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 04:47:04.950397 kernel: ACPI: bus type drm_connector registered
Nov 5 04:47:04.950411 kernel: fuse: init (API version 7.41)
Nov 5 04:47:04.950427 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 04:47:04.950440 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 5 04:47:04.950453 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 5 04:47:04.950471 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 04:47:04.950485 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 04:47:04.950498 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 5 04:47:04.950533 systemd-journald[1251]: Collecting audit messages is disabled.
Nov 5 04:47:04.950557 systemd-journald[1251]: Journal started
Nov 5 04:47:04.950584 systemd-journald[1251]: Runtime Journal (/run/log/journal/33d7e9fa2a864774957383c7cc9ea4c2) is 6M, max 48.2M, 42.2M free.
Nov 5 04:47:04.624675 systemd[1]: Queued start job for default target multi-user.target.
Nov 5 04:47:04.648259 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 5 04:47:04.648891 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 5 04:47:04.952773 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 04:47:04.955043 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 5 04:47:04.956919 systemd[1]: Mounted media.mount - External Media Directory.
Nov 5 04:47:04.958697 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 5 04:47:04.960540 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 5 04:47:04.962418 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 5 04:47:04.964309 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 5 04:47:04.966518 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 04:47:04.968820 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 5 04:47:04.969051 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 5 04:47:04.971262 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 04:47:04.971493 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 04:47:04.973590 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 04:47:04.973851 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 04:47:04.976068 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 04:47:04.976303 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 04:47:04.978532 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 5 04:47:04.978763 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 5 04:47:04.980793 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 04:47:04.981036 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 04:47:04.983116 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 04:47:04.985333 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 04:47:04.988399 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 5 04:47:04.990825 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 5 04:47:05.006642 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 04:47:05.009173 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 5 04:47:05.012448 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 5 04:47:05.015254 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 5 04:47:05.017041 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 5 04:47:05.017133 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 04:47:05.019722 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 5 04:47:05.021971 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 04:47:05.026884 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 5 04:47:05.038136 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 5 04:47:05.040137 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 04:47:05.041487 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 5 04:47:05.043506 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 04:47:05.044706 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 04:47:05.046653 systemd-journald[1251]: Time spent on flushing to /var/log/journal/33d7e9fa2a864774957383c7cc9ea4c2 is 14.127ms for 962 entries.
Nov 5 04:47:05.046653 systemd-journald[1251]: System Journal (/var/log/journal/33d7e9fa2a864774957383c7cc9ea4c2) is 8M, max 163.5M, 155.5M free.
Nov 5 04:47:05.320718 systemd-journald[1251]: Received client request to flush runtime journal.
Nov 5 04:47:05.320839 kernel: loop1: detected capacity change from 0 to 111544
Nov 5 04:47:05.320885 kernel: loop2: detected capacity change from 0 to 229808
Nov 5 04:47:05.320907 kernel: loop3: detected capacity change from 0 to 119080
Nov 5 04:47:05.320934 kernel: loop4: detected capacity change from 0 to 111544
Nov 5 04:47:05.049885 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 5 04:47:05.053862 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 5 04:47:05.056629 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 04:47:05.059903 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 5 04:47:05.085178 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 5 04:47:05.157732 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 04:47:05.307205 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 5 04:47:05.312441 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 04:47:05.316865 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 04:47:05.321249 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 5 04:47:05.324364 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 5 04:47:05.330389 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 5 04:47:05.331768 kernel: loop5: detected capacity change from 0 to 229808
Nov 5 04:47:05.334466 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 5 04:47:05.338902 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 5 04:47:05.349767 kernel: loop6: detected capacity change from 0 to 119080
Nov 5 04:47:05.355112 systemd-tmpfiles[1299]: ACLs are not supported, ignoring.
Nov 5 04:47:05.355129 systemd-tmpfiles[1299]: ACLs are not supported, ignoring.
Nov 5 04:47:05.364542 (sd-merge)[1297]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Nov 5 04:47:05.366529 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 04:47:05.370982 (sd-merge)[1297]: Merged extensions into '/usr'.
Nov 5 04:47:05.446844 systemd[1]: Reload requested from client PID 1285 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 5 04:47:05.447024 systemd[1]: Reloading...
Nov 5 04:47:05.610786 zram_generator::config[1373]: No configuration found.
Nov 5 04:47:05.621866 systemd-resolved[1298]: Positive Trust Anchors:
Nov 5 04:47:05.621883 systemd-resolved[1298]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 04:47:05.621889 systemd-resolved[1298]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 04:47:05.621921 systemd-resolved[1298]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 04:47:05.629295 systemd-resolved[1298]: Defaulting to hostname 'linux'.
Nov 5 04:47:05.765017 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 5 04:47:05.765460 systemd[1]: Reloading finished in 318 ms.
Nov 5 04:47:05.794582 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 5 04:47:05.796670 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 04:47:05.798788 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 5 04:47:05.801215 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 5 04:47:05.806710 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 04:47:05.825599 systemd[1]: Starting ensure-sysext.service...
Nov 5 04:47:05.828168 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 04:47:05.851812 systemd[1]: Reload requested from client PID 1378 ('systemctl') (unit ensure-sysext.service)...
Nov 5 04:47:05.851842 systemd[1]: Reloading...
Nov 5 04:47:05.905914 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 5 04:47:05.906852 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 5 04:47:05.907536 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 5 04:47:05.909230 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 5 04:47:05.910326 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 5 04:47:05.910865 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
Nov 5 04:47:05.911012 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
Nov 5 04:47:05.920355 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 04:47:05.921883 systemd-tmpfiles[1379]: Skipping /boot
Nov 5 04:47:05.936444 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 04:47:05.936552 systemd-tmpfiles[1379]: Skipping /boot
Nov 5 04:47:05.937776 zram_generator::config[1411]: No configuration found.
Nov 5 04:47:06.126924 systemd[1]: Reloading finished in 274 ms.
Nov 5 04:47:06.143397 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 5 04:47:06.168152 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 04:47:06.179438 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 04:47:06.182769 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 5 04:47:06.195384 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 5 04:47:06.198564 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 5 04:47:06.206956 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 04:47:06.213134 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 5 04:47:06.218629 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 04:47:06.218931 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 04:47:06.227827 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 04:47:06.232094 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 04:47:06.238115 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 04:47:06.240944 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 04:47:06.241069 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 04:47:06.241172 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 04:47:06.242734 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 04:47:06.247254 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 04:47:06.270220 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 5 04:47:06.273102 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 04:47:06.273362 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 04:47:06.276292 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 04:47:06.276533 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Nov 5 04:47:06.281429 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 5 04:47:06.284883 augenrules[1476]: No rules Nov 5 04:47:06.286367 systemd-udevd[1453]: Using default interface naming scheme 'v257'. Nov 5 04:47:06.287251 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 04:47:06.287530 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 04:47:06.298947 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 5 04:47:06.307496 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 04:47:06.310951 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 04:47:06.312577 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 04:47:06.314943 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 04:47:06.318384 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 04:47:06.328488 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 04:47:06.333029 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 04:47:06.334921 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 04:47:06.335043 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 04:47:06.335179 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Nov 5 04:47:06.335267 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 04:47:06.336595 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 04:47:06.339242 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 04:47:06.344963 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 04:47:06.347725 augenrules[1488]: /sbin/augenrules: No change Nov 5 04:47:06.348596 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 04:47:06.348890 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 04:47:06.352200 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 04:47:06.352424 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 04:47:06.359079 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 04:47:06.360432 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 04:47:06.361670 augenrules[1530]: No rules Nov 5 04:47:06.363451 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 04:47:06.364006 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 04:47:06.367023 systemd[1]: Finished ensure-sysext.service. Nov 5 04:47:06.378968 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 04:47:06.380793 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 04:47:06.380874 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 04:47:06.382492 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 5 04:47:06.474928 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Nov 5 04:47:06.591804 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 5 04:47:06.597773 kernel: ACPI: button: Power Button [PWRF] Nov 5 04:47:06.605631 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 5 04:47:06.616303 kernel: mousedev: PS/2 mouse device common for all mice Nov 5 04:47:06.610011 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 5 04:47:06.615421 systemd-networkd[1539]: lo: Link UP Nov 5 04:47:06.615426 systemd-networkd[1539]: lo: Gained carrier Nov 5 04:47:06.617162 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 04:47:06.619168 systemd[1]: Reached target network.target - Network. Nov 5 04:47:06.622577 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 5 04:47:06.625785 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 5 04:47:06.639901 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 5 04:47:06.643140 systemd[1]: Reached target time-set.target - System Time Set. Nov 5 04:47:06.658072 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 5 04:47:06.669278 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 5 04:47:06.669641 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 5 04:47:06.674188 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 5 04:47:06.707403 systemd-networkd[1539]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 04:47:06.707420 systemd-networkd[1539]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 5 04:47:06.708054 systemd-networkd[1539]: eth0: Link UP Nov 5 04:47:06.708390 systemd-networkd[1539]: eth0: Gained carrier Nov 5 04:47:06.708405 systemd-networkd[1539]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 04:47:06.718832 systemd-networkd[1539]: eth0: DHCPv4 address 10.0.0.41/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 5 04:47:06.719695 systemd-timesyncd[1540]: Network configuration changed, trying to establish connection. Nov 5 04:47:07.912032 systemd-timesyncd[1540]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 5 04:47:07.912081 systemd-timesyncd[1540]: Initial clock synchronization to Wed 2025-11-05 04:47:07.911810 UTC. Nov 5 04:47:07.912781 systemd-resolved[1298]: Clock change detected. Flushing caches. Nov 5 04:47:08.005795 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 04:47:08.103875 kernel: kvm_amd: TSC scaling supported Nov 5 04:47:08.103957 kernel: kvm_amd: Nested Virtualization enabled Nov 5 04:47:08.103996 kernel: kvm_amd: Nested Paging enabled Nov 5 04:47:08.105673 kernel: kvm_amd: LBR virtualization supported Nov 5 04:47:08.105719 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 5 04:47:08.107364 kernel: kvm_amd: Virtual GIF supported Nov 5 04:47:08.135009 kernel: EDAC MC: Ver: 3.0.0 Nov 5 04:47:08.188637 ldconfig[1450]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 5 04:47:08.195464 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 5 04:47:08.245302 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 04:47:08.252649 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 5 04:47:08.340260 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 5 04:47:08.342442 systemd[1]: Reached target sysinit.target - System Initialization. 
Nov 5 04:47:08.344401 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 5 04:47:08.346566 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 5 04:47:08.348672 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 5 04:47:08.350934 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 5 04:47:08.352948 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 5 04:47:08.355161 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 5 04:47:08.357316 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 5 04:47:08.357349 systemd[1]: Reached target paths.target - Path Units. Nov 5 04:47:08.358992 systemd[1]: Reached target timers.target - Timer Units. Nov 5 04:47:08.361844 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 5 04:47:08.365768 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 5 04:47:08.370516 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 5 04:47:08.372801 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 5 04:47:08.374895 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 5 04:47:08.384058 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 5 04:47:08.386345 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 5 04:47:08.389263 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 5 04:47:08.392243 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 04:47:08.393885 systemd[1]: Reached target basic.target - Basic System. 
Nov 5 04:47:08.395514 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 5 04:47:08.395544 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 5 04:47:08.396637 systemd[1]: Starting containerd.service - containerd container runtime... Nov 5 04:47:08.399345 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 5 04:47:08.402149 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 5 04:47:08.416482 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 5 04:47:08.419389 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 5 04:47:08.421027 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 5 04:47:08.423156 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 5 04:47:08.425864 jq[1592]: false Nov 5 04:47:08.426575 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 5 04:47:08.431552 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 5 04:47:08.435231 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 5 04:47:08.436780 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Refreshing passwd entry cache Nov 5 04:47:08.437354 oslogin_cache_refresh[1594]: Refreshing passwd entry cache Nov 5 04:47:08.438397 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 5 04:47:08.445298 extend-filesystems[1593]: Found /dev/vda6 Nov 5 04:47:08.448587 extend-filesystems[1593]: Found /dev/vda9 Nov 5 04:47:08.450381 extend-filesystems[1593]: Checking size of /dev/vda9 Nov 5 04:47:08.452036 systemd[1]: Starting systemd-logind.service - User Login Management... 
Nov 5 04:47:08.453760 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 5 04:47:08.454238 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 5 04:47:08.455043 systemd[1]: Starting update-engine.service - Update Engine... Nov 5 04:47:08.458821 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 5 04:47:08.462428 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Failure getting users, quitting Nov 5 04:47:08.462428 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 5 04:47:08.462428 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Refreshing group entry cache Nov 5 04:47:08.461693 oslogin_cache_refresh[1594]: Failure getting users, quitting Nov 5 04:47:08.461715 oslogin_cache_refresh[1594]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 5 04:47:08.461764 oslogin_cache_refresh[1594]: Refreshing group entry cache Nov 5 04:47:08.466829 extend-filesystems[1593]: Resized partition /dev/vda9 Nov 5 04:47:08.470640 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 5 04:47:08.473377 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 5 04:47:08.473654 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 5 04:47:08.475851 jq[1614]: true Nov 5 04:47:08.476155 extend-filesystems[1620]: resize2fs 1.47.3 (8-Jul-2025) Nov 5 04:47:08.474981 systemd[1]: motdgen.service: Deactivated successfully. 
Nov 5 04:47:08.476554 oslogin_cache_refresh[1594]: Failure getting groups, quitting Nov 5 04:47:08.480900 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Failure getting groups, quitting Nov 5 04:47:08.480900 google_oslogin_nss_cache[1594]: oslogin_cache_refresh[1594]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 04:47:08.475254 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 5 04:47:08.481139 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Nov 5 04:47:08.476569 oslogin_cache_refresh[1594]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 04:47:08.483121 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 5 04:47:08.483715 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 5 04:47:08.492265 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 5 04:47:08.492549 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 5 04:47:08.517999 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Nov 5 04:47:08.547074 jq[1623]: true Nov 5 04:47:08.547217 update_engine[1613]: I20251105 04:47:08.540274 1613 main.cc:92] Flatcar Update Engine starting Nov 5 04:47:08.547588 extend-filesystems[1620]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 5 04:47:08.547588 extend-filesystems[1620]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 5 04:47:08.547588 extend-filesystems[1620]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Nov 5 04:47:08.558042 extend-filesystems[1593]: Resized filesystem in /dev/vda9 Nov 5 04:47:08.552602 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 5 04:47:08.552902 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Nov 5 04:47:08.560219 tar[1621]: linux-amd64/LICENSE Nov 5 04:47:08.560219 tar[1621]: linux-amd64/helm Nov 5 04:47:08.575185 systemd-logind[1609]: Watching system buttons on /dev/input/event2 (Power Button) Nov 5 04:47:08.575213 systemd-logind[1609]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 5 04:47:08.575475 systemd-logind[1609]: New seat seat0. Nov 5 04:47:08.579954 systemd[1]: Started systemd-logind.service - User Login Management. Nov 5 04:47:08.627501 dbus-daemon[1590]: [system] SELinux support is enabled Nov 5 04:47:08.627799 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 5 04:47:08.632540 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 5 04:47:08.632581 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 5 04:47:08.633952 dbus-daemon[1590]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 5 04:47:08.634214 update_engine[1613]: I20251105 04:47:08.634160 1613 update_check_scheduler.cc:74] Next update check in 2m16s Nov 5 04:47:08.635015 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 5 04:47:08.635043 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 5 04:47:08.637511 systemd[1]: Started update-engine.service - Update Engine. Nov 5 04:47:08.642396 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 5 04:47:08.647295 bash[1659]: Updated "/home/core/.ssh/authorized_keys" Nov 5 04:47:08.651929 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Nov 5 04:47:08.655136 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 5 04:47:08.925525 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 5 04:47:08.933315 locksmithd[1660]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 5 04:47:08.985421 sshd_keygen[1619]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 5 04:47:09.006202 containerd[1641]: time="2025-11-05T04:47:09Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 5 04:47:09.007394 containerd[1641]: time="2025-11-05T04:47:09.007366980Z" level=info msg="starting containerd" revision=75cb2b7193e4e490e9fbdc236c0e811ccaba3376 version=v2.1.4 Nov 5 04:47:09.019265 containerd[1641]: time="2025-11-05T04:47:09.019058736Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.856µs" Nov 5 04:47:09.019265 containerd[1641]: time="2025-11-05T04:47:09.019093552Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 5 04:47:09.019265 containerd[1641]: time="2025-11-05T04:47:09.019142163Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 5 04:47:09.019265 containerd[1641]: time="2025-11-05T04:47:09.019153634Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 5 04:47:09.019399 containerd[1641]: time="2025-11-05T04:47:09.019347197Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 5 04:47:09.019399 containerd[1641]: time="2025-11-05T04:47:09.019362997Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 04:47:09.019522 containerd[1641]: 
time="2025-11-05T04:47:09.019493963Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 04:47:09.019522 containerd[1641]: time="2025-11-05T04:47:09.019514802Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 04:47:09.019802 containerd[1641]: time="2025-11-05T04:47:09.019766734Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 04:47:09.019802 containerd[1641]: time="2025-11-05T04:47:09.019789968Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 04:47:09.019802 containerd[1641]: time="2025-11-05T04:47:09.019800888Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 04:47:09.019875 containerd[1641]: time="2025-11-05T04:47:09.019810636Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Nov 5 04:47:09.020058 containerd[1641]: time="2025-11-05T04:47:09.020027533Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Nov 5 04:47:09.020058 containerd[1641]: time="2025-11-05T04:47:09.020046709Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 5 04:47:09.020157 containerd[1641]: time="2025-11-05T04:47:09.020140285Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 5 04:47:09.020410 containerd[1641]: 
time="2025-11-05T04:47:09.020381657Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 04:47:09.020484 containerd[1641]: time="2025-11-05T04:47:09.020418977Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 04:47:09.020484 containerd[1641]: time="2025-11-05T04:47:09.020428665Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 5 04:47:09.020484 containerd[1641]: time="2025-11-05T04:47:09.020470444Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 5 04:47:09.020687 containerd[1641]: time="2025-11-05T04:47:09.020661763Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 5 04:47:09.020749 containerd[1641]: time="2025-11-05T04:47:09.020733186Z" level=info msg="metadata content store policy set" policy=shared Nov 5 04:47:09.021759 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 5 04:47:09.025894 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 5 04:47:09.028472 systemd[1]: Started sshd@0-10.0.0.41:22-10.0.0.1:57816.service - OpenSSH per-connection server daemon (10.0.0.1:57816). Nov 5 04:47:09.055817 systemd[1]: issuegen.service: Deactivated successfully. Nov 5 04:47:09.056385 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 5 04:47:09.097489 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 5 04:47:09.118436 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 5 04:47:09.123561 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 5 04:47:09.130226 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Nov 5 04:47:09.132229 systemd[1]: Reached target getty.target - Login Prompts. Nov 5 04:47:09.168595 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 57816 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:47:09.170831 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:47:09.183369 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 5 04:47:09.186889 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 5 04:47:09.192011 systemd-logind[1609]: New session 1 of user core. Nov 5 04:47:09.308184 systemd-networkd[1539]: eth0: Gained IPv6LL Nov 5 04:47:09.312104 tar[1621]: linux-amd64/README.md Nov 5 04:47:09.313321 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 5 04:47:09.319270 systemd[1]: Reached target network-online.target - Network is Online. Nov 5 04:47:09.323154 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 5 04:47:09.330692 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 04:47:09.334926 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 5 04:47:09.343440 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 5 04:47:09.346224 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Nov 5 04:47:09.358646 containerd[1641]: time="2025-11-05T04:47:09.358530798Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 5 04:47:09.358808 containerd[1641]: time="2025-11-05T04:47:09.358788170Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Nov 5 04:47:09.359018 containerd[1641]: time="2025-11-05T04:47:09.358946247Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Nov 5 04:47:09.359018 containerd[1641]: time="2025-11-05T04:47:09.359005618Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 5 04:47:09.359157 containerd[1641]: time="2025-11-05T04:47:09.359138998Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 5 04:47:09.359195 containerd[1641]: time="2025-11-05T04:47:09.359166700Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 5 04:47:09.359195 containerd[1641]: time="2025-11-05T04:47:09.359183141Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 5 04:47:09.359234 containerd[1641]: time="2025-11-05T04:47:09.359203750Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 5 04:47:09.359254 containerd[1641]: time="2025-11-05T04:47:09.359240348Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 5 04:47:09.359280 containerd[1641]: time="2025-11-05T04:47:09.359258212Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 5 04:47:09.359280 containerd[1641]: time="2025-11-05T04:47:09.359276185Z" level=info msg="loading 
plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 5 04:47:09.359317 containerd[1641]: time="2025-11-05T04:47:09.359291835Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 5 04:47:09.359317 containerd[1641]: time="2025-11-05T04:47:09.359305440Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 5 04:47:09.359353 containerd[1641]: time="2025-11-05T04:47:09.359328844Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 5 04:47:09.359545 containerd[1641]: time="2025-11-05T04:47:09.359518029Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 5 04:47:09.359569 containerd[1641]: time="2025-11-05T04:47:09.359561781Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 5 04:47:09.359672 containerd[1641]: time="2025-11-05T04:47:09.359581779Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 5 04:47:09.359672 containerd[1641]: time="2025-11-05T04:47:09.359602938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 5 04:47:09.359672 containerd[1641]: time="2025-11-05T04:47:09.359619409Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 5 04:47:09.359672 containerd[1641]: time="2025-11-05T04:47:09.359632694Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 5 04:47:09.359672 containerd[1641]: time="2025-11-05T04:47:09.359651800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 5 04:47:09.359775 containerd[1641]: time="2025-11-05T04:47:09.359684972Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 5 
04:47:09.359775 containerd[1641]: time="2025-11-05T04:47:09.359696584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 5 04:47:09.359775 containerd[1641]: time="2025-11-05T04:47:09.359709458Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 5 04:47:09.359775 containerd[1641]: time="2025-11-05T04:47:09.359724206Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 5 04:47:09.359775 containerd[1641]: time="2025-11-05T04:47:09.359772366Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 5 04:47:09.359941 containerd[1641]: time="2025-11-05T04:47:09.359915765Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 5 04:47:09.360009 containerd[1641]: time="2025-11-05T04:47:09.359951812Z" level=info msg="Start snapshots syncer" Nov 5 04:47:09.361087 containerd[1641]: time="2025-11-05T04:47:09.361042798Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 5 04:47:09.361644 containerd[1641]: time="2025-11-05T04:47:09.361569506Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 5 04:47:09.361916 containerd[1641]: time="2025-11-05T04:47:09.361681216Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 5 04:47:09.361916 containerd[1641]: 
time="2025-11-05T04:47:09.361842067Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 5 04:47:09.365248 containerd[1641]: time="2025-11-05T04:47:09.363266018Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 5 04:47:09.365248 containerd[1641]: time="2025-11-05T04:47:09.363315841Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 5 04:47:09.365248 containerd[1641]: time="2025-11-05T04:47:09.363339596Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 5 04:47:09.365248 containerd[1641]: time="2025-11-05T04:47:09.363351218Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 5 04:47:09.365248 containerd[1641]: time="2025-11-05T04:47:09.363372397Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 5 04:47:09.365248 containerd[1641]: time="2025-11-05T04:47:09.363389890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 5 04:47:09.365248 containerd[1641]: time="2025-11-05T04:47:09.363406171Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 5 04:47:09.365248 containerd[1641]: time="2025-11-05T04:47:09.363417883Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 5 04:47:09.365248 containerd[1641]: time="2025-11-05T04:47:09.363430136Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 5 04:47:09.365248 containerd[1641]: time="2025-11-05T04:47:09.363489126Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 04:47:09.365248 containerd[1641]: 
time="2025-11-05T04:47:09.363512751Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 04:47:09.365248 containerd[1641]: time="2025-11-05T04:47:09.363521647Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 04:47:09.365248 containerd[1641]: time="2025-11-05T04:47:09.363531385Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 04:47:09.365248 containerd[1641]: time="2025-11-05T04:47:09.363541294Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 5 04:47:09.362522 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 5 04:47:09.365762 containerd[1641]: time="2025-11-05T04:47:09.363561081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 5 04:47:09.365762 containerd[1641]: time="2025-11-05T04:47:09.363578554Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 5 04:47:09.365762 containerd[1641]: time="2025-11-05T04:47:09.363626083Z" level=info msg="runtime interface created" Nov 5 04:47:09.365762 containerd[1641]: time="2025-11-05T04:47:09.363633547Z" level=info msg="created NRI interface" Nov 5 04:47:09.365762 containerd[1641]: time="2025-11-05T04:47:09.363647503Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 5 04:47:09.365762 containerd[1641]: time="2025-11-05T04:47:09.363659666Z" level=info msg="Connect containerd service" Nov 5 04:47:09.365762 containerd[1641]: time="2025-11-05T04:47:09.363682268Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 5 04:47:09.365762 containerd[1641]: time="2025-11-05T04:47:09.364804233Z" 
level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 04:47:09.365050 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 5 04:47:09.379201 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 5 04:47:09.379500 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 5 04:47:09.382636 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 5 04:47:09.382910 (systemd)[1713]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 5 04:47:09.386354 systemd-logind[1609]: New session c1 of user core. Nov 5 04:47:09.559490 systemd[1713]: Queued start job for default target default.target. Nov 5 04:47:09.561600 systemd[1713]: Created slice app.slice - User Application Slice. Nov 5 04:47:09.561626 systemd[1713]: Reached target paths.target - Paths. Nov 5 04:47:09.561672 systemd[1713]: Reached target timers.target - Timers. Nov 5 04:47:09.563467 systemd[1713]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 5 04:47:09.583002 systemd[1713]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 5 04:47:09.583289 systemd[1713]: Reached target sockets.target - Sockets. Nov 5 04:47:09.583381 systemd[1713]: Reached target basic.target - Basic System. Nov 5 04:47:09.583457 systemd[1713]: Reached target default.target - Main User Target. Nov 5 04:47:09.583526 systemd[1713]: Startup finished in 186ms. Nov 5 04:47:09.583940 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 5 04:47:09.588086 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 5 04:47:09.608248 systemd[1]: Started sshd@1-10.0.0.41:22-10.0.0.1:57828.service - OpenSSH per-connection server daemon (10.0.0.1:57828). 
Nov 5 04:47:09.706715 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 57828 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:47:09.708025 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:47:09.716556 systemd-logind[1609]: New session 2 of user core. Nov 5 04:47:09.730120 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 5 04:47:09.755500 sshd[1741]: Connection closed by 10.0.0.1 port 57828 Nov 5 04:47:09.755686 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Nov 5 04:47:09.767341 systemd[1]: sshd@1-10.0.0.41:22-10.0.0.1:57828.service: Deactivated successfully. Nov 5 04:47:09.769718 systemd[1]: session-2.scope: Deactivated successfully. Nov 5 04:47:09.772752 systemd-logind[1609]: Session 2 logged out. Waiting for processes to exit. Nov 5 04:47:09.776449 systemd-logind[1609]: Removed session 2. Nov 5 04:47:09.777334 systemd[1]: Started sshd@2-10.0.0.41:22-10.0.0.1:57844.service - OpenSSH per-connection server daemon (10.0.0.1:57844). Nov 5 04:47:09.787302 containerd[1641]: time="2025-11-05T04:47:09.787237566Z" level=info msg="Start subscribing containerd event" Nov 5 04:47:09.787413 containerd[1641]: time="2025-11-05T04:47:09.787342222Z" level=info msg="Start recovering state" Nov 5 04:47:09.787562 containerd[1641]: time="2025-11-05T04:47:09.787276880Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Nov 5 04:47:09.787593 containerd[1641]: time="2025-11-05T04:47:09.787575820Z" level=info msg="Start event monitor" Nov 5 04:47:09.787641 containerd[1641]: time="2025-11-05T04:47:09.787606928Z" level=info msg="Start cni network conf syncer for default" Nov 5 04:47:09.787990 containerd[1641]: time="2025-11-05T04:47:09.787660379Z" level=info msg="Start streaming server" Nov 5 04:47:09.787990 containerd[1641]: time="2025-11-05T04:47:09.787701305Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 5 04:47:09.787990 containerd[1641]: time="2025-11-05T04:47:09.787714500Z" level=info msg="runtime interface starting up..." Nov 5 04:47:09.787990 containerd[1641]: time="2025-11-05T04:47:09.787723126Z" level=info msg="starting plugins..." Nov 5 04:47:09.787990 containerd[1641]: time="2025-11-05T04:47:09.787744056Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 5 04:47:09.788162 containerd[1641]: time="2025-11-05T04:47:09.787613421Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 5 04:47:09.789221 systemd[1]: Started containerd.service - containerd container runtime. Nov 5 04:47:09.793990 containerd[1641]: time="2025-11-05T04:47:09.791774284Z" level=info msg="containerd successfully booted in 0.786199s" Nov 5 04:47:09.855232 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 57844 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:47:09.856532 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:47:09.860726 systemd-logind[1609]: New session 3 of user core. Nov 5 04:47:09.867096 systemd[1]: Started session-3.scope - Session 3 of User core. 
Nov 5 04:47:09.882840 sshd[1755]: Connection closed by 10.0.0.1 port 57844 Nov 5 04:47:09.883185 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Nov 5 04:47:09.887631 systemd[1]: sshd@2-10.0.0.41:22-10.0.0.1:57844.service: Deactivated successfully. Nov 5 04:47:09.889521 systemd[1]: session-3.scope: Deactivated successfully. Nov 5 04:47:09.890196 systemd-logind[1609]: Session 3 logged out. Waiting for processes to exit. Nov 5 04:47:09.891380 systemd-logind[1609]: Removed session 3. Nov 5 04:47:10.644209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 04:47:10.646801 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 5 04:47:10.648775 systemd[1]: Startup finished in 3.180s (kernel) + 7.609s (initrd) + 5.850s (userspace) = 16.639s. Nov 5 04:47:10.660328 (kubelet)[1765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 04:47:11.400837 kubelet[1765]: E1105 04:47:11.400756 1765 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 04:47:11.404863 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 04:47:11.405096 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 04:47:11.405610 systemd[1]: kubelet.service: Consumed 1.840s CPU time, 268.4M memory peak. Nov 5 04:47:19.912132 systemd[1]: Started sshd@3-10.0.0.41:22-10.0.0.1:47778.service - OpenSSH per-connection server daemon (10.0.0.1:47778). 
Nov 5 04:47:19.994666 sshd[1779]: Accepted publickey for core from 10.0.0.1 port 47778 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:47:19.996784 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:47:20.004109 systemd-logind[1609]: New session 4 of user core. Nov 5 04:47:20.015813 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 5 04:47:20.032803 sshd[1782]: Connection closed by 10.0.0.1 port 47778 Nov 5 04:47:20.033117 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Nov 5 04:47:20.043041 systemd[1]: sshd@3-10.0.0.41:22-10.0.0.1:47778.service: Deactivated successfully. Nov 5 04:47:20.045290 systemd[1]: session-4.scope: Deactivated successfully. Nov 5 04:47:20.046144 systemd-logind[1609]: Session 4 logged out. Waiting for processes to exit. Nov 5 04:47:20.049139 systemd[1]: Started sshd@4-10.0.0.41:22-10.0.0.1:59614.service - OpenSSH per-connection server daemon (10.0.0.1:59614). Nov 5 04:47:20.049772 systemd-logind[1609]: Removed session 4. Nov 5 04:47:20.101180 sshd[1788]: Accepted publickey for core from 10.0.0.1 port 59614 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:47:20.102525 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:47:20.107447 systemd-logind[1609]: New session 5 of user core. Nov 5 04:47:20.114176 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 5 04:47:20.123015 sshd[1791]: Connection closed by 10.0.0.1 port 59614 Nov 5 04:47:20.123401 sshd-session[1788]: pam_unix(sshd:session): session closed for user core Nov 5 04:47:20.138023 systemd[1]: sshd@4-10.0.0.41:22-10.0.0.1:59614.service: Deactivated successfully. Nov 5 04:47:20.140107 systemd[1]: session-5.scope: Deactivated successfully. Nov 5 04:47:20.141021 systemd-logind[1609]: Session 5 logged out. Waiting for processes to exit. 
Nov 5 04:47:20.144199 systemd[1]: Started sshd@5-10.0.0.41:22-10.0.0.1:59620.service - OpenSSH per-connection server daemon (10.0.0.1:59620). Nov 5 04:47:20.144753 systemd-logind[1609]: Removed session 5. Nov 5 04:47:20.196827 sshd[1797]: Accepted publickey for core from 10.0.0.1 port 59620 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:47:20.198620 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:47:20.204332 systemd-logind[1609]: New session 6 of user core. Nov 5 04:47:20.218248 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 5 04:47:20.235880 sshd[1800]: Connection closed by 10.0.0.1 port 59620 Nov 5 04:47:20.236243 sshd-session[1797]: pam_unix(sshd:session): session closed for user core Nov 5 04:47:20.249786 systemd[1]: sshd@5-10.0.0.41:22-10.0.0.1:59620.service: Deactivated successfully. Nov 5 04:47:20.251739 systemd[1]: session-6.scope: Deactivated successfully. Nov 5 04:47:20.254166 systemd-logind[1609]: Session 6 logged out. Waiting for processes to exit. Nov 5 04:47:20.256422 systemd[1]: Started sshd@6-10.0.0.41:22-10.0.0.1:59632.service - OpenSSH per-connection server daemon (10.0.0.1:59632). Nov 5 04:47:20.257410 systemd-logind[1609]: Removed session 6. Nov 5 04:47:20.321467 sshd[1806]: Accepted publickey for core from 10.0.0.1 port 59632 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:47:20.323323 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:47:20.328104 systemd-logind[1609]: New session 7 of user core. Nov 5 04:47:20.340122 systemd[1]: Started session-7.scope - Session 7 of User core. 
Nov 5 04:47:20.366539 sudo[1811]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 5 04:47:20.366909 sudo[1811]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 04:47:20.395598 sudo[1811]: pam_unix(sudo:session): session closed for user root Nov 5 04:47:20.398183 sshd[1810]: Connection closed by 10.0.0.1 port 59632 Nov 5 04:47:20.398765 sshd-session[1806]: pam_unix(sshd:session): session closed for user core Nov 5 04:47:20.420429 systemd[1]: sshd@6-10.0.0.41:22-10.0.0.1:59632.service: Deactivated successfully. Nov 5 04:47:20.422656 systemd[1]: session-7.scope: Deactivated successfully. Nov 5 04:47:20.423550 systemd-logind[1609]: Session 7 logged out. Waiting for processes to exit. Nov 5 04:47:20.426902 systemd[1]: Started sshd@7-10.0.0.41:22-10.0.0.1:59638.service - OpenSSH per-connection server daemon (10.0.0.1:59638). Nov 5 04:47:20.427915 systemd-logind[1609]: Removed session 7. Nov 5 04:47:20.489376 sshd[1817]: Accepted publickey for core from 10.0.0.1 port 59638 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:47:20.490804 sshd-session[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:47:20.495555 systemd-logind[1609]: New session 8 of user core. Nov 5 04:47:20.505312 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 5 04:47:20.519723 sudo[1822]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 5 04:47:20.520147 sudo[1822]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 04:47:20.526994 sudo[1822]: pam_unix(sudo:session): session closed for user root Nov 5 04:47:20.535184 sudo[1821]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 5 04:47:20.535494 sudo[1821]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 04:47:20.547386 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 04:47:20.605731 augenrules[1844]: No rules Nov 5 04:47:20.607419 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 04:47:20.607726 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 04:47:20.608879 sudo[1821]: pam_unix(sudo:session): session closed for user root Nov 5 04:47:20.610599 sshd[1820]: Connection closed by 10.0.0.1 port 59638 Nov 5 04:47:20.611015 sshd-session[1817]: pam_unix(sshd:session): session closed for user core Nov 5 04:47:20.626913 systemd[1]: sshd@7-10.0.0.41:22-10.0.0.1:59638.service: Deactivated successfully. Nov 5 04:47:20.629142 systemd[1]: session-8.scope: Deactivated successfully. Nov 5 04:47:20.629938 systemd-logind[1609]: Session 8 logged out. Waiting for processes to exit. Nov 5 04:47:20.631910 systemd-logind[1609]: Removed session 8. Nov 5 04:47:20.633641 systemd[1]: Started sshd@8-10.0.0.41:22-10.0.0.1:59654.service - OpenSSH per-connection server daemon (10.0.0.1:59654). Nov 5 04:47:20.697490 sshd[1853]: Accepted publickey for core from 10.0.0.1 port 59654 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:47:20.699135 sshd-session[1853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:47:20.703943 systemd-logind[1609]: New session 9 of user core. 
Nov 5 04:47:20.724116 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 5 04:47:20.738561 sudo[1857]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 5 04:47:20.738883 sudo[1857]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 04:47:21.806763 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 5 04:47:21.809391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 04:47:21.816509 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 5 04:47:21.823373 (dockerd)[1879]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 5 04:47:22.146382 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 04:47:22.160488 (kubelet)[1892]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 04:47:22.254692 kubelet[1892]: E1105 04:47:22.254584 1892 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 04:47:22.261921 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 04:47:22.262143 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 04:47:22.262551 systemd[1]: kubelet.service: Consumed 391ms CPU time, 110.3M memory peak. 
Nov 5 04:47:22.275522 dockerd[1879]: time="2025-11-05T04:47:22.275435885Z" level=info msg="Starting up" Nov 5 04:47:22.276385 dockerd[1879]: time="2025-11-05T04:47:22.276346152Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 5 04:47:22.296025 dockerd[1879]: time="2025-11-05T04:47:22.295949736Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 5 04:47:22.611638 dockerd[1879]: time="2025-11-05T04:47:22.611522888Z" level=info msg="Loading containers: start." Nov 5 04:47:22.624016 kernel: Initializing XFRM netlink socket Nov 5 04:47:22.932570 systemd-networkd[1539]: docker0: Link UP Nov 5 04:47:22.936955 dockerd[1879]: time="2025-11-05T04:47:22.936891280Z" level=info msg="Loading containers: done." Nov 5 04:47:22.955690 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck425802696-merged.mount: Deactivated successfully. Nov 5 04:47:22.958067 dockerd[1879]: time="2025-11-05T04:47:22.958011067Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 5 04:47:22.958154 dockerd[1879]: time="2025-11-05T04:47:22.958133287Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 5 04:47:22.958266 dockerd[1879]: time="2025-11-05T04:47:22.958243413Z" level=info msg="Initializing buildkit" Nov 5 04:47:22.989529 dockerd[1879]: time="2025-11-05T04:47:22.989493027Z" level=info msg="Completed buildkit initialization" Nov 5 04:47:22.995257 dockerd[1879]: time="2025-11-05T04:47:22.995223394Z" level=info msg="Daemon has completed initialization" Nov 5 04:47:22.995411 dockerd[1879]: time="2025-11-05T04:47:22.995325155Z" level=info msg="API listen on /run/docker.sock" Nov 5 04:47:22.995482 systemd[1]: Started docker.service - Docker Application 
Container Engine. Nov 5 04:47:24.016040 containerd[1641]: time="2025-11-05T04:47:24.015920495Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 5 04:47:24.641330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2851645964.mount: Deactivated successfully. Nov 5 04:47:25.825476 containerd[1641]: time="2025-11-05T04:47:25.825400226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:47:25.826300 containerd[1641]: time="2025-11-05T04:47:25.826244739Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=28442726" Nov 5 04:47:25.827884 containerd[1641]: time="2025-11-05T04:47:25.827842496Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:47:25.830684 containerd[1641]: time="2025-11-05T04:47:25.830609495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:47:25.831488 containerd[1641]: time="2025-11-05T04:47:25.831441475Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.815399362s" Nov 5 04:47:25.831540 containerd[1641]: time="2025-11-05T04:47:25.831506116Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 5 04:47:25.833018 containerd[1641]: 
time="2025-11-05T04:47:25.832953481Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 5 04:47:27.430485 containerd[1641]: time="2025-11-05T04:47:27.430376549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:47:27.431290 containerd[1641]: time="2025-11-05T04:47:27.431248615Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26015441" Nov 5 04:47:27.432857 containerd[1641]: time="2025-11-05T04:47:27.432822707Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:47:27.435745 containerd[1641]: time="2025-11-05T04:47:27.435688110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:47:27.437125 containerd[1641]: time="2025-11-05T04:47:27.437074160Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.604048724s" Nov 5 04:47:27.437226 containerd[1641]: time="2025-11-05T04:47:27.437131247Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 5 04:47:27.438607 containerd[1641]: time="2025-11-05T04:47:27.438560317Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 5 04:47:28.933731 
containerd[1641]: time="2025-11-05T04:47:28.933646196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:47:28.934673 containerd[1641]: time="2025-11-05T04:47:28.934595617Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20147431" Nov 5 04:47:28.935901 containerd[1641]: time="2025-11-05T04:47:28.935846252Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:47:28.938185 containerd[1641]: time="2025-11-05T04:47:28.938144602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:47:28.939302 containerd[1641]: time="2025-11-05T04:47:28.939259052Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.500648822s" Nov 5 04:47:28.939302 containerd[1641]: time="2025-11-05T04:47:28.939301442Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 5 04:47:28.940020 containerd[1641]: time="2025-11-05T04:47:28.939820135Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 5 04:47:30.295639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1509469475.mount: Deactivated successfully. 
Nov 5 04:47:30.999084 containerd[1641]: time="2025-11-05T04:47:30.999000743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:47:30.999961 containerd[1641]: time="2025-11-05T04:47:30.999901603Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31925747" Nov 5 04:47:31.001219 containerd[1641]: time="2025-11-05T04:47:31.001183677Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:47:31.003126 containerd[1641]: time="2025-11-05T04:47:31.003090283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:47:31.003803 containerd[1641]: time="2025-11-05T04:47:31.003738188Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.063872879s" Nov 5 04:47:31.003853 containerd[1641]: time="2025-11-05T04:47:31.003803951Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 5 04:47:31.004610 containerd[1641]: time="2025-11-05T04:47:31.004557875Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 5 04:47:31.597595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount789504700.mount: Deactivated successfully. 
Nov 5 04:47:32.416015 containerd[1641]: time="2025-11-05T04:47:32.415941533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:47:32.416664 containerd[1641]: time="2025-11-05T04:47:32.416641526Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128467" Nov 5 04:47:32.417934 containerd[1641]: time="2025-11-05T04:47:32.417900467Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:47:32.421020 containerd[1641]: time="2025-11-05T04:47:32.420943453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:47:32.422129 containerd[1641]: time="2025-11-05T04:47:32.422064145Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.417467046s" Nov 5 04:47:32.422179 containerd[1641]: time="2025-11-05T04:47:32.422143003Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 5 04:47:32.422698 containerd[1641]: time="2025-11-05T04:47:32.422674049Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 5 04:47:32.480670 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 5 04:47:32.482844 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 5 04:47:32.731252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 04:47:32.750250 (kubelet)[2249]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 04:47:32.864742 kubelet[2249]: E1105 04:47:32.864682 2249 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 04:47:32.869663 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 04:47:32.869870 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 04:47:32.870298 systemd[1]: kubelet.service: Consumed 259ms CPU time, 110.7M memory peak. Nov 5 04:47:33.063311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1860694331.mount: Deactivated successfully. 
Nov 5 04:47:33.069060 containerd[1641]: time="2025-11-05T04:47:33.069008266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 04:47:33.069920 containerd[1641]: time="2025-11-05T04:47:33.069861386Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=2405" Nov 5 04:47:33.071124 containerd[1641]: time="2025-11-05T04:47:33.071092715Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 04:47:33.073359 containerd[1641]: time="2025-11-05T04:47:33.073321966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 04:47:33.074024 containerd[1641]: time="2025-11-05T04:47:33.073987685Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 651.26796ms" Nov 5 04:47:33.074067 containerd[1641]: time="2025-11-05T04:47:33.074026016Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 5 04:47:33.074546 containerd[1641]: time="2025-11-05T04:47:33.074513581Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 5 04:47:33.707782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3449648320.mount: Deactivated successfully.
Nov 5 04:47:36.078964 containerd[1641]: time="2025-11-05T04:47:36.078883988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:47:36.079709 containerd[1641]: time="2025-11-05T04:47:36.079656186Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=46127678" Nov 5 04:47:36.081157 containerd[1641]: time="2025-11-05T04:47:36.081109862Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:47:36.084258 containerd[1641]: time="2025-11-05T04:47:36.084194567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:47:36.085067 containerd[1641]: time="2025-11-05T04:47:36.085014925Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.010472631s" Nov 5 04:47:36.085067 containerd[1641]: time="2025-11-05T04:47:36.085059990Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 5 04:47:40.117351 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 04:47:40.117554 systemd[1]: kubelet.service: Consumed 259ms CPU time, 110.7M memory peak. Nov 5 04:47:40.120383 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 04:47:40.150638 systemd[1]: Reload requested from client PID 2347 ('systemctl') (unit session-9.scope)...
Nov 5 04:47:40.150666 systemd[1]: Reloading... Nov 5 04:47:40.266802 zram_generator::config[2390]: No configuration found. Nov 5 04:47:40.647852 systemd[1]: Reloading finished in 496 ms. Nov 5 04:47:40.728772 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 5 04:47:40.728884 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 5 04:47:40.729253 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 04:47:40.729304 systemd[1]: kubelet.service: Consumed 184ms CPU time, 98.2M memory peak. Nov 5 04:47:40.731203 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 04:47:40.927867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 04:47:40.946466 (kubelet)[2438]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 04:47:40.993790 kubelet[2438]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 04:47:40.993790 kubelet[2438]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 04:47:40.993790 kubelet[2438]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 5 04:47:40.993790 kubelet[2438]: I1105 04:47:40.993358 2438 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 04:47:41.649040 kubelet[2438]: I1105 04:47:41.648962 2438 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 5 04:47:41.649040 kubelet[2438]: I1105 04:47:41.649019 2438 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 04:47:41.649416 kubelet[2438]: I1105 04:47:41.649391 2438 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 04:47:41.685669 kubelet[2438]: E1105 04:47:41.685613 2438 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 04:47:41.687241 kubelet[2438]: I1105 04:47:41.687209 2438 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 04:47:41.695024 kubelet[2438]: I1105 04:47:41.694027 2438 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 04:47:41.700251 kubelet[2438]: I1105 04:47:41.700218 2438 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 5 04:47:41.700550 kubelet[2438]: I1105 04:47:41.700503 2438 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 04:47:41.700770 kubelet[2438]: I1105 04:47:41.700538 2438 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 04:47:41.700920 kubelet[2438]: I1105 04:47:41.700782 2438 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 04:47:41.700920 kubelet[2438]: I1105 04:47:41.700796 2438 container_manager_linux.go:303] "Creating device plugin manager"
Nov 5 04:47:41.701625 kubelet[2438]: I1105 04:47:41.701601 2438 state_mem.go:36] "Initialized new in-memory state store" Nov 5 04:47:41.703867 kubelet[2438]: I1105 04:47:41.703834 2438 kubelet.go:480] "Attempting to sync node with API server" Nov 5 04:47:41.703914 kubelet[2438]: I1105 04:47:41.703876 2438 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 04:47:41.703939 kubelet[2438]: I1105 04:47:41.703924 2438 kubelet.go:386] "Adding apiserver pod source" Nov 5 04:47:41.703962 kubelet[2438]: I1105 04:47:41.703950 2438 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 04:47:41.710017 kubelet[2438]: E1105 04:47:41.709481 2438 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 04:47:41.710017 kubelet[2438]: E1105 04:47:41.709610 2438 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 04:47:41.710017 kubelet[2438]: I1105 04:47:41.709644 2438 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 5 04:47:41.710701 kubelet[2438]: I1105 04:47:41.710674 2438 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 04:47:41.711642 kubelet[2438]: W1105 04:47:41.711617 2438 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 5 04:47:41.758292 kubelet[2438]: I1105 04:47:41.758246 2438 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 04:47:41.758407 kubelet[2438]: I1105 04:47:41.758330 2438 server.go:1289] "Started kubelet" Nov 5 04:47:41.758925 kubelet[2438]: I1105 04:47:41.758893 2438 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 04:47:41.761247 kubelet[2438]: I1105 04:47:41.760792 2438 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 04:47:41.761247 kubelet[2438]: I1105 04:47:41.760799 2438 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 04:47:41.761247 kubelet[2438]: I1105 04:47:41.760937 2438 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 04:47:41.763280 kubelet[2438]: I1105 04:47:41.763236 2438 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 04:47:41.763748 kubelet[2438]: I1105 04:47:41.763726 2438 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 04:47:41.763888 kubelet[2438]: I1105 04:47:41.763867 2438 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 04:47:41.763986 kubelet[2438]: I1105 04:47:41.763952 2438 reconciler.go:26] "Reconciler: start to sync state" Nov 5 04:47:41.764547 kubelet[2438]: E1105 04:47:41.764492 2438 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 04:47:41.764612 kubelet[2438]: I1105 04:47:41.764588 2438 server.go:317] "Adding debug handlers to kubelet server"
Nov 5 04:47:41.765909 kubelet[2438]: I1105 04:47:41.764843 2438 factory.go:223] Registration of the systemd container factory successfully Nov 5 04:47:41.765909 kubelet[2438]: I1105 04:47:41.764927 2438 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 04:47:41.766178 kubelet[2438]: I1105 04:47:41.766143 2438 factory.go:223] Registration of the containerd container factory successfully Nov 5 04:47:41.766240 kubelet[2438]: E1105 04:47:41.766204 2438 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 04:47:41.766343 kubelet[2438]: E1105 04:47:41.766301 2438 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="200ms" Nov 5 04:47:41.766471 kubelet[2438]: E1105 04:47:41.766440 2438 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 5 04:47:41.766669 kubelet[2438]: E1105 04:47:41.764344 2438 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.41:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.41:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187502f0a45b78c2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-05 04:47:41.758281922 +0000 UTC m=+0.806279388,LastTimestamp:2025-11-05 04:47:41.758281922 +0000 UTC m=+0.806279388,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 5 04:47:41.788507 kubelet[2438]: I1105 04:47:41.788467 2438 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 04:47:41.788507 kubelet[2438]: I1105 04:47:41.788489 2438 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 04:47:41.788507 kubelet[2438]: I1105 04:47:41.788508 2438 state_mem.go:36] "Initialized new in-memory state store" Nov 5 04:47:41.789828 kubelet[2438]: I1105 04:47:41.789799 2438 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 5 04:47:41.791682 kubelet[2438]: I1105 04:47:41.791655 2438 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 5 04:47:41.791765 kubelet[2438]: I1105 04:47:41.791695 2438 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 5 04:47:41.791765 kubelet[2438]: I1105 04:47:41.791722 2438 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 5 04:47:41.791765 kubelet[2438]: I1105 04:47:41.791737 2438 kubelet.go:2436] "Starting kubelet main sync loop" Nov 5 04:47:41.791860 kubelet[2438]: E1105 04:47:41.791808 2438 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 04:47:41.792998 kubelet[2438]: E1105 04:47:41.792441 2438 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 04:47:41.793212 kubelet[2438]: I1105 04:47:41.793195 2438 policy_none.go:49] "None policy: Start" Nov 5 04:47:41.793467 kubelet[2438]: I1105 04:47:41.793452 2438 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 04:47:41.793542 kubelet[2438]: I1105 04:47:41.793533 2438 state_mem.go:35] "Initializing new in-memory state store" Nov 5 04:47:41.799625 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 5 04:47:41.819056 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 5 04:47:41.822158 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 5 04:47:41.830026 kubelet[2438]: E1105 04:47:41.829880 2438 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 04:47:41.830177 kubelet[2438]: I1105 04:47:41.830158 2438 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 04:47:41.830209 kubelet[2438]: I1105 04:47:41.830180 2438 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 04:47:41.830455 kubelet[2438]: I1105 04:47:41.830438 2438 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 04:47:41.831050 kubelet[2438]: E1105 04:47:41.831022 2438 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 04:47:41.831099 kubelet[2438]: E1105 04:47:41.831082 2438 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 5 04:47:41.906006 systemd[1]: Created slice kubepods-burstable-pod514334f5cecf5d3214455fed7c727f63.slice - libcontainer container kubepods-burstable-pod514334f5cecf5d3214455fed7c727f63.slice. Nov 5 04:47:41.926938 kubelet[2438]: E1105 04:47:41.926903 2438 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 04:47:41.930310 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. 
Nov 5 04:47:41.931329 kubelet[2438]: I1105 04:47:41.931294 2438 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 04:47:41.931719 kubelet[2438]: E1105 04:47:41.931688 2438 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.41:6443/api/v1/nodes\": dial tcp 10.0.0.41:6443: connect: connection refused" node="localhost" Nov 5 04:47:41.942295 kubelet[2438]: E1105 04:47:41.942252 2438 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 04:47:41.945390 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. Nov 5 04:47:41.947375 kubelet[2438]: E1105 04:47:41.947353 2438 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 04:47:41.964651 kubelet[2438]: I1105 04:47:41.964615 2438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/514334f5cecf5d3214455fed7c727f63-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"514334f5cecf5d3214455fed7c727f63\") " pod="kube-system/kube-apiserver-localhost" Nov 5 04:47:41.966931 kubelet[2438]: E1105 04:47:41.966906 2438 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="400ms" Nov 5 04:47:42.065392 kubelet[2438]: I1105 04:47:42.065303 2438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/514334f5cecf5d3214455fed7c727f63-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"514334f5cecf5d3214455fed7c727f63\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 04:47:42.065392 kubelet[2438]: I1105 04:47:42.065369 2438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 04:47:42.065392 kubelet[2438]: I1105 04:47:42.065393 2438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 04:47:42.065991 kubelet[2438]: I1105 04:47:42.065415 2438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 04:47:42.065991 kubelet[2438]: I1105 04:47:42.065466 2438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 04:47:42.065991 kubelet[2438]: I1105 04:47:42.065484 2438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 04:47:42.065991 kubelet[2438]: I1105 04:47:42.065509 2438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 5 04:47:42.065991 kubelet[2438]: I1105 04:47:42.065545 2438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/514334f5cecf5d3214455fed7c727f63-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"514334f5cecf5d3214455fed7c727f63\") " pod="kube-system/kube-apiserver-localhost" Nov 5 04:47:42.134117 kubelet[2438]: I1105 04:47:42.134068 2438 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 04:47:42.134503 kubelet[2438]: E1105 04:47:42.134470 2438 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.41:6443/api/v1/nodes\": dial tcp 10.0.0.41:6443: connect: connection refused" node="localhost" Nov 5 04:47:42.228147 kubelet[2438]: E1105 04:47:42.227952 2438 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:47:42.229095 containerd[1641]: time="2025-11-05T04:47:42.229036893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:514334f5cecf5d3214455fed7c727f63,Namespace:kube-system,Attempt:0,}" Nov 5 04:47:42.243326 kubelet[2438]: E1105 04:47:42.243286 2438 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:47:42.243834 containerd[1641]: time="2025-11-05T04:47:42.243786681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Nov 5 04:47:42.249026 kubelet[2438]: E1105 04:47:42.248995 2438 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:47:42.249611 containerd[1641]: time="2025-11-05T04:47:42.249555688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Nov 5 04:47:42.265987 containerd[1641]: time="2025-11-05T04:47:42.265911372Z" level=info msg="connecting to shim eb7f4f02237696cf986b731953ef916f5608d20dafbc3f1c5b32b80a02eaa243" address="unix:///run/containerd/s/8d0ec56d8398c759699d1a99a6345dac87e228a5acab1262e521f2a43595e8c0" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:47:42.291205 containerd[1641]: time="2025-11-05T04:47:42.291121274Z" level=info msg="connecting to shim e4e79e4873ccdc27956527933bf606abf6cc49e409a840bb21827aecc5b44b9d" address="unix:///run/containerd/s/48191abc75ac2698b444011ec544ec731b428084305c76e3c28a3edfda6c65cc" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:47:42.306205 containerd[1641]: time="2025-11-05T04:47:42.305865812Z" level=info msg="connecting to shim bfa9ad6ad6289b804fa5237c73569a777abda8b30aac56a654385abe19bb9be2" address="unix:///run/containerd/s/86743d519c4994cfb4572aa00860261b5db3f7e6d13fd73af24274f115462ad3" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:47:42.332240 systemd[1]: Started cri-containerd-eb7f4f02237696cf986b731953ef916f5608d20dafbc3f1c5b32b80a02eaa243.scope - libcontainer container eb7f4f02237696cf986b731953ef916f5608d20dafbc3f1c5b32b80a02eaa243.
Nov 5 04:47:42.359135 systemd[1]: Started cri-containerd-bfa9ad6ad6289b804fa5237c73569a777abda8b30aac56a654385abe19bb9be2.scope - libcontainer container bfa9ad6ad6289b804fa5237c73569a777abda8b30aac56a654385abe19bb9be2. Nov 5 04:47:42.363681 systemd[1]: Started cri-containerd-e4e79e4873ccdc27956527933bf606abf6cc49e409a840bb21827aecc5b44b9d.scope - libcontainer container e4e79e4873ccdc27956527933bf606abf6cc49e409a840bb21827aecc5b44b9d. Nov 5 04:47:42.368312 kubelet[2438]: E1105 04:47:42.368271 2438 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="800ms" Nov 5 04:47:42.408612 containerd[1641]: time="2025-11-05T04:47:42.408486118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:514334f5cecf5d3214455fed7c727f63,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb7f4f02237696cf986b731953ef916f5608d20dafbc3f1c5b32b80a02eaa243\"" Nov 5 04:47:42.409830 kubelet[2438]: E1105 04:47:42.409797 2438 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:47:42.424654 containerd[1641]: time="2025-11-05T04:47:42.424552557Z" level=info msg="CreateContainer within sandbox \"eb7f4f02237696cf986b731953ef916f5608d20dafbc3f1c5b32b80a02eaa243\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 5 04:47:42.434113 containerd[1641]: time="2025-11-05T04:47:42.434066693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4e79e4873ccdc27956527933bf606abf6cc49e409a840bb21827aecc5b44b9d\"" Nov 5 04:47:42.434861 kubelet[2438]: E1105 04:47:42.434823 2438 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:47:42.438651 containerd[1641]: time="2025-11-05T04:47:42.438605035Z" level=info msg="Container d3843bc84e4b391e7066adb37d2e97778441e08e6b8b1ee3ac7194917e648efd: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:47:42.439197 containerd[1641]: time="2025-11-05T04:47:42.439148900Z" level=info msg="CreateContainer within sandbox \"e4e79e4873ccdc27956527933bf606abf6cc49e409a840bb21827aecc5b44b9d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 5 04:47:42.440213 containerd[1641]: time="2025-11-05T04:47:42.440181214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfa9ad6ad6289b804fa5237c73569a777abda8b30aac56a654385abe19bb9be2\"" Nov 5 04:47:42.440691 kubelet[2438]: E1105 04:47:42.440662 2438 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:47:42.445242 containerd[1641]: time="2025-11-05T04:47:42.445198437Z" level=info msg="CreateContainer within sandbox \"bfa9ad6ad6289b804fa5237c73569a777abda8b30aac56a654385abe19bb9be2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 5 04:47:42.451634 containerd[1641]: time="2025-11-05T04:47:42.451592035Z" level=info msg="Container 4b742fc34c822a55857d9a428a39f0586c5599ae0acc6cb2a54c7ada2ebbd56f: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:47:42.452762 containerd[1641]: time="2025-11-05T04:47:42.452730272Z" level=info msg="CreateContainer within sandbox \"eb7f4f02237696cf986b731953ef916f5608d20dafbc3f1c5b32b80a02eaa243\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d3843bc84e4b391e7066adb37d2e97778441e08e6b8b1ee3ac7194917e648efd\""
Nov 5 04:47:42.455431 containerd[1641]: time="2025-11-05T04:47:42.455375938Z" level=info msg="StartContainer for \"d3843bc84e4b391e7066adb37d2e97778441e08e6b8b1ee3ac7194917e648efd\"" Nov 5 04:47:42.458678 containerd[1641]: time="2025-11-05T04:47:42.458023726Z" level=info msg="connecting to shim d3843bc84e4b391e7066adb37d2e97778441e08e6b8b1ee3ac7194917e648efd" address="unix:///run/containerd/s/8d0ec56d8398c759699d1a99a6345dac87e228a5acab1262e521f2a43595e8c0" protocol=ttrpc version=3 Nov 5 04:47:42.462530 containerd[1641]: time="2025-11-05T04:47:42.462495471Z" level=info msg="Container 1373de964ebf98da94c6f699ce8cbf4abd42d6c3f9a0a8a96e5f2a4c403cee02: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:47:42.470856 containerd[1641]: time="2025-11-05T04:47:42.470809079Z" level=info msg="CreateContainer within sandbox \"e4e79e4873ccdc27956527933bf606abf6cc49e409a840bb21827aecc5b44b9d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4b742fc34c822a55857d9a428a39f0586c5599ae0acc6cb2a54c7ada2ebbd56f\"" Nov 5 04:47:42.471535 containerd[1641]: time="2025-11-05T04:47:42.471515767Z" level=info msg="StartContainer for \"4b742fc34c822a55857d9a428a39f0586c5599ae0acc6cb2a54c7ada2ebbd56f\"" Nov 5 04:47:42.472688 containerd[1641]: time="2025-11-05T04:47:42.472666789Z" level=info msg="connecting to shim 4b742fc34c822a55857d9a428a39f0586c5599ae0acc6cb2a54c7ada2ebbd56f" address="unix:///run/containerd/s/48191abc75ac2698b444011ec544ec731b428084305c76e3c28a3edfda6c65cc" protocol=ttrpc version=3 Nov 5 04:47:42.479303 containerd[1641]: time="2025-11-05T04:47:42.479087329Z" level=info msg="CreateContainer within sandbox \"bfa9ad6ad6289b804fa5237c73569a777abda8b30aac56a654385abe19bb9be2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1373de964ebf98da94c6f699ce8cbf4abd42d6c3f9a0a8a96e5f2a4c403cee02\"" Nov 5 04:47:42.479861 containerd[1641]: time="2025-11-05T04:47:42.479834415Z" level=info msg="StartContainer for \"1373de964ebf98da94c6f699ce8cbf4abd42d6c3f9a0a8a96e5f2a4c403cee02\""
Nov 5 04:47:42.482073 containerd[1641]: time="2025-11-05T04:47:42.481995007Z" level=info msg="connecting to shim 1373de964ebf98da94c6f699ce8cbf4abd42d6c3f9a0a8a96e5f2a4c403cee02" address="unix:///run/containerd/s/86743d519c4994cfb4572aa00860261b5db3f7e6d13fd73af24274f115462ad3" protocol=ttrpc version=3 Nov 5 04:47:42.482304 systemd[1]: Started cri-containerd-d3843bc84e4b391e7066adb37d2e97778441e08e6b8b1ee3ac7194917e648efd.scope - libcontainer container d3843bc84e4b391e7066adb37d2e97778441e08e6b8b1ee3ac7194917e648efd. Nov 5 04:47:42.500154 systemd[1]: Started cri-containerd-4b742fc34c822a55857d9a428a39f0586c5599ae0acc6cb2a54c7ada2ebbd56f.scope - libcontainer container 4b742fc34c822a55857d9a428a39f0586c5599ae0acc6cb2a54c7ada2ebbd56f. Nov 5 04:47:42.537757 kubelet[2438]: I1105 04:47:42.537295 2438 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 04:47:42.537757 kubelet[2438]: E1105 04:47:42.537714 2438 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.41:6443/api/v1/nodes\": dial tcp 10.0.0.41:6443: connect: connection refused" node="localhost" Nov 5 04:47:42.610120 systemd[1]: Started cri-containerd-1373de964ebf98da94c6f699ce8cbf4abd42d6c3f9a0a8a96e5f2a4c403cee02.scope - libcontainer container 1373de964ebf98da94c6f699ce8cbf4abd42d6c3f9a0a8a96e5f2a4c403cee02.
Nov 5 04:47:42.658419 containerd[1641]: time="2025-11-05T04:47:42.658369304Z" level=info msg="StartContainer for \"d3843bc84e4b391e7066adb37d2e97778441e08e6b8b1ee3ac7194917e648efd\" returns successfully" Nov 5 04:47:42.680004 containerd[1641]: time="2025-11-05T04:47:42.678158607Z" level=info msg="StartContainer for \"4b742fc34c822a55857d9a428a39f0586c5599ae0acc6cb2a54c7ada2ebbd56f\" returns successfully" Nov 5 04:47:42.697648 containerd[1641]: time="2025-11-05T04:47:42.697562329Z" level=info msg="StartContainer for \"1373de964ebf98da94c6f699ce8cbf4abd42d6c3f9a0a8a96e5f2a4c403cee02\" returns successfully" Nov 5 04:47:42.800530 kubelet[2438]: E1105 04:47:42.800484 2438 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 04:47:42.800671 kubelet[2438]: E1105 04:47:42.800608 2438 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:47:42.802425 kubelet[2438]: E1105 04:47:42.802399 2438 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 04:47:42.802533 kubelet[2438]: E1105 04:47:42.802511 2438 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:47:42.804950 kubelet[2438]: E1105 04:47:42.804927 2438 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 04:47:42.805053 kubelet[2438]: E1105 04:47:42.805032 2438 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:47:43.341917 kubelet[2438]: 
I1105 04:47:43.341868 2438 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 04:47:43.807854 kubelet[2438]: E1105 04:47:43.807464 2438 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 04:47:43.807854 kubelet[2438]: E1105 04:47:43.807718 2438 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:47:43.810897 kubelet[2438]: E1105 04:47:43.810623 2438 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 04:47:43.811011 kubelet[2438]: E1105 04:47:43.810961 2438 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:47:44.554078 kubelet[2438]: E1105 04:47:44.554017 2438 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 5 04:47:44.707415 kubelet[2438]: I1105 04:47:44.707053 2438 apiserver.go:52] "Watching apiserver" Nov 5 04:47:44.719992 kubelet[2438]: I1105 04:47:44.719905 2438 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 5 04:47:44.765101 kubelet[2438]: I1105 04:47:44.765035 2438 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 04:47:44.767566 kubelet[2438]: I1105 04:47:44.767248 2438 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 04:47:44.775184 kubelet[2438]: E1105 04:47:44.775143 2438 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-localhost" Nov 5 04:47:44.775184 kubelet[2438]: I1105 04:47:44.775177 2438 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 04:47:44.778025 kubelet[2438]: E1105 04:47:44.777098 2438 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 5 04:47:44.778025 kubelet[2438]: I1105 04:47:44.777135 2438 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 04:47:44.778484 kubelet[2438]: E1105 04:47:44.778450 2438 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 5 04:47:44.807799 kubelet[2438]: I1105 04:47:44.807674 2438 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 04:47:44.809350 kubelet[2438]: E1105 04:47:44.809327 2438 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 5 04:47:44.809524 kubelet[2438]: E1105 04:47:44.809481 2438 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:47:46.720618 kubelet[2438]: I1105 04:47:46.720565 2438 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 04:47:46.759739 kubelet[2438]: E1105 04:47:46.759700 2438 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 
5 04:47:46.806450 systemd[1]: Reload requested from client PID 2722 ('systemctl') (unit session-9.scope)... Nov 5 04:47:46.806471 systemd[1]: Reloading... Nov 5 04:47:46.810735 kubelet[2438]: E1105 04:47:46.810706 2438 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:47:46.892014 zram_generator::config[2769]: No configuration found. Nov 5 04:47:47.124057 systemd[1]: Reloading finished in 317 ms. Nov 5 04:47:47.161238 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 04:47:47.188507 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 04:47:47.188829 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 04:47:47.188891 systemd[1]: kubelet.service: Consumed 1.090s CPU time, 130.9M memory peak. Nov 5 04:47:47.191050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 04:47:47.402575 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 04:47:47.413377 (kubelet)[2811]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 04:47:47.459032 kubelet[2811]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 04:47:47.459032 kubelet[2811]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 04:47:47.459032 kubelet[2811]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 5 04:47:47.459444 kubelet[2811]: I1105 04:47:47.459058 2811 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 04:47:47.468059 kubelet[2811]: I1105 04:47:47.468027 2811 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 5 04:47:47.468059 kubelet[2811]: I1105 04:47:47.468050 2811 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 04:47:47.468273 kubelet[2811]: I1105 04:47:47.468257 2811 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 04:47:47.469452 kubelet[2811]: I1105 04:47:47.469433 2811 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 5 04:47:47.471885 kubelet[2811]: I1105 04:47:47.471555 2811 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 04:47:47.477407 kubelet[2811]: I1105 04:47:47.477047 2811 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 04:47:47.482733 kubelet[2811]: I1105 04:47:47.482693 2811 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 5 04:47:47.482950 kubelet[2811]: I1105 04:47:47.482914 2811 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 04:47:47.483187 kubelet[2811]: I1105 04:47:47.482938 2811 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 04:47:47.483293 kubelet[2811]: I1105 04:47:47.483189 2811 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 04:47:47.483293 
kubelet[2811]: I1105 04:47:47.483201 2811 container_manager_linux.go:303] "Creating device plugin manager" Nov 5 04:47:47.483293 kubelet[2811]: I1105 04:47:47.483254 2811 state_mem.go:36] "Initialized new in-memory state store" Nov 5 04:47:47.483441 kubelet[2811]: I1105 04:47:47.483411 2811 kubelet.go:480] "Attempting to sync node with API server" Nov 5 04:47:47.483441 kubelet[2811]: I1105 04:47:47.483428 2811 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 04:47:47.483515 kubelet[2811]: I1105 04:47:47.483452 2811 kubelet.go:386] "Adding apiserver pod source" Nov 5 04:47:47.483515 kubelet[2811]: I1105 04:47:47.483468 2811 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 04:47:47.485671 kubelet[2811]: I1105 04:47:47.485565 2811 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 5 04:47:47.486347 kubelet[2811]: I1105 04:47:47.486323 2811 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 04:47:47.492106 kubelet[2811]: I1105 04:47:47.491385 2811 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 04:47:47.492106 kubelet[2811]: I1105 04:47:47.491458 2811 server.go:1289] "Started kubelet" Nov 5 04:47:47.492502 kubelet[2811]: I1105 04:47:47.492434 2811 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 04:47:47.492502 kubelet[2811]: I1105 04:47:47.492459 2811 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 04:47:47.493506 kubelet[2811]: I1105 04:47:47.493484 2811 server.go:317] "Adding debug handlers to kubelet server" Nov 5 04:47:47.495132 kubelet[2811]: I1105 04:47:47.495066 2811 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 04:47:47.502700 kubelet[2811]: I1105 
04:47:47.502670 2811 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 04:47:47.503707 kubelet[2811]: E1105 04:47:47.503686 2811 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 04:47:47.503848 kubelet[2811]: I1105 04:47:47.503823 2811 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 04:47:47.505137 kubelet[2811]: I1105 04:47:47.504679 2811 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 04:47:47.505137 kubelet[2811]: I1105 04:47:47.504761 2811 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 04:47:47.505137 kubelet[2811]: I1105 04:47:47.504890 2811 reconciler.go:26] "Reconciler: start to sync state" Nov 5 04:47:47.510133 kubelet[2811]: I1105 04:47:47.510093 2811 factory.go:223] Registration of the containerd container factory successfully Nov 5 04:47:47.510133 kubelet[2811]: I1105 04:47:47.510119 2811 factory.go:223] Registration of the systemd container factory successfully Nov 5 04:47:47.510272 kubelet[2811]: I1105 04:47:47.510212 2811 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 04:47:47.525124 kubelet[2811]: I1105 04:47:47.525068 2811 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 5 04:47:47.526708 kubelet[2811]: I1105 04:47:47.526666 2811 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Nov 5 04:47:47.526708 kubelet[2811]: I1105 04:47:47.526709 2811 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 5 04:47:47.526782 kubelet[2811]: I1105 04:47:47.526732 2811 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 04:47:47.526782 kubelet[2811]: I1105 04:47:47.526741 2811 kubelet.go:2436] "Starting kubelet main sync loop" Nov 5 04:47:47.526879 kubelet[2811]: E1105 04:47:47.526825 2811 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 04:47:47.553507 kubelet[2811]: I1105 04:47:47.553480 2811 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 04:47:47.553722 kubelet[2811]: I1105 04:47:47.553707 2811 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 04:47:47.553808 kubelet[2811]: I1105 04:47:47.553799 2811 state_mem.go:36] "Initialized new in-memory state store" Nov 5 04:47:47.553998 kubelet[2811]: I1105 04:47:47.553962 2811 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 5 04:47:47.554078 kubelet[2811]: I1105 04:47:47.554056 2811 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 5 04:47:47.554129 kubelet[2811]: I1105 04:47:47.554121 2811 policy_none.go:49] "None policy: Start" Nov 5 04:47:47.554191 kubelet[2811]: I1105 04:47:47.554181 2811 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 04:47:47.554249 kubelet[2811]: I1105 04:47:47.554240 2811 state_mem.go:35] "Initializing new in-memory state store" Nov 5 04:47:47.554388 kubelet[2811]: I1105 04:47:47.554376 2811 state_mem.go:75] "Updated machine memory state" Nov 5 04:47:47.558163 kubelet[2811]: E1105 04:47:47.558140 2811 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 04:47:47.558331 kubelet[2811]: I1105 04:47:47.558314 
2811 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 04:47:47.558383 kubelet[2811]: I1105 04:47:47.558331 2811 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 04:47:47.558515 kubelet[2811]: I1105 04:47:47.558499 2811 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 04:47:47.563463 kubelet[2811]: E1105 04:47:47.563440 2811 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 04:47:47.627937 kubelet[2811]: I1105 04:47:47.627903 2811 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 04:47:47.628236 kubelet[2811]: I1105 04:47:47.627939 2811 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 04:47:47.628310 kubelet[2811]: I1105 04:47:47.628042 2811 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 04:47:47.633957 kubelet[2811]: E1105 04:47:47.633915 2811 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 5 04:47:47.667248 kubelet[2811]: I1105 04:47:47.667156 2811 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 04:47:47.676964 kubelet[2811]: I1105 04:47:47.676339 2811 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 5 04:47:47.676964 kubelet[2811]: I1105 04:47:47.676420 2811 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 5 04:47:47.705749 kubelet[2811]: I1105 04:47:47.705699 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/514334f5cecf5d3214455fed7c727f63-ca-certs\") pod \"kube-apiserver-localhost\" 
(UID: \"514334f5cecf5d3214455fed7c727f63\") " pod="kube-system/kube-apiserver-localhost" Nov 5 04:47:47.705749 kubelet[2811]: I1105 04:47:47.705739 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/514334f5cecf5d3214455fed7c727f63-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"514334f5cecf5d3214455fed7c727f63\") " pod="kube-system/kube-apiserver-localhost" Nov 5 04:47:47.705749 kubelet[2811]: I1105 04:47:47.705762 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 04:47:47.706461 kubelet[2811]: I1105 04:47:47.705778 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 04:47:47.706461 kubelet[2811]: I1105 04:47:47.705797 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 04:47:47.706461 kubelet[2811]: I1105 04:47:47.705812 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/514334f5cecf5d3214455fed7c727f63-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"514334f5cecf5d3214455fed7c727f63\") " pod="kube-system/kube-apiserver-localhost" Nov 5 04:47:47.706461 kubelet[2811]: I1105 04:47:47.705827 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 04:47:47.706461 kubelet[2811]: I1105 04:47:47.705844 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 04:47:47.706582 kubelet[2811]: I1105 04:47:47.705859 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 5 04:47:47.932959 kubelet[2811]: E1105 04:47:47.932821 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:47:47.934175 kubelet[2811]: E1105 04:47:47.934096 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:47:47.934369 kubelet[2811]: E1105 04:47:47.934338 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Nov 5 04:47:48.485547 kubelet[2811]: I1105 04:47:48.485456 2811 apiserver.go:52] "Watching apiserver" Nov 5 04:47:48.505061 kubelet[2811]: I1105 04:47:48.505023 2811 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 04:47:48.541472 kubelet[2811]: I1105 04:47:48.541288 2811 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 04:47:48.541472 kubelet[2811]: I1105 04:47:48.541319 2811 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 04:47:48.541951 kubelet[2811]: E1105 04:47:48.541901 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:47:48.550045 kubelet[2811]: E1105 04:47:48.548816 2811 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 5 04:47:48.550045 kubelet[2811]: E1105 04:47:48.548960 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:47:48.550045 kubelet[2811]: E1105 04:47:48.549236 2811 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 5 04:47:48.550045 kubelet[2811]: E1105 04:47:48.549372 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:47:48.570465 kubelet[2811]: I1105 04:47:48.570381 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.570353412 podStartE2EDuration="1.570353412s" podCreationTimestamp="2025-11-05 
04:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 04:47:48.57000117 +0000 UTC m=+1.149216374" watchObservedRunningTime="2025-11-05 04:47:48.570353412 +0000 UTC m=+1.149568626" Nov 5 04:47:48.587640 kubelet[2811]: I1105 04:47:48.587550 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.587525839 podStartE2EDuration="1.587525839s" podCreationTimestamp="2025-11-05 04:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 04:47:48.57883244 +0000 UTC m=+1.158047654" watchObservedRunningTime="2025-11-05 04:47:48.587525839 +0000 UTC m=+1.166741053" Nov 5 04:47:48.587640 kubelet[2811]: I1105 04:47:48.587648 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.587643853 podStartE2EDuration="2.587643853s" podCreationTimestamp="2025-11-05 04:47:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 04:47:48.587305859 +0000 UTC m=+1.166521063" watchObservedRunningTime="2025-11-05 04:47:48.587643853 +0000 UTC m=+1.166859067" Nov 5 04:47:49.542936 kubelet[2811]: E1105 04:47:49.542897 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:47:49.543545 kubelet[2811]: E1105 04:47:49.543116 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:47:50.544570 kubelet[2811]: E1105 04:47:50.544501 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:47:53.075670 kubelet[2811]: E1105 04:47:53.075629 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:47:53.415605 kubelet[2811]: I1105 04:47:53.415475 2811 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 5 04:47:53.416044 containerd[1641]: time="2025-11-05T04:47:53.415952766Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 5 04:47:53.416458 kubelet[2811]: I1105 04:47:53.416174 2811 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 5 04:47:53.554059 kubelet[2811]: E1105 04:47:53.553778 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:47:54.010233 update_engine[1613]: I20251105 04:47:54.010078 1613 update_attempter.cc:509] Updating boot flags... Nov 5 04:47:54.185470 systemd[1]: Created slice kubepods-besteffort-pod7bed5cf2_5860_413b_af87_705c830d95a9.slice - libcontainer container kubepods-besteffort-pod7bed5cf2_5860_413b_af87_705c830d95a9.slice. 
Nov 5 04:47:54.244065 kubelet[2811]: I1105 04:47:54.244017 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lvnx\" (UniqueName: \"kubernetes.io/projected/7bed5cf2-5860-413b-af87-705c830d95a9-kube-api-access-4lvnx\") pod \"kube-proxy-2f5lc\" (UID: \"7bed5cf2-5860-413b-af87-705c830d95a9\") " pod="kube-system/kube-proxy-2f5lc"
Nov 5 04:47:54.244065 kubelet[2811]: I1105 04:47:54.244050 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7bed5cf2-5860-413b-af87-705c830d95a9-kube-proxy\") pod \"kube-proxy-2f5lc\" (UID: \"7bed5cf2-5860-413b-af87-705c830d95a9\") " pod="kube-system/kube-proxy-2f5lc"
Nov 5 04:47:54.244065 kubelet[2811]: I1105 04:47:54.244073 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bed5cf2-5860-413b-af87-705c830d95a9-xtables-lock\") pod \"kube-proxy-2f5lc\" (UID: \"7bed5cf2-5860-413b-af87-705c830d95a9\") " pod="kube-system/kube-proxy-2f5lc"
Nov 5 04:47:54.244557 kubelet[2811]: I1105 04:47:54.244092 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bed5cf2-5860-413b-af87-705c830d95a9-lib-modules\") pod \"kube-proxy-2f5lc\" (UID: \"7bed5cf2-5860-413b-af87-705c830d95a9\") " pod="kube-system/kube-proxy-2f5lc"
Nov 5 04:47:54.349761 kubelet[2811]: E1105 04:47:54.349642 2811 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Nov 5 04:47:54.349761 kubelet[2811]: E1105 04:47:54.349678 2811 projected.go:194] Error preparing data for projected volume kube-api-access-4lvnx for pod kube-system/kube-proxy-2f5lc: configmap "kube-root-ca.crt" not found
Nov 5 04:47:54.349919 kubelet[2811]: E1105 04:47:54.349768 2811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7bed5cf2-5860-413b-af87-705c830d95a9-kube-api-access-4lvnx podName:7bed5cf2-5860-413b-af87-705c830d95a9 nodeName:}" failed. No retries permitted until 2025-11-05 04:47:54.849744814 +0000 UTC m=+7.428960028 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4lvnx" (UniqueName: "kubernetes.io/projected/7bed5cf2-5860-413b-af87-705c830d95a9-kube-api-access-4lvnx") pod "kube-proxy-2f5lc" (UID: "7bed5cf2-5860-413b-af87-705c830d95a9") : configmap "kube-root-ca.crt" not found
Nov 5 04:47:54.650880 systemd[1]: Created slice kubepods-besteffort-pod401f218b_e798_455e_b05b_c97f8bfbc72d.slice - libcontainer container kubepods-besteffort-pod401f218b_e798_455e_b05b_c97f8bfbc72d.slice.
Nov 5 04:47:54.746640 kubelet[2811]: I1105 04:47:54.746561 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/401f218b-e798-455e-b05b-c97f8bfbc72d-var-lib-calico\") pod \"tigera-operator-7dcd859c48-hhswp\" (UID: \"401f218b-e798-455e-b05b-c97f8bfbc72d\") " pod="tigera-operator/tigera-operator-7dcd859c48-hhswp"
Nov 5 04:47:54.746640 kubelet[2811]: I1105 04:47:54.746631 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s45kr\" (UniqueName: \"kubernetes.io/projected/401f218b-e798-455e-b05b-c97f8bfbc72d-kube-api-access-s45kr\") pod \"tigera-operator-7dcd859c48-hhswp\" (UID: \"401f218b-e798-455e-b05b-c97f8bfbc72d\") " pod="tigera-operator/tigera-operator-7dcd859c48-hhswp"
Nov 5 04:47:54.955754 containerd[1641]: time="2025-11-05T04:47:54.955611293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-hhswp,Uid:401f218b-e798-455e-b05b-c97f8bfbc72d,Namespace:tigera-operator,Attempt:0,}"
Nov 5 04:47:55.002134 containerd[1641]: time="2025-11-05T04:47:55.002073062Z" level=info msg="connecting to shim f8ff204bd6b7bc613719353ff308fb98a7f9134580866d16916ebe697be7ec04" address="unix:///run/containerd/s/12032e7e39fea18ae3aa846ab5f9802cc05ca6c6e9e5603dfc5f5b9be8a78314" namespace=k8s.io protocol=ttrpc version=3
Nov 5 04:47:55.057132 systemd[1]: Started cri-containerd-f8ff204bd6b7bc613719353ff308fb98a7f9134580866d16916ebe697be7ec04.scope - libcontainer container f8ff204bd6b7bc613719353ff308fb98a7f9134580866d16916ebe697be7ec04.
Nov 5 04:47:55.095958 kubelet[2811]: E1105 04:47:55.095906 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:47:55.097151 containerd[1641]: time="2025-11-05T04:47:55.097051120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2f5lc,Uid:7bed5cf2-5860-413b-af87-705c830d95a9,Namespace:kube-system,Attempt:0,}"
Nov 5 04:47:55.107494 containerd[1641]: time="2025-11-05T04:47:55.107439722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-hhswp,Uid:401f218b-e798-455e-b05b-c97f8bfbc72d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f8ff204bd6b7bc613719353ff308fb98a7f9134580866d16916ebe697be7ec04\""
Nov 5 04:47:55.109619 containerd[1641]: time="2025-11-05T04:47:55.109573978Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 5 04:47:55.123947 containerd[1641]: time="2025-11-05T04:47:55.123582981Z" level=info msg="connecting to shim 35b036bcc7285bb0c2cfbe3dc51d346561999520a24231d678b88dbc4bcbae28" address="unix:///run/containerd/s/c2cafac74129a1a22c8fea13a54c379713edb2302edd54164afc9032a464bc8c" namespace=k8s.io protocol=ttrpc version=3
Nov 5 04:47:55.152146 systemd[1]: Started cri-containerd-35b036bcc7285bb0c2cfbe3dc51d346561999520a24231d678b88dbc4bcbae28.scope - libcontainer container 35b036bcc7285bb0c2cfbe3dc51d346561999520a24231d678b88dbc4bcbae28.
Nov 5 04:47:55.182716 containerd[1641]: time="2025-11-05T04:47:55.182647867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2f5lc,Uid:7bed5cf2-5860-413b-af87-705c830d95a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"35b036bcc7285bb0c2cfbe3dc51d346561999520a24231d678b88dbc4bcbae28\""
Nov 5 04:47:55.183569 kubelet[2811]: E1105 04:47:55.183540 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:47:55.189684 containerd[1641]: time="2025-11-05T04:47:55.189647375Z" level=info msg="CreateContainer within sandbox \"35b036bcc7285bb0c2cfbe3dc51d346561999520a24231d678b88dbc4bcbae28\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 5 04:47:55.202273 containerd[1641]: time="2025-11-05T04:47:55.202221760Z" level=info msg="Container 285c4b8adf62249db880f61e3d91a44714b40c7594367721c15651fb2d3749ca: CDI devices from CRI Config.CDIDevices: []"
Nov 5 04:47:55.211041 containerd[1641]: time="2025-11-05T04:47:55.210944853Z" level=info msg="CreateContainer within sandbox \"35b036bcc7285bb0c2cfbe3dc51d346561999520a24231d678b88dbc4bcbae28\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"285c4b8adf62249db880f61e3d91a44714b40c7594367721c15651fb2d3749ca\""
Nov 5 04:47:55.211671 containerd[1641]: time="2025-11-05T04:47:55.211627397Z" level=info msg="StartContainer for \"285c4b8adf62249db880f61e3d91a44714b40c7594367721c15651fb2d3749ca\""
Nov 5 04:47:55.213369 containerd[1641]: time="2025-11-05T04:47:55.213334564Z" level=info msg="connecting to shim 285c4b8adf62249db880f61e3d91a44714b40c7594367721c15651fb2d3749ca" address="unix:///run/containerd/s/c2cafac74129a1a22c8fea13a54c379713edb2302edd54164afc9032a464bc8c" protocol=ttrpc version=3
Nov 5 04:47:55.237218 systemd[1]: Started cri-containerd-285c4b8adf62249db880f61e3d91a44714b40c7594367721c15651fb2d3749ca.scope - libcontainer container 285c4b8adf62249db880f61e3d91a44714b40c7594367721c15651fb2d3749ca.
Nov 5 04:47:55.296998 containerd[1641]: time="2025-11-05T04:47:55.296923204Z" level=info msg="StartContainer for \"285c4b8adf62249db880f61e3d91a44714b40c7594367721c15651fb2d3749ca\" returns successfully"
Nov 5 04:47:55.558906 kubelet[2811]: E1105 04:47:55.558854 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:47:55.577859 kubelet[2811]: I1105 04:47:55.577766 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2f5lc" podStartSLOduration=1.577745176 podStartE2EDuration="1.577745176s" podCreationTimestamp="2025-11-05 04:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 04:47:55.577716161 +0000 UTC m=+8.156931405" watchObservedRunningTime="2025-11-05 04:47:55.577745176 +0000 UTC m=+8.156960380"
Nov 5 04:47:56.078050 kubelet[2811]: E1105 04:47:56.078015 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:47:56.441373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2248269746.mount: Deactivated successfully.
Nov 5 04:47:56.561875 kubelet[2811]: E1105 04:47:56.561828 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:47:56.603432 kubelet[2811]: E1105 04:47:56.603381 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:47:57.565066 kubelet[2811]: E1105 04:47:57.564856 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:47:57.565066 kubelet[2811]: E1105 04:47:57.565033 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:47:58.082929 containerd[1641]: time="2025-11-05T04:47:58.082866125Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 04:47:58.084009 containerd[1641]: time="2025-11-05T04:47:58.083943453Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205"
Nov 5 04:47:58.084958 containerd[1641]: time="2025-11-05T04:47:58.084914089Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 04:47:58.087144 containerd[1641]: time="2025-11-05T04:47:58.087100576Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 04:47:58.089993 containerd[1641]: time="2025-11-05T04:47:58.088505174Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.978857346s"
Nov 5 04:47:58.089993 containerd[1641]: time="2025-11-05T04:47:58.088537114Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Nov 5 04:47:58.095834 containerd[1641]: time="2025-11-05T04:47:58.095779446Z" level=info msg="CreateContainer within sandbox \"f8ff204bd6b7bc613719353ff308fb98a7f9134580866d16916ebe697be7ec04\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 5 04:47:58.105035 containerd[1641]: time="2025-11-05T04:47:58.104959614Z" level=info msg="Container cf54f06a9f28c371979bc437e7629b1c77b6fb5f0c55b5661b4c24937f000435: CDI devices from CRI Config.CDIDevices: []"
Nov 5 04:47:58.111624 containerd[1641]: time="2025-11-05T04:47:58.111576353Z" level=info msg="CreateContainer within sandbox \"f8ff204bd6b7bc613719353ff308fb98a7f9134580866d16916ebe697be7ec04\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cf54f06a9f28c371979bc437e7629b1c77b6fb5f0c55b5661b4c24937f000435\""
Nov 5 04:47:58.112033 containerd[1641]: time="2025-11-05T04:47:58.111991809Z" level=info msg="StartContainer for \"cf54f06a9f28c371979bc437e7629b1c77b6fb5f0c55b5661b4c24937f000435\""
Nov 5 04:47:58.112845 containerd[1641]: time="2025-11-05T04:47:58.112823773Z" level=info msg="connecting to shim cf54f06a9f28c371979bc437e7629b1c77b6fb5f0c55b5661b4c24937f000435" address="unix:///run/containerd/s/12032e7e39fea18ae3aa846ab5f9802cc05ca6c6e9e5603dfc5f5b9be8a78314" protocol=ttrpc version=3
Nov 5 04:47:58.136114 systemd[1]: Started cri-containerd-cf54f06a9f28c371979bc437e7629b1c77b6fb5f0c55b5661b4c24937f000435.scope - libcontainer container cf54f06a9f28c371979bc437e7629b1c77b6fb5f0c55b5661b4c24937f000435.
Nov 5 04:47:58.169424 containerd[1641]: time="2025-11-05T04:47:58.169381126Z" level=info msg="StartContainer for \"cf54f06a9f28c371979bc437e7629b1c77b6fb5f0c55b5661b4c24937f000435\" returns successfully"
Nov 5 04:47:58.568154 kubelet[2811]: E1105 04:47:58.568111 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:47:58.577179 kubelet[2811]: I1105 04:47:58.577078 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-hhswp" podStartSLOduration=1.5952712340000001 podStartE2EDuration="4.577057814s" podCreationTimestamp="2025-11-05 04:47:54 +0000 UTC" firstStartedPulling="2025-11-05 04:47:55.109088257 +0000 UTC m=+7.688303471" lastFinishedPulling="2025-11-05 04:47:58.090874817 +0000 UTC m=+10.670090051" observedRunningTime="2025-11-05 04:47:58.576660222 +0000 UTC m=+11.155875436" watchObservedRunningTime="2025-11-05 04:47:58.577057814 +0000 UTC m=+11.156273028"
Nov 5 04:48:03.607407 sudo[1857]: pam_unix(sudo:session): session closed for user root
Nov 5 04:48:03.609619 sshd[1856]: Connection closed by 10.0.0.1 port 59654
Nov 5 04:48:03.612764 sshd-session[1853]: pam_unix(sshd:session): session closed for user core
Nov 5 04:48:03.617287 systemd-logind[1609]: Session 9 logged out. Waiting for processes to exit.
Nov 5 04:48:03.619757 systemd[1]: sshd@8-10.0.0.41:22-10.0.0.1:59654.service: Deactivated successfully.
Nov 5 04:48:03.623263 systemd[1]: session-9.scope: Deactivated successfully.
Nov 5 04:48:03.623650 systemd[1]: session-9.scope: Consumed 6.616s CPU time, 216.7M memory peak.
Nov 5 04:48:03.627092 systemd-logind[1609]: Removed session 9.
Nov 5 04:48:07.800893 systemd[1]: Created slice kubepods-besteffort-pod9435b58f_e202_40f6_9372_09a46f626b41.slice - libcontainer container kubepods-besteffort-pod9435b58f_e202_40f6_9372_09a46f626b41.slice.
Nov 5 04:48:07.822414 kubelet[2811]: I1105 04:48:07.822361 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqvgw\" (UniqueName: \"kubernetes.io/projected/9435b58f-e202-40f6-9372-09a46f626b41-kube-api-access-hqvgw\") pod \"calico-typha-7f645b8d69-kxrtl\" (UID: \"9435b58f-e202-40f6-9372-09a46f626b41\") " pod="calico-system/calico-typha-7f645b8d69-kxrtl"
Nov 5 04:48:07.822414 kubelet[2811]: I1105 04:48:07.822404 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9435b58f-e202-40f6-9372-09a46f626b41-tigera-ca-bundle\") pod \"calico-typha-7f645b8d69-kxrtl\" (UID: \"9435b58f-e202-40f6-9372-09a46f626b41\") " pod="calico-system/calico-typha-7f645b8d69-kxrtl"
Nov 5 04:48:07.822414 kubelet[2811]: I1105 04:48:07.822422 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9435b58f-e202-40f6-9372-09a46f626b41-typha-certs\") pod \"calico-typha-7f645b8d69-kxrtl\" (UID: \"9435b58f-e202-40f6-9372-09a46f626b41\") " pod="calico-system/calico-typha-7f645b8d69-kxrtl"
Nov 5 04:48:07.977288 systemd[1]: Created slice kubepods-besteffort-podd413ebc6_2f4d_4cb3_9ac5_9d0bf1b018af.slice - libcontainer container kubepods-besteffort-podd413ebc6_2f4d_4cb3_9ac5_9d0bf1b018af.slice.
Nov 5 04:48:08.023611 kubelet[2811]: I1105 04:48:08.023553 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af-cni-bin-dir\") pod \"calico-node-d5252\" (UID: \"d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af\") " pod="calico-system/calico-node-d5252"
Nov 5 04:48:08.023611 kubelet[2811]: I1105 04:48:08.023587 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af-node-certs\") pod \"calico-node-d5252\" (UID: \"d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af\") " pod="calico-system/calico-node-d5252"
Nov 5 04:48:08.023611 kubelet[2811]: I1105 04:48:08.023602 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af-cni-log-dir\") pod \"calico-node-d5252\" (UID: \"d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af\") " pod="calico-system/calico-node-d5252"
Nov 5 04:48:08.023611 kubelet[2811]: I1105 04:48:08.023618 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af-policysync\") pod \"calico-node-d5252\" (UID: \"d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af\") " pod="calico-system/calico-node-d5252"
Nov 5 04:48:08.023815 kubelet[2811]: I1105 04:48:08.023695 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af-var-run-calico\") pod \"calico-node-d5252\" (UID: \"d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af\") " pod="calico-system/calico-node-d5252"
Nov 5 04:48:08.023815 kubelet[2811]: I1105 04:48:08.023751 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af-lib-modules\") pod \"calico-node-d5252\" (UID: \"d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af\") " pod="calico-system/calico-node-d5252"
Nov 5 04:48:08.023932 kubelet[2811]: I1105 04:48:08.023813 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af-flexvol-driver-host\") pod \"calico-node-d5252\" (UID: \"d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af\") " pod="calico-system/calico-node-d5252"
Nov 5 04:48:08.023932 kubelet[2811]: I1105 04:48:08.023867 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af-cni-net-dir\") pod \"calico-node-d5252\" (UID: \"d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af\") " pod="calico-system/calico-node-d5252"
Nov 5 04:48:08.023932 kubelet[2811]: I1105 04:48:08.023886 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af-xtables-lock\") pod \"calico-node-d5252\" (UID: \"d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af\") " pod="calico-system/calico-node-d5252"
Nov 5 04:48:08.023932 kubelet[2811]: I1105 04:48:08.023903 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztzrd\" (UniqueName: \"kubernetes.io/projected/d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af-kube-api-access-ztzrd\") pod \"calico-node-d5252\" (UID: \"d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af\") " pod="calico-system/calico-node-d5252"
Nov 5 04:48:08.023932 kubelet[2811]: I1105 04:48:08.023925 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af-tigera-ca-bundle\") pod \"calico-node-d5252\" (UID: \"d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af\") " pod="calico-system/calico-node-d5252"
Nov 5 04:48:08.024088 kubelet[2811]: I1105 04:48:08.023943 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af-var-lib-calico\") pod \"calico-node-d5252\" (UID: \"d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af\") " pod="calico-system/calico-node-d5252"
Nov 5 04:48:08.104192 kubelet[2811]: E1105 04:48:08.103953 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:48:08.105037 containerd[1641]: time="2025-11-05T04:48:08.104986648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f645b8d69-kxrtl,Uid:9435b58f-e202-40f6-9372-09a46f626b41,Namespace:calico-system,Attempt:0,}"
Nov 5 04:48:08.134100 kubelet[2811]: E1105 04:48:08.134056 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 04:48:08.134100 kubelet[2811]: W1105 04:48:08.134077 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 04:48:08.134263 kubelet[2811]: E1105 04:48:08.134110 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 04:48:08.134610 containerd[1641]: time="2025-11-05T04:48:08.134570836Z" level=info msg="connecting to shim bb1b8903ed1a7699529a403d30c8bbb86412d0785b6495cb51a0d8dbb5ea8682" address="unix:///run/containerd/s/f93c6dcdd25e9a06d4a122ae998850be569b5907afead4c1b3777c3fe7d6289c" namespace=k8s.io protocol=ttrpc version=3
Nov 5 04:48:08.144741 kubelet[2811]: E1105 04:48:08.143436 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 04:48:08.144741 kubelet[2811]: W1105 04:48:08.143457 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 04:48:08.144741 kubelet[2811]: E1105 04:48:08.143691 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 04:48:08.163168 systemd[1]: Started cri-containerd-bb1b8903ed1a7699529a403d30c8bbb86412d0785b6495cb51a0d8dbb5ea8682.scope - libcontainer container bb1b8903ed1a7699529a403d30c8bbb86412d0785b6495cb51a0d8dbb5ea8682.
Nov 5 04:48:08.173115 kubelet[2811]: E1105 04:48:08.172820 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dkcnl" podUID="84d99f8c-4e0f-4dac-8f92-d3c8b82ac971"
Nov 5 04:48:08.216685 kubelet[2811]: E1105 04:48:08.216644 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 04:48:08.216685 kubelet[2811]: W1105 04:48:08.216668 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 04:48:08.216685 kubelet[2811]: E1105 04:48:08.216692 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 04:48:08.217037 kubelet[2811]: E1105 04:48:08.217008 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 04:48:08.217067 kubelet[2811]: W1105 04:48:08.217050 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 04:48:08.217067 kubelet[2811]: E1105 04:48:08.217062 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 04:48:08.217300 kubelet[2811]: E1105 04:48:08.217282 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 04:48:08.217300 kubelet[2811]: W1105 04:48:08.217294 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 04:48:08.217350 kubelet[2811]: E1105 04:48:08.217306 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 04:48:08.217646 kubelet[2811]: E1105 04:48:08.217622 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 04:48:08.217646 kubelet[2811]: W1105 04:48:08.217635 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 04:48:08.217646 kubelet[2811]: E1105 04:48:08.217645 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 04:48:08.218027 kubelet[2811]: E1105 04:48:08.217965 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 04:48:08.218027 kubelet[2811]: W1105 04:48:08.218022 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 04:48:08.218092 kubelet[2811]: E1105 04:48:08.218034 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 04:48:08.218263 kubelet[2811]: E1105 04:48:08.218238 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 04:48:08.218263 kubelet[2811]: W1105 04:48:08.218252 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 04:48:08.218263 kubelet[2811]: E1105 04:48:08.218261 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 04:48:08.218494 kubelet[2811]: E1105 04:48:08.218479 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 04:48:08.218494 kubelet[2811]: W1105 04:48:08.218490 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 04:48:08.218549 kubelet[2811]: E1105 04:48:08.218498 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 04:48:08.218685 kubelet[2811]: E1105 04:48:08.218668 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 04:48:08.218685 kubelet[2811]: W1105 04:48:08.218681 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 04:48:08.218734 kubelet[2811]: E1105 04:48:08.218690 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 04:48:08.218895 kubelet[2811]: E1105 04:48:08.218879 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 04:48:08.218895 kubelet[2811]: W1105 04:48:08.218891 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 04:48:08.218942 kubelet[2811]: E1105 04:48:08.218900 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 04:48:08.219123 kubelet[2811]: E1105 04:48:08.219106 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 04:48:08.219123 kubelet[2811]: W1105 04:48:08.219118 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 04:48:08.219205 kubelet[2811]: E1105 04:48:08.219128 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 04:48:08.219640 kubelet[2811]: E1105 04:48:08.219416 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 04:48:08.219640 kubelet[2811]: W1105 04:48:08.219431 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 04:48:08.219640 kubelet[2811]: E1105 04:48:08.219452 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 04:48:08.219833 kubelet[2811]: E1105 04:48:08.219809 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 04:48:08.219833 kubelet[2811]: W1105 04:48:08.219828 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 04:48:08.219918 kubelet[2811]: E1105 04:48:08.219839 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 04:48:08.220217 kubelet[2811]: E1105 04:48:08.220199 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 04:48:08.220217 kubelet[2811]: W1105 04:48:08.220212 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 04:48:08.220301 kubelet[2811]: E1105 04:48:08.220222 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 04:48:08.220851 kubelet[2811]: E1105 04:48:08.220831 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 04:48:08.220851 kubelet[2811]: W1105 04:48:08.220846 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 04:48:08.220926 kubelet[2811]: E1105 04:48:08.220859 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 04:48:08.221346 kubelet[2811]: E1105 04:48:08.221312 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 04:48:08.221346 kubelet[2811]: W1105 04:48:08.221326 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 04:48:08.221346 kubelet[2811]: E1105 04:48:08.221337 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 04:48:08.223072 kubelet[2811]: E1105 04:48:08.222997 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 04:48:08.223072 kubelet[2811]: W1105 04:48:08.223022 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 04:48:08.223072 kubelet[2811]: E1105 04:48:08.223032 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 04:48:08.223618 kubelet[2811]: E1105 04:48:08.223570 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 04:48:08.223618 kubelet[2811]: W1105 04:48:08.223581 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 04:48:08.223618 kubelet[2811]: E1105 04:48:08.223614 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 04:48:08.224245 kubelet[2811]: E1105 04:48:08.224227 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 04:48:08.224245 kubelet[2811]: W1105 04:48:08.224241 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 04:48:08.224302 kubelet[2811]: E1105 04:48:08.224252 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 04:48:08.224503 kubelet[2811]: E1105 04:48:08.224472 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 04:48:08.224503 kubelet[2811]: W1105 04:48:08.224499 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 04:48:08.224572 kubelet[2811]: E1105 04:48:08.224527 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 04:48:08.224765 kubelet[2811]: E1105 04:48:08.224750 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 04:48:08.224765 kubelet[2811]: W1105 04:48:08.224760 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 04:48:08.224835 kubelet[2811]: E1105 04:48:08.224770 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 5 04:48:08.224867 containerd[1641]: time="2025-11-05T04:48:08.224804972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f645b8d69-kxrtl,Uid:9435b58f-e202-40f6-9372-09a46f626b41,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb1b8903ed1a7699529a403d30c8bbb86412d0785b6495cb51a0d8dbb5ea8682\"" Nov 5 04:48:08.226438 kubelet[2811]: E1105 04:48:08.226405 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.226438 kubelet[2811]: W1105 04:48:08.226420 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.226438 kubelet[2811]: E1105 04:48:08.226430 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:08.226438 kubelet[2811]: I1105 04:48:08.226457 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwjt8\" (UniqueName: \"kubernetes.io/projected/84d99f8c-4e0f-4dac-8f92-d3c8b82ac971-kube-api-access-zwjt8\") pod \"csi-node-driver-dkcnl\" (UID: \"84d99f8c-4e0f-4dac-8f92-d3c8b82ac971\") " pod="calico-system/csi-node-driver-dkcnl" Nov 5 04:48:08.226673 kubelet[2811]: E1105 04:48:08.226656 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.226673 kubelet[2811]: W1105 04:48:08.226667 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.226719 kubelet[2811]: E1105 04:48:08.226677 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:08.226719 kubelet[2811]: I1105 04:48:08.226698 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/84d99f8c-4e0f-4dac-8f92-d3c8b82ac971-socket-dir\") pod \"csi-node-driver-dkcnl\" (UID: \"84d99f8c-4e0f-4dac-8f92-d3c8b82ac971\") " pod="calico-system/csi-node-driver-dkcnl" Nov 5 04:48:08.227001 kubelet[2811]: E1105 04:48:08.226962 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.227001 kubelet[2811]: W1105 04:48:08.226998 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.227080 kubelet[2811]: E1105 04:48:08.227029 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:08.227274 kubelet[2811]: E1105 04:48:08.227246 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.227274 kubelet[2811]: W1105 04:48:08.227258 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.227274 kubelet[2811]: E1105 04:48:08.227270 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:08.227523 kubelet[2811]: E1105 04:48:08.227503 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.227523 kubelet[2811]: W1105 04:48:08.227521 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.227568 kubelet[2811]: E1105 04:48:08.227533 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:08.227568 kubelet[2811]: I1105 04:48:08.227558 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/84d99f8c-4e0f-4dac-8f92-d3c8b82ac971-varrun\") pod \"csi-node-driver-dkcnl\" (UID: \"84d99f8c-4e0f-4dac-8f92-d3c8b82ac971\") " pod="calico-system/csi-node-driver-dkcnl" Nov 5 04:48:08.227849 kubelet[2811]: E1105 04:48:08.227820 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.227888 kubelet[2811]: W1105 04:48:08.227847 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.227888 kubelet[2811]: E1105 04:48:08.227876 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:08.228108 kubelet[2811]: E1105 04:48:08.228091 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.228108 kubelet[2811]: W1105 04:48:08.228102 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.228160 kubelet[2811]: E1105 04:48:08.228111 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:08.228355 kubelet[2811]: E1105 04:48:08.228335 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.228355 kubelet[2811]: W1105 04:48:08.228347 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.228355 kubelet[2811]: E1105 04:48:08.228355 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:08.228461 kubelet[2811]: I1105 04:48:08.228417 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84d99f8c-4e0f-4dac-8f92-d3c8b82ac971-kubelet-dir\") pod \"csi-node-driver-dkcnl\" (UID: \"84d99f8c-4e0f-4dac-8f92-d3c8b82ac971\") " pod="calico-system/csi-node-driver-dkcnl" Nov 5 04:48:08.228615 kubelet[2811]: E1105 04:48:08.228588 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:48:08.228773 kubelet[2811]: E1105 04:48:08.228749 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.228773 kubelet[2811]: W1105 04:48:08.228765 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.228892 kubelet[2811]: E1105 04:48:08.228777 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:08.229279 kubelet[2811]: E1105 04:48:08.229232 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.229279 kubelet[2811]: W1105 04:48:08.229245 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.229279 kubelet[2811]: E1105 04:48:08.229255 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:08.229489 kubelet[2811]: E1105 04:48:08.229440 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.229489 kubelet[2811]: W1105 04:48:08.229448 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.229489 kubelet[2811]: E1105 04:48:08.229456 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:08.229489 kubelet[2811]: I1105 04:48:08.229483 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/84d99f8c-4e0f-4dac-8f92-d3c8b82ac971-registration-dir\") pod \"csi-node-driver-dkcnl\" (UID: \"84d99f8c-4e0f-4dac-8f92-d3c8b82ac971\") " pod="calico-system/csi-node-driver-dkcnl" Nov 5 04:48:08.229799 containerd[1641]: time="2025-11-05T04:48:08.229442781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 5 04:48:08.229837 kubelet[2811]: E1105 04:48:08.229682 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.229837 kubelet[2811]: W1105 04:48:08.229701 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.229837 kubelet[2811]: E1105 04:48:08.229710 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:08.230094 kubelet[2811]: E1105 04:48:08.230065 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.230094 kubelet[2811]: W1105 04:48:08.230086 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.230094 kubelet[2811]: E1105 04:48:08.230102 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:08.231161 kubelet[2811]: E1105 04:48:08.231138 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.231161 kubelet[2811]: W1105 04:48:08.231154 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.231276 kubelet[2811]: E1105 04:48:08.231166 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:08.231413 kubelet[2811]: E1105 04:48:08.231391 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.231413 kubelet[2811]: W1105 04:48:08.231405 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.231413 kubelet[2811]: E1105 04:48:08.231414 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:08.281475 kubelet[2811]: E1105 04:48:08.281431 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:48:08.282118 containerd[1641]: time="2025-11-05T04:48:08.282057840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d5252,Uid:d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af,Namespace:calico-system,Attempt:0,}" Nov 5 04:48:08.322253 containerd[1641]: time="2025-11-05T04:48:08.322196914Z" level=info msg="connecting to shim 081b9f7d61269f7a28601a3be10d609bb8a8fd4ed11dd930eab316db6177e8fa" address="unix:///run/containerd/s/f368fde4b190ef049797450f359a60bf8e0f1c5781a694185517c1b6daa3e4f1" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:48:08.330625 kubelet[2811]: E1105 04:48:08.330564 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.330625 kubelet[2811]: W1105 04:48:08.330613 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.330625 kubelet[2811]: E1105 
04:48:08.330632 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:08.331033 kubelet[2811]: E1105 04:48:08.331007 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.331033 kubelet[2811]: W1105 04:48:08.331028 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.331135 kubelet[2811]: E1105 04:48:08.331039 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:08.331938 kubelet[2811]: E1105 04:48:08.331871 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.331938 kubelet[2811]: W1105 04:48:08.331885 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.331938 kubelet[2811]: E1105 04:48:08.331895 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:08.332212 kubelet[2811]: E1105 04:48:08.332174 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.332274 kubelet[2811]: W1105 04:48:08.332209 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.332274 kubelet[2811]: E1105 04:48:08.332255 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:08.332470 kubelet[2811]: E1105 04:48:08.332455 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.332470 kubelet[2811]: W1105 04:48:08.332465 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.332572 kubelet[2811]: E1105 04:48:08.332475 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:08.333141 kubelet[2811]: E1105 04:48:08.333122 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.333141 kubelet[2811]: W1105 04:48:08.333135 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.333141 kubelet[2811]: E1105 04:48:08.333146 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:08.333478 kubelet[2811]: E1105 04:48:08.333462 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.333478 kubelet[2811]: W1105 04:48:08.333476 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.333478 kubelet[2811]: E1105 04:48:08.333487 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:08.333877 kubelet[2811]: E1105 04:48:08.333862 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.333877 kubelet[2811]: W1105 04:48:08.333873 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.333981 kubelet[2811]: E1105 04:48:08.333883 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:08.335292 kubelet[2811]: E1105 04:48:08.335275 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.335292 kubelet[2811]: W1105 04:48:08.335288 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.335379 kubelet[2811]: E1105 04:48:08.335298 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:08.335517 kubelet[2811]: E1105 04:48:08.335503 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.335517 kubelet[2811]: W1105 04:48:08.335514 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.335601 kubelet[2811]: E1105 04:48:08.335524 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:08.335802 kubelet[2811]: E1105 04:48:08.335775 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.335802 kubelet[2811]: W1105 04:48:08.335786 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.335802 kubelet[2811]: E1105 04:48:08.335796 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:08.336067 kubelet[2811]: E1105 04:48:08.336040 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.336067 kubelet[2811]: W1105 04:48:08.336052 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.336067 kubelet[2811]: E1105 04:48:08.336062 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:08.336339 kubelet[2811]: E1105 04:48:08.336327 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.336339 kubelet[2811]: W1105 04:48:08.336339 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.336412 kubelet[2811]: E1105 04:48:08.336349 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:08.336586 kubelet[2811]: E1105 04:48:08.336571 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.336586 kubelet[2811]: W1105 04:48:08.336583 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.336659 kubelet[2811]: E1105 04:48:08.336592 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:08.336810 kubelet[2811]: E1105 04:48:08.336795 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.336810 kubelet[2811]: W1105 04:48:08.336806 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.336890 kubelet[2811]: E1105 04:48:08.336816 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:08.337114 kubelet[2811]: E1105 04:48:08.337094 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.337173 kubelet[2811]: W1105 04:48:08.337106 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.337259 kubelet[2811]: E1105 04:48:08.337243 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:08.338245 kubelet[2811]: E1105 04:48:08.338232 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.338245 kubelet[2811]: W1105 04:48:08.338244 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.338332 kubelet[2811]: E1105 04:48:08.338255 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:08.340125 kubelet[2811]: E1105 04:48:08.340087 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.340202 kubelet[2811]: W1105 04:48:08.340123 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.340202 kubelet[2811]: E1105 04:48:08.340154 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:08.340458 kubelet[2811]: E1105 04:48:08.340429 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.340458 kubelet[2811]: W1105 04:48:08.340445 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.340458 kubelet[2811]: E1105 04:48:08.340454 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:08.340723 kubelet[2811]: E1105 04:48:08.340706 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.340804 kubelet[2811]: W1105 04:48:08.340787 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.340905 kubelet[2811]: E1105 04:48:08.340890 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:08.341369 kubelet[2811]: E1105 04:48:08.341209 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.341369 kubelet[2811]: W1105 04:48:08.341222 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.341369 kubelet[2811]: E1105 04:48:08.341234 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:08.341605 kubelet[2811]: E1105 04:48:08.341586 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.341711 kubelet[2811]: W1105 04:48:08.341688 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.341818 kubelet[2811]: E1105 04:48:08.341797 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:08.342248 kubelet[2811]: E1105 04:48:08.342232 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.342323 kubelet[2811]: W1105 04:48:08.342309 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.342403 kubelet[2811]: E1105 04:48:08.342388 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:08.342879 kubelet[2811]: E1105 04:48:08.342838 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.342879 kubelet[2811]: W1105 04:48:08.342853 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.342879 kubelet[2811]: E1105 04:48:08.342865 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:08.343745 kubelet[2811]: E1105 04:48:08.343730 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.343819 kubelet[2811]: W1105 04:48:08.343805 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.343890 kubelet[2811]: E1105 04:48:08.343878 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:08.350999 kubelet[2811]: E1105 04:48:08.350967 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:08.351141 kubelet[2811]: W1105 04:48:08.351091 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:08.351141 kubelet[2811]: E1105 04:48:08.351114 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:08.352141 systemd[1]: Started cri-containerd-081b9f7d61269f7a28601a3be10d609bb8a8fd4ed11dd930eab316db6177e8fa.scope - libcontainer container 081b9f7d61269f7a28601a3be10d609bb8a8fd4ed11dd930eab316db6177e8fa. Nov 5 04:48:08.381351 containerd[1641]: time="2025-11-05T04:48:08.381220768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d5252,Uid:d413ebc6-2f4d-4cb3-9ac5-9d0bf1b018af,Namespace:calico-system,Attempt:0,} returns sandbox id \"081b9f7d61269f7a28601a3be10d609bb8a8fd4ed11dd930eab316db6177e8fa\"" Nov 5 04:48:08.385217 kubelet[2811]: E1105 04:48:08.385053 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:48:09.715296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount495969350.mount: Deactivated successfully. 
Nov 5 04:48:10.156852 containerd[1641]: time="2025-11-05T04:48:10.156788043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:48:10.157652 containerd[1641]: time="2025-11-05T04:48:10.157608117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Nov 5 04:48:10.158622 containerd[1641]: time="2025-11-05T04:48:10.158587421Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:48:10.160791 containerd[1641]: time="2025-11-05T04:48:10.160752558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:48:10.161508 containerd[1641]: time="2025-11-05T04:48:10.161474207Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.931991251s" Nov 5 04:48:10.161561 containerd[1641]: time="2025-11-05T04:48:10.161507810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 5 04:48:10.165348 containerd[1641]: time="2025-11-05T04:48:10.165270906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 5 04:48:10.178684 containerd[1641]: time="2025-11-05T04:48:10.178639348Z" level=info msg="CreateContainer within sandbox \"bb1b8903ed1a7699529a403d30c8bbb86412d0785b6495cb51a0d8dbb5ea8682\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 5 04:48:10.186135 containerd[1641]: time="2025-11-05T04:48:10.186079869Z" level=info msg="Container ae9c2156d5ad1190b47a8cb149506d6829333504f57c06880ea1a575e2ce9e9f: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:48:10.194405 containerd[1641]: time="2025-11-05T04:48:10.194345944Z" level=info msg="CreateContainer within sandbox \"bb1b8903ed1a7699529a403d30c8bbb86412d0785b6495cb51a0d8dbb5ea8682\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ae9c2156d5ad1190b47a8cb149506d6829333504f57c06880ea1a575e2ce9e9f\"" Nov 5 04:48:10.194945 containerd[1641]: time="2025-11-05T04:48:10.194916028Z" level=info msg="StartContainer for \"ae9c2156d5ad1190b47a8cb149506d6829333504f57c06880ea1a575e2ce9e9f\"" Nov 5 04:48:10.196020 containerd[1641]: time="2025-11-05T04:48:10.195959382Z" level=info msg="connecting to shim ae9c2156d5ad1190b47a8cb149506d6829333504f57c06880ea1a575e2ce9e9f" address="unix:///run/containerd/s/f93c6dcdd25e9a06d4a122ae998850be569b5907afead4c1b3777c3fe7d6289c" protocol=ttrpc version=3 Nov 5 04:48:10.222159 systemd[1]: Started cri-containerd-ae9c2156d5ad1190b47a8cb149506d6829333504f57c06880ea1a575e2ce9e9f.scope - libcontainer container ae9c2156d5ad1190b47a8cb149506d6829333504f57c06880ea1a575e2ce9e9f. 
Nov 5 04:48:10.298486 containerd[1641]: time="2025-11-05T04:48:10.298424303Z" level=info msg="StartContainer for \"ae9c2156d5ad1190b47a8cb149506d6829333504f57c06880ea1a575e2ce9e9f\" returns successfully" Nov 5 04:48:10.527780 kubelet[2811]: E1105 04:48:10.527730 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dkcnl" podUID="84d99f8c-4e0f-4dac-8f92-d3c8b82ac971" Nov 5 04:48:10.597533 kubelet[2811]: E1105 04:48:10.597487 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:48:10.643373 kubelet[2811]: E1105 04:48:10.643316 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.643373 kubelet[2811]: W1105 04:48:10.643350 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.643373 kubelet[2811]: E1105 04:48:10.643382 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:10.643661 kubelet[2811]: E1105 04:48:10.643636 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.643661 kubelet[2811]: W1105 04:48:10.643649 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.643661 kubelet[2811]: E1105 04:48:10.643658 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:10.643833 kubelet[2811]: E1105 04:48:10.643819 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.643833 kubelet[2811]: W1105 04:48:10.643829 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.643886 kubelet[2811]: E1105 04:48:10.643838 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:10.644134 kubelet[2811]: E1105 04:48:10.644117 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.644134 kubelet[2811]: W1105 04:48:10.644129 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.644203 kubelet[2811]: E1105 04:48:10.644139 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:10.644336 kubelet[2811]: E1105 04:48:10.644321 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.644336 kubelet[2811]: W1105 04:48:10.644331 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.644385 kubelet[2811]: E1105 04:48:10.644339 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:10.644509 kubelet[2811]: E1105 04:48:10.644495 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.644509 kubelet[2811]: W1105 04:48:10.644506 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.644562 kubelet[2811]: E1105 04:48:10.644514 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:10.644690 kubelet[2811]: E1105 04:48:10.644676 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.644690 kubelet[2811]: W1105 04:48:10.644686 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.644756 kubelet[2811]: E1105 04:48:10.644695 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:10.644869 kubelet[2811]: E1105 04:48:10.644855 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.644869 kubelet[2811]: W1105 04:48:10.644865 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.644924 kubelet[2811]: E1105 04:48:10.644875 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:10.645086 kubelet[2811]: E1105 04:48:10.645071 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.645086 kubelet[2811]: W1105 04:48:10.645081 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.645141 kubelet[2811]: E1105 04:48:10.645090 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:10.645269 kubelet[2811]: E1105 04:48:10.645255 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.645269 kubelet[2811]: W1105 04:48:10.645265 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.645320 kubelet[2811]: E1105 04:48:10.645273 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:10.645444 kubelet[2811]: E1105 04:48:10.645430 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.645444 kubelet[2811]: W1105 04:48:10.645440 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.645493 kubelet[2811]: E1105 04:48:10.645448 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:10.645615 kubelet[2811]: E1105 04:48:10.645601 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.645615 kubelet[2811]: W1105 04:48:10.645611 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.645669 kubelet[2811]: E1105 04:48:10.645619 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:10.645797 kubelet[2811]: E1105 04:48:10.645784 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.645797 kubelet[2811]: W1105 04:48:10.645794 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.645843 kubelet[2811]: E1105 04:48:10.645801 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:10.646002 kubelet[2811]: E1105 04:48:10.645958 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.646002 kubelet[2811]: W1105 04:48:10.645989 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.646002 kubelet[2811]: E1105 04:48:10.645997 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:10.646180 kubelet[2811]: E1105 04:48:10.646165 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.646180 kubelet[2811]: W1105 04:48:10.646176 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.646236 kubelet[2811]: E1105 04:48:10.646184 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:10.651591 kubelet[2811]: E1105 04:48:10.651560 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.651591 kubelet[2811]: W1105 04:48:10.651583 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.651715 kubelet[2811]: E1105 04:48:10.651603 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:10.651849 kubelet[2811]: E1105 04:48:10.651824 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.651849 kubelet[2811]: W1105 04:48:10.651839 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.651899 kubelet[2811]: E1105 04:48:10.651850 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:10.652171 kubelet[2811]: E1105 04:48:10.652137 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.652171 kubelet[2811]: W1105 04:48:10.652162 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.652232 kubelet[2811]: E1105 04:48:10.652186 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:10.652389 kubelet[2811]: E1105 04:48:10.652369 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.652389 kubelet[2811]: W1105 04:48:10.652379 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.652436 kubelet[2811]: E1105 04:48:10.652389 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:10.652588 kubelet[2811]: E1105 04:48:10.652573 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.652588 kubelet[2811]: W1105 04:48:10.652583 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.652649 kubelet[2811]: E1105 04:48:10.652591 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:10.652808 kubelet[2811]: E1105 04:48:10.652787 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.652808 kubelet[2811]: W1105 04:48:10.652797 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.652808 kubelet[2811]: E1105 04:48:10.652805 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:10.653123 kubelet[2811]: E1105 04:48:10.653103 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.653123 kubelet[2811]: W1105 04:48:10.653115 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.653123 kubelet[2811]: E1105 04:48:10.653124 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:10.653342 kubelet[2811]: E1105 04:48:10.653324 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.653342 kubelet[2811]: W1105 04:48:10.653334 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.653342 kubelet[2811]: E1105 04:48:10.653343 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:10.653532 kubelet[2811]: E1105 04:48:10.653515 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.653532 kubelet[2811]: W1105 04:48:10.653525 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.653594 kubelet[2811]: E1105 04:48:10.653534 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:10.653716 kubelet[2811]: E1105 04:48:10.653699 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.653716 kubelet[2811]: W1105 04:48:10.653709 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.653766 kubelet[2811]: E1105 04:48:10.653717 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:10.653915 kubelet[2811]: E1105 04:48:10.653898 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.653915 kubelet[2811]: W1105 04:48:10.653909 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.653988 kubelet[2811]: E1105 04:48:10.653917 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:10.654126 kubelet[2811]: E1105 04:48:10.654108 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.654126 kubelet[2811]: W1105 04:48:10.654119 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.654175 kubelet[2811]: E1105 04:48:10.654127 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:10.654344 kubelet[2811]: E1105 04:48:10.654326 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.654344 kubelet[2811]: W1105 04:48:10.654337 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.654395 kubelet[2811]: E1105 04:48:10.654345 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:10.654630 kubelet[2811]: E1105 04:48:10.654603 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.654630 kubelet[2811]: W1105 04:48:10.654615 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.654630 kubelet[2811]: E1105 04:48:10.654625 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:10.654815 kubelet[2811]: E1105 04:48:10.654800 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.654815 kubelet[2811]: W1105 04:48:10.654810 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.654870 kubelet[2811]: E1105 04:48:10.654818 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:10.655043 kubelet[2811]: E1105 04:48:10.655027 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.655043 kubelet[2811]: W1105 04:48:10.655037 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.655098 kubelet[2811]: E1105 04:48:10.655047 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:10.655318 kubelet[2811]: E1105 04:48:10.655298 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.655318 kubelet[2811]: W1105 04:48:10.655310 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.655378 kubelet[2811]: E1105 04:48:10.655319 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:48:10.655526 kubelet[2811]: E1105 04:48:10.655509 2811 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:48:10.655526 kubelet[2811]: W1105 04:48:10.655519 2811 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:48:10.655572 kubelet[2811]: E1105 04:48:10.655527 2811 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:48:11.417999 containerd[1641]: time="2025-11-05T04:48:11.417932418Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:48:11.418772 containerd[1641]: time="2025-11-05T04:48:11.418741802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Nov 5 04:48:11.419930 containerd[1641]: time="2025-11-05T04:48:11.419870436Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:48:11.422711 containerd[1641]: time="2025-11-05T04:48:11.422673062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:48:11.423166 containerd[1641]: time="2025-11-05T04:48:11.423126245Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.257820694s" Nov 5 04:48:11.423166 containerd[1641]: time="2025-11-05T04:48:11.423160400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 5 04:48:11.426696 containerd[1641]: time="2025-11-05T04:48:11.426660138Z" level=info msg="CreateContainer within sandbox \"081b9f7d61269f7a28601a3be10d609bb8a8fd4ed11dd930eab316db6177e8fa\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 5 04:48:11.434909 containerd[1641]: time="2025-11-05T04:48:11.434871016Z" level=info msg="Container b72bbf30ca45bb241aa8af65cf88dca9672441e77030dde8f33300f65aef006d: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:48:11.445308 containerd[1641]: time="2025-11-05T04:48:11.445262597Z" level=info msg="CreateContainer within sandbox \"081b9f7d61269f7a28601a3be10d609bb8a8fd4ed11dd930eab316db6177e8fa\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b72bbf30ca45bb241aa8af65cf88dca9672441e77030dde8f33300f65aef006d\"" Nov 5 04:48:11.446040 containerd[1641]: time="2025-11-05T04:48:11.445893165Z" level=info msg="StartContainer for \"b72bbf30ca45bb241aa8af65cf88dca9672441e77030dde8f33300f65aef006d\"" Nov 5 04:48:11.448219 containerd[1641]: time="2025-11-05T04:48:11.448185179Z" level=info msg="connecting to shim b72bbf30ca45bb241aa8af65cf88dca9672441e77030dde8f33300f65aef006d" address="unix:///run/containerd/s/f368fde4b190ef049797450f359a60bf8e0f1c5781a694185517c1b6daa3e4f1" protocol=ttrpc version=3 Nov 5 04:48:11.475121 systemd[1]: Started cri-containerd-b72bbf30ca45bb241aa8af65cf88dca9672441e77030dde8f33300f65aef006d.scope - libcontainer container b72bbf30ca45bb241aa8af65cf88dca9672441e77030dde8f33300f65aef006d. Nov 5 04:48:11.525047 containerd[1641]: time="2025-11-05T04:48:11.524961392Z" level=info msg="StartContainer for \"b72bbf30ca45bb241aa8af65cf88dca9672441e77030dde8f33300f65aef006d\" returns successfully" Nov 5 04:48:11.537033 systemd[1]: cri-containerd-b72bbf30ca45bb241aa8af65cf88dca9672441e77030dde8f33300f65aef006d.scope: Deactivated successfully. 
Nov 5 04:48:11.539679 containerd[1641]: time="2025-11-05T04:48:11.539631249Z" level=info msg="received exit event container_id:\"b72bbf30ca45bb241aa8af65cf88dca9672441e77030dde8f33300f65aef006d\" id:\"b72bbf30ca45bb241aa8af65cf88dca9672441e77030dde8f33300f65aef006d\" pid:3522 exited_at:{seconds:1762318091 nanos:538845059}" Nov 5 04:48:11.570251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b72bbf30ca45bb241aa8af65cf88dca9672441e77030dde8f33300f65aef006d-rootfs.mount: Deactivated successfully. Nov 5 04:48:11.602054 kubelet[2811]: I1105 04:48:11.601394 2811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 04:48:11.602054 kubelet[2811]: E1105 04:48:11.601741 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:48:11.602054 kubelet[2811]: E1105 04:48:11.601858 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:48:11.763230 kubelet[2811]: I1105 04:48:11.762177 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f645b8d69-kxrtl" podStartSLOduration=2.8261345049999997 podStartE2EDuration="4.762160014s" podCreationTimestamp="2025-11-05 04:48:07 +0000 UTC" firstStartedPulling="2025-11-05 04:48:08.229062403 +0000 UTC m=+20.808277617" lastFinishedPulling="2025-11-05 04:48:10.165087912 +0000 UTC m=+22.744303126" observedRunningTime="2025-11-05 04:48:10.605178301 +0000 UTC m=+23.184393515" watchObservedRunningTime="2025-11-05 04:48:11.762160014 +0000 UTC m=+24.341375228" Nov 5 04:48:12.527965 kubelet[2811]: E1105 04:48:12.527885 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dkcnl" podUID="84d99f8c-4e0f-4dac-8f92-d3c8b82ac971" Nov 5 04:48:12.605792 kubelet[2811]: E1105 04:48:12.605727 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:48:12.606546 containerd[1641]: time="2025-11-05T04:48:12.606504651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 5 04:48:14.528356 kubelet[2811]: E1105 04:48:14.528074 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dkcnl" podUID="84d99f8c-4e0f-4dac-8f92-d3c8b82ac971" Nov 5 04:48:15.279743 containerd[1641]: time="2025-11-05T04:48:15.279634345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:48:15.280528 containerd[1641]: time="2025-11-05T04:48:15.280466701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Nov 5 04:48:15.281613 containerd[1641]: time="2025-11-05T04:48:15.281568203Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:48:15.283847 containerd[1641]: time="2025-11-05T04:48:15.283779211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:48:15.284511 containerd[1641]: time="2025-11-05T04:48:15.284475331Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id 
\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.677925185s" Nov 5 04:48:15.284511 containerd[1641]: time="2025-11-05T04:48:15.284507180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 5 04:48:15.288717 containerd[1641]: time="2025-11-05T04:48:15.288565244Z" level=info msg="CreateContainer within sandbox \"081b9f7d61269f7a28601a3be10d609bb8a8fd4ed11dd930eab316db6177e8fa\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 5 04:48:15.297394 containerd[1641]: time="2025-11-05T04:48:15.297328987Z" level=info msg="Container f0719881d43045c155abad83bc0679a1192ba35bb89e9ce76662bc386938fbf8: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:48:15.307074 containerd[1641]: time="2025-11-05T04:48:15.307035564Z" level=info msg="CreateContainer within sandbox \"081b9f7d61269f7a28601a3be10d609bb8a8fd4ed11dd930eab316db6177e8fa\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f0719881d43045c155abad83bc0679a1192ba35bb89e9ce76662bc386938fbf8\"" Nov 5 04:48:15.307530 containerd[1641]: time="2025-11-05T04:48:15.307364914Z" level=info msg="StartContainer for \"f0719881d43045c155abad83bc0679a1192ba35bb89e9ce76662bc386938fbf8\"" Nov 5 04:48:15.308866 containerd[1641]: time="2025-11-05T04:48:15.308800564Z" level=info msg="connecting to shim f0719881d43045c155abad83bc0679a1192ba35bb89e9ce76662bc386938fbf8" address="unix:///run/containerd/s/f368fde4b190ef049797450f359a60bf8e0f1c5781a694185517c1b6daa3e4f1" protocol=ttrpc version=3 Nov 5 04:48:15.341131 systemd[1]: Started cri-containerd-f0719881d43045c155abad83bc0679a1192ba35bb89e9ce76662bc386938fbf8.scope - libcontainer container 
f0719881d43045c155abad83bc0679a1192ba35bb89e9ce76662bc386938fbf8. Nov 5 04:48:15.740442 containerd[1641]: time="2025-11-05T04:48:15.740404243Z" level=info msg="StartContainer for \"f0719881d43045c155abad83bc0679a1192ba35bb89e9ce76662bc386938fbf8\" returns successfully" Nov 5 04:48:16.527752 kubelet[2811]: E1105 04:48:16.527702 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dkcnl" podUID="84d99f8c-4e0f-4dac-8f92-d3c8b82ac971" Nov 5 04:48:16.570642 systemd[1]: cri-containerd-f0719881d43045c155abad83bc0679a1192ba35bb89e9ce76662bc386938fbf8.scope: Deactivated successfully. Nov 5 04:48:16.571097 systemd[1]: cri-containerd-f0719881d43045c155abad83bc0679a1192ba35bb89e9ce76662bc386938fbf8.scope: Consumed 605ms CPU time, 176M memory peak, 3.3M read from disk, 171.3M written to disk. Nov 5 04:48:16.581338 containerd[1641]: time="2025-11-05T04:48:16.581281562Z" level=info msg="received exit event container_id:\"f0719881d43045c155abad83bc0679a1192ba35bb89e9ce76662bc386938fbf8\" id:\"f0719881d43045c155abad83bc0679a1192ba35bb89e9ce76662bc386938fbf8\" pid:3586 exited_at:{seconds:1762318096 nanos:572382136}" Nov 5 04:48:16.606306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0719881d43045c155abad83bc0679a1192ba35bb89e9ce76662bc386938fbf8-rootfs.mount: Deactivated successfully. Nov 5 04:48:16.652316 kubelet[2811]: I1105 04:48:16.652102 2811 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 5 04:48:16.693798 systemd[1]: Created slice kubepods-besteffort-poddc90951e_eb89_47bd_8fb2_3712d2db3fd5.slice - libcontainer container kubepods-besteffort-poddc90951e_eb89_47bd_8fb2_3712d2db3fd5.slice. 
Nov 5 04:48:16.701404 systemd[1]: Created slice kubepods-burstable-pod24240dbc_96de_4497_be68_42f6bff10c1e.slice - libcontainer container kubepods-burstable-pod24240dbc_96de_4497_be68_42f6bff10c1e.slice. Nov 5 04:48:16.710996 systemd[1]: Created slice kubepods-besteffort-podc8a4b2e2_09ee_4112_a2df_31acb1eaedf9.slice - libcontainer container kubepods-besteffort-podc8a4b2e2_09ee_4112_a2df_31acb1eaedf9.slice. Nov 5 04:48:16.717671 systemd[1]: Created slice kubepods-besteffort-pod63b223a4_aa0e_4b7a_9e9b_ebfedd74f920.slice - libcontainer container kubepods-besteffort-pod63b223a4_aa0e_4b7a_9e9b_ebfedd74f920.slice. Nov 5 04:48:16.724705 systemd[1]: Created slice kubepods-burstable-pod29ed970c_f253_488e_9357_ef5dd319f30f.slice - libcontainer container kubepods-burstable-pod29ed970c_f253_488e_9357_ef5dd319f30f.slice. Nov 5 04:48:16.732400 systemd[1]: Created slice kubepods-besteffort-podc4bb52fc_9b22_49ea_9b48_e96faa9ad94b.slice - libcontainer container kubepods-besteffort-podc4bb52fc_9b22_49ea_9b48_e96faa9ad94b.slice. Nov 5 04:48:16.737867 systemd[1]: Created slice kubepods-besteffort-podd81093e5_51dd_4f5e_ba7d_dad72d581a2a.slice - libcontainer container kubepods-besteffort-podd81093e5_51dd_4f5e_ba7d_dad72d581a2a.slice. 
Nov 5 04:48:16.747740 kubelet[2811]: E1105 04:48:16.747666 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:48:16.750793 containerd[1641]: time="2025-11-05T04:48:16.750731040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 5 04:48:16.796474 kubelet[2811]: I1105 04:48:16.796308 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24240dbc-96de-4497-be68-42f6bff10c1e-config-volume\") pod \"coredns-674b8bbfcf-cbn9v\" (UID: \"24240dbc-96de-4497-be68-42f6bff10c1e\") " pod="kube-system/coredns-674b8bbfcf-cbn9v" Nov 5 04:48:16.796474 kubelet[2811]: I1105 04:48:16.796374 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84pmr\" (UniqueName: \"kubernetes.io/projected/dc90951e-eb89-47bd-8fb2-3712d2db3fd5-kube-api-access-84pmr\") pod \"calico-kube-controllers-654c7d6777-9wfvg\" (UID: \"dc90951e-eb89-47bd-8fb2-3712d2db3fd5\") " pod="calico-system/calico-kube-controllers-654c7d6777-9wfvg" Nov 5 04:48:16.796474 kubelet[2811]: I1105 04:48:16.796397 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8a4b2e2-09ee-4112-a2df-31acb1eaedf9-config\") pod \"goldmane-666569f655-ms88g\" (UID: \"c8a4b2e2-09ee-4112-a2df-31acb1eaedf9\") " pod="calico-system/goldmane-666569f655-ms88g" Nov 5 04:48:16.796474 kubelet[2811]: I1105 04:48:16.796414 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8a4b2e2-09ee-4112-a2df-31acb1eaedf9-goldmane-ca-bundle\") pod \"goldmane-666569f655-ms88g\" (UID: \"c8a4b2e2-09ee-4112-a2df-31acb1eaedf9\") " 
pod="calico-system/goldmane-666569f655-ms88g" Nov 5 04:48:16.796474 kubelet[2811]: I1105 04:48:16.796430 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c8a4b2e2-09ee-4112-a2df-31acb1eaedf9-goldmane-key-pair\") pod \"goldmane-666569f655-ms88g\" (UID: \"c8a4b2e2-09ee-4112-a2df-31acb1eaedf9\") " pod="calico-system/goldmane-666569f655-ms88g" Nov 5 04:48:16.796735 kubelet[2811]: I1105 04:48:16.796444 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rwv6\" (UniqueName: \"kubernetes.io/projected/29ed970c-f253-488e-9357-ef5dd319f30f-kube-api-access-5rwv6\") pod \"coredns-674b8bbfcf-mt92c\" (UID: \"29ed970c-f253-488e-9357-ef5dd319f30f\") " pod="kube-system/coredns-674b8bbfcf-mt92c" Nov 5 04:48:16.796735 kubelet[2811]: I1105 04:48:16.796459 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw8pb\" (UniqueName: \"kubernetes.io/projected/24240dbc-96de-4497-be68-42f6bff10c1e-kube-api-access-mw8pb\") pod \"coredns-674b8bbfcf-cbn9v\" (UID: \"24240dbc-96de-4497-be68-42f6bff10c1e\") " pod="kube-system/coredns-674b8bbfcf-cbn9v" Nov 5 04:48:16.796735 kubelet[2811]: I1105 04:48:16.796477 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhdgs\" (UniqueName: \"kubernetes.io/projected/c8a4b2e2-09ee-4112-a2df-31acb1eaedf9-kube-api-access-bhdgs\") pod \"goldmane-666569f655-ms88g\" (UID: \"c8a4b2e2-09ee-4112-a2df-31acb1eaedf9\") " pod="calico-system/goldmane-666569f655-ms88g" Nov 5 04:48:16.796735 kubelet[2811]: I1105 04:48:16.796495 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29ed970c-f253-488e-9357-ef5dd319f30f-config-volume\") pod \"coredns-674b8bbfcf-mt92c\" (UID: 
\"29ed970c-f253-488e-9357-ef5dd319f30f\") " pod="kube-system/coredns-674b8bbfcf-mt92c" Nov 5 04:48:16.796735 kubelet[2811]: I1105 04:48:16.796517 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc90951e-eb89-47bd-8fb2-3712d2db3fd5-tigera-ca-bundle\") pod \"calico-kube-controllers-654c7d6777-9wfvg\" (UID: \"dc90951e-eb89-47bd-8fb2-3712d2db3fd5\") " pod="calico-system/calico-kube-controllers-654c7d6777-9wfvg" Nov 5 04:48:16.897488 kubelet[2811]: I1105 04:48:16.897289 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/63b223a4-aa0e-4b7a-9e9b-ebfedd74f920-calico-apiserver-certs\") pod \"calico-apiserver-7cd54dc478-5rvkg\" (UID: \"63b223a4-aa0e-4b7a-9e9b-ebfedd74f920\") " pod="calico-apiserver/calico-apiserver-7cd54dc478-5rvkg" Nov 5 04:48:16.897488 kubelet[2811]: I1105 04:48:16.897334 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c4bb52fc-9b22-49ea-9b48-e96faa9ad94b-whisker-backend-key-pair\") pod \"whisker-6589f5f48c-dwx8t\" (UID: \"c4bb52fc-9b22-49ea-9b48-e96faa9ad94b\") " pod="calico-system/whisker-6589f5f48c-dwx8t" Nov 5 04:48:16.897488 kubelet[2811]: I1105 04:48:16.897360 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz9ww\" (UniqueName: \"kubernetes.io/projected/c4bb52fc-9b22-49ea-9b48-e96faa9ad94b-kube-api-access-pz9ww\") pod \"whisker-6589f5f48c-dwx8t\" (UID: \"c4bb52fc-9b22-49ea-9b48-e96faa9ad94b\") " pod="calico-system/whisker-6589f5f48c-dwx8t" Nov 5 04:48:16.897488 kubelet[2811]: I1105 04:48:16.897402 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c4bb52fc-9b22-49ea-9b48-e96faa9ad94b-whisker-ca-bundle\") pod \"whisker-6589f5f48c-dwx8t\" (UID: \"c4bb52fc-9b22-49ea-9b48-e96faa9ad94b\") " pod="calico-system/whisker-6589f5f48c-dwx8t" Nov 5 04:48:16.898703 kubelet[2811]: I1105 04:48:16.898648 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk4n5\" (UniqueName: \"kubernetes.io/projected/63b223a4-aa0e-4b7a-9e9b-ebfedd74f920-kube-api-access-jk4n5\") pod \"calico-apiserver-7cd54dc478-5rvkg\" (UID: \"63b223a4-aa0e-4b7a-9e9b-ebfedd74f920\") " pod="calico-apiserver/calico-apiserver-7cd54dc478-5rvkg" Nov 5 04:48:16.898997 kubelet[2811]: I1105 04:48:16.898957 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d81093e5-51dd-4f5e-ba7d-dad72d581a2a-calico-apiserver-certs\") pod \"calico-apiserver-7cd54dc478-f2jvn\" (UID: \"d81093e5-51dd-4f5e-ba7d-dad72d581a2a\") " pod="calico-apiserver/calico-apiserver-7cd54dc478-f2jvn" Nov 5 04:48:16.899340 kubelet[2811]: I1105 04:48:16.899287 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvkcb\" (UniqueName: \"kubernetes.io/projected/d81093e5-51dd-4f5e-ba7d-dad72d581a2a-kube-api-access-lvkcb\") pod \"calico-apiserver-7cd54dc478-f2jvn\" (UID: \"d81093e5-51dd-4f5e-ba7d-dad72d581a2a\") " pod="calico-apiserver/calico-apiserver-7cd54dc478-f2jvn" Nov 5 04:48:17.303038 containerd[1641]: time="2025-11-05T04:48:17.302993175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-654c7d6777-9wfvg,Uid:dc90951e-eb89-47bd-8fb2-3712d2db3fd5,Namespace:calico-system,Attempt:0,}" Nov 5 04:48:17.307305 kubelet[2811]: E1105 04:48:17.307266 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Nov 5 04:48:17.307614 containerd[1641]: time="2025-11-05T04:48:17.307579209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cbn9v,Uid:24240dbc-96de-4497-be68-42f6bff10c1e,Namespace:kube-system,Attempt:0,}" Nov 5 04:48:17.314250 containerd[1641]: time="2025-11-05T04:48:17.314218171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ms88g,Uid:c8a4b2e2-09ee-4112-a2df-31acb1eaedf9,Namespace:calico-system,Attempt:0,}" Nov 5 04:48:17.321841 containerd[1641]: time="2025-11-05T04:48:17.321798834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd54dc478-5rvkg,Uid:63b223a4-aa0e-4b7a-9e9b-ebfedd74f920,Namespace:calico-apiserver,Attempt:0,}" Nov 5 04:48:17.330241 kubelet[2811]: E1105 04:48:17.330203 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:48:17.330546 containerd[1641]: time="2025-11-05T04:48:17.330514411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mt92c,Uid:29ed970c-f253-488e-9357-ef5dd319f30f,Namespace:kube-system,Attempt:0,}" Nov 5 04:48:17.336057 containerd[1641]: time="2025-11-05T04:48:17.336026936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6589f5f48c-dwx8t,Uid:c4bb52fc-9b22-49ea-9b48-e96faa9ad94b,Namespace:calico-system,Attempt:0,}" Nov 5 04:48:17.341573 containerd[1641]: time="2025-11-05T04:48:17.341535323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd54dc478-f2jvn,Uid:d81093e5-51dd-4f5e-ba7d-dad72d581a2a,Namespace:calico-apiserver,Attempt:0,}" Nov 5 04:48:18.087657 containerd[1641]: time="2025-11-05T04:48:18.087458578Z" level=error msg="Failed to destroy network for sandbox \"bea1a85d7e1d3e2102755a673299121c247d71fab8b7671148d25c987447be59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:48:18.092379 containerd[1641]: time="2025-11-05T04:48:18.092310459Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mt92c,Uid:29ed970c-f253-488e-9357-ef5dd319f30f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bea1a85d7e1d3e2102755a673299121c247d71fab8b7671148d25c987447be59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:48:18.092985 kubelet[2811]: E1105 04:48:18.092889 2811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bea1a85d7e1d3e2102755a673299121c247d71fab8b7671148d25c987447be59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:48:18.093319 kubelet[2811]: E1105 04:48:18.092984 2811 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bea1a85d7e1d3e2102755a673299121c247d71fab8b7671148d25c987447be59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mt92c" Nov 5 04:48:18.093319 kubelet[2811]: E1105 04:48:18.093024 2811 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bea1a85d7e1d3e2102755a673299121c247d71fab8b7671148d25c987447be59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mt92c" Nov 5 04:48:18.093319 kubelet[2811]: E1105 04:48:18.093091 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mt92c_kube-system(29ed970c-f253-488e-9357-ef5dd319f30f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mt92c_kube-system(29ed970c-f253-488e-9357-ef5dd319f30f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bea1a85d7e1d3e2102755a673299121c247d71fab8b7671148d25c987447be59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mt92c" podUID="29ed970c-f253-488e-9357-ef5dd319f30f" Nov 5 04:48:18.093664 containerd[1641]: time="2025-11-05T04:48:18.093608068Z" level=error msg="Failed to destroy network for sandbox \"aac02f1963c88cb914a43b10b792f8137e8c44cd18ad4a0d86ed0308d865ee01\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:48:18.102269 containerd[1641]: time="2025-11-05T04:48:18.102224858Z" level=error msg="Failed to destroy network for sandbox \"243f54d6d9cb1da0cb07f65ff35f1d0847aa4943253c8d89d6782d48d474064a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:48:18.104387 containerd[1641]: time="2025-11-05T04:48:18.104058214Z" level=error msg="Failed to destroy network for sandbox \"8358b3e78b20d96afc8e2a6997e11e55d0da1fd611a49667e62db35ca72b59f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:48:18.104602 containerd[1641]: time="2025-11-05T04:48:18.104335094Z" level=error msg="Failed to destroy network for sandbox \"38524694e6057f70620497986efbb17fa8e36e12caaa9429191fec69f18d277d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:48:18.106930 containerd[1641]: time="2025-11-05T04:48:18.106834052Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cbn9v,Uid:24240dbc-96de-4497-be68-42f6bff10c1e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aac02f1963c88cb914a43b10b792f8137e8c44cd18ad4a0d86ed0308d865ee01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:48:18.107196 kubelet[2811]: E1105 04:48:18.107131 2811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aac02f1963c88cb914a43b10b792f8137e8c44cd18ad4a0d86ed0308d865ee01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:48:18.107292 kubelet[2811]: E1105 04:48:18.107215 2811 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aac02f1963c88cb914a43b10b792f8137e8c44cd18ad4a0d86ed0308d865ee01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cbn9v" Nov 5 04:48:18.107292 kubelet[2811]: 
E1105 04:48:18.107239 2811 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aac02f1963c88cb914a43b10b792f8137e8c44cd18ad4a0d86ed0308d865ee01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cbn9v" Nov 5 04:48:18.107351 kubelet[2811]: E1105 04:48:18.107294 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-cbn9v_kube-system(24240dbc-96de-4497-be68-42f6bff10c1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-cbn9v_kube-system(24240dbc-96de-4497-be68-42f6bff10c1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aac02f1963c88cb914a43b10b792f8137e8c44cd18ad4a0d86ed0308d865ee01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-cbn9v" podUID="24240dbc-96de-4497-be68-42f6bff10c1e" Nov 5 04:48:18.107854 containerd[1641]: time="2025-11-05T04:48:18.107747870Z" level=error msg="Failed to destroy network for sandbox \"6980d8f73b395d60845e19541868b3901fbf28018524e333c699e25712d8f036\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:48:18.108027 containerd[1641]: time="2025-11-05T04:48:18.107995295Z" level=error msg="Failed to destroy network for sandbox \"a57aeb1ddedea520d7d84fb1ba58cc4105055b296bb931b2fdd239bcebcf8be6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Nov 5 04:48:18.128226 containerd[1641]: time="2025-11-05T04:48:18.108960479Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ms88g,Uid:c8a4b2e2-09ee-4112-a2df-31acb1eaedf9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"243f54d6d9cb1da0cb07f65ff35f1d0847aa4943253c8d89d6782d48d474064a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:48:18.128419 containerd[1641]: time="2025-11-05T04:48:18.110774811Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-654c7d6777-9wfvg,Uid:dc90951e-eb89-47bd-8fb2-3712d2db3fd5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"38524694e6057f70620497986efbb17fa8e36e12caaa9429191fec69f18d277d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:48:18.128419 containerd[1641]: time="2025-11-05T04:48:18.113635368Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6589f5f48c-dwx8t,Uid:c4bb52fc-9b22-49ea-9b48-e96faa9ad94b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8358b3e78b20d96afc8e2a6997e11e55d0da1fd611a49667e62db35ca72b59f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:48:18.128419 containerd[1641]: time="2025-11-05T04:48:18.115313142Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7cd54dc478-5rvkg,Uid:63b223a4-aa0e-4b7a-9e9b-ebfedd74f920,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a57aeb1ddedea520d7d84fb1ba58cc4105055b296bb931b2fdd239bcebcf8be6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:48:18.128538 containerd[1641]: time="2025-11-05T04:48:18.114494173Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd54dc478-f2jvn,Uid:d81093e5-51dd-4f5e-ba7d-dad72d581a2a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6980d8f73b395d60845e19541868b3901fbf28018524e333c699e25712d8f036\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:48:18.128577 kubelet[2811]: E1105 04:48:18.128546 2811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6980d8f73b395d60845e19541868b3901fbf28018524e333c699e25712d8f036\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:48:18.128625 kubelet[2811]: E1105 04:48:18.128564 2811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"243f54d6d9cb1da0cb07f65ff35f1d0847aa4943253c8d89d6782d48d474064a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:48:18.128625 kubelet[2811]: E1105 
04:48:18.128604 2811 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6980d8f73b395d60845e19541868b3901fbf28018524e333c699e25712d8f036\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd54dc478-f2jvn" Nov 5 04:48:18.128678 kubelet[2811]: E1105 04:48:18.128626 2811 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6980d8f73b395d60845e19541868b3901fbf28018524e333c699e25712d8f036\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd54dc478-f2jvn" Nov 5 04:48:18.128678 kubelet[2811]: E1105 04:48:18.128650 2811 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"243f54d6d9cb1da0cb07f65ff35f1d0847aa4943253c8d89d6782d48d474064a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ms88g" Nov 5 04:48:18.128723 kubelet[2811]: E1105 04:48:18.128678 2811 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"243f54d6d9cb1da0cb07f65ff35f1d0847aa4943253c8d89d6782d48d474064a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ms88g" Nov 5 04:48:18.128723 kubelet[2811]: E1105 
04:48:18.128681 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cd54dc478-f2jvn_calico-apiserver(d81093e5-51dd-4f5e-ba7d-dad72d581a2a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cd54dc478-f2jvn_calico-apiserver(d81093e5-51dd-4f5e-ba7d-dad72d581a2a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6980d8f73b395d60845e19541868b3901fbf28018524e333c699e25712d8f036\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cd54dc478-f2jvn" podUID="d81093e5-51dd-4f5e-ba7d-dad72d581a2a" Nov 5 04:48:18.128791 kubelet[2811]: E1105 04:48:18.128738 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-ms88g_calico-system(c8a4b2e2-09ee-4112-a2df-31acb1eaedf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-ms88g_calico-system(c8a4b2e2-09ee-4112-a2df-31acb1eaedf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"243f54d6d9cb1da0cb07f65ff35f1d0847aa4943253c8d89d6782d48d474064a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-ms88g" podUID="c8a4b2e2-09ee-4112-a2df-31acb1eaedf9" Nov 5 04:48:18.128791 kubelet[2811]: E1105 04:48:18.128787 2811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a57aeb1ddedea520d7d84fb1ba58cc4105055b296bb931b2fdd239bcebcf8be6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Nov 5 04:48:18.128868 kubelet[2811]: E1105 04:48:18.128813 2811 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a57aeb1ddedea520d7d84fb1ba58cc4105055b296bb931b2fdd239bcebcf8be6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd54dc478-5rvkg" Nov 5 04:48:18.128895 kubelet[2811]: E1105 04:48:18.128826 2811 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a57aeb1ddedea520d7d84fb1ba58cc4105055b296bb931b2fdd239bcebcf8be6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd54dc478-5rvkg" Nov 5 04:48:18.128924 kubelet[2811]: E1105 04:48:18.128893 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cd54dc478-5rvkg_calico-apiserver(63b223a4-aa0e-4b7a-9e9b-ebfedd74f920)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cd54dc478-5rvkg_calico-apiserver(63b223a4-aa0e-4b7a-9e9b-ebfedd74f920)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a57aeb1ddedea520d7d84fb1ba58cc4105055b296bb931b2fdd239bcebcf8be6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cd54dc478-5rvkg" podUID="63b223a4-aa0e-4b7a-9e9b-ebfedd74f920" Nov 5 04:48:18.129305 kubelet[2811]: E1105 04:48:18.128935 2811 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38524694e6057f70620497986efbb17fa8e36e12caaa9429191fec69f18d277d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:48:18.129305 kubelet[2811]: E1105 04:48:18.128935 2811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8358b3e78b20d96afc8e2a6997e11e55d0da1fd611a49667e62db35ca72b59f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:48:18.129305 kubelet[2811]: E1105 04:48:18.128954 2811 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38524694e6057f70620497986efbb17fa8e36e12caaa9429191fec69f18d277d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-654c7d6777-9wfvg" Nov 5 04:48:18.129305 kubelet[2811]: E1105 04:48:18.128960 2811 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8358b3e78b20d96afc8e2a6997e11e55d0da1fd611a49667e62db35ca72b59f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6589f5f48c-dwx8t" Nov 5 04:48:18.129401 kubelet[2811]: E1105 04:48:18.128991 2811 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8358b3e78b20d96afc8e2a6997e11e55d0da1fd611a49667e62db35ca72b59f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6589f5f48c-dwx8t" Nov 5 04:48:18.129401 kubelet[2811]: E1105 04:48:18.128990 2811 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38524694e6057f70620497986efbb17fa8e36e12caaa9429191fec69f18d277d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-654c7d6777-9wfvg" Nov 5 04:48:18.129401 kubelet[2811]: E1105 04:48:18.129032 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6589f5f48c-dwx8t_calico-system(c4bb52fc-9b22-49ea-9b48-e96faa9ad94b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6589f5f48c-dwx8t_calico-system(c4bb52fc-9b22-49ea-9b48-e96faa9ad94b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8358b3e78b20d96afc8e2a6997e11e55d0da1fd611a49667e62db35ca72b59f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6589f5f48c-dwx8t" podUID="c4bb52fc-9b22-49ea-9b48-e96faa9ad94b" Nov 5 04:48:18.129487 kubelet[2811]: E1105 04:48:18.129178 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-654c7d6777-9wfvg_calico-system(dc90951e-eb89-47bd-8fb2-3712d2db3fd5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-654c7d6777-9wfvg_calico-system(dc90951e-eb89-47bd-8fb2-3712d2db3fd5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38524694e6057f70620497986efbb17fa8e36e12caaa9429191fec69f18d277d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-654c7d6777-9wfvg" podUID="dc90951e-eb89-47bd-8fb2-3712d2db3fd5" Nov 5 04:48:18.534283 systemd[1]: Created slice kubepods-besteffort-pod84d99f8c_4e0f_4dac_8f92_d3c8b82ac971.slice - libcontainer container kubepods-besteffort-pod84d99f8c_4e0f_4dac_8f92_d3c8b82ac971.slice. Nov 5 04:48:18.537036 containerd[1641]: time="2025-11-05T04:48:18.537001622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dkcnl,Uid:84d99f8c-4e0f-4dac-8f92-d3c8b82ac971,Namespace:calico-system,Attempt:0,}" Nov 5 04:48:18.591283 containerd[1641]: time="2025-11-05T04:48:18.591216792Z" level=error msg="Failed to destroy network for sandbox \"11ecb3269922feabbe33ea9bc2ad3a07034fd6cd7e1d5d9356928c5b3c058eb3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:48:18.593520 containerd[1641]: time="2025-11-05T04:48:18.593486789Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dkcnl,Uid:84d99f8c-4e0f-4dac-8f92-d3c8b82ac971,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"11ecb3269922feabbe33ea9bc2ad3a07034fd6cd7e1d5d9356928c5b3c058eb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:48:18.593797 kubelet[2811]: E1105 
04:48:18.593743 2811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11ecb3269922feabbe33ea9bc2ad3a07034fd6cd7e1d5d9356928c5b3c058eb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:48:18.593865 kubelet[2811]: E1105 04:48:18.593834 2811 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11ecb3269922feabbe33ea9bc2ad3a07034fd6cd7e1d5d9356928c5b3c058eb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dkcnl" Nov 5 04:48:18.593892 kubelet[2811]: E1105 04:48:18.593864 2811 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11ecb3269922feabbe33ea9bc2ad3a07034fd6cd7e1d5d9356928c5b3c058eb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dkcnl" Nov 5 04:48:18.593954 kubelet[2811]: E1105 04:48:18.593923 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dkcnl_calico-system(84d99f8c-4e0f-4dac-8f92-d3c8b82ac971)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dkcnl_calico-system(84d99f8c-4e0f-4dac-8f92-d3c8b82ac971)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11ecb3269922feabbe33ea9bc2ad3a07034fd6cd7e1d5d9356928c5b3c058eb3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dkcnl" podUID="84d99f8c-4e0f-4dac-8f92-d3c8b82ac971" Nov 5 04:48:18.605856 systemd[1]: run-netns-cni\x2d68621d70\x2d1607\x2dced2\x2dbd2a\x2d085b79db4258.mount: Deactivated successfully. Nov 5 04:48:18.605960 systemd[1]: run-netns-cni\x2d202c4e1b\x2dbd82\x2d0802\x2dd687\x2d9b3f8b391915.mount: Deactivated successfully. Nov 5 04:48:18.606061 systemd[1]: run-netns-cni\x2da1cfb02a\x2dbe2a\x2d4b61\x2d7373\x2d7b14fc9e4922.mount: Deactivated successfully. Nov 5 04:48:18.606131 systemd[1]: run-netns-cni\x2d13cefb7e\x2d8335\x2ddb29\x2dfd23\x2d79bedb12050b.mount: Deactivated successfully. Nov 5 04:48:18.606202 systemd[1]: run-netns-cni\x2d214cb4fc\x2d7b8d\x2d8a5d\x2d31c1\x2da2698465c59a.mount: Deactivated successfully. Nov 5 04:48:25.177417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1963949656.mount: Deactivated successfully. Nov 5 04:48:27.265063 containerd[1641]: time="2025-11-05T04:48:27.264990511Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:48:27.293458 containerd[1641]: time="2025-11-05T04:48:27.293420620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Nov 5 04:48:27.409492 containerd[1641]: time="2025-11-05T04:48:27.409448910Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:48:27.567257 containerd[1641]: time="2025-11-05T04:48:27.566457087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:48:27.567541 containerd[1641]: time="2025-11-05T04:48:27.567507119Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.816730494s" Nov 5 04:48:27.567601 containerd[1641]: time="2025-11-05T04:48:27.567545481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 5 04:48:27.611519 containerd[1641]: time="2025-11-05T04:48:27.611456662Z" level=info msg="CreateContainer within sandbox \"081b9f7d61269f7a28601a3be10d609bb8a8fd4ed11dd930eab316db6177e8fa\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 04:48:27.621320 containerd[1641]: time="2025-11-05T04:48:27.621273488Z" level=info msg="Container bc8cab8d64eeb10e3f38448208a2c4552a9a7f8d3434e3ad9e9d43ef5707e36f: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:48:27.701210 containerd[1641]: time="2025-11-05T04:48:27.701166650Z" level=info msg="CreateContainer within sandbox \"081b9f7d61269f7a28601a3be10d609bb8a8fd4ed11dd930eab316db6177e8fa\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bc8cab8d64eeb10e3f38448208a2c4552a9a7f8d3434e3ad9e9d43ef5707e36f\"" Nov 5 04:48:27.702021 containerd[1641]: time="2025-11-05T04:48:27.701995817Z" level=info msg="StartContainer for \"bc8cab8d64eeb10e3f38448208a2c4552a9a7f8d3434e3ad9e9d43ef5707e36f\"" Nov 5 04:48:27.703989 containerd[1641]: time="2025-11-05T04:48:27.703936602Z" level=info msg="connecting to shim bc8cab8d64eeb10e3f38448208a2c4552a9a7f8d3434e3ad9e9d43ef5707e36f" address="unix:///run/containerd/s/f368fde4b190ef049797450f359a60bf8e0f1c5781a694185517c1b6daa3e4f1" protocol=ttrpc version=3 Nov 5 04:48:27.776120 systemd[1]: Started 
cri-containerd-bc8cab8d64eeb10e3f38448208a2c4552a9a7f8d3434e3ad9e9d43ef5707e36f.scope - libcontainer container bc8cab8d64eeb10e3f38448208a2c4552a9a7f8d3434e3ad9e9d43ef5707e36f. Nov 5 04:48:27.926899 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 04:48:27.927153 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 5 04:48:28.044302 containerd[1641]: time="2025-11-05T04:48:28.043963323Z" level=info msg="StartContainer for \"bc8cab8d64eeb10e3f38448208a2c4552a9a7f8d3434e3ad9e9d43ef5707e36f\" returns successfully" Nov 5 04:48:28.070332 kubelet[2811]: E1105 04:48:28.069924 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:48:28.094651 kubelet[2811]: I1105 04:48:28.093925 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-d5252" podStartSLOduration=1.897663834 podStartE2EDuration="21.093907997s" podCreationTimestamp="2025-11-05 04:48:07 +0000 UTC" firstStartedPulling="2025-11-05 04:48:08.385830252 +0000 UTC m=+20.965045466" lastFinishedPulling="2025-11-05 04:48:27.582074415 +0000 UTC m=+40.161289629" observedRunningTime="2025-11-05 04:48:28.093306257 +0000 UTC m=+40.672521471" watchObservedRunningTime="2025-11-05 04:48:28.093907997 +0000 UTC m=+40.673123211" Nov 5 04:48:28.172425 kubelet[2811]: I1105 04:48:28.172349 2811 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz9ww\" (UniqueName: \"kubernetes.io/projected/c4bb52fc-9b22-49ea-9b48-e96faa9ad94b-kube-api-access-pz9ww\") pod \"c4bb52fc-9b22-49ea-9b48-e96faa9ad94b\" (UID: \"c4bb52fc-9b22-49ea-9b48-e96faa9ad94b\") " Nov 5 04:48:28.172425 kubelet[2811]: I1105 04:48:28.172408 2811 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/c4bb52fc-9b22-49ea-9b48-e96faa9ad94b-whisker-backend-key-pair\") pod \"c4bb52fc-9b22-49ea-9b48-e96faa9ad94b\" (UID: \"c4bb52fc-9b22-49ea-9b48-e96faa9ad94b\") " Nov 5 04:48:28.172425 kubelet[2811]: I1105 04:48:28.172430 2811 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4bb52fc-9b22-49ea-9b48-e96faa9ad94b-whisker-ca-bundle\") pod \"c4bb52fc-9b22-49ea-9b48-e96faa9ad94b\" (UID: \"c4bb52fc-9b22-49ea-9b48-e96faa9ad94b\") " Nov 5 04:48:28.174347 kubelet[2811]: I1105 04:48:28.174295 2811 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4bb52fc-9b22-49ea-9b48-e96faa9ad94b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c4bb52fc-9b22-49ea-9b48-e96faa9ad94b" (UID: "c4bb52fc-9b22-49ea-9b48-e96faa9ad94b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 04:48:28.179504 kubelet[2811]: I1105 04:48:28.179438 2811 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4bb52fc-9b22-49ea-9b48-e96faa9ad94b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c4bb52fc-9b22-49ea-9b48-e96faa9ad94b" (UID: "c4bb52fc-9b22-49ea-9b48-e96faa9ad94b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 04:48:28.179504 kubelet[2811]: I1105 04:48:28.179454 2811 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4bb52fc-9b22-49ea-9b48-e96faa9ad94b-kube-api-access-pz9ww" (OuterVolumeSpecName: "kube-api-access-pz9ww") pod "c4bb52fc-9b22-49ea-9b48-e96faa9ad94b" (UID: "c4bb52fc-9b22-49ea-9b48-e96faa9ad94b"). InnerVolumeSpecName "kube-api-access-pz9ww". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 04:48:28.273002 kubelet[2811]: I1105 04:48:28.272931 2811 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pz9ww\" (UniqueName: \"kubernetes.io/projected/c4bb52fc-9b22-49ea-9b48-e96faa9ad94b-kube-api-access-pz9ww\") on node \"localhost\" DevicePath \"\"" Nov 5 04:48:28.273002 kubelet[2811]: I1105 04:48:28.272983 2811 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c4bb52fc-9b22-49ea-9b48-e96faa9ad94b-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 5 04:48:28.273002 kubelet[2811]: I1105 04:48:28.273000 2811 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4bb52fc-9b22-49ea-9b48-e96faa9ad94b-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 5 04:48:28.594141 systemd[1]: var-lib-kubelet-pods-c4bb52fc\x2d9b22\x2d49ea\x2d9b48\x2de96faa9ad94b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpz9ww.mount: Deactivated successfully. Nov 5 04:48:28.594271 systemd[1]: var-lib-kubelet-pods-c4bb52fc\x2d9b22\x2d49ea\x2d9b48\x2de96faa9ad94b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 5 04:48:29.071998 kubelet[2811]: E1105 04:48:29.071064 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:48:29.079473 systemd[1]: Removed slice kubepods-besteffort-podc4bb52fc_9b22_49ea_9b48_e96faa9ad94b.slice - libcontainer container kubepods-besteffort-podc4bb52fc_9b22_49ea_9b48_e96faa9ad94b.slice. Nov 5 04:48:29.137850 systemd[1]: Created slice kubepods-besteffort-pod3f6a71f3_a4bf_43c3_9766_dfdeb3e1d903.slice - libcontainer container kubepods-besteffort-pod3f6a71f3_a4bf_43c3_9766_dfdeb3e1d903.slice. 
Nov 5 04:48:29.178103 kubelet[2811]: I1105 04:48:29.178059 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3f6a71f3-a4bf-43c3-9766-dfdeb3e1d903-whisker-backend-key-pair\") pod \"whisker-7cb7b8d459-6mst6\" (UID: \"3f6a71f3-a4bf-43c3-9766-dfdeb3e1d903\") " pod="calico-system/whisker-7cb7b8d459-6mst6" Nov 5 04:48:29.178103 kubelet[2811]: I1105 04:48:29.178101 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnhd4\" (UniqueName: \"kubernetes.io/projected/3f6a71f3-a4bf-43c3-9766-dfdeb3e1d903-kube-api-access-mnhd4\") pod \"whisker-7cb7b8d459-6mst6\" (UID: \"3f6a71f3-a4bf-43c3-9766-dfdeb3e1d903\") " pod="calico-system/whisker-7cb7b8d459-6mst6" Nov 5 04:48:29.178302 kubelet[2811]: I1105 04:48:29.178130 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f6a71f3-a4bf-43c3-9766-dfdeb3e1d903-whisker-ca-bundle\") pod \"whisker-7cb7b8d459-6mst6\" (UID: \"3f6a71f3-a4bf-43c3-9766-dfdeb3e1d903\") " pod="calico-system/whisker-7cb7b8d459-6mst6" Nov 5 04:48:29.444233 containerd[1641]: time="2025-11-05T04:48:29.444107627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cb7b8d459-6mst6,Uid:3f6a71f3-a4bf-43c3-9766-dfdeb3e1d903,Namespace:calico-system,Attempt:0,}" Nov 5 04:48:29.532549 kubelet[2811]: I1105 04:48:29.532469 2811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4bb52fc-9b22-49ea-9b48-e96faa9ad94b" path="/var/lib/kubelet/pods/c4bb52fc-9b22-49ea-9b48-e96faa9ad94b/volumes" Nov 5 04:48:29.646748 systemd-networkd[1539]: calic81ac25f9a7: Link UP Nov 5 04:48:29.647079 systemd-networkd[1539]: calic81ac25f9a7: Gained carrier Nov 5 04:48:29.661615 containerd[1641]: 2025-11-05 04:48:29.498 [INFO][4116] cni-plugin/utils.go 100: File /var/lib/calico/mtu 
does not exist Nov 5 04:48:29.661615 containerd[1641]: 2025-11-05 04:48:29.532 [INFO][4116] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7cb7b8d459--6mst6-eth0 whisker-7cb7b8d459- calico-system 3f6a71f3-a4bf-43c3-9766-dfdeb3e1d903 921 0 2025-11-05 04:48:29 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7cb7b8d459 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7cb7b8d459-6mst6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic81ac25f9a7 [] [] }} ContainerID="011226fc8fc42d61f0838ba2af5671c22ff6d28f0cad689352a1ad30dc330e0a" Namespace="calico-system" Pod="whisker-7cb7b8d459-6mst6" WorkloadEndpoint="localhost-k8s-whisker--7cb7b8d459--6mst6-" Nov 5 04:48:29.661615 containerd[1641]: 2025-11-05 04:48:29.532 [INFO][4116] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="011226fc8fc42d61f0838ba2af5671c22ff6d28f0cad689352a1ad30dc330e0a" Namespace="calico-system" Pod="whisker-7cb7b8d459-6mst6" WorkloadEndpoint="localhost-k8s-whisker--7cb7b8d459--6mst6-eth0" Nov 5 04:48:29.661615 containerd[1641]: 2025-11-05 04:48:29.599 [INFO][4131] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="011226fc8fc42d61f0838ba2af5671c22ff6d28f0cad689352a1ad30dc330e0a" HandleID="k8s-pod-network.011226fc8fc42d61f0838ba2af5671c22ff6d28f0cad689352a1ad30dc330e0a" Workload="localhost-k8s-whisker--7cb7b8d459--6mst6-eth0" Nov 5 04:48:29.661893 containerd[1641]: 2025-11-05 04:48:29.600 [INFO][4131] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="011226fc8fc42d61f0838ba2af5671c22ff6d28f0cad689352a1ad30dc330e0a" HandleID="k8s-pod-network.011226fc8fc42d61f0838ba2af5671c22ff6d28f0cad689352a1ad30dc330e0a" Workload="localhost-k8s-whisker--7cb7b8d459--6mst6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00036fda0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7cb7b8d459-6mst6", "timestamp":"2025-11-05 04:48:29.599454446 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 04:48:29.661893 containerd[1641]: 2025-11-05 04:48:29.600 [INFO][4131] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 04:48:29.661893 containerd[1641]: 2025-11-05 04:48:29.600 [INFO][4131] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 04:48:29.661893 containerd[1641]: 2025-11-05 04:48:29.600 [INFO][4131] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 04:48:29.661893 containerd[1641]: 2025-11-05 04:48:29.610 [INFO][4131] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.011226fc8fc42d61f0838ba2af5671c22ff6d28f0cad689352a1ad30dc330e0a" host="localhost" Nov 5 04:48:29.661893 containerd[1641]: 2025-11-05 04:48:29.615 [INFO][4131] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 04:48:29.661893 containerd[1641]: 2025-11-05 04:48:29.620 [INFO][4131] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 04:48:29.661893 containerd[1641]: 2025-11-05 04:48:29.622 [INFO][4131] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 04:48:29.661893 containerd[1641]: 2025-11-05 04:48:29.624 [INFO][4131] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 04:48:29.661893 containerd[1641]: 2025-11-05 04:48:29.624 [INFO][4131] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.011226fc8fc42d61f0838ba2af5671c22ff6d28f0cad689352a1ad30dc330e0a" host="localhost" 
Nov 5 04:48:29.662165 containerd[1641]: 2025-11-05 04:48:29.626 [INFO][4131] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.011226fc8fc42d61f0838ba2af5671c22ff6d28f0cad689352a1ad30dc330e0a Nov 5 04:48:29.662165 containerd[1641]: 2025-11-05 04:48:29.629 [INFO][4131] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.011226fc8fc42d61f0838ba2af5671c22ff6d28f0cad689352a1ad30dc330e0a" host="localhost" Nov 5 04:48:29.662165 containerd[1641]: 2025-11-05 04:48:29.635 [INFO][4131] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.011226fc8fc42d61f0838ba2af5671c22ff6d28f0cad689352a1ad30dc330e0a" host="localhost" Nov 5 04:48:29.662165 containerd[1641]: 2025-11-05 04:48:29.635 [INFO][4131] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.011226fc8fc42d61f0838ba2af5671c22ff6d28f0cad689352a1ad30dc330e0a" host="localhost" Nov 5 04:48:29.662165 containerd[1641]: 2025-11-05 04:48:29.635 [INFO][4131] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 04:48:29.662165 containerd[1641]: 2025-11-05 04:48:29.635 [INFO][4131] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="011226fc8fc42d61f0838ba2af5671c22ff6d28f0cad689352a1ad30dc330e0a" HandleID="k8s-pod-network.011226fc8fc42d61f0838ba2af5671c22ff6d28f0cad689352a1ad30dc330e0a" Workload="localhost-k8s-whisker--7cb7b8d459--6mst6-eth0" Nov 5 04:48:29.662307 containerd[1641]: 2025-11-05 04:48:29.639 [INFO][4116] cni-plugin/k8s.go 418: Populated endpoint ContainerID="011226fc8fc42d61f0838ba2af5671c22ff6d28f0cad689352a1ad30dc330e0a" Namespace="calico-system" Pod="whisker-7cb7b8d459-6mst6" WorkloadEndpoint="localhost-k8s-whisker--7cb7b8d459--6mst6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7cb7b8d459--6mst6-eth0", GenerateName:"whisker-7cb7b8d459-", Namespace:"calico-system", SelfLink:"", UID:"3f6a71f3-a4bf-43c3-9766-dfdeb3e1d903", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 48, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7cb7b8d459", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7cb7b8d459-6mst6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic81ac25f9a7", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:48:29.662307 containerd[1641]: 2025-11-05 04:48:29.639 [INFO][4116] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="011226fc8fc42d61f0838ba2af5671c22ff6d28f0cad689352a1ad30dc330e0a" Namespace="calico-system" Pod="whisker-7cb7b8d459-6mst6" WorkloadEndpoint="localhost-k8s-whisker--7cb7b8d459--6mst6-eth0" Nov 5 04:48:29.662394 containerd[1641]: 2025-11-05 04:48:29.639 [INFO][4116] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic81ac25f9a7 ContainerID="011226fc8fc42d61f0838ba2af5671c22ff6d28f0cad689352a1ad30dc330e0a" Namespace="calico-system" Pod="whisker-7cb7b8d459-6mst6" WorkloadEndpoint="localhost-k8s-whisker--7cb7b8d459--6mst6-eth0" Nov 5 04:48:29.662394 containerd[1641]: 2025-11-05 04:48:29.647 [INFO][4116] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="011226fc8fc42d61f0838ba2af5671c22ff6d28f0cad689352a1ad30dc330e0a" Namespace="calico-system" Pod="whisker-7cb7b8d459-6mst6" WorkloadEndpoint="localhost-k8s-whisker--7cb7b8d459--6mst6-eth0" Nov 5 04:48:29.662436 containerd[1641]: 2025-11-05 04:48:29.647 [INFO][4116] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="011226fc8fc42d61f0838ba2af5671c22ff6d28f0cad689352a1ad30dc330e0a" Namespace="calico-system" Pod="whisker-7cb7b8d459-6mst6" WorkloadEndpoint="localhost-k8s-whisker--7cb7b8d459--6mst6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7cb7b8d459--6mst6-eth0", GenerateName:"whisker-7cb7b8d459-", Namespace:"calico-system", SelfLink:"", UID:"3f6a71f3-a4bf-43c3-9766-dfdeb3e1d903", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 48, 29, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7cb7b8d459", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"011226fc8fc42d61f0838ba2af5671c22ff6d28f0cad689352a1ad30dc330e0a", Pod:"whisker-7cb7b8d459-6mst6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic81ac25f9a7", MAC:"0a:d5:ad:48:13:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:48:29.662486 containerd[1641]: 2025-11-05 04:48:29.657 [INFO][4116] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="011226fc8fc42d61f0838ba2af5671c22ff6d28f0cad689352a1ad30dc330e0a" Namespace="calico-system" Pod="whisker-7cb7b8d459-6mst6" WorkloadEndpoint="localhost-k8s-whisker--7cb7b8d459--6mst6-eth0" Nov 5 04:48:29.746377 containerd[1641]: time="2025-11-05T04:48:29.746148649Z" level=info msg="connecting to shim 011226fc8fc42d61f0838ba2af5671c22ff6d28f0cad689352a1ad30dc330e0a" address="unix:///run/containerd/s/4d785304183d300d9219771b726e26e369a00306e245ab09404a8a41b9a9b3ad" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:48:29.780106 systemd[1]: Started cri-containerd-011226fc8fc42d61f0838ba2af5671c22ff6d28f0cad689352a1ad30dc330e0a.scope - libcontainer container 011226fc8fc42d61f0838ba2af5671c22ff6d28f0cad689352a1ad30dc330e0a. 
Nov 5 04:48:29.795835 systemd-resolved[1298]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 04:48:29.830128 containerd[1641]: time="2025-11-05T04:48:29.830071773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cb7b8d459-6mst6,Uid:3f6a71f3-a4bf-43c3-9766-dfdeb3e1d903,Namespace:calico-system,Attempt:0,} returns sandbox id \"011226fc8fc42d61f0838ba2af5671c22ff6d28f0cad689352a1ad30dc330e0a\"" Nov 5 04:48:29.831778 containerd[1641]: time="2025-11-05T04:48:29.831740777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 04:48:30.175306 containerd[1641]: time="2025-11-05T04:48:30.175137293Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:48:30.335613 containerd[1641]: time="2025-11-05T04:48:30.335541145Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 04:48:30.335751 containerd[1641]: time="2025-11-05T04:48:30.335619272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 5 04:48:30.335855 kubelet[2811]: E1105 04:48:30.335808 2811 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 04:48:30.336200 kubelet[2811]: E1105 04:48:30.335866 2811 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 04:48:30.336234 kubelet[2811]: E1105 04:48:30.336080 2811 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6706336b96034a33ac145ee4eda65bf3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mnhd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cb7b8d459-6mst6_calico-system(3f6a71f3-a4bf-43c3-9766-dfdeb3e1d903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 04:48:30.338013 containerd[1641]: 
time="2025-11-05T04:48:30.337985795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 04:48:30.528505 containerd[1641]: time="2025-11-05T04:48:30.528168124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd54dc478-5rvkg,Uid:63b223a4-aa0e-4b7a-9e9b-ebfedd74f920,Namespace:calico-apiserver,Attempt:0,}" Nov 5 04:48:30.529422 containerd[1641]: time="2025-11-05T04:48:30.529220721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd54dc478-f2jvn,Uid:d81093e5-51dd-4f5e-ba7d-dad72d581a2a,Namespace:calico-apiserver,Attempt:0,}" Nov 5 04:48:30.529804 kubelet[2811]: E1105 04:48:30.529758 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:48:30.530174 containerd[1641]: time="2025-11-05T04:48:30.530086225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mt92c,Uid:29ed970c-f253-488e-9357-ef5dd319f30f,Namespace:kube-system,Attempt:0,}" Nov 5 04:48:30.830052 containerd[1641]: time="2025-11-05T04:48:30.829930731Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:48:30.908184 systemd-networkd[1539]: calic81ac25f9a7: Gained IPv6LL Nov 5 04:48:31.476656 containerd[1641]: time="2025-11-05T04:48:31.476591317Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 04:48:31.476656 containerd[1641]: time="2025-11-05T04:48:31.476645268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 5 04:48:31.477346 kubelet[2811]: E1105 04:48:31.477287 2811 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 04:48:31.477346 kubelet[2811]: E1105 04:48:31.477356 2811 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 04:48:31.478103 kubelet[2811]: E1105 04:48:31.477514 2811 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mnhd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,
ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cb7b8d459-6mst6_calico-system(3f6a71f3-a4bf-43c3-9766-dfdeb3e1d903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 04:48:31.478798 kubelet[2811]: E1105 04:48:31.478734 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7cb7b8d459-6mst6" podUID="3f6a71f3-a4bf-43c3-9766-dfdeb3e1d903" Nov 5 04:48:31.591492 systemd-networkd[1539]: cali15899d9e8af: Link UP Nov 5 04:48:31.591689 systemd-networkd[1539]: cali15899d9e8af: Gained carrier Nov 5 04:48:31.605258 containerd[1641]: 2025-11-05 04:48:31.506 [INFO][4225] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 04:48:31.605258 containerd[1641]: 2025-11-05
04:48:31.519 [INFO][4225] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cd54dc478--f2jvn-eth0 calico-apiserver-7cd54dc478- calico-apiserver d81093e5-51dd-4f5e-ba7d-dad72d581a2a 839 0 2025-11-05 04:48:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cd54dc478 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cd54dc478-f2jvn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali15899d9e8af [] [] }} ContainerID="2ca2edafedd837918c13a51adf6b302abdbd40d509f7b966791d9feec67434f6" Namespace="calico-apiserver" Pod="calico-apiserver-7cd54dc478-f2jvn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd54dc478--f2jvn-" Nov 5 04:48:31.605258 containerd[1641]: 2025-11-05 04:48:31.519 [INFO][4225] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2ca2edafedd837918c13a51adf6b302abdbd40d509f7b966791d9feec67434f6" Namespace="calico-apiserver" Pod="calico-apiserver-7cd54dc478-f2jvn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd54dc478--f2jvn-eth0" Nov 5 04:48:31.605258 containerd[1641]: 2025-11-05 04:48:31.553 [INFO][4262] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ca2edafedd837918c13a51adf6b302abdbd40d509f7b966791d9feec67434f6" HandleID="k8s-pod-network.2ca2edafedd837918c13a51adf6b302abdbd40d509f7b966791d9feec67434f6" Workload="localhost-k8s-calico--apiserver--7cd54dc478--f2jvn-eth0" Nov 5 04:48:31.605756 containerd[1641]: 2025-11-05 04:48:31.553 [INFO][4262] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2ca2edafedd837918c13a51adf6b302abdbd40d509f7b966791d9feec67434f6" HandleID="k8s-pod-network.2ca2edafedd837918c13a51adf6b302abdbd40d509f7b966791d9feec67434f6" 
Workload="localhost-k8s-calico--apiserver--7cd54dc478--f2jvn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135620), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cd54dc478-f2jvn", "timestamp":"2025-11-05 04:48:31.553089349 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 04:48:31.605756 containerd[1641]: 2025-11-05 04:48:31.553 [INFO][4262] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 04:48:31.605756 containerd[1641]: 2025-11-05 04:48:31.553 [INFO][4262] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 04:48:31.605756 containerd[1641]: 2025-11-05 04:48:31.553 [INFO][4262] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 04:48:31.605756 containerd[1641]: 2025-11-05 04:48:31.561 [INFO][4262] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2ca2edafedd837918c13a51adf6b302abdbd40d509f7b966791d9feec67434f6" host="localhost" Nov 5 04:48:31.605756 containerd[1641]: 2025-11-05 04:48:31.566 [INFO][4262] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 04:48:31.605756 containerd[1641]: 2025-11-05 04:48:31.571 [INFO][4262] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 04:48:31.605756 containerd[1641]: 2025-11-05 04:48:31.573 [INFO][4262] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 04:48:31.605756 containerd[1641]: 2025-11-05 04:48:31.574 [INFO][4262] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 04:48:31.605756 containerd[1641]: 2025-11-05 04:48:31.574 [INFO][4262] ipam/ipam.go 1219: Attempting to assign 1 addresses from block 
block=192.168.88.128/26 handle="k8s-pod-network.2ca2edafedd837918c13a51adf6b302abdbd40d509f7b966791d9feec67434f6" host="localhost" Nov 5 04:48:31.605966 containerd[1641]: 2025-11-05 04:48:31.576 [INFO][4262] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2ca2edafedd837918c13a51adf6b302abdbd40d509f7b966791d9feec67434f6 Nov 5 04:48:31.605966 containerd[1641]: 2025-11-05 04:48:31.580 [INFO][4262] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2ca2edafedd837918c13a51adf6b302abdbd40d509f7b966791d9feec67434f6" host="localhost" Nov 5 04:48:31.605966 containerd[1641]: 2025-11-05 04:48:31.584 [INFO][4262] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.2ca2edafedd837918c13a51adf6b302abdbd40d509f7b966791d9feec67434f6" host="localhost" Nov 5 04:48:31.605966 containerd[1641]: 2025-11-05 04:48:31.584 [INFO][4262] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.2ca2edafedd837918c13a51adf6b302abdbd40d509f7b966791d9feec67434f6" host="localhost" Nov 5 04:48:31.605966 containerd[1641]: 2025-11-05 04:48:31.585 [INFO][4262] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 04:48:31.605966 containerd[1641]: 2025-11-05 04:48:31.585 [INFO][4262] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="2ca2edafedd837918c13a51adf6b302abdbd40d509f7b966791d9feec67434f6" HandleID="k8s-pod-network.2ca2edafedd837918c13a51adf6b302abdbd40d509f7b966791d9feec67434f6" Workload="localhost-k8s-calico--apiserver--7cd54dc478--f2jvn-eth0" Nov 5 04:48:31.606113 containerd[1641]: 2025-11-05 04:48:31.588 [INFO][4225] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ca2edafedd837918c13a51adf6b302abdbd40d509f7b966791d9feec67434f6" Namespace="calico-apiserver" Pod="calico-apiserver-7cd54dc478-f2jvn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd54dc478--f2jvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd54dc478--f2jvn-eth0", GenerateName:"calico-apiserver-7cd54dc478-", Namespace:"calico-apiserver", SelfLink:"", UID:"d81093e5-51dd-4f5e-ba7d-dad72d581a2a", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 48, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd54dc478", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cd54dc478-f2jvn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali15899d9e8af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:48:31.606172 containerd[1641]: 2025-11-05 04:48:31.588 [INFO][4225] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="2ca2edafedd837918c13a51adf6b302abdbd40d509f7b966791d9feec67434f6" Namespace="calico-apiserver" Pod="calico-apiserver-7cd54dc478-f2jvn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd54dc478--f2jvn-eth0" Nov 5 04:48:31.606172 containerd[1641]: 2025-11-05 04:48:31.589 [INFO][4225] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali15899d9e8af ContainerID="2ca2edafedd837918c13a51adf6b302abdbd40d509f7b966791d9feec67434f6" Namespace="calico-apiserver" Pod="calico-apiserver-7cd54dc478-f2jvn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd54dc478--f2jvn-eth0" Nov 5 04:48:31.606172 containerd[1641]: 2025-11-05 04:48:31.591 [INFO][4225] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ca2edafedd837918c13a51adf6b302abdbd40d509f7b966791d9feec67434f6" Namespace="calico-apiserver" Pod="calico-apiserver-7cd54dc478-f2jvn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd54dc478--f2jvn-eth0" Nov 5 04:48:31.606233 containerd[1641]: 2025-11-05 04:48:31.591 [INFO][4225] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2ca2edafedd837918c13a51adf6b302abdbd40d509f7b966791d9feec67434f6" Namespace="calico-apiserver" Pod="calico-apiserver-7cd54dc478-f2jvn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd54dc478--f2jvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd54dc478--f2jvn-eth0", GenerateName:"calico-apiserver-7cd54dc478-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"d81093e5-51dd-4f5e-ba7d-dad72d581a2a", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 48, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd54dc478", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ca2edafedd837918c13a51adf6b302abdbd40d509f7b966791d9feec67434f6", Pod:"calico-apiserver-7cd54dc478-f2jvn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali15899d9e8af", MAC:"1e:40:18:b0:00:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:48:31.606286 containerd[1641]: 2025-11-05 04:48:31.601 [INFO][4225] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ca2edafedd837918c13a51adf6b302abdbd40d509f7b966791d9feec67434f6" Namespace="calico-apiserver" Pod="calico-apiserver-7cd54dc478-f2jvn" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd54dc478--f2jvn-eth0" Nov 5 04:48:31.627182 containerd[1641]: time="2025-11-05T04:48:31.627119047Z" level=info msg="connecting to shim 2ca2edafedd837918c13a51adf6b302abdbd40d509f7b966791d9feec67434f6" address="unix:///run/containerd/s/b5aa980c63a0597584135c154551f04438ae45b71aa8e05ae4d832962fe4fbdf" namespace=k8s.io protocol=ttrpc 
version=3 Nov 5 04:48:31.654124 systemd[1]: Started cri-containerd-2ca2edafedd837918c13a51adf6b302abdbd40d509f7b966791d9feec67434f6.scope - libcontainer container 2ca2edafedd837918c13a51adf6b302abdbd40d509f7b966791d9feec67434f6. Nov 5 04:48:31.668093 systemd-resolved[1298]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 04:48:31.710405 systemd-networkd[1539]: cali06532227b6d: Link UP Nov 5 04:48:31.711147 systemd-networkd[1539]: cali06532227b6d: Gained carrier Nov 5 04:48:31.716991 containerd[1641]: time="2025-11-05T04:48:31.716925213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd54dc478-f2jvn,Uid:d81093e5-51dd-4f5e-ba7d-dad72d581a2a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2ca2edafedd837918c13a51adf6b302abdbd40d509f7b966791d9feec67434f6\"" Nov 5 04:48:31.720072 containerd[1641]: time="2025-11-05T04:48:31.719912651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 04:48:31.723098 containerd[1641]: 2025-11-05 04:48:31.512 [INFO][4235] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 04:48:31.723098 containerd[1641]: 2025-11-05 04:48:31.524 [INFO][4235] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cd54dc478--5rvkg-eth0 calico-apiserver-7cd54dc478- calico-apiserver 63b223a4-aa0e-4b7a-9e9b-ebfedd74f920 843 0 2025-11-05 04:48:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cd54dc478 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cd54dc478-5rvkg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali06532227b6d [] [] }} 
ContainerID="5ca8cda6fd5b2a2816d70e244e6758159a1e42416baad5ce3f3e3d7ffae21de4" Namespace="calico-apiserver" Pod="calico-apiserver-7cd54dc478-5rvkg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd54dc478--5rvkg-" Nov 5 04:48:31.723098 containerd[1641]: 2025-11-05 04:48:31.524 [INFO][4235] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5ca8cda6fd5b2a2816d70e244e6758159a1e42416baad5ce3f3e3d7ffae21de4" Namespace="calico-apiserver" Pod="calico-apiserver-7cd54dc478-5rvkg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd54dc478--5rvkg-eth0" Nov 5 04:48:31.723098 containerd[1641]: 2025-11-05 04:48:31.556 [INFO][4268] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ca8cda6fd5b2a2816d70e244e6758159a1e42416baad5ce3f3e3d7ffae21de4" HandleID="k8s-pod-network.5ca8cda6fd5b2a2816d70e244e6758159a1e42416baad5ce3f3e3d7ffae21de4" Workload="localhost-k8s-calico--apiserver--7cd54dc478--5rvkg-eth0" Nov 5 04:48:31.723338 containerd[1641]: 2025-11-05 04:48:31.556 [INFO][4268] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5ca8cda6fd5b2a2816d70e244e6758159a1e42416baad5ce3f3e3d7ffae21de4" HandleID="k8s-pod-network.5ca8cda6fd5b2a2816d70e244e6758159a1e42416baad5ce3f3e3d7ffae21de4" Workload="localhost-k8s-calico--apiserver--7cd54dc478--5rvkg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cd54dc478-5rvkg", "timestamp":"2025-11-05 04:48:31.556242799 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 04:48:31.723338 containerd[1641]: 2025-11-05 04:48:31.556 [INFO][4268] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 5 04:48:31.723338 containerd[1641]: 2025-11-05 04:48:31.585 [INFO][4268] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 04:48:31.723338 containerd[1641]: 2025-11-05 04:48:31.585 [INFO][4268] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 04:48:31.723338 containerd[1641]: 2025-11-05 04:48:31.662 [INFO][4268] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5ca8cda6fd5b2a2816d70e244e6758159a1e42416baad5ce3f3e3d7ffae21de4" host="localhost" Nov 5 04:48:31.723338 containerd[1641]: 2025-11-05 04:48:31.674 [INFO][4268] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 04:48:31.723338 containerd[1641]: 2025-11-05 04:48:31.679 [INFO][4268] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 04:48:31.723338 containerd[1641]: 2025-11-05 04:48:31.681 [INFO][4268] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 04:48:31.723338 containerd[1641]: 2025-11-05 04:48:31.683 [INFO][4268] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 04:48:31.723338 containerd[1641]: 2025-11-05 04:48:31.683 [INFO][4268] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5ca8cda6fd5b2a2816d70e244e6758159a1e42416baad5ce3f3e3d7ffae21de4" host="localhost" Nov 5 04:48:31.723605 containerd[1641]: 2025-11-05 04:48:31.685 [INFO][4268] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5ca8cda6fd5b2a2816d70e244e6758159a1e42416baad5ce3f3e3d7ffae21de4 Nov 5 04:48:31.723605 containerd[1641]: 2025-11-05 04:48:31.691 [INFO][4268] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5ca8cda6fd5b2a2816d70e244e6758159a1e42416baad5ce3f3e3d7ffae21de4" host="localhost" Nov 5 04:48:31.723605 containerd[1641]: 2025-11-05 04:48:31.697 [INFO][4268] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.5ca8cda6fd5b2a2816d70e244e6758159a1e42416baad5ce3f3e3d7ffae21de4" host="localhost" Nov 5 04:48:31.723605 containerd[1641]: 2025-11-05 04:48:31.698 [INFO][4268] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.5ca8cda6fd5b2a2816d70e244e6758159a1e42416baad5ce3f3e3d7ffae21de4" host="localhost" Nov 5 04:48:31.723605 containerd[1641]: 2025-11-05 04:48:31.698 [INFO][4268] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 04:48:31.723605 containerd[1641]: 2025-11-05 04:48:31.698 [INFO][4268] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="5ca8cda6fd5b2a2816d70e244e6758159a1e42416baad5ce3f3e3d7ffae21de4" HandleID="k8s-pod-network.5ca8cda6fd5b2a2816d70e244e6758159a1e42416baad5ce3f3e3d7ffae21de4" Workload="localhost-k8s-calico--apiserver--7cd54dc478--5rvkg-eth0" Nov 5 04:48:31.723723 containerd[1641]: 2025-11-05 04:48:31.704 [INFO][4235] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5ca8cda6fd5b2a2816d70e244e6758159a1e42416baad5ce3f3e3d7ffae21de4" Namespace="calico-apiserver" Pod="calico-apiserver-7cd54dc478-5rvkg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd54dc478--5rvkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd54dc478--5rvkg-eth0", GenerateName:"calico-apiserver-7cd54dc478-", Namespace:"calico-apiserver", SelfLink:"", UID:"63b223a4-aa0e-4b7a-9e9b-ebfedd74f920", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 48, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd54dc478", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cd54dc478-5rvkg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06532227b6d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:48:31.723773 containerd[1641]: 2025-11-05 04:48:31.704 [INFO][4235] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="5ca8cda6fd5b2a2816d70e244e6758159a1e42416baad5ce3f3e3d7ffae21de4" Namespace="calico-apiserver" Pod="calico-apiserver-7cd54dc478-5rvkg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd54dc478--5rvkg-eth0" Nov 5 04:48:31.723773 containerd[1641]: 2025-11-05 04:48:31.704 [INFO][4235] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06532227b6d ContainerID="5ca8cda6fd5b2a2816d70e244e6758159a1e42416baad5ce3f3e3d7ffae21de4" Namespace="calico-apiserver" Pod="calico-apiserver-7cd54dc478-5rvkg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd54dc478--5rvkg-eth0" Nov 5 04:48:31.723773 containerd[1641]: 2025-11-05 04:48:31.710 [INFO][4235] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ca8cda6fd5b2a2816d70e244e6758159a1e42416baad5ce3f3e3d7ffae21de4" Namespace="calico-apiserver" Pod="calico-apiserver-7cd54dc478-5rvkg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd54dc478--5rvkg-eth0" Nov 5 04:48:31.723844 containerd[1641]: 2025-11-05 
04:48:31.710 [INFO][4235] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5ca8cda6fd5b2a2816d70e244e6758159a1e42416baad5ce3f3e3d7ffae21de4" Namespace="calico-apiserver" Pod="calico-apiserver-7cd54dc478-5rvkg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd54dc478--5rvkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd54dc478--5rvkg-eth0", GenerateName:"calico-apiserver-7cd54dc478-", Namespace:"calico-apiserver", SelfLink:"", UID:"63b223a4-aa0e-4b7a-9e9b-ebfedd74f920", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 48, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd54dc478", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5ca8cda6fd5b2a2816d70e244e6758159a1e42416baad5ce3f3e3d7ffae21de4", Pod:"calico-apiserver-7cd54dc478-5rvkg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06532227b6d", MAC:"ae:0f:09:14:bb:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:48:31.723890 containerd[1641]: 2025-11-05 04:48:31.718 [INFO][4235] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5ca8cda6fd5b2a2816d70e244e6758159a1e42416baad5ce3f3e3d7ffae21de4" Namespace="calico-apiserver" Pod="calico-apiserver-7cd54dc478-5rvkg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd54dc478--5rvkg-eth0" Nov 5 04:48:31.744261 containerd[1641]: time="2025-11-05T04:48:31.744175240Z" level=info msg="connecting to shim 5ca8cda6fd5b2a2816d70e244e6758159a1e42416baad5ce3f3e3d7ffae21de4" address="unix:///run/containerd/s/cb07b6552b0bc9663a3d8ad49453bbc262beaf14216ddfa6fdcfc37b414409b4" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:48:31.768286 systemd[1]: Started cri-containerd-5ca8cda6fd5b2a2816d70e244e6758159a1e42416baad5ce3f3e3d7ffae21de4.scope - libcontainer container 5ca8cda6fd5b2a2816d70e244e6758159a1e42416baad5ce3f3e3d7ffae21de4. Nov 5 04:48:31.784962 systemd-resolved[1298]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 04:48:31.817267 systemd-networkd[1539]: cali79486d8e6fb: Link UP Nov 5 04:48:31.819172 systemd-networkd[1539]: cali79486d8e6fb: Gained carrier Nov 5 04:48:31.845006 containerd[1641]: 2025-11-05 04:48:31.511 [INFO][4218] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 04:48:31.845006 containerd[1641]: 2025-11-05 04:48:31.523 [INFO][4218] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--mt92c-eth0 coredns-674b8bbfcf- kube-system 29ed970c-f253-488e-9357-ef5dd319f30f 846 0 2025-11-05 04:47:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-mt92c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali79486d8e6fb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07" Namespace="kube-system" Pod="coredns-674b8bbfcf-mt92c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mt92c-" Nov 5 04:48:31.845006 containerd[1641]: 2025-11-05 04:48:31.523 [INFO][4218] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07" Namespace="kube-system" Pod="coredns-674b8bbfcf-mt92c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mt92c-eth0" Nov 5 04:48:31.845006 containerd[1641]: 2025-11-05 04:48:31.563 [INFO][4275] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07" HandleID="k8s-pod-network.d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07" Workload="localhost-k8s-coredns--674b8bbfcf--mt92c-eth0" Nov 5 04:48:31.845267 containerd[1641]: 2025-11-05 04:48:31.564 [INFO][4275] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07" HandleID="k8s-pod-network.d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07" Workload="localhost-k8s-coredns--674b8bbfcf--mt92c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf1b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-mt92c", "timestamp":"2025-11-05 04:48:31.563587828 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 04:48:31.845267 containerd[1641]: 2025-11-05 04:48:31.564 [INFO][4275] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 04:48:31.845267 containerd[1641]: 2025-11-05 04:48:31.698 [INFO][4275] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 04:48:31.845267 containerd[1641]: 2025-11-05 04:48:31.698 [INFO][4275] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 04:48:31.845267 containerd[1641]: 2025-11-05 04:48:31.762 [INFO][4275] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07" host="localhost" Nov 5 04:48:31.845267 containerd[1641]: 2025-11-05 04:48:31.776 [INFO][4275] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 04:48:31.845267 containerd[1641]: 2025-11-05 04:48:31.781 [INFO][4275] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 04:48:31.845267 containerd[1641]: 2025-11-05 04:48:31.783 [INFO][4275] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 04:48:31.845267 containerd[1641]: 2025-11-05 04:48:31.786 [INFO][4275] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 04:48:31.845267 containerd[1641]: 2025-11-05 04:48:31.786 [INFO][4275] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07" host="localhost" Nov 5 04:48:31.845500 containerd[1641]: 2025-11-05 04:48:31.788 [INFO][4275] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07 Nov 5 04:48:31.845500 containerd[1641]: 2025-11-05 04:48:31.794 [INFO][4275] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07" host="localhost" Nov 5 04:48:31.845500 containerd[1641]: 2025-11-05 04:48:31.807 [INFO][4275] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07" host="localhost" Nov 5 04:48:31.845500 containerd[1641]: 2025-11-05 04:48:31.807 [INFO][4275] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07" host="localhost" Nov 5 04:48:31.845500 containerd[1641]: 2025-11-05 04:48:31.807 [INFO][4275] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 04:48:31.845500 containerd[1641]: 2025-11-05 04:48:31.807 [INFO][4275] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07" HandleID="k8s-pod-network.d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07" Workload="localhost-k8s-coredns--674b8bbfcf--mt92c-eth0" Nov 5 04:48:31.845624 containerd[1641]: 2025-11-05 04:48:31.812 [INFO][4218] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07" Namespace="kube-system" Pod="coredns-674b8bbfcf-mt92c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mt92c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mt92c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"29ed970c-f253-488e-9357-ef5dd319f30f", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 47, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-mt92c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79486d8e6fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:48:31.845683 containerd[1641]: 2025-11-05 04:48:31.812 [INFO][4218] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07" Namespace="kube-system" Pod="coredns-674b8bbfcf-mt92c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mt92c-eth0" Nov 5 04:48:31.845683 containerd[1641]: 2025-11-05 04:48:31.812 [INFO][4218] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79486d8e6fb ContainerID="d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07" Namespace="kube-system" Pod="coredns-674b8bbfcf-mt92c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mt92c-eth0" Nov 5 04:48:31.845683 containerd[1641]: 2025-11-05 04:48:31.820 [INFO][4218] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07" Namespace="kube-system" Pod="coredns-674b8bbfcf-mt92c" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mt92c-eth0" Nov 5 04:48:31.845746 containerd[1641]: 2025-11-05 04:48:31.822 [INFO][4218] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07" Namespace="kube-system" Pod="coredns-674b8bbfcf-mt92c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mt92c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mt92c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"29ed970c-f253-488e-9357-ef5dd319f30f", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 47, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07", Pod:"coredns-674b8bbfcf-mt92c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79486d8e6fb", MAC:"46:6f:a9:72:8c:c2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:48:31.845746 containerd[1641]: 2025-11-05 04:48:31.834 [INFO][4218] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07" Namespace="kube-system" Pod="coredns-674b8bbfcf-mt92c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mt92c-eth0" Nov 5 04:48:31.862884 containerd[1641]: time="2025-11-05T04:48:31.862827129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd54dc478-5rvkg,Uid:63b223a4-aa0e-4b7a-9e9b-ebfedd74f920,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5ca8cda6fd5b2a2816d70e244e6758159a1e42416baad5ce3f3e3d7ffae21de4\"" Nov 5 04:48:31.886719 containerd[1641]: time="2025-11-05T04:48:31.885377723Z" level=info msg="connecting to shim d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07" address="unix:///run/containerd/s/a43bdccf01d5b789ff53856304041037a10a35d431d61bc900ef17f2ca2cdcbb" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:48:31.928262 systemd[1]: Started cri-containerd-d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07.scope - libcontainer container d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07. 
Nov 5 04:48:31.942917 systemd-resolved[1298]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 04:48:31.975094 containerd[1641]: time="2025-11-05T04:48:31.975039187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mt92c,Uid:29ed970c-f253-488e-9357-ef5dd319f30f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07\"" Nov 5 04:48:31.975963 kubelet[2811]: E1105 04:48:31.975931 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:48:31.979923 containerd[1641]: time="2025-11-05T04:48:31.979880306Z" level=info msg="CreateContainer within sandbox \"d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 04:48:32.000416 containerd[1641]: time="2025-11-05T04:48:32.000289099Z" level=info msg="Container d85f05341b07e4a20d9d0ac31f4c8f66b06ac41340071757404741d452015f81: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:48:32.006278 containerd[1641]: time="2025-11-05T04:48:32.006223960Z" level=info msg="CreateContainer within sandbox \"d203914c099d46f01be77ae11dc1f5e1d381f14265a059bf32522e93c86c2f07\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d85f05341b07e4a20d9d0ac31f4c8f66b06ac41340071757404741d452015f81\"" Nov 5 04:48:32.006904 containerd[1641]: time="2025-11-05T04:48:32.006870744Z" level=info msg="StartContainer for \"d85f05341b07e4a20d9d0ac31f4c8f66b06ac41340071757404741d452015f81\"" Nov 5 04:48:32.007905 containerd[1641]: time="2025-11-05T04:48:32.007883566Z" level=info msg="connecting to shim d85f05341b07e4a20d9d0ac31f4c8f66b06ac41340071757404741d452015f81" address="unix:///run/containerd/s/a43bdccf01d5b789ff53856304041037a10a35d431d61bc900ef17f2ca2cdcbb" protocol=ttrpc version=3 Nov 5 
04:48:32.033202 systemd[1]: Started cri-containerd-d85f05341b07e4a20d9d0ac31f4c8f66b06ac41340071757404741d452015f81.scope - libcontainer container d85f05341b07e4a20d9d0ac31f4c8f66b06ac41340071757404741d452015f81. Nov 5 04:48:32.083013 containerd[1641]: time="2025-11-05T04:48:32.082953046Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:48:32.102783 containerd[1641]: time="2025-11-05T04:48:32.102736665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 04:48:32.103032 containerd[1641]: time="2025-11-05T04:48:32.102812427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 5 04:48:32.104137 containerd[1641]: time="2025-11-05T04:48:32.103611908Z" level=info msg="StartContainer for \"d85f05341b07e4a20d9d0ac31f4c8f66b06ac41340071757404741d452015f81\" returns successfully" Nov 5 04:48:32.104137 containerd[1641]: time="2025-11-05T04:48:32.103885892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 04:48:32.104304 kubelet[2811]: E1105 04:48:32.103608 2811 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:48:32.104304 kubelet[2811]: E1105 04:48:32.103673 2811 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:48:32.104304 
kubelet[2811]: E1105 04:48:32.103934 2811 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lvkcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cd54dc478-f2jvn_calico-apiserver(d81093e5-51dd-4f5e-ba7d-dad72d581a2a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 04:48:32.105391 kubelet[2811]: E1105 04:48:32.105350 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cd54dc478-f2jvn" podUID="d81093e5-51dd-4f5e-ba7d-dad72d581a2a" Nov 5 04:48:32.108429 kubelet[2811]: E1105 04:48:32.108377 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cd54dc478-f2jvn" podUID="d81093e5-51dd-4f5e-ba7d-dad72d581a2a" Nov 5 04:48:32.109137 kubelet[2811]: E1105 04:48:32.109083 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7cb7b8d459-6mst6" podUID="3f6a71f3-a4bf-43c3-9766-dfdeb3e1d903" Nov 5 04:48:32.521849 containerd[1641]: time="2025-11-05T04:48:32.521796536Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:48:32.523055 containerd[1641]: time="2025-11-05T04:48:32.523014122Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 04:48:32.523155 containerd[1641]: time="2025-11-05T04:48:32.523028669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 5 04:48:32.523295 kubelet[2811]: E1105 04:48:32.523241 2811 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:48:32.523652 kubelet[2811]: E1105 04:48:32.523304 2811 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:48:32.523652 kubelet[2811]: E1105 04:48:32.523458 2811 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jk4n5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cd54dc478-5rvkg_calico-apiserver(63b223a4-aa0e-4b7a-9e9b-ebfedd74f920): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 04:48:32.524689 kubelet[2811]: E1105 04:48:32.524636 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cd54dc478-5rvkg" podUID="63b223a4-aa0e-4b7a-9e9b-ebfedd74f920" Nov 5 04:48:32.528060 containerd[1641]: time="2025-11-05T04:48:32.528006323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dkcnl,Uid:84d99f8c-4e0f-4dac-8f92-d3c8b82ac971,Namespace:calico-system,Attempt:0,}" Nov 5 04:48:32.528317 containerd[1641]: time="2025-11-05T04:48:32.528267032Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ms88g,Uid:c8a4b2e2-09ee-4112-a2df-31acb1eaedf9,Namespace:calico-system,Attempt:0,}" Nov 5 04:48:32.643518 systemd-networkd[1539]: cali6292d77ac19: Link UP Nov 5 04:48:32.644745 systemd-networkd[1539]: cali6292d77ac19: Gained carrier Nov 5 04:48:32.659640 containerd[1641]: 2025-11-05 04:48:32.563 [INFO][4517] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 04:48:32.659640 containerd[1641]: 2025-11-05 04:48:32.577 [INFO][4517] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--ms88g-eth0 goldmane-666569f655- calico-system c8a4b2e2-09ee-4112-a2df-31acb1eaedf9 840 0 2025-11-05 04:48:06 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-ms88g eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6292d77ac19 [] [] }} ContainerID="3eaab4119bbfe5722a71e5f70024f0ac56eb287c427e8096fdbd1ef80789770d" Namespace="calico-system" Pod="goldmane-666569f655-ms88g" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ms88g-" Nov 5 04:48:32.659640 containerd[1641]: 2025-11-05 04:48:32.577 [INFO][4517] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3eaab4119bbfe5722a71e5f70024f0ac56eb287c427e8096fdbd1ef80789770d" Namespace="calico-system" Pod="goldmane-666569f655-ms88g" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ms88g-eth0" Nov 5 04:48:32.659640 containerd[1641]: 2025-11-05 04:48:32.606 [INFO][4533] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3eaab4119bbfe5722a71e5f70024f0ac56eb287c427e8096fdbd1ef80789770d" HandleID="k8s-pod-network.3eaab4119bbfe5722a71e5f70024f0ac56eb287c427e8096fdbd1ef80789770d" 
Workload="localhost-k8s-goldmane--666569f655--ms88g-eth0" Nov 5 04:48:32.659640 containerd[1641]: 2025-11-05 04:48:32.606 [INFO][4533] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3eaab4119bbfe5722a71e5f70024f0ac56eb287c427e8096fdbd1ef80789770d" HandleID="k8s-pod-network.3eaab4119bbfe5722a71e5f70024f0ac56eb287c427e8096fdbd1ef80789770d" Workload="localhost-k8s-goldmane--666569f655--ms88g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002de150), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-ms88g", "timestamp":"2025-11-05 04:48:32.606311177 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 04:48:32.659640 containerd[1641]: 2025-11-05 04:48:32.606 [INFO][4533] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 04:48:32.659640 containerd[1641]: 2025-11-05 04:48:32.606 [INFO][4533] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 04:48:32.659640 containerd[1641]: 2025-11-05 04:48:32.606 [INFO][4533] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 04:48:32.659640 containerd[1641]: 2025-11-05 04:48:32.613 [INFO][4533] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3eaab4119bbfe5722a71e5f70024f0ac56eb287c427e8096fdbd1ef80789770d" host="localhost" Nov 5 04:48:32.659640 containerd[1641]: 2025-11-05 04:48:32.617 [INFO][4533] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 04:48:32.659640 containerd[1641]: 2025-11-05 04:48:32.621 [INFO][4533] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 04:48:32.659640 containerd[1641]: 2025-11-05 04:48:32.623 [INFO][4533] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 04:48:32.659640 containerd[1641]: 2025-11-05 04:48:32.625 [INFO][4533] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 04:48:32.659640 containerd[1641]: 2025-11-05 04:48:32.625 [INFO][4533] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3eaab4119bbfe5722a71e5f70024f0ac56eb287c427e8096fdbd1ef80789770d" host="localhost" Nov 5 04:48:32.659640 containerd[1641]: 2025-11-05 04:48:32.626 [INFO][4533] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3eaab4119bbfe5722a71e5f70024f0ac56eb287c427e8096fdbd1ef80789770d Nov 5 04:48:32.659640 containerd[1641]: 2025-11-05 04:48:32.630 [INFO][4533] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3eaab4119bbfe5722a71e5f70024f0ac56eb287c427e8096fdbd1ef80789770d" host="localhost" Nov 5 04:48:32.659640 containerd[1641]: 2025-11-05 04:48:32.636 [INFO][4533] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.3eaab4119bbfe5722a71e5f70024f0ac56eb287c427e8096fdbd1ef80789770d" host="localhost" Nov 5 04:48:32.659640 containerd[1641]: 2025-11-05 04:48:32.636 [INFO][4533] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3eaab4119bbfe5722a71e5f70024f0ac56eb287c427e8096fdbd1ef80789770d" host="localhost" Nov 5 04:48:32.659640 containerd[1641]: 2025-11-05 04:48:32.637 [INFO][4533] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 04:48:32.659640 containerd[1641]: 2025-11-05 04:48:32.637 [INFO][4533] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3eaab4119bbfe5722a71e5f70024f0ac56eb287c427e8096fdbd1ef80789770d" HandleID="k8s-pod-network.3eaab4119bbfe5722a71e5f70024f0ac56eb287c427e8096fdbd1ef80789770d" Workload="localhost-k8s-goldmane--666569f655--ms88g-eth0" Nov 5 04:48:32.660613 containerd[1641]: 2025-11-05 04:48:32.640 [INFO][4517] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3eaab4119bbfe5722a71e5f70024f0ac56eb287c427e8096fdbd1ef80789770d" Namespace="calico-system" Pod="goldmane-666569f655-ms88g" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ms88g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--ms88g-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c8a4b2e2-09ee-4112-a2df-31acb1eaedf9", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 48, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-ms88g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6292d77ac19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:48:32.660613 containerd[1641]: 2025-11-05 04:48:32.640 [INFO][4517] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3eaab4119bbfe5722a71e5f70024f0ac56eb287c427e8096fdbd1ef80789770d" Namespace="calico-system" Pod="goldmane-666569f655-ms88g" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ms88g-eth0" Nov 5 04:48:32.660613 containerd[1641]: 2025-11-05 04:48:32.640 [INFO][4517] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6292d77ac19 ContainerID="3eaab4119bbfe5722a71e5f70024f0ac56eb287c427e8096fdbd1ef80789770d" Namespace="calico-system" Pod="goldmane-666569f655-ms88g" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ms88g-eth0" Nov 5 04:48:32.660613 containerd[1641]: 2025-11-05 04:48:32.644 [INFO][4517] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3eaab4119bbfe5722a71e5f70024f0ac56eb287c427e8096fdbd1ef80789770d" Namespace="calico-system" Pod="goldmane-666569f655-ms88g" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ms88g-eth0" Nov 5 04:48:32.660613 containerd[1641]: 2025-11-05 04:48:32.645 [INFO][4517] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3eaab4119bbfe5722a71e5f70024f0ac56eb287c427e8096fdbd1ef80789770d" Namespace="calico-system" Pod="goldmane-666569f655-ms88g" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ms88g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--ms88g-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c8a4b2e2-09ee-4112-a2df-31acb1eaedf9", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 48, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3eaab4119bbfe5722a71e5f70024f0ac56eb287c427e8096fdbd1ef80789770d", Pod:"goldmane-666569f655-ms88g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6292d77ac19", MAC:"0e:e5:68:d1:b2:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:48:32.660613 containerd[1641]: 2025-11-05 04:48:32.655 [INFO][4517] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3eaab4119bbfe5722a71e5f70024f0ac56eb287c427e8096fdbd1ef80789770d" Namespace="calico-system" Pod="goldmane-666569f655-ms88g" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ms88g-eth0" Nov 5 04:48:32.684720 containerd[1641]: time="2025-11-05T04:48:32.684656367Z" level=info msg="connecting to shim 
3eaab4119bbfe5722a71e5f70024f0ac56eb287c427e8096fdbd1ef80789770d" address="unix:///run/containerd/s/9a54a5115855b72d9f1c131e52263f330bb001445008bb0bb40cd98d62f46490" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:48:32.710106 systemd[1]: Started cri-containerd-3eaab4119bbfe5722a71e5f70024f0ac56eb287c427e8096fdbd1ef80789770d.scope - libcontainer container 3eaab4119bbfe5722a71e5f70024f0ac56eb287c427e8096fdbd1ef80789770d. Nov 5 04:48:32.723770 systemd-resolved[1298]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 04:48:32.764007 containerd[1641]: time="2025-11-05T04:48:32.763879715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ms88g,Uid:c8a4b2e2-09ee-4112-a2df-31acb1eaedf9,Namespace:calico-system,Attempt:0,} returns sandbox id \"3eaab4119bbfe5722a71e5f70024f0ac56eb287c427e8096fdbd1ef80789770d\"" Nov 5 04:48:32.765657 containerd[1641]: time="2025-11-05T04:48:32.765633618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 04:48:32.767508 systemd-networkd[1539]: cali8373830dc09: Link UP Nov 5 04:48:32.770006 systemd-networkd[1539]: cali8373830dc09: Gained carrier Nov 5 04:48:32.784654 containerd[1641]: 2025-11-05 04:48:32.565 [INFO][4506] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 04:48:32.784654 containerd[1641]: 2025-11-05 04:48:32.578 [INFO][4506] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--dkcnl-eth0 csi-node-driver- calico-system 84d99f8c-4e0f-4dac-8f92-d3c8b82ac971 732 0 2025-11-05 04:48:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-dkcnl 
eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8373830dc09 [] [] }} ContainerID="db81e553ca156c04c1637278f72f73976c4cbf99e5aa886ac170a1905f367da8" Namespace="calico-system" Pod="csi-node-driver-dkcnl" WorkloadEndpoint="localhost-k8s-csi--node--driver--dkcnl-" Nov 5 04:48:32.784654 containerd[1641]: 2025-11-05 04:48:32.578 [INFO][4506] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="db81e553ca156c04c1637278f72f73976c4cbf99e5aa886ac170a1905f367da8" Namespace="calico-system" Pod="csi-node-driver-dkcnl" WorkloadEndpoint="localhost-k8s-csi--node--driver--dkcnl-eth0" Nov 5 04:48:32.784654 containerd[1641]: 2025-11-05 04:48:32.606 [INFO][4535] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db81e553ca156c04c1637278f72f73976c4cbf99e5aa886ac170a1905f367da8" HandleID="k8s-pod-network.db81e553ca156c04c1637278f72f73976c4cbf99e5aa886ac170a1905f367da8" Workload="localhost-k8s-csi--node--driver--dkcnl-eth0" Nov 5 04:48:32.784654 containerd[1641]: 2025-11-05 04:48:32.606 [INFO][4535] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="db81e553ca156c04c1637278f72f73976c4cbf99e5aa886ac170a1905f367da8" HandleID="k8s-pod-network.db81e553ca156c04c1637278f72f73976c4cbf99e5aa886ac170a1905f367da8" Workload="localhost-k8s-csi--node--driver--dkcnl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a2dd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-dkcnl", "timestamp":"2025-11-05 04:48:32.606469023 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 04:48:32.784654 containerd[1641]: 2025-11-05 04:48:32.606 [INFO][4535] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 5 04:48:32.784654 containerd[1641]: 2025-11-05 04:48:32.637 [INFO][4535] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 04:48:32.784654 containerd[1641]: 2025-11-05 04:48:32.637 [INFO][4535] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 04:48:32.784654 containerd[1641]: 2025-11-05 04:48:32.731 [INFO][4535] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.db81e553ca156c04c1637278f72f73976c4cbf99e5aa886ac170a1905f367da8" host="localhost" Nov 5 04:48:32.784654 containerd[1641]: 2025-11-05 04:48:32.738 [INFO][4535] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 04:48:32.784654 containerd[1641]: 2025-11-05 04:48:32.743 [INFO][4535] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 04:48:32.784654 containerd[1641]: 2025-11-05 04:48:32.745 [INFO][4535] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 04:48:32.784654 containerd[1641]: 2025-11-05 04:48:32.748 [INFO][4535] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 04:48:32.784654 containerd[1641]: 2025-11-05 04:48:32.748 [INFO][4535] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.db81e553ca156c04c1637278f72f73976c4cbf99e5aa886ac170a1905f367da8" host="localhost" Nov 5 04:48:32.784654 containerd[1641]: 2025-11-05 04:48:32.750 [INFO][4535] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.db81e553ca156c04c1637278f72f73976c4cbf99e5aa886ac170a1905f367da8 Nov 5 04:48:32.784654 containerd[1641]: 2025-11-05 04:48:32.754 [INFO][4535] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.db81e553ca156c04c1637278f72f73976c4cbf99e5aa886ac170a1905f367da8" host="localhost" Nov 5 04:48:32.784654 containerd[1641]: 2025-11-05 04:48:32.760 [INFO][4535] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.db81e553ca156c04c1637278f72f73976c4cbf99e5aa886ac170a1905f367da8" host="localhost" Nov 5 04:48:32.784654 containerd[1641]: 2025-11-05 04:48:32.760 [INFO][4535] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.db81e553ca156c04c1637278f72f73976c4cbf99e5aa886ac170a1905f367da8" host="localhost" Nov 5 04:48:32.784654 containerd[1641]: 2025-11-05 04:48:32.760 [INFO][4535] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 04:48:32.784654 containerd[1641]: 2025-11-05 04:48:32.760 [INFO][4535] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="db81e553ca156c04c1637278f72f73976c4cbf99e5aa886ac170a1905f367da8" HandleID="k8s-pod-network.db81e553ca156c04c1637278f72f73976c4cbf99e5aa886ac170a1905f367da8" Workload="localhost-k8s-csi--node--driver--dkcnl-eth0" Nov 5 04:48:32.785668 containerd[1641]: 2025-11-05 04:48:32.764 [INFO][4506] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db81e553ca156c04c1637278f72f73976c4cbf99e5aa886ac170a1905f367da8" Namespace="calico-system" Pod="csi-node-driver-dkcnl" WorkloadEndpoint="localhost-k8s-csi--node--driver--dkcnl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dkcnl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"84d99f8c-4e0f-4dac-8f92-d3c8b82ac971", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-dkcnl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8373830dc09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:48:32.785668 containerd[1641]: 2025-11-05 04:48:32.764 [INFO][4506] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="db81e553ca156c04c1637278f72f73976c4cbf99e5aa886ac170a1905f367da8" Namespace="calico-system" Pod="csi-node-driver-dkcnl" WorkloadEndpoint="localhost-k8s-csi--node--driver--dkcnl-eth0" Nov 5 04:48:32.785668 containerd[1641]: 2025-11-05 04:48:32.764 [INFO][4506] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8373830dc09 ContainerID="db81e553ca156c04c1637278f72f73976c4cbf99e5aa886ac170a1905f367da8" Namespace="calico-system" Pod="csi-node-driver-dkcnl" WorkloadEndpoint="localhost-k8s-csi--node--driver--dkcnl-eth0" Nov 5 04:48:32.785668 containerd[1641]: 2025-11-05 04:48:32.770 [INFO][4506] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db81e553ca156c04c1637278f72f73976c4cbf99e5aa886ac170a1905f367da8" Namespace="calico-system" Pod="csi-node-driver-dkcnl" WorkloadEndpoint="localhost-k8s-csi--node--driver--dkcnl-eth0" Nov 5 04:48:32.785668 containerd[1641]: 2025-11-05 04:48:32.770 [INFO][4506] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="db81e553ca156c04c1637278f72f73976c4cbf99e5aa886ac170a1905f367da8" Namespace="calico-system" Pod="csi-node-driver-dkcnl" WorkloadEndpoint="localhost-k8s-csi--node--driver--dkcnl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dkcnl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"84d99f8c-4e0f-4dac-8f92-d3c8b82ac971", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db81e553ca156c04c1637278f72f73976c4cbf99e5aa886ac170a1905f367da8", Pod:"csi-node-driver-dkcnl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8373830dc09", MAC:"7a:ed:00:b1:1c:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:48:32.785668 containerd[1641]: 2025-11-05 04:48:32.780 [INFO][4506] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="db81e553ca156c04c1637278f72f73976c4cbf99e5aa886ac170a1905f367da8" 
Namespace="calico-system" Pod="csi-node-driver-dkcnl" WorkloadEndpoint="localhost-k8s-csi--node--driver--dkcnl-eth0" Nov 5 04:48:32.807908 containerd[1641]: time="2025-11-05T04:48:32.807806966Z" level=info msg="connecting to shim db81e553ca156c04c1637278f72f73976c4cbf99e5aa886ac170a1905f367da8" address="unix:///run/containerd/s/61edfa42f951164e7edfae7597af65267ddb3a4622c3a1f2d2053ebc50bfa66d" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:48:32.831149 systemd[1]: Started cri-containerd-db81e553ca156c04c1637278f72f73976c4cbf99e5aa886ac170a1905f367da8.scope - libcontainer container db81e553ca156c04c1637278f72f73976c4cbf99e5aa886ac170a1905f367da8. Nov 5 04:48:32.844080 systemd-resolved[1298]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 04:48:32.860627 containerd[1641]: time="2025-11-05T04:48:32.860571194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dkcnl,Uid:84d99f8c-4e0f-4dac-8f92-d3c8b82ac971,Namespace:calico-system,Attempt:0,} returns sandbox id \"db81e553ca156c04c1637278f72f73976c4cbf99e5aa886ac170a1905f367da8\"" Nov 5 04:48:32.892192 systemd-networkd[1539]: cali06532227b6d: Gained IPv6LL Nov 5 04:48:32.956172 systemd-networkd[1539]: cali15899d9e8af: Gained IPv6LL Nov 5 04:48:33.112682 kubelet[2811]: E1105 04:48:33.112535 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cd54dc478-5rvkg" podUID="63b223a4-aa0e-4b7a-9e9b-ebfedd74f920" Nov 5 04:48:33.112886 kubelet[2811]: E1105 04:48:33.112695 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:48:33.113111 kubelet[2811]: E1105 04:48:33.113065 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cd54dc478-f2jvn" podUID="d81093e5-51dd-4f5e-ba7d-dad72d581a2a" Nov 5 04:48:33.136545 kubelet[2811]: I1105 04:48:33.136474 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mt92c" podStartSLOduration=39.136458216 podStartE2EDuration="39.136458216s" podCreationTimestamp="2025-11-05 04:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 04:48:33.13540011 +0000 UTC m=+45.714615324" watchObservedRunningTime="2025-11-05 04:48:33.136458216 +0000 UTC m=+45.715673430" Nov 5 04:48:33.300598 containerd[1641]: time="2025-11-05T04:48:33.300528781Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:48:33.302195 containerd[1641]: time="2025-11-05T04:48:33.302139474Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 04:48:33.302282 containerd[1641]: time="2025-11-05T04:48:33.302150135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 5 04:48:33.302570 kubelet[2811]: E1105 04:48:33.302492 
2811 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 04:48:33.302624 kubelet[2811]: E1105 04:48:33.302587 2811 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 04:48:33.302966 kubelet[2811]: E1105 04:48:33.302891 2811 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,Sub
PathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhdgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ms88g_calico-system(c8a4b2e2-09ee-4112-a2df-31acb1eaedf9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 04:48:33.303209 containerd[1641]: time="2025-11-05T04:48:33.303166372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 04:48:33.304275 kubelet[2811]: E1105 04:48:33.304238 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ms88g" podUID="c8a4b2e2-09ee-4112-a2df-31acb1eaedf9" Nov 5 04:48:33.404142 systemd-networkd[1539]: cali79486d8e6fb: Gained IPv6LL Nov 5 04:48:33.527647 kubelet[2811]: E1105 04:48:33.527593 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:48:33.528426 containerd[1641]: time="2025-11-05T04:48:33.528358959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cbn9v,Uid:24240dbc-96de-4497-be68-42f6bff10c1e,Namespace:kube-system,Attempt:0,}" Nov 5 04:48:33.528605 containerd[1641]: time="2025-11-05T04:48:33.528375149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-654c7d6777-9wfvg,Uid:dc90951e-eb89-47bd-8fb2-3712d2db3fd5,Namespace:calico-system,Attempt:0,}" Nov 5 04:48:33.626145 containerd[1641]: time="2025-11-05T04:48:33.626065363Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:48:33.627197 containerd[1641]: time="2025-11-05T04:48:33.627152123Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 04:48:33.627318 containerd[1641]: time="2025-11-05T04:48:33.627254284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 5 04:48:33.627487 kubelet[2811]: E1105 04:48:33.627439 2811 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 04:48:33.627558 kubelet[2811]: E1105 04:48:33.627502 2811 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 04:48:33.627696 kubelet[2811]: E1105 04:48:33.627646 2811 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsU
ser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dkcnl_calico-system(84d99f8c-4e0f-4dac-8f92-d3c8b82ac971): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 04:48:33.630008 containerd[1641]: time="2025-11-05T04:48:33.629953761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 04:48:33.641059 systemd-networkd[1539]: calied41d09d3c3: Link UP Nov 5 04:48:33.642146 systemd-networkd[1539]: calied41d09d3c3: Gained carrier Nov 5 04:48:33.655616 containerd[1641]: 2025-11-05 04:48:33.558 [INFO][4682] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 04:48:33.655616 containerd[1641]: 2025-11-05 04:48:33.575 [INFO][4682] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--cbn9v-eth0 coredns-674b8bbfcf- kube-system 24240dbc-96de-4497-be68-42f6bff10c1e 838 0 2025-11-05 04:47:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-cbn9v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calied41d09d3c3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-cbn9v" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cbn9v-" Nov 5 04:48:33.655616 containerd[1641]: 2025-11-05 04:48:33.575 [INFO][4682] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00" Namespace="kube-system" Pod="coredns-674b8bbfcf-cbn9v" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cbn9v-eth0" Nov 5 04:48:33.655616 containerd[1641]: 2025-11-05 04:48:33.602 [INFO][4713] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00" HandleID="k8s-pod-network.54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00" Workload="localhost-k8s-coredns--674b8bbfcf--cbn9v-eth0" Nov 5 04:48:33.655616 containerd[1641]: 2025-11-05 04:48:33.602 [INFO][4713] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00" HandleID="k8s-pod-network.54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00" Workload="localhost-k8s-coredns--674b8bbfcf--cbn9v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000597950), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-cbn9v", "timestamp":"2025-11-05 04:48:33.602109846 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 04:48:33.655616 containerd[1641]: 2025-11-05 04:48:33.602 [INFO][4713] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 04:48:33.655616 containerd[1641]: 2025-11-05 04:48:33.602 [INFO][4713] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 04:48:33.655616 containerd[1641]: 2025-11-05 04:48:33.602 [INFO][4713] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 04:48:33.655616 containerd[1641]: 2025-11-05 04:48:33.608 [INFO][4713] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00" host="localhost" Nov 5 04:48:33.655616 containerd[1641]: 2025-11-05 04:48:33.613 [INFO][4713] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 04:48:33.655616 containerd[1641]: 2025-11-05 04:48:33.617 [INFO][4713] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 04:48:33.655616 containerd[1641]: 2025-11-05 04:48:33.620 [INFO][4713] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 04:48:33.655616 containerd[1641]: 2025-11-05 04:48:33.622 [INFO][4713] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 04:48:33.655616 containerd[1641]: 2025-11-05 04:48:33.622 [INFO][4713] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00" host="localhost" Nov 5 04:48:33.655616 containerd[1641]: 2025-11-05 04:48:33.623 [INFO][4713] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00 Nov 5 04:48:33.655616 containerd[1641]: 2025-11-05 04:48:33.629 [INFO][4713] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00" host="localhost" Nov 5 04:48:33.655616 containerd[1641]: 2025-11-05 04:48:33.634 [INFO][4713] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00" host="localhost" Nov 5 04:48:33.655616 containerd[1641]: 2025-11-05 04:48:33.634 [INFO][4713] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00" host="localhost" Nov 5 04:48:33.655616 containerd[1641]: 2025-11-05 04:48:33.634 [INFO][4713] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 04:48:33.655616 containerd[1641]: 2025-11-05 04:48:33.634 [INFO][4713] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00" HandleID="k8s-pod-network.54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00" Workload="localhost-k8s-coredns--674b8bbfcf--cbn9v-eth0" Nov 5 04:48:33.656260 containerd[1641]: 2025-11-05 04:48:33.637 [INFO][4682] cni-plugin/k8s.go 418: Populated endpoint ContainerID="54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00" Namespace="kube-system" Pod="coredns-674b8bbfcf-cbn9v" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cbn9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--cbn9v-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"24240dbc-96de-4497-be68-42f6bff10c1e", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 47, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-cbn9v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied41d09d3c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:48:33.656260 containerd[1641]: 2025-11-05 04:48:33.637 [INFO][4682] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00" Namespace="kube-system" Pod="coredns-674b8bbfcf-cbn9v" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cbn9v-eth0" Nov 5 04:48:33.656260 containerd[1641]: 2025-11-05 04:48:33.637 [INFO][4682] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied41d09d3c3 ContainerID="54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00" Namespace="kube-system" Pod="coredns-674b8bbfcf-cbn9v" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cbn9v-eth0" Nov 5 04:48:33.656260 containerd[1641]: 2025-11-05 04:48:33.642 [INFO][4682] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00" Namespace="kube-system" Pod="coredns-674b8bbfcf-cbn9v" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cbn9v-eth0" Nov 5 04:48:33.656260 containerd[1641]: 2025-11-05 04:48:33.643 [INFO][4682] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00" Namespace="kube-system" Pod="coredns-674b8bbfcf-cbn9v" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cbn9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--cbn9v-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"24240dbc-96de-4497-be68-42f6bff10c1e", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 47, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00", Pod:"coredns-674b8bbfcf-cbn9v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied41d09d3c3", MAC:"e6:a1:ea:e9:2a:70", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:48:33.656260 containerd[1641]: 2025-11-05 04:48:33.652 [INFO][4682] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00" Namespace="kube-system" Pod="coredns-674b8bbfcf-cbn9v" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cbn9v-eth0" Nov 5 04:48:33.676953 containerd[1641]: time="2025-11-05T04:48:33.676896657Z" level=info msg="connecting to shim 54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00" address="unix:///run/containerd/s/b87b3c68ee66b9d70d012bd5643c14b4d8958116db862ca4b6f6f080f4cc54c0" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:48:33.709123 systemd[1]: Started cri-containerd-54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00.scope - libcontainer container 54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00. 
Nov 5 04:48:33.724890 systemd-resolved[1298]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 04:48:33.747116 systemd-networkd[1539]: cali2ebc2cea05c: Link UP Nov 5 04:48:33.747689 systemd-networkd[1539]: cali2ebc2cea05c: Gained carrier Nov 5 04:48:33.771172 containerd[1641]: time="2025-11-05T04:48:33.770956225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cbn9v,Uid:24240dbc-96de-4497-be68-42f6bff10c1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00\"" Nov 5 04:48:33.772114 kubelet[2811]: E1105 04:48:33.772089 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:48:33.773188 containerd[1641]: 2025-11-05 04:48:33.561 [INFO][4684] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 04:48:33.773188 containerd[1641]: 2025-11-05 04:48:33.575 [INFO][4684] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--654c7d6777--9wfvg-eth0 calico-kube-controllers-654c7d6777- calico-system dc90951e-eb89-47bd-8fb2-3712d2db3fd5 834 0 2025-11-05 04:48:08 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:654c7d6777 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-654c7d6777-9wfvg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2ebc2cea05c [] [] }} ContainerID="9674f43754113be3c953a6842071e1ef265e766462ac97807a97bccbb81c5f95" Namespace="calico-system" Pod="calico-kube-controllers-654c7d6777-9wfvg" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--654c7d6777--9wfvg-" Nov 5 04:48:33.773188 containerd[1641]: 2025-11-05 04:48:33.575 [INFO][4684] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9674f43754113be3c953a6842071e1ef265e766462ac97807a97bccbb81c5f95" Namespace="calico-system" Pod="calico-kube-controllers-654c7d6777-9wfvg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--654c7d6777--9wfvg-eth0" Nov 5 04:48:33.773188 containerd[1641]: 2025-11-05 04:48:33.605 [INFO][4711] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9674f43754113be3c953a6842071e1ef265e766462ac97807a97bccbb81c5f95" HandleID="k8s-pod-network.9674f43754113be3c953a6842071e1ef265e766462ac97807a97bccbb81c5f95" Workload="localhost-k8s-calico--kube--controllers--654c7d6777--9wfvg-eth0" Nov 5 04:48:33.773188 containerd[1641]: 2025-11-05 04:48:33.605 [INFO][4711] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9674f43754113be3c953a6842071e1ef265e766462ac97807a97bccbb81c5f95" HandleID="k8s-pod-network.9674f43754113be3c953a6842071e1ef265e766462ac97807a97bccbb81c5f95" Workload="localhost-k8s-calico--kube--controllers--654c7d6777--9wfvg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b4c70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-654c7d6777-9wfvg", "timestamp":"2025-11-05 04:48:33.605637547 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 04:48:33.773188 containerd[1641]: 2025-11-05 04:48:33.605 [INFO][4711] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 04:48:33.773188 containerd[1641]: 2025-11-05 04:48:33.635 [INFO][4711] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 04:48:33.773188 containerd[1641]: 2025-11-05 04:48:33.635 [INFO][4711] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 04:48:33.773188 containerd[1641]: 2025-11-05 04:48:33.710 [INFO][4711] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9674f43754113be3c953a6842071e1ef265e766462ac97807a97bccbb81c5f95" host="localhost" Nov 5 04:48:33.773188 containerd[1641]: 2025-11-05 04:48:33.717 [INFO][4711] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 04:48:33.773188 containerd[1641]: 2025-11-05 04:48:33.723 [INFO][4711] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 04:48:33.773188 containerd[1641]: 2025-11-05 04:48:33.726 [INFO][4711] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 04:48:33.773188 containerd[1641]: 2025-11-05 04:48:33.728 [INFO][4711] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 04:48:33.773188 containerd[1641]: 2025-11-05 04:48:33.728 [INFO][4711] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9674f43754113be3c953a6842071e1ef265e766462ac97807a97bccbb81c5f95" host="localhost" Nov 5 04:48:33.773188 containerd[1641]: 2025-11-05 04:48:33.730 [INFO][4711] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9674f43754113be3c953a6842071e1ef265e766462ac97807a97bccbb81c5f95 Nov 5 04:48:33.773188 containerd[1641]: 2025-11-05 04:48:33.734 [INFO][4711] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9674f43754113be3c953a6842071e1ef265e766462ac97807a97bccbb81c5f95" host="localhost" Nov 5 04:48:33.773188 containerd[1641]: 2025-11-05 04:48:33.741 [INFO][4711] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.9674f43754113be3c953a6842071e1ef265e766462ac97807a97bccbb81c5f95" host="localhost" Nov 5 04:48:33.773188 containerd[1641]: 2025-11-05 04:48:33.741 [INFO][4711] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.9674f43754113be3c953a6842071e1ef265e766462ac97807a97bccbb81c5f95" host="localhost" Nov 5 04:48:33.773188 containerd[1641]: 2025-11-05 04:48:33.741 [INFO][4711] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 04:48:33.773188 containerd[1641]: 2025-11-05 04:48:33.741 [INFO][4711] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="9674f43754113be3c953a6842071e1ef265e766462ac97807a97bccbb81c5f95" HandleID="k8s-pod-network.9674f43754113be3c953a6842071e1ef265e766462ac97807a97bccbb81c5f95" Workload="localhost-k8s-calico--kube--controllers--654c7d6777--9wfvg-eth0" Nov 5 04:48:33.774119 containerd[1641]: 2025-11-05 04:48:33.744 [INFO][4684] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9674f43754113be3c953a6842071e1ef265e766462ac97807a97bccbb81c5f95" Namespace="calico-system" Pod="calico-kube-controllers-654c7d6777-9wfvg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--654c7d6777--9wfvg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--654c7d6777--9wfvg-eth0", GenerateName:"calico-kube-controllers-654c7d6777-", Namespace:"calico-system", SelfLink:"", UID:"dc90951e-eb89-47bd-8fb2-3712d2db3fd5", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"654c7d6777", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-654c7d6777-9wfvg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2ebc2cea05c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:48:33.774119 containerd[1641]: 2025-11-05 04:48:33.744 [INFO][4684] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="9674f43754113be3c953a6842071e1ef265e766462ac97807a97bccbb81c5f95" Namespace="calico-system" Pod="calico-kube-controllers-654c7d6777-9wfvg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--654c7d6777--9wfvg-eth0" Nov 5 04:48:33.774119 containerd[1641]: 2025-11-05 04:48:33.744 [INFO][4684] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ebc2cea05c ContainerID="9674f43754113be3c953a6842071e1ef265e766462ac97807a97bccbb81c5f95" Namespace="calico-system" Pod="calico-kube-controllers-654c7d6777-9wfvg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--654c7d6777--9wfvg-eth0" Nov 5 04:48:33.774119 containerd[1641]: 2025-11-05 04:48:33.747 [INFO][4684] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9674f43754113be3c953a6842071e1ef265e766462ac97807a97bccbb81c5f95" Namespace="calico-system" Pod="calico-kube-controllers-654c7d6777-9wfvg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--654c7d6777--9wfvg-eth0" Nov 5 04:48:33.774119 containerd[1641]: 2025-11-05 
04:48:33.747 [INFO][4684] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9674f43754113be3c953a6842071e1ef265e766462ac97807a97bccbb81c5f95" Namespace="calico-system" Pod="calico-kube-controllers-654c7d6777-9wfvg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--654c7d6777--9wfvg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--654c7d6777--9wfvg-eth0", GenerateName:"calico-kube-controllers-654c7d6777-", Namespace:"calico-system", SelfLink:"", UID:"dc90951e-eb89-47bd-8fb2-3712d2db3fd5", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"654c7d6777", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9674f43754113be3c953a6842071e1ef265e766462ac97807a97bccbb81c5f95", Pod:"calico-kube-controllers-654c7d6777-9wfvg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2ebc2cea05c", MAC:"72:9b:73:c4:43:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:48:33.774119 containerd[1641]: 2025-11-05 
04:48:33.766 [INFO][4684] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9674f43754113be3c953a6842071e1ef265e766462ac97807a97bccbb81c5f95" Namespace="calico-system" Pod="calico-kube-controllers-654c7d6777-9wfvg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--654c7d6777--9wfvg-eth0" Nov 5 04:48:33.777104 containerd[1641]: time="2025-11-05T04:48:33.777075532Z" level=info msg="CreateContainer within sandbox \"54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 04:48:33.787721 containerd[1641]: time="2025-11-05T04:48:33.787662855Z" level=info msg="Container b8f695cd8949e46dd10ee0c95bd37f5f355487df9d6f190b55c49ef99281e3be: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:48:33.796191 containerd[1641]: time="2025-11-05T04:48:33.796160316Z" level=info msg="CreateContainer within sandbox \"54c48526f8baf2258283e7b95cc6bc30d5da13d263d2be8a5bea882c1ad60a00\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b8f695cd8949e46dd10ee0c95bd37f5f355487df9d6f190b55c49ef99281e3be\"" Nov 5 04:48:33.796985 containerd[1641]: time="2025-11-05T04:48:33.796924660Z" level=info msg="StartContainer for \"b8f695cd8949e46dd10ee0c95bd37f5f355487df9d6f190b55c49ef99281e3be\"" Nov 5 04:48:33.798360 containerd[1641]: time="2025-11-05T04:48:33.798313147Z" level=info msg="connecting to shim b8f695cd8949e46dd10ee0c95bd37f5f355487df9d6f190b55c49ef99281e3be" address="unix:///run/containerd/s/b87b3c68ee66b9d70d012bd5643c14b4d8958116db862ca4b6f6f080f4cc54c0" protocol=ttrpc version=3 Nov 5 04:48:33.803195 containerd[1641]: time="2025-11-05T04:48:33.803133345Z" level=info msg="connecting to shim 9674f43754113be3c953a6842071e1ef265e766462ac97807a97bccbb81c5f95" address="unix:///run/containerd/s/47a2035bda21c620c26756a3ecad0a09385749a805c891d96f00aa47a344ef45" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:48:33.819198 systemd[1]: Started 
cri-containerd-b8f695cd8949e46dd10ee0c95bd37f5f355487df9d6f190b55c49ef99281e3be.scope - libcontainer container b8f695cd8949e46dd10ee0c95bd37f5f355487df9d6f190b55c49ef99281e3be. Nov 5 04:48:33.838167 systemd[1]: Started cri-containerd-9674f43754113be3c953a6842071e1ef265e766462ac97807a97bccbb81c5f95.scope - libcontainer container 9674f43754113be3c953a6842071e1ef265e766462ac97807a97bccbb81c5f95. Nov 5 04:48:33.855653 systemd-resolved[1298]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 04:48:33.872775 containerd[1641]: time="2025-11-05T04:48:33.872716439Z" level=info msg="StartContainer for \"b8f695cd8949e46dd10ee0c95bd37f5f355487df9d6f190b55c49ef99281e3be\" returns successfully" Nov 5 04:48:33.901987 containerd[1641]: time="2025-11-05T04:48:33.901904807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-654c7d6777-9wfvg,Uid:dc90951e-eb89-47bd-8fb2-3712d2db3fd5,Namespace:calico-system,Attempt:0,} returns sandbox id \"9674f43754113be3c953a6842071e1ef265e766462ac97807a97bccbb81c5f95\"" Nov 5 04:48:34.023499 containerd[1641]: time="2025-11-05T04:48:34.023177384Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:48:34.025235 containerd[1641]: time="2025-11-05T04:48:34.025136773Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 04:48:34.025551 containerd[1641]: time="2025-11-05T04:48:34.025168632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 5 04:48:34.025862 kubelet[2811]: E1105 04:48:34.025623 2811 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 04:48:34.025862 kubelet[2811]: E1105 04:48:34.025692 2811 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 04:48:34.026050 kubelet[2811]: E1105 04:48:34.025946 2811 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termi
nation-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dkcnl_calico-system(84d99f8c-4e0f-4dac-8f92-d3c8b82ac971): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 04:48:34.027549 kubelet[2811]: E1105 04:48:34.027360 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dkcnl" podUID="84d99f8c-4e0f-4dac-8f92-d3c8b82ac971" Nov 5 04:48:34.027766 containerd[1641]: time="2025-11-05T04:48:34.027644550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 04:48:34.119575 kubelet[2811]: E1105 04:48:34.119408 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:48:34.120548 kubelet[2811]: E1105 04:48:34.120362 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:48:34.121491 kubelet[2811]: E1105 04:48:34.121461 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ms88g" podUID="c8a4b2e2-09ee-4112-a2df-31acb1eaedf9" Nov 5 04:48:34.122106 kubelet[2811]: E1105 04:48:34.122075 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dkcnl" podUID="84d99f8c-4e0f-4dac-8f92-d3c8b82ac971" Nov 5 04:48:34.130746 kubelet[2811]: I1105 04:48:34.130646 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-674b8bbfcf-cbn9v" podStartSLOduration=40.130626765 podStartE2EDuration="40.130626765s" podCreationTimestamp="2025-11-05 04:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 04:48:34.130430968 +0000 UTC m=+46.709646182" watchObservedRunningTime="2025-11-05 04:48:34.130626765 +0000 UTC m=+46.709841979" Nov 5 04:48:34.307647 kubelet[2811]: I1105 04:48:34.307494 2811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 04:48:34.308253 kubelet[2811]: E1105 04:48:34.308216 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:48:34.356004 containerd[1641]: time="2025-11-05T04:48:34.355138869Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:48:34.358151 containerd[1641]: time="2025-11-05T04:48:34.358106299Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 04:48:34.358260 containerd[1641]: time="2025-11-05T04:48:34.358185047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 5 04:48:34.360134 kubelet[2811]: E1105 04:48:34.360071 2811 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 04:48:34.360212 kubelet[2811]: E1105 04:48:34.360156 2811 kuberuntime_image.go:42] "Failed 
to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 04:48:34.360670 kubelet[2811]: E1105 04:48:34.360617 2811 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-84pmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-654c7d6777-9wfvg_calico-system(dc90951e-eb89-47bd-8fb2-3712d2db3fd5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 04:48:34.361953 kubelet[2811]: E1105 04:48:34.361897 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-654c7d6777-9wfvg" podUID="dc90951e-eb89-47bd-8fb2-3712d2db3fd5" Nov 5 04:48:34.364262 
systemd-networkd[1539]: cali8373830dc09: Gained IPv6LL Nov 5 04:48:34.493353 systemd-networkd[1539]: cali6292d77ac19: Gained IPv6LL Nov 5 04:48:34.814064 systemd-networkd[1539]: cali2ebc2cea05c: Gained IPv6LL Nov 5 04:48:34.815087 systemd-networkd[1539]: vxlan.calico: Link UP Nov 5 04:48:34.815090 systemd-networkd[1539]: vxlan.calico: Gained carrier Nov 5 04:48:35.123091 kubelet[2811]: E1105 04:48:35.122485 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:48:35.127070 kubelet[2811]: E1105 04:48:35.127002 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:48:35.128019 kubelet[2811]: E1105 04:48:35.127595 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-654c7d6777-9wfvg" podUID="dc90951e-eb89-47bd-8fb2-3712d2db3fd5" Nov 5 04:48:35.128498 kubelet[2811]: E1105 04:48:35.128388 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:48:35.197332 systemd-networkd[1539]: calied41d09d3c3: Gained IPv6LL Nov 5 04:48:36.124678 kubelet[2811]: E1105 04:48:36.124628 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 
04:48:36.124678 kubelet[2811]: E1105 04:48:36.124693 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:48:36.284266 systemd-networkd[1539]: vxlan.calico: Gained IPv6LL Nov 5 04:48:39.276082 systemd[1]: Started sshd@9-10.0.0.41:22-10.0.0.1:48104.service - OpenSSH per-connection server daemon (10.0.0.1:48104). Nov 5 04:48:39.363842 sshd[5043]: Accepted publickey for core from 10.0.0.1 port 48104 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:48:39.366330 sshd-session[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:48:39.372261 systemd-logind[1609]: New session 10 of user core. Nov 5 04:48:39.380251 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 5 04:48:39.489768 sshd[5046]: Connection closed by 10.0.0.1 port 48104 Nov 5 04:48:39.490112 sshd-session[5043]: pam_unix(sshd:session): session closed for user core Nov 5 04:48:39.495052 systemd[1]: sshd@9-10.0.0.41:22-10.0.0.1:48104.service: Deactivated successfully. Nov 5 04:48:39.497278 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 04:48:39.498247 systemd-logind[1609]: Session 10 logged out. Waiting for processes to exit. Nov 5 04:48:39.499590 systemd-logind[1609]: Removed session 10. Nov 5 04:48:44.505938 systemd[1]: Started sshd@10-10.0.0.41:22-10.0.0.1:39652.service - OpenSSH per-connection server daemon (10.0.0.1:39652). 
Nov 5 04:48:44.529744 containerd[1641]: time="2025-11-05T04:48:44.529459997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 04:48:44.576487 sshd[5069]: Accepted publickey for core from 10.0.0.1 port 39652 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:48:44.578516 sshd-session[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:48:44.583634 systemd-logind[1609]: New session 11 of user core. Nov 5 04:48:44.591273 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 04:48:44.763137 sshd[5072]: Connection closed by 10.0.0.1 port 39652 Nov 5 04:48:44.765507 sshd-session[5069]: pam_unix(sshd:session): session closed for user core Nov 5 04:48:44.770486 systemd[1]: sshd@10-10.0.0.41:22-10.0.0.1:39652.service: Deactivated successfully. Nov 5 04:48:44.772822 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 04:48:44.774205 systemd-logind[1609]: Session 11 logged out. Waiting for processes to exit. Nov 5 04:48:44.775784 systemd-logind[1609]: Removed session 11. 
Nov 5 04:48:44.915236 containerd[1641]: time="2025-11-05T04:48:44.915154441Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:48:44.920325 containerd[1641]: time="2025-11-05T04:48:44.920191658Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 04:48:44.920325 containerd[1641]: time="2025-11-05T04:48:44.920290795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 5 04:48:44.922373 kubelet[2811]: E1105 04:48:44.921583 2811 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:48:44.922373 kubelet[2811]: E1105 04:48:44.921646 2811 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:48:44.922373 kubelet[2811]: E1105 04:48:44.921899 2811 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jk4n5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cd54dc478-5rvkg_calico-apiserver(63b223a4-aa0e-4b7a-9e9b-ebfedd74f920): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 04:48:44.923990 containerd[1641]: time="2025-11-05T04:48:44.922923112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 04:48:44.924059 kubelet[2811]: E1105 04:48:44.923121 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cd54dc478-5rvkg" podUID="63b223a4-aa0e-4b7a-9e9b-ebfedd74f920" Nov 5 04:48:45.262154 containerd[1641]: time="2025-11-05T04:48:45.262069272Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:48:45.292303 containerd[1641]: time="2025-11-05T04:48:45.292209808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 5 04:48:45.292452 containerd[1641]: time="2025-11-05T04:48:45.292287639Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 04:48:45.292711 kubelet[2811]: E1105 04:48:45.292645 2811 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 04:48:45.292711 kubelet[2811]: E1105 04:48:45.292710 2811 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 04:48:45.292915 kubelet[2811]: E1105 04:48:45.292859 2811 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6706336b96034a33ac145ee4eda65bf3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mnhd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cb7b8d459-6mst6_calico-system(3f6a71f3-a4bf-43c3-9766-dfdeb3e1d903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 04:48:45.295459 containerd[1641]: time="2025-11-05T04:48:45.295422580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 04:48:45.653295 containerd[1641]: time="2025-11-05T04:48:45.653141410Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:48:45.654277 containerd[1641]: time="2025-11-05T04:48:45.654239220Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 04:48:45.654397 containerd[1641]: time="2025-11-05T04:48:45.654336869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 5 04:48:45.654549 kubelet[2811]: E1105 04:48:45.654495 2811 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 04:48:45.654600 kubelet[2811]: E1105 04:48:45.654562 2811 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 04:48:45.654781 kubelet[2811]: E1105 04:48:45.654722 2811 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mnhd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cb7b8d459-6mst6_calico-system(3f6a71f3-a4bf-43c3-9766-dfdeb3e1d903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 04:48:45.656004 kubelet[2811]: E1105 04:48:45.655941 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7cb7b8d459-6mst6" podUID="3f6a71f3-a4bf-43c3-9766-dfdeb3e1d903" Nov 5 04:48:46.528451 containerd[1641]: time="2025-11-05T04:48:46.528403028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 04:48:46.967600 containerd[1641]: time="2025-11-05T04:48:46.967422206Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:48:46.968919 containerd[1641]: time="2025-11-05T04:48:46.968878828Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 04:48:46.969015 containerd[1641]: time="2025-11-05T04:48:46.968947692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 5 04:48:46.969278 kubelet[2811]: E1105 04:48:46.969209 2811 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:48:46.969627 kubelet[2811]: E1105 04:48:46.969288 2811 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:48:46.969627 kubelet[2811]: E1105 04:48:46.969458 2811 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lvkcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cd54dc478-f2jvn_calico-apiserver(d81093e5-51dd-4f5e-ba7d-dad72d581a2a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 04:48:46.970706 kubelet[2811]: E1105 04:48:46.970672 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cd54dc478-f2jvn" podUID="d81093e5-51dd-4f5e-ba7d-dad72d581a2a" Nov 5 04:48:47.530019 containerd[1641]: time="2025-11-05T04:48:47.529894747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 04:48:47.874608 containerd[1641]: time="2025-11-05T04:48:47.874441253Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 
04:48:47.875822 containerd[1641]: time="2025-11-05T04:48:47.875706323Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 04:48:47.875890 containerd[1641]: time="2025-11-05T04:48:47.875819292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 5 04:48:47.876050 kubelet[2811]: E1105 04:48:47.875963 2811 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 04:48:47.876124 kubelet[2811]: E1105 04:48:47.876058 2811 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 04:48:47.876732 containerd[1641]: time="2025-11-05T04:48:47.876406589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 04:48:47.876803 kubelet[2811]: E1105 04:48:47.876444 2811 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhdgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ms88g_calico-system(c8a4b2e2-09ee-4112-a2df-31acb1eaedf9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 04:48:47.877891 kubelet[2811]: E1105 04:48:47.877859 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ms88g" podUID="c8a4b2e2-09ee-4112-a2df-31acb1eaedf9" Nov 5 04:48:48.206784 containerd[1641]: time="2025-11-05T04:48:48.206632842Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:48:48.254513 containerd[1641]: time="2025-11-05T04:48:48.254434697Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 04:48:48.254680 containerd[1641]: time="2025-11-05T04:48:48.254481077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 5 04:48:48.254799 kubelet[2811]: E1105 04:48:48.254745 2811 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 04:48:48.255158 kubelet[2811]: E1105 04:48:48.254809 2811 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 04:48:48.255158 kubelet[2811]: E1105 04:48:48.255104 2811 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-84pmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-654c7d6777-9wfvg_calico-system(dc90951e-eb89-47bd-8fb2-3712d2db3fd5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 04:48:48.255688 containerd[1641]: time="2025-11-05T04:48:48.255641222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 04:48:48.256462 kubelet[2811]: E1105 04:48:48.256425 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-654c7d6777-9wfvg" podUID="dc90951e-eb89-47bd-8fb2-3712d2db3fd5" Nov 5 04:48:48.550596 containerd[1641]: time="2025-11-05T04:48:48.550547987Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 
04:48:48.551773 containerd[1641]: time="2025-11-05T04:48:48.551738851Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 04:48:48.551841 containerd[1641]: time="2025-11-05T04:48:48.551806703Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 5 04:48:48.552039 kubelet[2811]: E1105 04:48:48.552002 2811 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 04:48:48.552130 kubelet[2811]: E1105 04:48:48.552054 2811 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 04:48:48.552236 kubelet[2811]: E1105 04:48:48.552184 2811 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dkcnl_calico-system(84d99f8c-4e0f-4dac-8f92-d3c8b82ac971): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Nov 5 04:48:48.554027 containerd[1641]: time="2025-11-05T04:48:48.553996629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 04:48:48.877126 containerd[1641]: time="2025-11-05T04:48:48.876947077Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:48:48.878202 containerd[1641]: time="2025-11-05T04:48:48.878150945Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 04:48:48.878294 containerd[1641]: time="2025-11-05T04:48:48.878248554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 5 04:48:48.878454 kubelet[2811]: E1105 04:48:48.878383 2811 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 04:48:48.878454 kubelet[2811]: E1105 04:48:48.878445 2811 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 04:48:48.878655 kubelet[2811]: E1105 04:48:48.878600 2811 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dkcnl_calico-system(84d99f8c-4e0f-4dac-8f92-d3c8b82ac971): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 04:48:48.879833 kubelet[2811]: E1105 04:48:48.879792 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dkcnl" podUID="84d99f8c-4e0f-4dac-8f92-d3c8b82ac971" Nov 5 04:48:49.781286 systemd[1]: Started sshd@11-10.0.0.41:22-10.0.0.1:39656.service - OpenSSH per-connection server daemon (10.0.0.1:39656). Nov 5 04:48:49.839427 sshd[5098]: Accepted publickey for core from 10.0.0.1 port 39656 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:48:49.841381 sshd-session[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:48:49.845985 systemd-logind[1609]: New session 12 of user core. Nov 5 04:48:49.853209 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 5 04:48:49.931410 sshd[5101]: Connection closed by 10.0.0.1 port 39656 Nov 5 04:48:49.931762 sshd-session[5098]: pam_unix(sshd:session): session closed for user core Nov 5 04:48:49.936840 systemd[1]: sshd@11-10.0.0.41:22-10.0.0.1:39656.service: Deactivated successfully. Nov 5 04:48:49.939150 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 04:48:49.940102 systemd-logind[1609]: Session 12 logged out. Waiting for processes to exit. Nov 5 04:48:49.941716 systemd-logind[1609]: Removed session 12. 
Nov 5 04:48:54.949223 systemd[1]: Started sshd@12-10.0.0.41:22-10.0.0.1:37610.service - OpenSSH per-connection server daemon (10.0.0.1:37610). Nov 5 04:48:54.998544 sshd[5116]: Accepted publickey for core from 10.0.0.1 port 37610 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:48:54.999845 sshd-session[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:48:55.004394 systemd-logind[1609]: New session 13 of user core. Nov 5 04:48:55.012125 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 5 04:48:55.082413 sshd[5119]: Connection closed by 10.0.0.1 port 37610 Nov 5 04:48:55.082742 sshd-session[5116]: pam_unix(sshd:session): session closed for user core Nov 5 04:48:55.096552 systemd[1]: sshd@12-10.0.0.41:22-10.0.0.1:37610.service: Deactivated successfully. Nov 5 04:48:55.098371 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 04:48:55.099147 systemd-logind[1609]: Session 13 logged out. Waiting for processes to exit. Nov 5 04:48:55.101732 systemd[1]: Started sshd@13-10.0.0.41:22-10.0.0.1:37622.service - OpenSSH per-connection server daemon (10.0.0.1:37622). Nov 5 04:48:55.102351 systemd-logind[1609]: Removed session 13. Nov 5 04:48:55.150746 sshd[5133]: Accepted publickey for core from 10.0.0.1 port 37622 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:48:55.152273 sshd-session[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:48:55.156569 systemd-logind[1609]: New session 14 of user core. Nov 5 04:48:55.167194 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 5 04:48:55.269770 sshd[5136]: Connection closed by 10.0.0.1 port 37622 Nov 5 04:48:55.270211 sshd-session[5133]: pam_unix(sshd:session): session closed for user core Nov 5 04:48:55.286265 systemd[1]: sshd@13-10.0.0.41:22-10.0.0.1:37622.service: Deactivated successfully. 
Nov 5 04:48:55.289852 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 04:48:55.292082 systemd-logind[1609]: Session 14 logged out. Waiting for processes to exit. Nov 5 04:48:55.296161 systemd-logind[1609]: Removed session 14. Nov 5 04:48:55.298459 systemd[1]: Started sshd@14-10.0.0.41:22-10.0.0.1:37630.service - OpenSSH per-connection server daemon (10.0.0.1:37630). Nov 5 04:48:55.351944 sshd[5155]: Accepted publickey for core from 10.0.0.1 port 37630 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:48:55.353305 sshd-session[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:48:55.357898 systemd-logind[1609]: New session 15 of user core. Nov 5 04:48:55.366268 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 04:48:55.436584 sshd[5158]: Connection closed by 10.0.0.1 port 37630 Nov 5 04:48:55.436889 sshd-session[5155]: pam_unix(sshd:session): session closed for user core Nov 5 04:48:55.441334 systemd[1]: sshd@14-10.0.0.41:22-10.0.0.1:37630.service: Deactivated successfully. Nov 5 04:48:55.443749 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 04:48:55.444547 systemd-logind[1609]: Session 15 logged out. Waiting for processes to exit. Nov 5 04:48:55.446001 systemd-logind[1609]: Removed session 15. 
Nov 5 04:48:57.528993 kubelet[2811]: E1105 04:48:57.528894 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cd54dc478-5rvkg" podUID="63b223a4-aa0e-4b7a-9e9b-ebfedd74f920" Nov 5 04:48:59.530904 kubelet[2811]: E1105 04:48:59.530818 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7cb7b8d459-6mst6" podUID="3f6a71f3-a4bf-43c3-9766-dfdeb3e1d903" Nov 5 04:49:00.451732 systemd[1]: Started sshd@15-10.0.0.41:22-10.0.0.1:39422.service - OpenSSH per-connection server daemon (10.0.0.1:39422). 
Nov 5 04:49:00.494632 sshd[5204]: Accepted publickey for core from 10.0.0.1 port 39422 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:49:00.496433 sshd-session[5204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:49:00.501517 systemd-logind[1609]: New session 16 of user core. Nov 5 04:49:00.511154 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 5 04:49:00.527818 kubelet[2811]: E1105 04:49:00.527757 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:00.528139 kubelet[2811]: E1105 04:49:00.528018 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cd54dc478-f2jvn" podUID="d81093e5-51dd-4f5e-ba7d-dad72d581a2a" Nov 5 04:49:00.585952 sshd[5207]: Connection closed by 10.0.0.1 port 39422 Nov 5 04:49:00.586288 sshd-session[5204]: pam_unix(sshd:session): session closed for user core Nov 5 04:49:00.591100 systemd[1]: sshd@15-10.0.0.41:22-10.0.0.1:39422.service: Deactivated successfully. Nov 5 04:49:00.593307 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 04:49:00.594380 systemd-logind[1609]: Session 16 logged out. Waiting for processes to exit. Nov 5 04:49:00.595524 systemd-logind[1609]: Removed session 16. 
Nov 5 04:49:01.528096 kubelet[2811]: E1105 04:49:01.528016 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:01.528877 kubelet[2811]: E1105 04:49:01.528642 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-654c7d6777-9wfvg" podUID="dc90951e-eb89-47bd-8fb2-3712d2db3fd5" Nov 5 04:49:02.529393 kubelet[2811]: E1105 04:49:02.529304 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dkcnl" podUID="84d99f8c-4e0f-4dac-8f92-d3c8b82ac971" Nov 5 04:49:03.528766 kubelet[2811]: E1105 04:49:03.528416 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ms88g" podUID="c8a4b2e2-09ee-4112-a2df-31acb1eaedf9" Nov 5 04:49:05.611027 systemd[1]: Started sshd@16-10.0.0.41:22-10.0.0.1:39438.service - OpenSSH per-connection server daemon (10.0.0.1:39438). Nov 5 04:49:05.652219 sshd[5226]: Accepted publickey for core from 10.0.0.1 port 39438 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:49:05.653541 sshd-session[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:49:05.657694 systemd-logind[1609]: New session 17 of user core. Nov 5 04:49:05.664101 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 5 04:49:05.732724 sshd[5229]: Connection closed by 10.0.0.1 port 39438 Nov 5 04:49:05.733050 sshd-session[5226]: pam_unix(sshd:session): session closed for user core Nov 5 04:49:05.738282 systemd[1]: sshd@16-10.0.0.41:22-10.0.0.1:39438.service: Deactivated successfully. Nov 5 04:49:05.740495 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 04:49:05.741278 systemd-logind[1609]: Session 17 logged out. Waiting for processes to exit. Nov 5 04:49:05.742727 systemd-logind[1609]: Removed session 17. 
Nov 5 04:49:09.528638 kubelet[2811]: E1105 04:49:09.528495 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:09.530636 containerd[1641]: time="2025-11-05T04:49:09.530581689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 04:49:09.871607 containerd[1641]: time="2025-11-05T04:49:09.871438916Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:49:09.890646 containerd[1641]: time="2025-11-05T04:49:09.890587334Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 5 04:49:09.890799 containerd[1641]: time="2025-11-05T04:49:09.890663740Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 04:49:09.890944 kubelet[2811]: E1105 04:49:09.890899 2811 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:49:09.891030 kubelet[2811]: E1105 04:49:09.890949 2811 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:49:09.891146 kubelet[2811]: E1105 04:49:09.891107 2811 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jk4n5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cd54dc478-5rvkg_calico-apiserver(63b223a4-aa0e-4b7a-9e9b-ebfedd74f920): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 04:49:09.892309 kubelet[2811]: E1105 04:49:09.892263 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cd54dc478-5rvkg" podUID="63b223a4-aa0e-4b7a-9e9b-ebfedd74f920" Nov 5 04:49:10.746112 systemd[1]: Started sshd@17-10.0.0.41:22-10.0.0.1:56002.service - OpenSSH per-connection server daemon (10.0.0.1:56002). 
Nov 5 04:49:10.824239 sshd[5242]: Accepted publickey for core from 10.0.0.1 port 56002 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:49:10.826331 sshd-session[5242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:49:10.830998 systemd-logind[1609]: New session 18 of user core. Nov 5 04:49:10.836120 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 5 04:49:10.929676 sshd[5245]: Connection closed by 10.0.0.1 port 56002 Nov 5 04:49:10.930052 sshd-session[5242]: pam_unix(sshd:session): session closed for user core Nov 5 04:49:10.935413 systemd[1]: sshd@17-10.0.0.41:22-10.0.0.1:56002.service: Deactivated successfully. Nov 5 04:49:10.937743 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 04:49:10.938564 systemd-logind[1609]: Session 18 logged out. Waiting for processes to exit. Nov 5 04:49:10.939863 systemd-logind[1609]: Removed session 18. Nov 5 04:49:11.528992 containerd[1641]: time="2025-11-05T04:49:11.528638150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 04:49:11.902838 containerd[1641]: time="2025-11-05T04:49:11.902652116Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:49:11.903952 containerd[1641]: time="2025-11-05T04:49:11.903911108Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 04:49:11.904066 containerd[1641]: time="2025-11-05T04:49:11.904023592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 5 04:49:11.904270 kubelet[2811]: E1105 04:49:11.904213 2811 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:49:11.904702 kubelet[2811]: E1105 04:49:11.904275 2811 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:49:11.904702 kubelet[2811]: E1105 04:49:11.904451 2811 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lvkcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cd54dc478-f2jvn_calico-apiserver(d81093e5-51dd-4f5e-ba7d-dad72d581a2a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 04:49:11.905685 kubelet[2811]: E1105 04:49:11.905629 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cd54dc478-f2jvn" podUID="d81093e5-51dd-4f5e-ba7d-dad72d581a2a" Nov 5 04:49:14.529594 containerd[1641]: time="2025-11-05T04:49:14.529504837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 04:49:14.851289 containerd[1641]: time="2025-11-05T04:49:14.851153918Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 
04:49:14.852565 containerd[1641]: time="2025-11-05T04:49:14.852511575Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 04:49:14.852660 containerd[1641]: time="2025-11-05T04:49:14.852547654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 5 04:49:14.853018 kubelet[2811]: E1105 04:49:14.852956 2811 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 04:49:14.853462 kubelet[2811]: E1105 04:49:14.853035 2811 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 04:49:14.853462 kubelet[2811]: E1105 04:49:14.853332 2811 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6706336b96034a33ac145ee4eda65bf3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mnhd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cb7b8d459-6mst6_calico-system(3f6a71f3-a4bf-43c3-9766-dfdeb3e1d903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 04:49:14.853575 containerd[1641]: time="2025-11-05T04:49:14.853352347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 04:49:15.204075 containerd[1641]: 
time="2025-11-05T04:49:15.203927378Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:49:15.205307 containerd[1641]: time="2025-11-05T04:49:15.205229648Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 04:49:15.205396 containerd[1641]: time="2025-11-05T04:49:15.205308248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 5 04:49:15.205555 kubelet[2811]: E1105 04:49:15.205506 2811 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 04:49:15.205648 kubelet[2811]: E1105 04:49:15.205557 2811 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 04:49:15.205858 kubelet[2811]: E1105 04:49:15.205806 2811 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-84pmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-654c7d6777-9wfvg_calico-system(dc90951e-eb89-47bd-8fb2-3712d2db3fd5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 04:49:15.206039 containerd[1641]: time="2025-11-05T04:49:15.205859367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 04:49:15.207317 kubelet[2811]: E1105 04:49:15.207269 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-654c7d6777-9wfvg" podUID="dc90951e-eb89-47bd-8fb2-3712d2db3fd5" Nov 5 04:49:15.536926 containerd[1641]: time="2025-11-05T04:49:15.536881943Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 
04:49:15.547710 containerd[1641]: time="2025-11-05T04:49:15.547677464Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 04:49:15.547764 containerd[1641]: time="2025-11-05T04:49:15.547704967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 5 04:49:15.547848 kubelet[2811]: E1105 04:49:15.547808 2811 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 04:49:15.547896 kubelet[2811]: E1105 04:49:15.547849 2811 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 04:49:15.548090 kubelet[2811]: E1105 04:49:15.548052 2811 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dkcnl_calico-system(84d99f8c-4e0f-4dac-8f92-d3c8b82ac971): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Nov 5 04:49:15.548274 containerd[1641]: time="2025-11-05T04:49:15.548220238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 04:49:15.879685 containerd[1641]: time="2025-11-05T04:49:15.879537566Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:49:15.881037 containerd[1641]: time="2025-11-05T04:49:15.880992988Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 04:49:15.881093 containerd[1641]: time="2025-11-05T04:49:15.881076607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 5 04:49:15.881274 kubelet[2811]: E1105 04:49:15.881237 2811 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 04:49:15.881635 kubelet[2811]: E1105 04:49:15.881290 2811 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 04:49:15.881635 kubelet[2811]: E1105 04:49:15.881565 2811 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mnhd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cb7b8d459-6mst6_calico-system(3f6a71f3-a4bf-43c3-9766-dfdeb3e1d903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 04:49:15.881748 containerd[1641]: time="2025-11-05T04:49:15.881576780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 04:49:15.883029 kubelet[2811]: E1105 04:49:15.882995 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7cb7b8d459-6mst6" podUID="3f6a71f3-a4bf-43c3-9766-dfdeb3e1d903" Nov 5 04:49:15.943179 systemd[1]: Started sshd@18-10.0.0.41:22-10.0.0.1:56018.service - OpenSSH per-connection server daemon (10.0.0.1:56018). Nov 5 04:49:15.998915 sshd[5266]: Accepted publickey for core from 10.0.0.1 port 56018 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:49:16.000668 sshd-session[5266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:49:16.005568 systemd-logind[1609]: New session 19 of user core. Nov 5 04:49:16.013198 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 5 04:49:16.088759 sshd[5269]: Connection closed by 10.0.0.1 port 56018 Nov 5 04:49:16.089079 sshd-session[5266]: pam_unix(sshd:session): session closed for user core Nov 5 04:49:16.093452 systemd[1]: sshd@18-10.0.0.41:22-10.0.0.1:56018.service: Deactivated successfully. Nov 5 04:49:16.097363 systemd[1]: session-19.scope: Deactivated successfully. 
Nov 5 04:49:16.098784 systemd-logind[1609]: Session 19 logged out. Waiting for processes to exit. Nov 5 04:49:16.100770 systemd-logind[1609]: Removed session 19. Nov 5 04:49:16.210541 containerd[1641]: time="2025-11-05T04:49:16.210412556Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:49:16.211654 containerd[1641]: time="2025-11-05T04:49:16.211620426Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 04:49:16.211731 containerd[1641]: time="2025-11-05T04:49:16.211671373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 5 04:49:16.211811 kubelet[2811]: E1105 04:49:16.211773 2811 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 04:49:16.211888 kubelet[2811]: E1105 04:49:16.211813 2811 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 04:49:16.211969 kubelet[2811]: E1105 04:49:16.211927 2811 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dkcnl_calico-system(84d99f8c-4e0f-4dac-8f92-d3c8b82ac971): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 04:49:16.213130 kubelet[2811]: E1105 04:49:16.213102 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dkcnl" podUID="84d99f8c-4e0f-4dac-8f92-d3c8b82ac971" Nov 5 04:49:18.528963 containerd[1641]: time="2025-11-05T04:49:18.528914632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 04:49:18.878207 containerd[1641]: time="2025-11-05T04:49:18.878070805Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:49:18.879348 containerd[1641]: time="2025-11-05T04:49:18.879315643Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 04:49:18.879431 containerd[1641]: time="2025-11-05T04:49:18.879368764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 5 04:49:18.879574 kubelet[2811]: E1105 04:49:18.879523 2811 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 04:49:18.879929 kubelet[2811]: E1105 04:49:18.879585 2811 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 04:49:18.879929 kubelet[2811]: E1105 04:49:18.879730 2811 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhdgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:P
robeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ms88g_calico-system(c8a4b2e2-09ee-4112-a2df-31acb1eaedf9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 04:49:18.880927 kubelet[2811]: E1105 04:49:18.880899 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ms88g" podUID="c8a4b2e2-09ee-4112-a2df-31acb1eaedf9" Nov 5 04:49:21.101562 systemd[1]: Started 
sshd@19-10.0.0.41:22-10.0.0.1:48784.service - OpenSSH per-connection server daemon (10.0.0.1:48784). Nov 5 04:49:21.156053 sshd[5282]: Accepted publickey for core from 10.0.0.1 port 48784 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:49:21.157436 sshd-session[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:49:21.161636 systemd-logind[1609]: New session 20 of user core. Nov 5 04:49:21.173143 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 5 04:49:21.243591 sshd[5285]: Connection closed by 10.0.0.1 port 48784 Nov 5 04:49:21.243965 sshd-session[5282]: pam_unix(sshd:session): session closed for user core Nov 5 04:49:21.255932 systemd[1]: sshd@19-10.0.0.41:22-10.0.0.1:48784.service: Deactivated successfully. Nov 5 04:49:21.257953 systemd[1]: session-20.scope: Deactivated successfully. Nov 5 04:49:21.258873 systemd-logind[1609]: Session 20 logged out. Waiting for processes to exit. Nov 5 04:49:21.261929 systemd[1]: Started sshd@20-10.0.0.41:22-10.0.0.1:48792.service - OpenSSH per-connection server daemon (10.0.0.1:48792). Nov 5 04:49:21.262796 systemd-logind[1609]: Removed session 20. Nov 5 04:49:21.328321 sshd[5299]: Accepted publickey for core from 10.0.0.1 port 48792 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:49:21.330309 sshd-session[5299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:49:21.337250 systemd-logind[1609]: New session 21 of user core. Nov 5 04:49:21.346171 systemd[1]: Started session-21.scope - Session 21 of User core. 
Nov 5 04:49:21.528723 kubelet[2811]: E1105 04:49:21.528656 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cd54dc478-5rvkg" podUID="63b223a4-aa0e-4b7a-9e9b-ebfedd74f920" Nov 5 04:49:21.701552 sshd[5302]: Connection closed by 10.0.0.1 port 48792 Nov 5 04:49:21.701998 sshd-session[5299]: pam_unix(sshd:session): session closed for user core Nov 5 04:49:21.716877 systemd[1]: sshd@20-10.0.0.41:22-10.0.0.1:48792.service: Deactivated successfully. Nov 5 04:49:21.719030 systemd[1]: session-21.scope: Deactivated successfully. Nov 5 04:49:21.719759 systemd-logind[1609]: Session 21 logged out. Waiting for processes to exit. Nov 5 04:49:21.722684 systemd[1]: Started sshd@21-10.0.0.41:22-10.0.0.1:48802.service - OpenSSH per-connection server daemon (10.0.0.1:48802). Nov 5 04:49:21.723997 systemd-logind[1609]: Removed session 21. Nov 5 04:49:21.781912 sshd[5314]: Accepted publickey for core from 10.0.0.1 port 48802 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:49:21.783615 sshd-session[5314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:49:21.788112 systemd-logind[1609]: New session 22 of user core. Nov 5 04:49:21.803111 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 5 04:49:22.530008 kubelet[2811]: E1105 04:49:22.529927 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cd54dc478-f2jvn" podUID="d81093e5-51dd-4f5e-ba7d-dad72d581a2a" Nov 5 04:49:22.673001 sshd[5317]: Connection closed by 10.0.0.1 port 48802 Nov 5 04:49:22.671337 sshd-session[5314]: pam_unix(sshd:session): session closed for user core Nov 5 04:49:22.685187 systemd[1]: sshd@21-10.0.0.41:22-10.0.0.1:48802.service: Deactivated successfully. Nov 5 04:49:22.690558 systemd[1]: session-22.scope: Deactivated successfully. Nov 5 04:49:22.692892 systemd-logind[1609]: Session 22 logged out. Waiting for processes to exit. Nov 5 04:49:22.696105 systemd-logind[1609]: Removed session 22. Nov 5 04:49:22.697953 systemd[1]: Started sshd@22-10.0.0.41:22-10.0.0.1:48806.service - OpenSSH per-connection server daemon (10.0.0.1:48806). Nov 5 04:49:22.764997 sshd[5339]: Accepted publickey for core from 10.0.0.1 port 48806 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:49:22.766518 sshd-session[5339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:49:22.771401 systemd-logind[1609]: New session 23 of user core. Nov 5 04:49:22.785158 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 5 04:49:22.973007 sshd[5342]: Connection closed by 10.0.0.1 port 48806 Nov 5 04:49:22.972453 sshd-session[5339]: pam_unix(sshd:session): session closed for user core Nov 5 04:49:22.981233 systemd[1]: sshd@22-10.0.0.41:22-10.0.0.1:48806.service: Deactivated successfully. 
Nov 5 04:49:22.983941 systemd[1]: session-23.scope: Deactivated successfully.
Nov 5 04:49:22.985214 systemd-logind[1609]: Session 23 logged out. Waiting for processes to exit.
Nov 5 04:49:22.988698 systemd[1]: Started sshd@23-10.0.0.41:22-10.0.0.1:48822.service - OpenSSH per-connection server daemon (10.0.0.1:48822).
Nov 5 04:49:22.989908 systemd-logind[1609]: Removed session 23.
Nov 5 04:49:23.040124 sshd[5354]: Accepted publickey for core from 10.0.0.1 port 48822 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk
Nov 5 04:49:23.043109 sshd-session[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 04:49:23.050880 systemd-logind[1609]: New session 24 of user core.
Nov 5 04:49:23.054131 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 5 04:49:23.173550 sshd[5357]: Connection closed by 10.0.0.1 port 48822
Nov 5 04:49:23.173933 sshd-session[5354]: pam_unix(sshd:session): session closed for user core
Nov 5 04:49:23.179493 systemd[1]: sshd@23-10.0.0.41:22-10.0.0.1:48822.service: Deactivated successfully.
Nov 5 04:49:23.181748 systemd[1]: session-24.scope: Deactivated successfully.
Nov 5 04:49:23.183172 systemd-logind[1609]: Session 24 logged out. Waiting for processes to exit.
Nov 5 04:49:23.184911 systemd-logind[1609]: Removed session 24.
Nov 5 04:49:24.528228 kubelet[2811]: E1105 04:49:24.528135 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:49:25.044295 update_engine[1613]: I20251105 04:49:25.044169 1613 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Nov 5 04:49:25.044295 update_engine[1613]: I20251105 04:49:25.044278 1613 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Nov 5 04:49:25.045606 update_engine[1613]: I20251105 04:49:25.045521 1613 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Nov 5 04:49:25.046248 update_engine[1613]: I20251105 04:49:25.046205 1613 omaha_request_params.cc:62] Current group set to developer
Nov 5 04:49:25.046452 update_engine[1613]: I20251105 04:49:25.046419 1613 update_attempter.cc:499] Already updated boot flags. Skipping.
Nov 5 04:49:25.046452 update_engine[1613]: I20251105 04:49:25.046439 1613 update_attempter.cc:643] Scheduling an action processor start.
Nov 5 04:49:25.046498 update_engine[1613]: I20251105 04:49:25.046463 1613 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Nov 5 04:49:25.046565 update_engine[1613]: I20251105 04:49:25.046535 1613 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Nov 5 04:49:25.046665 update_engine[1613]: I20251105 04:49:25.046634 1613 omaha_request_action.cc:271] Posting an Omaha request to disabled
Nov 5 04:49:25.046665 update_engine[1613]: I20251105 04:49:25.046653 1613 omaha_request_action.cc:272] Request:
Nov 5 04:49:25.046665 update_engine[1613]:
Nov 5 04:49:25.046665 update_engine[1613]:
Nov 5 04:49:25.046665 update_engine[1613]:
Nov 5 04:49:25.046665 update_engine[1613]:
Nov 5 04:49:25.046665 update_engine[1613]:
Nov 5 04:49:25.046665 update_engine[1613]:
Nov 5 04:49:25.046665 update_engine[1613]:
Nov 5 04:49:25.046665 update_engine[1613]:
Nov 5 04:49:25.046665 update_engine[1613]: I20251105 04:49:25.046661 1613 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 5 04:49:25.056927 update_engine[1613]: I20251105 04:49:25.056869 1613 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 5 04:49:25.057705 update_engine[1613]: I20251105 04:49:25.057619 1613 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 5 04:49:25.063284 locksmithd[1660]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Nov 5 04:49:25.101187 update_engine[1613]: E20251105 04:49:25.101131 1613 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Nov 5 04:49:25.101282 update_engine[1613]: I20251105 04:49:25.101249 1613 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Nov 5 04:49:27.529241 kubelet[2811]: E1105 04:49:27.529099 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7cb7b8d459-6mst6" podUID="3f6a71f3-a4bf-43c3-9766-dfdeb3e1d903"
Nov 5 04:49:28.187160 systemd[1]: Started sshd@24-10.0.0.41:22-10.0.0.1:48838.service - OpenSSH per-connection server daemon (10.0.0.1:48838).
Nov 5 04:49:28.261740 sshd[5375]: Accepted publickey for core from 10.0.0.1 port 48838 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk
Nov 5 04:49:28.263324 sshd-session[5375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 04:49:28.268471 systemd-logind[1609]: New session 25 of user core.
Nov 5 04:49:28.279116 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 5 04:49:28.359435 sshd[5378]: Connection closed by 10.0.0.1 port 48838
Nov 5 04:49:28.359796 sshd-session[5375]: pam_unix(sshd:session): session closed for user core
Nov 5 04:49:28.364310 systemd[1]: sshd@24-10.0.0.41:22-10.0.0.1:48838.service: Deactivated successfully.
Nov 5 04:49:28.366425 systemd[1]: session-25.scope: Deactivated successfully.
Nov 5 04:49:28.367356 systemd-logind[1609]: Session 25 logged out. Waiting for processes to exit.
Nov 5 04:49:28.368514 systemd-logind[1609]: Removed session 25.
Nov 5 04:49:28.528738 kubelet[2811]: E1105 04:49:28.528650 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-654c7d6777-9wfvg" podUID="dc90951e-eb89-47bd-8fb2-3712d2db3fd5"
Nov 5 04:49:29.154023 kubelet[2811]: E1105 04:49:29.153892 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:49:29.529904 kubelet[2811]: E1105 04:49:29.529582 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ms88g" podUID="c8a4b2e2-09ee-4112-a2df-31acb1eaedf9"
Nov 5 04:49:31.529132 kubelet[2811]: E1105 04:49:31.529061 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dkcnl" podUID="84d99f8c-4e0f-4dac-8f92-d3c8b82ac971"
Nov 5 04:49:33.373133 systemd[1]: Started sshd@25-10.0.0.41:22-10.0.0.1:48270.service - OpenSSH per-connection server daemon (10.0.0.1:48270).
Nov 5 04:49:33.439298 sshd[5416]: Accepted publickey for core from 10.0.0.1 port 48270 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk
Nov 5 04:49:33.440826 sshd-session[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 04:49:33.445543 systemd-logind[1609]: New session 26 of user core.
Nov 5 04:49:33.452241 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 5 04:49:33.532378 sshd[5419]: Connection closed by 10.0.0.1 port 48270
Nov 5 04:49:33.532701 sshd-session[5416]: pam_unix(sshd:session): session closed for user core
Nov 5 04:49:33.538135 systemd[1]: sshd@25-10.0.0.41:22-10.0.0.1:48270.service: Deactivated successfully.
Nov 5 04:49:33.540432 systemd[1]: session-26.scope: Deactivated successfully.
Nov 5 04:49:33.541261 systemd-logind[1609]: Session 26 logged out. Waiting for processes to exit.
Nov 5 04:49:33.542530 systemd-logind[1609]: Removed session 26.
Nov 5 04:49:34.528323 kubelet[2811]: E1105 04:49:34.528270 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cd54dc478-5rvkg" podUID="63b223a4-aa0e-4b7a-9e9b-ebfedd74f920"
Nov 5 04:49:35.010168 update_engine[1613]: I20251105 04:49:35.010082 1613 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 5 04:49:35.010628 update_engine[1613]: I20251105 04:49:35.010205 1613 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 5 04:49:35.010655 update_engine[1613]: I20251105 04:49:35.010623 1613 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 5 04:49:35.023231 update_engine[1613]: E20251105 04:49:35.023089 1613 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Nov 5 04:49:35.023231 update_engine[1613]: I20251105 04:49:35.023200 1613 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Nov 5 04:49:36.529057 kubelet[2811]: E1105 04:49:36.528871 2811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cd54dc478-f2jvn" podUID="d81093e5-51dd-4f5e-ba7d-dad72d581a2a"
Nov 5 04:49:37.528903 kubelet[2811]: E1105 04:49:37.528853 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:49:38.549998 systemd[1]: Started sshd@26-10.0.0.41:22-10.0.0.1:48278.service - OpenSSH per-connection server daemon (10.0.0.1:48278).
Nov 5 04:49:38.618250 sshd[5432]: Accepted publickey for core from 10.0.0.1 port 48278 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk
Nov 5 04:49:38.620188 sshd-session[5432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 04:49:38.627429 systemd-logind[1609]: New session 27 of user core.
Nov 5 04:49:38.636271 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 5 04:49:38.725165 sshd[5435]: Connection closed by 10.0.0.1 port 48278
Nov 5 04:49:38.725523 sshd-session[5432]: pam_unix(sshd:session): session closed for user core
Nov 5 04:49:38.731261 systemd[1]: sshd@26-10.0.0.41:22-10.0.0.1:48278.service: Deactivated successfully.
Nov 5 04:49:38.734194 systemd[1]: session-27.scope: Deactivated successfully.
Nov 5 04:49:38.735086 systemd-logind[1609]: Session 27 logged out. Waiting for processes to exit.
Nov 5 04:49:38.736891 systemd-logind[1609]: Removed session 27.