Nov 24 00:22:52.896569 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Nov 23 20:54:38 -00 2025
Nov 24 00:22:52.896606 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1969a6ee0c0ec5507eb68849c160e94c58e52d2291c767873af68a1f52b30801
Nov 24 00:22:52.896620 kernel: BIOS-provided physical RAM map:
Nov 24 00:22:52.896629 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 24 00:22:52.896637 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 24 00:22:52.896646 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 24 00:22:52.896656 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 24 00:22:52.896665 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 24 00:22:52.896677 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 24 00:22:52.896686 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 24 00:22:52.896694 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 24 00:22:52.896706 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 24 00:22:52.896715 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 24 00:22:52.896724 kernel: NX (Execute Disable) protection: active
Nov 24 00:22:52.896734 kernel: APIC: Static calls initialized
Nov 24 00:22:52.896743 kernel: SMBIOS 2.8 present.
Nov 24 00:22:52.896759 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 24 00:22:52.896768 kernel: DMI: Memory slots populated: 1/1
Nov 24 00:22:52.896777 kernel: Hypervisor detected: KVM
Nov 24 00:22:52.896786 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 24 00:22:52.896796 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 24 00:22:52.896805 kernel: kvm-clock: using sched offset of 4314804759 cycles
Nov 24 00:22:52.896815 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 24 00:22:52.896825 kernel: tsc: Detected 2794.748 MHz processor
Nov 24 00:22:52.896835 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 24 00:22:52.896845 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 24 00:22:52.896857 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 24 00:22:52.896869 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 24 00:22:52.896881 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 24 00:22:52.896893 kernel: Using GB pages for direct mapping
Nov 24 00:22:52.896906 kernel: ACPI: Early table checksum verification disabled
Nov 24 00:22:52.896939 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 24 00:22:52.896951 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 00:22:52.896964 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 00:22:52.896976 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 00:22:52.896992 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 24 00:22:52.897004 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 00:22:52.897016 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 00:22:52.897029 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 00:22:52.897041 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 00:22:52.897058 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Nov 24 00:22:52.897071 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Nov 24 00:22:52.897080 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 24 00:22:52.897091 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Nov 24 00:22:52.897101 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Nov 24 00:22:52.897111 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Nov 24 00:22:52.897121 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Nov 24 00:22:52.897130 kernel: No NUMA configuration found
Nov 24 00:22:52.897140 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 24 00:22:52.897153 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Nov 24 00:22:52.897163 kernel: Zone ranges:
Nov 24 00:22:52.897173 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 24 00:22:52.897183 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 24 00:22:52.897193 kernel: Normal empty
Nov 24 00:22:52.897203 kernel: Device empty
Nov 24 00:22:52.897213 kernel: Movable zone start for each node
Nov 24 00:22:52.897223 kernel: Early memory node ranges
Nov 24 00:22:52.897233 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 24 00:22:52.897243 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 24 00:22:52.897256 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 24 00:22:52.897266 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 24 00:22:52.897276 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 24 00:22:52.897286 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 24 00:22:52.897299 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 24 00:22:52.897309 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 24 00:22:52.897319 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 24 00:22:52.897342 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 24 00:22:52.897364 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 24 00:22:52.897379 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 24 00:22:52.897389 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 24 00:22:52.897400 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 24 00:22:52.897410 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 24 00:22:52.897426 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 24 00:22:52.897436 kernel: TSC deadline timer available
Nov 24 00:22:52.897446 kernel: CPU topo: Max. logical packages: 1
Nov 24 00:22:52.897456 kernel: CPU topo: Max. logical dies: 1
Nov 24 00:22:52.897466 kernel: CPU topo: Max. dies per package: 1
Nov 24 00:22:52.897480 kernel: CPU topo: Max. threads per core: 1
Nov 24 00:22:52.897490 kernel: CPU topo: Num. cores per package: 4
Nov 24 00:22:52.897500 kernel: CPU topo: Num. threads per package: 4
Nov 24 00:22:52.897510 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 24 00:22:52.897520 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 24 00:22:52.897530 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 24 00:22:52.897540 kernel: kvm-guest: setup PV sched yield
Nov 24 00:22:52.897550 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 24 00:22:52.897560 kernel: Booting paravirtualized kernel on KVM
Nov 24 00:22:52.897571 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 24 00:22:52.897584 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 24 00:22:52.897603 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 24 00:22:52.897614 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 24 00:22:52.897657 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 24 00:22:52.897673 kernel: kvm-guest: PV spinlocks enabled
Nov 24 00:22:52.897682 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 24 00:22:52.897691 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1969a6ee0c0ec5507eb68849c160e94c58e52d2291c767873af68a1f52b30801
Nov 24 00:22:52.897699 kernel: random: crng init done
Nov 24 00:22:52.897713 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 24 00:22:52.897720 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 24 00:22:52.897728 kernel: Fallback order for Node 0: 0
Nov 24 00:22:52.897735 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Nov 24 00:22:52.897743 kernel: Policy zone: DMA32
Nov 24 00:22:52.897750 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 24 00:22:52.897758 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 24 00:22:52.897766 kernel: ftrace: allocating 40103 entries in 157 pages
Nov 24 00:22:52.897773 kernel: ftrace: allocated 157 pages with 5 groups
Nov 24 00:22:52.897783 kernel: Dynamic Preempt: voluntary
Nov 24 00:22:52.897790 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 24 00:22:52.897801 kernel: rcu: RCU event tracing is enabled.
Nov 24 00:22:52.897817 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 24 00:22:52.897831 kernel: Trampoline variant of Tasks RCU enabled.
Nov 24 00:22:52.897848 kernel: Rude variant of Tasks RCU enabled.
Nov 24 00:22:52.897861 kernel: Tracing variant of Tasks RCU enabled.
Nov 24 00:22:52.897873 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 24 00:22:52.897882 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 24 00:22:52.897895 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 24 00:22:52.897905 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 24 00:22:52.897937 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 24 00:22:52.897944 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 24 00:22:52.897952 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 24 00:22:52.897968 kernel: Console: colour VGA+ 80x25
Nov 24 00:22:52.897978 kernel: printk: legacy console [ttyS0] enabled
Nov 24 00:22:52.897986 kernel: ACPI: Core revision 20240827
Nov 24 00:22:52.897994 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 24 00:22:52.898002 kernel: APIC: Switch to symmetric I/O mode setup
Nov 24 00:22:52.898010 kernel: x2apic enabled
Nov 24 00:22:52.898018 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 24 00:22:52.898031 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 24 00:22:52.898039 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 24 00:22:52.898047 kernel: kvm-guest: setup PV IPIs
Nov 24 00:22:52.898055 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 24 00:22:52.898063 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 24 00:22:52.898073 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 24 00:22:52.898081 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 24 00:22:52.898089 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 24 00:22:52.898097 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 24 00:22:52.898105 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 24 00:22:52.898113 kernel: Spectre V2 : Mitigation: Retpolines
Nov 24 00:22:52.898121 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 24 00:22:52.898129 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 24 00:22:52.898138 kernel: active return thunk: retbleed_return_thunk
Nov 24 00:22:52.898146 kernel: RETBleed: Mitigation: untrained return thunk
Nov 24 00:22:52.898155 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 24 00:22:52.898162 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 24 00:22:52.898170 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 24 00:22:52.898179 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 24 00:22:52.898187 kernel: active return thunk: srso_return_thunk
Nov 24 00:22:52.898195 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 24 00:22:52.898203 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 24 00:22:52.898213 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 24 00:22:52.898221 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 24 00:22:52.898229 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 24 00:22:52.898237 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 24 00:22:52.898244 kernel: Freeing SMP alternatives memory: 32K
Nov 24 00:22:52.898252 kernel: pid_max: default: 32768 minimum: 301
Nov 24 00:22:52.898260 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 24 00:22:52.898268 kernel: landlock: Up and running.
Nov 24 00:22:52.898276 kernel: SELinux: Initializing.
Nov 24 00:22:52.898288 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 24 00:22:52.898296 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 24 00:22:52.898304 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 24 00:22:52.898312 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 24 00:22:52.898320 kernel: ... version: 0
Nov 24 00:22:52.898327 kernel: ... bit width: 48
Nov 24 00:22:52.898335 kernel: ... generic registers: 6
Nov 24 00:22:52.898343 kernel: ... value mask: 0000ffffffffffff
Nov 24 00:22:52.898351 kernel: ... max period: 00007fffffffffff
Nov 24 00:22:52.898362 kernel: ... fixed-purpose events: 0
Nov 24 00:22:52.898369 kernel: ... event mask: 000000000000003f
Nov 24 00:22:52.898377 kernel: signal: max sigframe size: 1776
Nov 24 00:22:52.898385 kernel: rcu: Hierarchical SRCU implementation.
Nov 24 00:22:52.898393 kernel: rcu: Max phase no-delay instances is 400.
Nov 24 00:22:52.898400 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 24 00:22:52.898408 kernel: smp: Bringing up secondary CPUs ...
Nov 24 00:22:52.898416 kernel: smpboot: x86: Booting SMP configuration:
Nov 24 00:22:52.898424 kernel: .... node #0, CPUs: #1 #2 #3
Nov 24 00:22:52.898434 kernel: smp: Brought up 1 node, 4 CPUs
Nov 24 00:22:52.898441 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 24 00:22:52.898450 kernel: Memory: 2420720K/2571752K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 145096K reserved, 0K cma-reserved)
Nov 24 00:22:52.898457 kernel: devtmpfs: initialized
Nov 24 00:22:52.898465 kernel: x86/mm: Memory block size: 128MB
Nov 24 00:22:52.898477 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 24 00:22:52.898485 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 24 00:22:52.898493 kernel: pinctrl core: initialized pinctrl subsystem
Nov 24 00:22:52.898501 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 24 00:22:52.898511 kernel: audit: initializing netlink subsys (disabled)
Nov 24 00:22:52.898519 kernel: audit: type=2000 audit(1763943770.315:1): state=initialized audit_enabled=0 res=1
Nov 24 00:22:52.898526 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 24 00:22:52.898534 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 24 00:22:52.898542 kernel: cpuidle: using governor menu
Nov 24 00:22:52.898550 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 24 00:22:52.898557 kernel: dca service started, version 1.12.1
Nov 24 00:22:52.898565 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 24 00:22:52.898573 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 24 00:22:52.898583 kernel: PCI: Using configuration type 1 for base access
Nov 24 00:22:52.898598 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 24 00:22:52.898606 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 24 00:22:52.898614 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 24 00:22:52.898622 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 24 00:22:52.898629 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 24 00:22:52.898637 kernel: ACPI: Added _OSI(Module Device)
Nov 24 00:22:52.898645 kernel: ACPI: Added _OSI(Processor Device)
Nov 24 00:22:52.898653 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 24 00:22:52.898663 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 24 00:22:52.898671 kernel: ACPI: Interpreter enabled
Nov 24 00:22:52.898679 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 24 00:22:52.898687 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 24 00:22:52.898695 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 24 00:22:52.898702 kernel: PCI: Using E820 reservations for host bridge windows
Nov 24 00:22:52.898710 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 24 00:22:52.898718 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 24 00:22:52.898992 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 24 00:22:52.899145 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 24 00:22:52.899268 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 24 00:22:52.899278 kernel: PCI host bridge to bus 0000:00
Nov 24 00:22:52.899420 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 24 00:22:52.899540 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 24 00:22:52.899665 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 24 00:22:52.899796 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 24 00:22:52.899938 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 24 00:22:52.900057 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 24 00:22:52.900170 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 24 00:22:52.900326 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 24 00:22:52.900488 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 24 00:22:52.900630 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Nov 24 00:22:52.900766 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Nov 24 00:22:52.900939 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Nov 24 00:22:52.901064 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 24 00:22:52.901220 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 24 00:22:52.901353 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Nov 24 00:22:52.901477 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Nov 24 00:22:52.901619 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 24 00:22:52.901762 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 24 00:22:52.901897 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Nov 24 00:22:52.902108 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Nov 24 00:22:52.902238 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 24 00:22:52.902388 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 24 00:22:52.902513 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Nov 24 00:22:52.902657 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Nov 24 00:22:52.902783 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 24 00:22:52.902966 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Nov 24 00:22:52.903117 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 24 00:22:52.903245 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 24 00:22:52.903382 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 24 00:22:52.903518 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Nov 24 00:22:52.903651 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Nov 24 00:22:52.903798 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 24 00:22:52.903974 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Nov 24 00:22:52.903990 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 24 00:22:52.904000 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 24 00:22:52.904011 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 24 00:22:52.904021 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 24 00:22:52.904036 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 24 00:22:52.904045 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 24 00:22:52.904056 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 24 00:22:52.904066 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 24 00:22:52.904076 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 24 00:22:52.904084 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 24 00:22:52.904092 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 24 00:22:52.904100 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 24 00:22:52.904108 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 24 00:22:52.904119 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 24 00:22:52.904127 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 24 00:22:52.904135 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 24 00:22:52.904143 kernel: iommu: Default domain type: Translated
Nov 24 00:22:52.904151 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 24 00:22:52.904159 kernel: PCI: Using ACPI for IRQ routing
Nov 24 00:22:52.904167 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 24 00:22:52.904175 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 24 00:22:52.904295 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 24 00:22:52.904421 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 24 00:22:52.904543 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 24 00:22:52.904676 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 24 00:22:52.904686 kernel: vgaarb: loaded
Nov 24 00:22:52.904695 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 24 00:22:52.904704 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 24 00:22:52.904712 kernel: clocksource: Switched to clocksource kvm-clock
Nov 24 00:22:52.904720 kernel: VFS: Disk quotas dquot_6.6.0
Nov 24 00:22:52.904731 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 24 00:22:52.904740 kernel: pnp: PnP ACPI init
Nov 24 00:22:52.904931 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 24 00:22:52.904944 kernel: pnp: PnP ACPI: found 6 devices
Nov 24 00:22:52.904952 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 24 00:22:52.904961 kernel: NET: Registered PF_INET protocol family
Nov 24 00:22:52.904969 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 24 00:22:52.904977 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 24 00:22:52.904986 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 24 00:22:52.904998 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 24 00:22:52.905006 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 24 00:22:52.905014 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 24 00:22:52.905022 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 24 00:22:52.905030 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 24 00:22:52.905038 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 24 00:22:52.905046 kernel: NET: Registered PF_XDP protocol family
Nov 24 00:22:52.905162 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 24 00:22:52.905278 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 24 00:22:52.905398 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 24 00:22:52.905516 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 24 00:22:52.905645 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 24 00:22:52.905764 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 24 00:22:52.905776 kernel: PCI: CLS 0 bytes, default 64
Nov 24 00:22:52.905786 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 24 00:22:52.905796 kernel: Initialise system trusted keyrings
Nov 24 00:22:52.905805 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 24 00:22:52.905819 kernel: Key type asymmetric registered
Nov 24 00:22:52.905828 kernel: Asymmetric key parser 'x509' registered
Nov 24 00:22:52.905838 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 24 00:22:52.905847 kernel: io scheduler mq-deadline registered
Nov 24 00:22:52.905857 kernel: io scheduler kyber registered
Nov 24 00:22:52.905868 kernel: io scheduler bfq registered
Nov 24 00:22:52.905879 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 24 00:22:52.905892 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 24 00:22:52.905903 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 24 00:22:52.905935 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 24 00:22:52.905947 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 24 00:22:52.905959 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 24 00:22:52.905971 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 24 00:22:52.905983 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 24 00:22:52.905994 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 24 00:22:52.906175 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 24 00:22:52.906189 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 24 00:22:52.906338 kernel: rtc_cmos 00:04: registered as rtc0
Nov 24 00:22:52.906463 kernel: rtc_cmos 00:04: setting system clock to 2025-11-24T00:22:52 UTC (1763943772)
Nov 24 00:22:52.906585 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 24 00:22:52.906606 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 24 00:22:52.906616 kernel: NET: Registered PF_INET6 protocol family
Nov 24 00:22:52.906625 kernel: Segment Routing with IPv6
Nov 24 00:22:52.906635 kernel: In-situ OAM (IOAM) with IPv6
Nov 24 00:22:52.906644 kernel: NET: Registered PF_PACKET protocol family
Nov 24 00:22:52.906654 kernel: Key type dns_resolver registered
Nov 24 00:22:52.906668 kernel: IPI shorthand broadcast: enabled
Nov 24 00:22:52.906677 kernel: sched_clock: Marking stable (3167001979, 261495299)->(3507875298, -79378020)
Nov 24 00:22:52.906686 kernel: registered taskstats version 1
Nov 24 00:22:52.906696 kernel: Loading compiled-in X.509 certificates
Nov 24 00:22:52.906705 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 5d380f93d180914be04be8068ab300f495c35900'
Nov 24 00:22:52.906715 kernel: Demotion targets for Node 0: null
Nov 24 00:22:52.906724 kernel: Key type .fscrypt registered
Nov 24 00:22:52.906733 kernel: Key type fscrypt-provisioning registered
Nov 24 00:22:52.906743 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 24 00:22:52.906755 kernel: ima: Allocated hash algorithm: sha1
Nov 24 00:22:52.906764 kernel: ima: No architecture policies found
Nov 24 00:22:52.906773 kernel: clk: Disabling unused clocks
Nov 24 00:22:52.906783 kernel: Warning: unable to open an initial console.
Nov 24 00:22:52.906793 kernel: Freeing unused kernel image (initmem) memory: 46188K
Nov 24 00:22:52.906802 kernel: Write protecting the kernel read-only data: 40960k
Nov 24 00:22:52.906812 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 24 00:22:52.906822 kernel: Run /init as init process
Nov 24 00:22:52.906831 kernel: with arguments:
Nov 24 00:22:52.906843 kernel: /init
Nov 24 00:22:52.906852 kernel: with environment:
Nov 24 00:22:52.906861 kernel: HOME=/
Nov 24 00:22:52.906871 kernel: TERM=linux
Nov 24 00:22:52.906881 systemd[1]: Successfully made /usr/ read-only.
Nov 24 00:22:52.906895 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 24 00:22:52.906950 systemd[1]: Detected virtualization kvm.
Nov 24 00:22:52.906960 systemd[1]: Detected architecture x86-64.
Nov 24 00:22:52.906970 systemd[1]: Running in initrd.
Nov 24 00:22:52.906980 systemd[1]: No hostname configured, using default hostname.
Nov 24 00:22:52.906991 systemd[1]: Hostname set to .
Nov 24 00:22:52.907000 systemd[1]: Initializing machine ID from VM UUID.
Nov 24 00:22:52.907010 systemd[1]: Queued start job for default target initrd.target.
Nov 24 00:22:52.907023 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 24 00:22:52.907034 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 24 00:22:52.907045 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 24 00:22:52.907055 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 24 00:22:52.907065 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 24 00:22:52.907077 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 24 00:22:52.907089 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 24 00:22:52.907102 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 24 00:22:52.907112 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 24 00:22:52.907123 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 24 00:22:52.907133 systemd[1]: Reached target paths.target - Path Units.
Nov 24 00:22:52.907143 systemd[1]: Reached target slices.target - Slice Units.
Nov 24 00:22:52.907153 systemd[1]: Reached target swap.target - Swaps.
Nov 24 00:22:52.907164 systemd[1]: Reached target timers.target - Timer Units.
Nov 24 00:22:52.907174 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 24 00:22:52.907184 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 24 00:22:52.907197 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 24 00:22:52.907208 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 24 00:22:52.907218 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 24 00:22:52.907228 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 24 00:22:52.907239 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 24 00:22:52.907249 systemd[1]: Reached target sockets.target - Socket Units.
Nov 24 00:22:52.907259 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 24 00:22:52.907272 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 24 00:22:52.907283 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 24 00:22:52.907294 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 24 00:22:52.907304 systemd[1]: Starting systemd-fsck-usr.service...
Nov 24 00:22:52.907314 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 24 00:22:52.907324 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 24 00:22:52.907337 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 24 00:22:52.907347 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 24 00:22:52.907358 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 24 00:22:52.907369 systemd[1]: Finished systemd-fsck-usr.service.
Nov 24 00:22:52.907379 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 24 00:22:52.907467 systemd-journald[200]: Collecting audit messages is disabled.
Nov 24 00:22:52.907497 systemd-journald[200]: Journal started
Nov 24 00:22:52.907523 systemd-journald[200]: Runtime Journal (/run/log/journal/83fd130d17f64808942a6dd8875cb4c0) is 6M, max 48.3M, 42.2M free.
Nov 24 00:22:52.895179 systemd-modules-load[201]: Inserted module 'overlay'
Nov 24 00:22:52.910981 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 24 00:22:52.912184 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 24 00:22:52.915683 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 24 00:22:52.923692 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 24 00:22:52.930995 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 24 00:22:52.932600 systemd-modules-load[201]: Inserted module 'br_netfilter'
Nov 24 00:22:53.001341 kernel: Bridge firewalling registered
Nov 24 00:22:52.939176 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 24 00:22:53.009194 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 24 00:22:53.010725 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 24 00:22:53.017052 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 24 00:22:53.019663 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 24 00:22:53.032794 systemd-tmpfiles[220]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 24 00:22:53.036358 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 24 00:22:53.038223 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 24 00:22:53.041902 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 24 00:22:53.060032 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 24 00:22:53.063374 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 24 00:22:53.100539 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1969a6ee0c0ec5507eb68849c160e94c58e52d2291c767873af68a1f52b30801 Nov 24 00:22:53.109659 systemd-resolved[234]: Positive Trust Anchors: Nov 24 00:22:53.109674 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 24 00:22:53.109712 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 24 00:22:53.112987 systemd-resolved[234]: Defaulting to hostname 'linux'. Nov 24 00:22:53.114502 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 24 00:22:53.127710 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 24 00:22:53.253974 kernel: SCSI subsystem initialized Nov 24 00:22:53.263945 kernel: Loading iSCSI transport class v2.0-870. 
Nov 24 00:22:53.275964 kernel: iscsi: registered transport (tcp) Nov 24 00:22:53.303941 kernel: iscsi: registered transport (qla4xxx) Nov 24 00:22:53.303968 kernel: QLogic iSCSI HBA Driver Nov 24 00:22:53.328561 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 24 00:22:53.347486 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 00:22:53.348710 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 24 00:22:53.422594 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 24 00:22:53.425280 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 24 00:22:53.494955 kernel: raid6: avx2x4 gen() 29783 MB/s Nov 24 00:22:53.511942 kernel: raid6: avx2x2 gen() 30631 MB/s Nov 24 00:22:53.529674 kernel: raid6: avx2x1 gen() 25664 MB/s Nov 24 00:22:53.529705 kernel: raid6: using algorithm avx2x2 gen() 30631 MB/s Nov 24 00:22:53.547820 kernel: raid6: .... xor() 19116 MB/s, rmw enabled Nov 24 00:22:53.547991 kernel: raid6: using avx2x2 recovery algorithm Nov 24 00:22:53.600952 kernel: xor: automatically using best checksumming function avx Nov 24 00:22:53.806963 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 24 00:22:53.815732 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 24 00:22:53.818885 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 00:22:53.859352 systemd-udevd[455]: Using default interface naming scheme 'v255'. Nov 24 00:22:53.865680 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 00:22:53.867828 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 24 00:22:53.892509 dracut-pre-trigger[457]: rd.md=0: removing MD RAID activation Nov 24 00:22:53.924373 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Nov 24 00:22:53.926754 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 24 00:22:54.014245 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 00:22:54.020062 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 24 00:22:54.110961 kernel: cryptd: max_cpu_qlen set to 1000 Nov 24 00:22:54.126958 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 24 00:22:54.127031 kernel: AES CTR mode by8 optimization enabled Nov 24 00:22:54.145996 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 24 00:22:54.160495 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 24 00:22:54.163055 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 24 00:22:54.168431 kernel: GPT:9289727 != 19775487 Nov 24 00:22:54.168472 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 24 00:22:54.168489 kernel: GPT:9289727 != 19775487 Nov 24 00:22:54.168503 kernel: libata version 3.00 loaded. Nov 24 00:22:54.168518 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 24 00:22:54.168532 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 24 00:22:54.167760 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 00:22:54.167863 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:22:54.176026 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:22:54.182587 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:22:54.183967 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Nov 24 00:22:54.190933 kernel: ahci 0000:00:1f.2: version 3.0 Nov 24 00:22:54.198372 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 24 00:22:54.198416 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Nov 24 00:22:54.198662 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Nov 24 00:22:54.198861 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 24 00:22:54.211939 kernel: scsi host0: ahci Nov 24 00:22:54.213931 kernel: scsi host1: ahci Nov 24 00:22:54.215957 kernel: scsi host2: ahci Nov 24 00:22:54.217034 kernel: scsi host3: ahci Nov 24 00:22:54.219203 kernel: scsi host4: ahci Nov 24 00:22:54.219398 kernel: scsi host5: ahci Nov 24 00:22:54.222236 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Nov 24 00:22:54.222273 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Nov 24 00:22:54.222286 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Nov 24 00:22:54.222299 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Nov 24 00:22:54.222314 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Nov 24 00:22:54.222325 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Nov 24 00:22:54.234358 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 24 00:22:54.258288 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 24 00:22:54.328498 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 24 00:22:54.333637 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 24 00:22:54.334479 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 24 00:22:54.348935 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 24 00:22:54.352792 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 24 00:22:54.535949 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 24 00:22:54.536021 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 24 00:22:54.537945 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 24 00:22:54.538949 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 24 00:22:54.539947 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 24 00:22:54.543386 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 24 00:22:54.543415 kernel: ata3.00: LPM support broken, forcing max_power Nov 24 00:22:54.543447 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 24 00:22:54.593832 kernel: ata3.00: applying bridge limits Nov 24 00:22:54.595832 kernel: ata3.00: LPM support broken, forcing max_power Nov 24 00:22:54.595854 kernel: ata3.00: configured for UDMA/100 Nov 24 00:22:54.598948 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 24 00:22:54.656090 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 24 00:22:54.656315 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 24 00:22:54.670960 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 24 00:22:54.711212 disk-uuid[617]: Primary Header is updated. Nov 24 00:22:54.711212 disk-uuid[617]: Secondary Entries is updated. Nov 24 00:22:54.711212 disk-uuid[617]: Secondary Header is updated. Nov 24 00:22:54.738029 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 24 00:22:54.741940 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 24 00:22:55.115246 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 24 00:22:55.117512 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Nov 24 00:22:55.120823 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 00:22:55.122770 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 24 00:22:55.127285 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 24 00:22:55.164502 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 24 00:22:55.843931 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 24 00:22:55.844048 disk-uuid[619]: The operation has completed successfully. Nov 24 00:22:55.883407 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 24 00:22:55.883540 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 24 00:22:55.915744 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 24 00:22:55.944434 sh[647]: Success Nov 24 00:22:55.966850 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 24 00:22:55.966892 kernel: device-mapper: uevent: version 1.0.3 Nov 24 00:22:55.968941 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 24 00:22:55.978930 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Nov 24 00:22:56.009702 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 24 00:22:56.012354 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 24 00:22:56.032043 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Nov 24 00:22:56.041475 kernel: BTRFS: device fsid c993ebd2-0e38-4cfc-8615-2c75294bea72 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (659) Nov 24 00:22:56.041498 kernel: BTRFS info (device dm-0): first mount of filesystem c993ebd2-0e38-4cfc-8615-2c75294bea72 Nov 24 00:22:56.041517 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:22:56.046954 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 24 00:22:56.046976 kernel: BTRFS info (device dm-0): enabling free space tree Nov 24 00:22:56.048256 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 24 00:22:56.051531 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 24 00:22:56.055391 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 24 00:22:56.056365 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 24 00:22:56.062101 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 24 00:22:56.087952 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (688) Nov 24 00:22:56.088006 kernel: BTRFS info (device vda6): first mount of filesystem 8f3e7759-f869-465c-a676-2cd550a2d4e4 Nov 24 00:22:56.090583 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:22:56.094539 kernel: BTRFS info (device vda6): turning on async discard Nov 24 00:22:56.094607 kernel: BTRFS info (device vda6): enabling free space tree Nov 24 00:22:56.100956 kernel: BTRFS info (device vda6): last unmount of filesystem 8f3e7759-f869-465c-a676-2cd550a2d4e4 Nov 24 00:22:56.102487 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 24 00:22:56.104643 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 24 00:22:56.213616 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 24 00:22:56.219667 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 24 00:22:56.224167 ignition[733]: Ignition 2.22.0 Nov 24 00:22:56.224179 ignition[733]: Stage: fetch-offline Nov 24 00:22:56.224222 ignition[733]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:22:56.224232 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 24 00:22:56.224352 ignition[733]: parsed url from cmdline: "" Nov 24 00:22:56.224355 ignition[733]: no config URL provided Nov 24 00:22:56.224361 ignition[733]: reading system config file "/usr/lib/ignition/user.ign" Nov 24 00:22:56.224369 ignition[733]: no config at "/usr/lib/ignition/user.ign" Nov 24 00:22:56.224401 ignition[733]: op(1): [started] loading QEMU firmware config module Nov 24 00:22:56.224406 ignition[733]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 24 00:22:56.236113 ignition[733]: op(1): [finished] loading QEMU firmware config module Nov 24 00:22:56.269065 systemd-networkd[835]: lo: Link UP Nov 24 00:22:56.269076 systemd-networkd[835]: lo: Gained carrier Nov 24 00:22:56.270662 systemd-networkd[835]: Enumeration completed Nov 24 00:22:56.270737 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 24 00:22:56.271663 systemd[1]: Reached target network.target - Network. Nov 24 00:22:56.272384 systemd-networkd[835]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:22:56.272389 systemd-networkd[835]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 24 00:22:56.273706 systemd-networkd[835]: eth0: Link UP Nov 24 00:22:56.273848 systemd-networkd[835]: eth0: Gained carrier Nov 24 00:22:56.273857 systemd-networkd[835]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:22:56.301954 systemd-networkd[835]: eth0: DHCPv4 address 10.0.0.104/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 24 00:22:56.334735 ignition[733]: parsing config with SHA512: 1a25c0dd21b8d7e0bd7ad4596eec8735cce913db6e66e0acb3ecc8e20afde4173ddc2ed89f8fa8209ba529f1136fd7103e6b902a16c148ec433682c3b76c628b Nov 24 00:22:56.338561 unknown[733]: fetched base config from "system" Nov 24 00:22:56.338574 unknown[733]: fetched user config from "qemu" Nov 24 00:22:56.338931 ignition[733]: fetch-offline: fetch-offline passed Nov 24 00:22:56.338995 ignition[733]: Ignition finished successfully Nov 24 00:22:56.344286 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 24 00:22:56.345478 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 24 00:22:56.346588 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 24 00:22:56.391948 ignition[842]: Ignition 2.22.0 Nov 24 00:22:56.391966 ignition[842]: Stage: kargs Nov 24 00:22:56.392108 ignition[842]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:22:56.392118 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 24 00:22:56.392930 ignition[842]: kargs: kargs passed Nov 24 00:22:56.392984 ignition[842]: Ignition finished successfully Nov 24 00:22:56.399506 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 24 00:22:56.401686 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Nov 24 00:22:56.445326 ignition[850]: Ignition 2.22.0 Nov 24 00:22:56.445340 ignition[850]: Stage: disks Nov 24 00:22:56.445476 ignition[850]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:22:56.445488 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 24 00:22:56.446230 ignition[850]: disks: disks passed Nov 24 00:22:56.506759 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 24 00:22:56.446277 ignition[850]: Ignition finished successfully Nov 24 00:22:56.508247 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 24 00:22:56.511592 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 24 00:22:56.512399 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 24 00:22:56.517855 systemd[1]: Reached target sysinit.target - System Initialization. Nov 24 00:22:56.521457 systemd[1]: Reached target basic.target - Basic System. Nov 24 00:22:56.525278 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 24 00:22:56.552105 systemd-resolved[234]: Detected conflict on linux IN A 10.0.0.104 Nov 24 00:22:56.552117 systemd-resolved[234]: Hostname conflict, changing published hostname from 'linux' to 'linux7'. Nov 24 00:22:56.553732 systemd-fsck[859]: ROOT: clean, 15/553520 files, 52789/553472 blocks Nov 24 00:22:57.528693 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 24 00:22:57.530941 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 24 00:22:57.657936 kernel: EXT4-fs (vda9): mounted filesystem 5d9d0447-100f-4769-adb5-76fdba966eb2 r/w with ordered data mode. Quota mode: none. Nov 24 00:22:57.658058 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 24 00:22:57.661230 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 24 00:22:57.752758 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 24 00:22:57.755704 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 24 00:22:57.757336 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 24 00:22:57.757385 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 24 00:22:57.757409 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 24 00:22:57.773568 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 24 00:22:57.777793 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 24 00:22:57.787023 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (867) Nov 24 00:22:57.787052 kernel: BTRFS info (device vda6): first mount of filesystem 8f3e7759-f869-465c-a676-2cd550a2d4e4 Nov 24 00:22:57.787066 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:22:57.787080 kernel: BTRFS info (device vda6): turning on async discard Nov 24 00:22:57.787093 kernel: BTRFS info (device vda6): enabling free space tree Nov 24 00:22:57.790317 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 24 00:22:57.851403 initrd-setup-root[891]: cut: /sysroot/etc/passwd: No such file or directory Nov 24 00:22:57.974534 initrd-setup-root[898]: cut: /sysroot/etc/group: No such file or directory Nov 24 00:22:57.979294 initrd-setup-root[905]: cut: /sysroot/etc/shadow: No such file or directory Nov 24 00:22:57.984960 initrd-setup-root[912]: cut: /sysroot/etc/gshadow: No such file or directory Nov 24 00:22:58.080568 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 24 00:22:58.096324 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 24 00:22:58.099803 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Nov 24 00:22:58.118226 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 24 00:22:58.120821 kernel: BTRFS info (device vda6): last unmount of filesystem 8f3e7759-f869-465c-a676-2cd550a2d4e4 Nov 24 00:22:58.137213 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 24 00:22:58.174622 ignition[981]: INFO : Ignition 2.22.0 Nov 24 00:22:58.174622 ignition[981]: INFO : Stage: mount Nov 24 00:22:58.246456 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 00:22:58.246456 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 24 00:22:58.246456 ignition[981]: INFO : mount: mount passed Nov 24 00:22:58.246456 ignition[981]: INFO : Ignition finished successfully Nov 24 00:22:58.255224 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 24 00:22:58.258617 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 24 00:22:58.306185 systemd-networkd[835]: eth0: Gained IPv6LL Nov 24 00:22:58.660088 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 24 00:22:58.723266 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (993) Nov 24 00:22:58.723295 kernel: BTRFS info (device vda6): first mount of filesystem 8f3e7759-f869-465c-a676-2cd550a2d4e4 Nov 24 00:22:58.723307 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:22:58.728294 kernel: BTRFS info (device vda6): turning on async discard Nov 24 00:22:58.728313 kernel: BTRFS info (device vda6): enabling free space tree Nov 24 00:22:58.730152 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 24 00:22:58.781874 ignition[1010]: INFO : Ignition 2.22.0 Nov 24 00:22:58.781874 ignition[1010]: INFO : Stage: files Nov 24 00:22:58.784557 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 00:22:58.784557 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 24 00:22:58.784557 ignition[1010]: DEBUG : files: compiled without relabeling support, skipping Nov 24 00:22:58.784557 ignition[1010]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 24 00:22:58.784557 ignition[1010]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 24 00:22:58.795107 ignition[1010]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 24 00:22:58.795107 ignition[1010]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 24 00:22:58.795107 ignition[1010]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 24 00:22:58.789399 unknown[1010]: wrote ssh authorized keys file for user: core Nov 24 00:22:58.803301 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 24 00:22:58.803301 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 24 00:22:58.843982 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 24 00:22:58.931008 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 24 00:22:58.934336 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 24 00:22:58.934336 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Nov 24 00:22:58.934336 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 24 00:22:58.934336 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 24 00:22:58.934336 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 24 00:22:58.934336 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 24 00:22:58.934336 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 24 00:22:58.934336 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 24 00:22:59.277501 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 24 00:22:59.280618 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 24 00:22:59.280618 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 00:22:59.514693 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 00:22:59.514693 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 00:22:59.522570 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Nov 24 00:22:59.900975 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 24 00:23:00.279441 ignition[1010]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 00:23:00.279441 ignition[1010]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 24 00:23:00.285192 ignition[1010]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 24 00:23:00.682076 ignition[1010]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 24 00:23:00.682076 ignition[1010]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 24 00:23:00.682076 ignition[1010]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 24 00:23:00.690469 ignition[1010]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 24 00:23:00.690469 ignition[1010]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 24 00:23:00.690469 ignition[1010]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 24 00:23:00.690469 ignition[1010]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Nov 24 00:23:00.712768 ignition[1010]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 24 00:23:00.721149 ignition[1010]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 24 00:23:00.723810 ignition[1010]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 24 00:23:00.723810 ignition[1010]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 24 00:23:00.723810 ignition[1010]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 24 00:23:00.723810 ignition[1010]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 24 00:23:00.723810 ignition[1010]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 24 00:23:00.723810 ignition[1010]: INFO : files: files passed Nov 24 00:23:00.723810 ignition[1010]: INFO : Ignition finished successfully Nov 24 00:23:00.734107 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 24 00:23:00.740213 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 24 00:23:00.742139 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 24 00:23:00.762978 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 24 00:23:00.763159 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 24 00:23:00.770514 initrd-setup-root-after-ignition[1039]: grep: /sysroot/oem/oem-release: No such file or directory Nov 24 00:23:00.776181 initrd-setup-root-after-ignition[1041]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:23:00.776181 initrd-setup-root-after-ignition[1041]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:23:00.781476 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:23:00.785724 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 24 00:23:00.786582 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 24 00:23:00.787809 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 24 00:23:00.869107 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 24 00:23:00.869242 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 24 00:23:00.873280 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 24 00:23:00.873655 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 24 00:23:00.878651 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 24 00:23:00.879668 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 24 00:23:00.911321 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 24 00:23:00.913835 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 24 00:23:00.938314 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 24 00:23:00.939324 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 24 00:23:00.942512 systemd[1]: Stopped target timers.target - Timer Units.
Nov 24 00:23:00.946843 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 24 00:23:00.946992 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 24 00:23:00.951448 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 24 00:23:00.952437 systemd[1]: Stopped target basic.target - Basic System.
Nov 24 00:23:00.952981 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 24 00:23:00.959393 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 24 00:23:00.960546 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 24 00:23:00.967586 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 24 00:23:00.970994 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 24 00:23:00.971850 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 24 00:23:00.977498 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 24 00:23:00.981208 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 24 00:23:00.982977 systemd[1]: Stopped target swap.target - Swaps.
Nov 24 00:23:00.983598 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 24 00:23:00.983742 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 24 00:23:00.992267 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 24 00:23:00.993484 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 24 00:23:01.000612 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 24 00:23:01.002230 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 24 00:23:01.003327 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 24 00:23:01.003520 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 24 00:23:01.009522 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 24 00:23:01.009698 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 24 00:23:01.012837 systemd[1]: Stopped target paths.target - Path Units.
Nov 24 00:23:01.013600 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 24 00:23:01.021006 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 24 00:23:01.021757 systemd[1]: Stopped target slices.target - Slice Units.
Nov 24 00:23:01.025850 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 24 00:23:01.028545 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 24 00:23:01.028644 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 24 00:23:01.031578 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 24 00:23:01.031667 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 24 00:23:01.034056 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 24 00:23:01.034170 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 24 00:23:01.036993 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 24 00:23:01.037095 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 24 00:23:01.041355 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 24 00:23:01.043312 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 24 00:23:01.043442 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 24 00:23:01.059554 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 24 00:23:01.060478 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 24 00:23:01.060605 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 24 00:23:01.063051 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 24 00:23:01.063179 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 24 00:23:01.075973 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 24 00:23:01.076089 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 24 00:23:01.081146 ignition[1066]: INFO : Ignition 2.22.0
Nov 24 00:23:01.081146 ignition[1066]: INFO : Stage: umount
Nov 24 00:23:01.081146 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 24 00:23:01.081146 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 24 00:23:01.081146 ignition[1066]: INFO : umount: umount passed
Nov 24 00:23:01.081146 ignition[1066]: INFO : Ignition finished successfully
Nov 24 00:23:01.083096 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 24 00:23:01.083277 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 24 00:23:01.084610 systemd[1]: Stopped target network.target - Network.
Nov 24 00:23:01.090024 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 24 00:23:01.090090 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 24 00:23:01.091284 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 24 00:23:01.091340 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 24 00:23:01.094800 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 24 00:23:01.094862 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 24 00:23:01.098630 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 24 00:23:01.098679 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 24 00:23:01.099779 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 24 00:23:01.103590 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 24 00:23:01.107826 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 24 00:23:01.110222 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 24 00:23:01.110454 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 24 00:23:01.112080 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 24 00:23:01.112285 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 24 00:23:01.120714 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Nov 24 00:23:01.121237 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 24 00:23:01.121470 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 24 00:23:01.127102 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Nov 24 00:23:01.129333 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 24 00:23:01.130776 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 24 00:23:01.130875 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 24 00:23:01.135666 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 24 00:23:01.135738 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 24 00:23:01.138130 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 24 00:23:01.143967 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 24 00:23:01.144040 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 24 00:23:01.144951 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 24 00:23:01.145037 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 24 00:23:01.150817 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 24 00:23:01.150875 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 24 00:23:01.151756 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 24 00:23:01.151841 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 24 00:23:01.159503 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 24 00:23:01.163828 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 24 00:23:01.165143 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 24 00:23:01.190860 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 24 00:23:01.191084 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 24 00:23:01.192056 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 24 00:23:01.192106 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 24 00:23:01.196807 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 24 00:23:01.196848 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 24 00:23:01.200356 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 24 00:23:01.200419 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 24 00:23:01.206393 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 24 00:23:01.206481 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 24 00:23:01.210493 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 24 00:23:01.210564 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 24 00:23:01.218390 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 24 00:23:01.219636 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 24 00:23:01.219705 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 24 00:23:01.227003 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 24 00:23:01.227073 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 24 00:23:01.232468 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 24 00:23:01.232524 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 24 00:23:01.239266 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Nov 24 00:23:01.239344 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 24 00:23:01.239416 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Nov 24 00:23:01.239808 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 24 00:23:01.247168 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 24 00:23:01.256288 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 24 00:23:01.256429 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 24 00:23:01.259968 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 24 00:23:01.261561 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 24 00:23:01.284187 systemd[1]: Switching root.
Nov 24 00:23:01.332163 systemd-journald[200]: Journal stopped
Nov 24 00:23:02.857138 systemd-journald[200]: Received SIGTERM from PID 1 (systemd).
Nov 24 00:23:02.857210 kernel: SELinux: policy capability network_peer_controls=1
Nov 24 00:23:02.857230 kernel: SELinux: policy capability open_perms=1
Nov 24 00:23:02.857245 kernel: SELinux: policy capability extended_socket_class=1
Nov 24 00:23:02.857263 kernel: SELinux: policy capability always_check_network=0
Nov 24 00:23:02.857274 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 24 00:23:02.857285 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 24 00:23:02.857297 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 24 00:23:02.857308 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 24 00:23:02.857335 kernel: SELinux: policy capability userspace_initial_context=0
Nov 24 00:23:02.857347 kernel: audit: type=1403 audit(1763943781.980:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 24 00:23:02.857360 systemd[1]: Successfully loaded SELinux policy in 66.826ms.
Nov 24 00:23:02.857387 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.695ms.
Nov 24 00:23:02.857401 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 24 00:23:02.857419 systemd[1]: Detected virtualization kvm.
Nov 24 00:23:02.857432 systemd[1]: Detected architecture x86-64.
Nov 24 00:23:02.857443 systemd[1]: Detected first boot.
Nov 24 00:23:02.857456 systemd[1]: Initializing machine ID from VM UUID.
Nov 24 00:23:02.857467 zram_generator::config[1110]: No configuration found.
Nov 24 00:23:02.857480 kernel: Guest personality initialized and is inactive
Nov 24 00:23:02.857495 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Nov 24 00:23:02.857506 kernel: Initialized host personality
Nov 24 00:23:02.857517 kernel: NET: Registered PF_VSOCK protocol family
Nov 24 00:23:02.857529 systemd[1]: Populated /etc with preset unit settings.
Nov 24 00:23:02.857542 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Nov 24 00:23:02.857554 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 24 00:23:02.857568 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 24 00:23:02.857586 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 24 00:23:02.857599 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 24 00:23:02.857614 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 24 00:23:02.857626 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 24 00:23:02.857638 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 24 00:23:02.857650 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 24 00:23:02.857662 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 24 00:23:02.857674 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 24 00:23:02.857686 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 24 00:23:02.857699 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 24 00:23:02.857713 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 24 00:23:02.857726 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 24 00:23:02.857738 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 24 00:23:02.857750 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 24 00:23:02.857762 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 24 00:23:02.857774 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 24 00:23:02.857786 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 24 00:23:02.857799 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 24 00:23:02.857813 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 24 00:23:02.857825 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 24 00:23:02.857839 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 24 00:23:02.857852 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 24 00:23:02.857867 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 24 00:23:02.857879 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 24 00:23:02.857891 systemd[1]: Reached target slices.target - Slice Units.
Nov 24 00:23:02.857904 systemd[1]: Reached target swap.target - Swaps.
Nov 24 00:23:02.857935 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 24 00:23:02.857951 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 24 00:23:02.857964 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 24 00:23:02.857976 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 24 00:23:02.857988 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 24 00:23:02.858000 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 24 00:23:02.858012 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 24 00:23:02.858023 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 24 00:23:02.858035 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 24 00:23:02.858047 systemd[1]: Mounting media.mount - External Media Directory...
Nov 24 00:23:02.858062 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 00:23:02.858073 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 24 00:23:02.858085 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 24 00:23:02.858097 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 24 00:23:02.858110 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 24 00:23:02.858122 systemd[1]: Reached target machines.target - Containers.
Nov 24 00:23:02.858134 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 24 00:23:02.858148 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 24 00:23:02.858162 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 24 00:23:02.858174 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 24 00:23:02.858186 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 24 00:23:02.858198 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 24 00:23:02.858210 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 24 00:23:02.858222 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 24 00:23:02.858275 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 24 00:23:02.858288 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 24 00:23:02.858301 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 24 00:23:02.858315 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 24 00:23:02.858337 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 24 00:23:02.858350 kernel: loop: module loaded
Nov 24 00:23:02.858361 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 24 00:23:02.858374 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 24 00:23:02.858386 kernel: fuse: init (API version 7.41)
Nov 24 00:23:02.858398 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 24 00:23:02.858410 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 24 00:23:02.858422 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 24 00:23:02.858437 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 24 00:23:02.858449 kernel: ACPI: bus type drm_connector registered
Nov 24 00:23:02.858462 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 24 00:23:02.858499 systemd-journald[1195]: Collecting audit messages is disabled.
Nov 24 00:23:02.858523 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 24 00:23:02.858538 systemd-journald[1195]: Journal started
Nov 24 00:23:02.858560 systemd-journald[1195]: Runtime Journal (/run/log/journal/83fd130d17f64808942a6dd8875cb4c0) is 6M, max 48.3M, 42.2M free.
Nov 24 00:23:02.544960 systemd[1]: Queued start job for default target multi-user.target.
Nov 24 00:23:02.568237 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 24 00:23:02.568769 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 24 00:23:02.861948 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 24 00:23:02.862033 systemd[1]: Stopped verity-setup.service.
Nov 24 00:23:02.867946 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 00:23:02.872163 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 24 00:23:02.873954 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 24 00:23:02.875732 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 24 00:23:02.877705 systemd[1]: Mounted media.mount - External Media Directory.
Nov 24 00:23:02.879439 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 24 00:23:02.881287 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 24 00:23:02.883170 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 24 00:23:02.885056 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 24 00:23:02.887244 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 24 00:23:02.889526 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 24 00:23:02.889765 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 24 00:23:02.891959 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 24 00:23:02.892181 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 24 00:23:02.894297 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 24 00:23:02.894529 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 24 00:23:02.896505 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 24 00:23:02.896727 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 24 00:23:02.899240 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 24 00:23:02.899477 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 24 00:23:02.901471 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 24 00:23:02.901690 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 24 00:23:02.903756 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 24 00:23:02.905843 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 24 00:23:02.908151 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 24 00:23:02.910739 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 24 00:23:02.927947 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 24 00:23:02.931540 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 24 00:23:02.934609 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 24 00:23:02.936571 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 24 00:23:02.936607 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 24 00:23:02.939545 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 24 00:23:02.944983 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 24 00:23:02.947074 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 24 00:23:02.950797 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 24 00:23:02.954603 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 24 00:23:02.957029 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 24 00:23:02.961038 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 24 00:23:02.963310 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 24 00:23:02.964849 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 24 00:23:02.969830 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 24 00:23:02.974864 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 24 00:23:02.979568 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 24 00:23:02.986903 systemd-journald[1195]: Time spent on flushing to /var/log/journal/83fd130d17f64808942a6dd8875cb4c0 is 18.270ms for 985 entries.
Nov 24 00:23:02.986903 systemd-journald[1195]: System Journal (/var/log/journal/83fd130d17f64808942a6dd8875cb4c0) is 8M, max 195.6M, 187.6M free.
Nov 24 00:23:03.024298 kernel: loop0: detected capacity change from 0 to 128560
Nov 24 00:23:03.024351 systemd-journald[1195]: Received client request to flush runtime journal.
Nov 24 00:23:03.024384 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 24 00:23:02.981809 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 24 00:23:02.983858 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 24 00:23:02.992005 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 24 00:23:02.998538 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 24 00:23:03.004047 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 24 00:23:03.007417 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 24 00:23:03.029267 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 24 00:23:03.041956 kernel: loop1: detected capacity change from 0 to 110984
Nov 24 00:23:03.042384 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 24 00:23:03.046571 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 24 00:23:03.060125 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 24 00:23:03.077937 kernel: loop2: detected capacity change from 0 to 229808
Nov 24 00:23:03.079932 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Nov 24 00:23:03.079998 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Nov 24 00:23:03.085187 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 24 00:23:03.115957 kernel: loop3: detected capacity change from 0 to 128560
Nov 24 00:23:03.127977 kernel: loop4: detected capacity change from 0 to 110984
Nov 24 00:23:03.145949 kernel: loop5: detected capacity change from 0 to 229808
Nov 24 00:23:03.154074 (sd-merge)[1252]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Nov 24 00:23:03.154672 (sd-merge)[1252]: Merged extensions into '/usr'.
Nov 24 00:23:03.161137 systemd[1]: Reload requested from client PID 1229 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 24 00:23:03.161164 systemd[1]: Reloading...
Nov 24 00:23:03.248941 zram_generator::config[1277]: No configuration found.
Nov 24 00:23:03.333748 ldconfig[1224]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 24 00:23:03.457988 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 24 00:23:03.458691 systemd[1]: Reloading finished in 296 ms.
Nov 24 00:23:03.491593 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 24 00:23:03.494085 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 24 00:23:03.509343 systemd[1]: Starting ensure-sysext.service...
Nov 24 00:23:03.511939 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 24 00:23:03.523644 systemd[1]: Reload requested from client PID 1315 ('systemctl') (unit ensure-sysext.service)...
Nov 24 00:23:03.523664 systemd[1]: Reloading...
Nov 24 00:23:03.536498 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 24 00:23:03.536549 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 24 00:23:03.536957 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 24 00:23:03.537346 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 24 00:23:03.538645 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 24 00:23:03.539094 systemd-tmpfiles[1316]: ACLs are not supported, ignoring.
Nov 24 00:23:03.539205 systemd-tmpfiles[1316]: ACLs are not supported, ignoring.
Nov 24 00:23:03.545207 systemd-tmpfiles[1316]: Detected autofs mount point /boot during canonicalization of boot.
Nov 24 00:23:03.545226 systemd-tmpfiles[1316]: Skipping /boot
Nov 24 00:23:03.556630 systemd-tmpfiles[1316]: Detected autofs mount point /boot during canonicalization of boot.
Nov 24 00:23:03.556652 systemd-tmpfiles[1316]: Skipping /boot
Nov 24 00:23:03.575943 zram_generator::config[1343]: No configuration found.
Nov 24 00:23:03.766067 systemd[1]: Reloading finished in 242 ms.
Nov 24 00:23:03.787127 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 24 00:23:03.819065 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 24 00:23:03.832671 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 24 00:23:03.836739 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 24 00:23:03.841175 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 24 00:23:03.851276 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 24 00:23:03.859575 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 24 00:23:03.869167 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 24 00:23:03.879667 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 00:23:03.880240 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 24 00:23:03.885963 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 24 00:23:03.891726 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 24 00:23:03.895770 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 24 00:23:03.898042 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 24 00:23:03.898183 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:23:03.907720 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 24 00:23:03.909639 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:23:03.911883 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 24 00:23:03.918179 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 00:23:03.918850 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 00:23:03.920651 systemd-udevd[1387]: Using default interface naming scheme 'v255'. Nov 24 00:23:03.921868 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 00:23:03.922118 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 00:23:03.925110 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 00:23:03.925442 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 00:23:03.944312 augenrules[1415]: No rules Nov 24 00:23:03.946532 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 00:23:03.946880 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 00:23:03.950692 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:23:03.951095 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:23:03.955216 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Nov 24 00:23:03.958953 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 00:23:03.964160 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 00:23:03.966354 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 00:23:03.966576 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:23:03.973072 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 24 00:23:03.975729 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:23:03.977076 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 24 00:23:03.979340 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 00:23:03.984214 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 24 00:23:03.987807 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 24 00:23:03.990873 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 00:23:03.991320 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 00:23:03.994219 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 00:23:03.998822 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 00:23:04.004810 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 00:23:04.005139 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 00:23:04.023628 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Nov 24 00:23:04.036093 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:23:04.038675 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 00:23:04.040513 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:23:04.046138 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 00:23:04.051032 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 24 00:23:04.062313 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 00:23:04.071415 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 00:23:04.074174 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 00:23:04.074222 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:23:04.080513 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 24 00:23:04.082714 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 24 00:23:04.082802 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:23:04.086395 systemd[1]: Finished ensure-sysext.service. Nov 24 00:23:04.094221 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 00:23:04.094788 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Nov 24 00:23:04.101625 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 24 00:23:04.104040 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 00:23:04.104562 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 00:23:04.108893 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 24 00:23:04.109175 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 24 00:23:04.111660 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 00:23:04.113104 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 00:23:04.120927 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 24 00:23:04.121002 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 24 00:23:04.136826 augenrules[1463]: /sbin/augenrules: No change Nov 24 00:23:04.138138 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 24 00:23:04.146603 systemd-resolved[1385]: Positive Trust Anchors: Nov 24 00:23:04.147010 systemd-resolved[1385]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 24 00:23:04.147106 systemd-resolved[1385]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 24 00:23:04.153682 systemd-resolved[1385]: Defaulting to hostname 'linux'. Nov 24 00:23:04.156694 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 24 00:23:04.158956 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 24 00:23:04.168039 augenrules[1496]: No rules Nov 24 00:23:04.170441 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 00:23:04.170722 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 00:23:04.213957 kernel: mousedev: PS/2 mouse device common for all mice Nov 24 00:23:04.220043 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 24 00:23:04.224167 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 24 00:23:04.252745 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 24 00:23:04.253962 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 24 00:23:04.255376 systemd[1]: Reached target sysinit.target - System Initialization. Nov 24 00:23:04.257173 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Nov 24 00:23:04.258061 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 24 00:23:04.258657 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 24 00:23:04.258951 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 24 00:23:04.259188 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 24 00:23:04.259218 systemd[1]: Reached target paths.target - Path Units. Nov 24 00:23:04.259467 systemd[1]: Reached target time-set.target - System Time Set. Nov 24 00:23:04.260456 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 24 00:23:04.260744 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 24 00:23:04.261126 systemd[1]: Reached target timers.target - Timer Units. Nov 24 00:23:04.272674 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 24 00:23:04.280393 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 24 00:23:04.283110 systemd-networkd[1470]: lo: Link UP Nov 24 00:23:04.283123 systemd-networkd[1470]: lo: Gained carrier Nov 24 00:23:04.283600 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 24 00:23:04.284392 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 24 00:23:04.284509 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 24 00:23:04.287990 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 24 00:23:04.288755 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 24 00:23:04.289873 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Nov 24 00:23:04.292057 systemd[1]: Reached target sockets.target - Socket Units. Nov 24 00:23:04.292238 systemd[1]: Reached target basic.target - Basic System. Nov 24 00:23:04.292599 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 24 00:23:04.292633 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 24 00:23:04.297143 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 24 00:23:04.300036 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 24 00:23:04.303025 kernel: ACPI: button: Power Button [PWRF] Nov 24 00:23:04.304154 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 24 00:23:04.310115 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 24 00:23:04.311769 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 24 00:23:04.317010 systemd-networkd[1470]: Enumeration completed Nov 24 00:23:04.317943 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 24 00:23:04.317987 systemd-networkd[1470]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:23:04.317992 systemd-networkd[1470]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 24 00:23:04.321068 systemd-networkd[1470]: eth0: Link UP Nov 24 00:23:04.321233 systemd-networkd[1470]: eth0: Gained carrier Nov 24 00:23:04.321256 systemd-networkd[1470]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:23:04.322439 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Nov 24 00:23:04.325298 jq[1526]: false Nov 24 00:23:04.326016 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 24 00:23:04.328638 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 24 00:23:04.332057 systemd-networkd[1470]: eth0: DHCPv4 address 10.0.0.104/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 24 00:23:04.332318 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 24 00:23:04.335618 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Refreshing passwd entry cache Nov 24 00:23:04.335648 oslogin_cache_refresh[1530]: Refreshing passwd entry cache Nov 24 00:23:04.336896 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 24 00:23:04.338013 systemd-timesyncd[1477]: Network configuration changed, trying to establish connection. Nov 24 00:23:05.521424 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 24 00:23:05.521565 systemd-timesyncd[1477]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 24 00:23:05.521617 systemd-resolved[1385]: Clock change detected. Flushing caches. Nov 24 00:23:05.521907 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 24 00:23:05.522656 systemd[1]: Starting update-engine.service - Update Engine... Nov 24 00:23:05.524004 systemd-timesyncd[1477]: Initial clock synchronization to Mon 2025-11-24 00:23:05.521377 UTC. Nov 24 00:23:05.528675 oslogin_cache_refresh[1530]: Failure getting users, quitting Nov 24 00:23:05.530040 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Failure getting users, quitting Nov 24 00:23:05.530040 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Nov 24 00:23:05.530040 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Refreshing group entry cache Nov 24 00:23:05.527088 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 24 00:23:05.528697 oslogin_cache_refresh[1530]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 24 00:23:05.529604 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 24 00:23:05.528745 oslogin_cache_refresh[1530]: Refreshing group entry cache Nov 24 00:23:05.534096 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 24 00:23:05.537747 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Failure getting groups, quitting Nov 24 00:23:05.537747 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 24 00:23:05.535838 oslogin_cache_refresh[1530]: Failure getting groups, quitting Nov 24 00:23:05.535850 oslogin_cache_refresh[1530]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 24 00:23:05.538038 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 24 00:23:05.540402 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 24 00:23:05.540701 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 24 00:23:05.541040 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 24 00:23:05.541284 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 24 00:23:05.547601 update_engine[1540]: I20251124 00:23:05.547487 1540 main.cc:92] Flatcar Update Engine starting Nov 24 00:23:05.549718 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 24 00:23:05.550013 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Nov 24 00:23:05.552240 extend-filesystems[1528]: Found /dev/vda6 Nov 24 00:23:05.554349 systemd[1]: motdgen.service: Deactivated successfully. Nov 24 00:23:05.554628 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 24 00:23:05.564532 extend-filesystems[1528]: Found /dev/vda9 Nov 24 00:23:05.567492 systemd[1]: Reached target network.target - Network. Nov 24 00:23:05.569822 jq[1542]: true Nov 24 00:23:05.571008 systemd[1]: Starting containerd.service - containerd container runtime... Nov 24 00:23:05.574127 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 24 00:23:05.577319 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 24 00:23:05.580129 extend-filesystems[1528]: Checking size of /dev/vda9 Nov 24 00:23:05.584442 tar[1547]: linux-amd64/LICENSE Nov 24 00:23:05.584442 tar[1547]: linux-amd64/helm Nov 24 00:23:05.593239 jq[1565]: true Nov 24 00:23:05.593448 extend-filesystems[1528]: Resized partition /dev/vda9 Nov 24 00:23:05.598041 extend-filesystems[1576]: resize2fs 1.47.3 (8-Jul-2025) Nov 24 00:23:05.605038 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 24 00:23:05.605334 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 24 00:23:05.611943 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 24 00:23:05.614662 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 24 00:23:05.622443 (ntainerd)[1577]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 24 00:23:05.622831 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Nov 24 00:23:05.622588 dbus-daemon[1524]: [system] SELinux support is enabled Nov 24 00:23:05.628871 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 24 00:23:05.629519 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 24 00:23:05.632201 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 24 00:23:05.632323 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 24 00:23:05.636748 update_engine[1540]: I20251124 00:23:05.636683 1540 update_check_scheduler.cc:74] Next update check in 2m13s Nov 24 00:23:05.659433 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 24 00:23:05.636901 systemd[1]: Started update-engine.service - Update Engine. Nov 24 00:23:05.641228 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 24 00:23:05.654141 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:23:05.661058 extend-filesystems[1576]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 24 00:23:05.661058 extend-filesystems[1576]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 24 00:23:05.661058 extend-filesystems[1576]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 24 00:23:05.669749 extend-filesystems[1528]: Resized filesystem in /dev/vda9 Nov 24 00:23:05.665047 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 24 00:23:05.665328 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Nov 24 00:23:05.680565 bash[1595]: Updated "/home/core/.ssh/authorized_keys" Nov 24 00:23:05.679338 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 24 00:23:05.696616 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 24 00:23:05.806578 sshd_keygen[1559]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 24 00:23:05.817250 systemd-logind[1539]: Watching system buttons on /dev/input/event2 (Power Button) Nov 24 00:23:05.817291 systemd-logind[1539]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 24 00:23:05.817719 systemd-logind[1539]: New seat seat0. Nov 24 00:23:05.821101 systemd[1]: Started systemd-logind.service - User Login Management. Nov 24 00:23:05.860235 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 24 00:23:05.863449 locksmithd[1584]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 24 00:23:05.879642 containerd[1577]: time="2025-11-24T00:23:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 24 00:23:05.880289 containerd[1577]: time="2025-11-24T00:23:05.880248643Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 24 00:23:05.891263 containerd[1577]: time="2025-11-24T00:23:05.891221260Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.912µs" Nov 24 00:23:05.891263 containerd[1577]: time="2025-11-24T00:23:05.891256446Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 24 00:23:05.891371 containerd[1577]: time="2025-11-24T00:23:05.891272717Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt 
type=io.containerd.internal.v1 Nov 24 00:23:05.891634 containerd[1577]: time="2025-11-24T00:23:05.891574833Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 24 00:23:05.891634 containerd[1577]: time="2025-11-24T00:23:05.891613175Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 24 00:23:05.891678 containerd[1577]: time="2025-11-24T00:23:05.891639475Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 24 00:23:05.891741 containerd[1577]: time="2025-11-24T00:23:05.891715728Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 24 00:23:05.891741 containerd[1577]: time="2025-11-24T00:23:05.891735655Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 24 00:23:05.892015 containerd[1577]: time="2025-11-24T00:23:05.891989331Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 24 00:23:05.892015 containerd[1577]: time="2025-11-24T00:23:05.892009719Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 24 00:23:05.892062 containerd[1577]: time="2025-11-24T00:23:05.892019898Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 24 00:23:05.892062 containerd[1577]: time="2025-11-24T00:23:05.892028594Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 24 00:23:05.892205 containerd[1577]: 
time="2025-11-24T00:23:05.892146155Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 24 00:23:05.892541 containerd[1577]: time="2025-11-24T00:23:05.892502894Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 24 00:23:05.892570 containerd[1577]: time="2025-11-24T00:23:05.892558298Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 24 00:23:05.892591 containerd[1577]: time="2025-11-24T00:23:05.892569569Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 24 00:23:05.892634 containerd[1577]: time="2025-11-24T00:23:05.892611988Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 24 00:23:05.892890 containerd[1577]: time="2025-11-24T00:23:05.892865824Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 24 00:23:05.893038 containerd[1577]: time="2025-11-24T00:23:05.892984527Z" level=info msg="metadata content store policy set" policy=shared Nov 24 00:23:05.897208 kernel: kvm_amd: TSC scaling supported Nov 24 00:23:05.897244 kernel: kvm_amd: Nested Virtualization enabled Nov 24 00:23:05.897299 kernel: kvm_amd: Nested Paging enabled Nov 24 00:23:05.897313 kernel: kvm_amd: LBR virtualization supported Nov 24 00:23:05.897326 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 24 00:23:05.897339 kernel: kvm_amd: Virtual GIF supported Nov 24 00:23:05.901433 containerd[1577]: time="2025-11-24T00:23:05.901382506Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 24 00:23:05.901473 containerd[1577]: time="2025-11-24T00:23:05.901443000Z" level=info msg="loading plugin" 
id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 24 00:23:05.901663 containerd[1577]: time="2025-11-24T00:23:05.901626193Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 24 00:23:05.901705 containerd[1577]: time="2025-11-24T00:23:05.901663764Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 24 00:23:05.901705 containerd[1577]: time="2025-11-24T00:23:05.901675766Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 24 00:23:05.901705 containerd[1577]: time="2025-11-24T00:23:05.901685645Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 24 00:23:05.901705 containerd[1577]: time="2025-11-24T00:23:05.901695884Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 24 00:23:05.901705 containerd[1577]: time="2025-11-24T00:23:05.901706945Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 24 00:23:05.901800 containerd[1577]: time="2025-11-24T00:23:05.901717955Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 24 00:23:05.901800 containerd[1577]: time="2025-11-24T00:23:05.901734757Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 24 00:23:05.901800 containerd[1577]: time="2025-11-24T00:23:05.901744345Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 24 00:23:05.901800 containerd[1577]: time="2025-11-24T00:23:05.901755436Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 24 00:23:05.901874 containerd[1577]: time="2025-11-24T00:23:05.901865863Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 24 00:23:05.901896 containerd[1577]: time="2025-11-24T00:23:05.901883366Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 24 00:23:05.901929 containerd[1577]: time="2025-11-24T00:23:05.901895969Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 24 00:23:05.901929 containerd[1577]: time="2025-11-24T00:23:05.901905607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 24 00:23:05.901976 containerd[1577]: time="2025-11-24T00:23:05.901938539Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 24 00:23:05.901976 containerd[1577]: time="2025-11-24T00:23:05.901950722Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 24 00:23:05.901976 containerd[1577]: time="2025-11-24T00:23:05.901961442Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 24 00:23:05.901976 containerd[1577]: time="2025-11-24T00:23:05.901975669Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 24 00:23:05.902058 containerd[1577]: time="2025-11-24T00:23:05.901987641Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 24 00:23:05.902058 containerd[1577]: time="2025-11-24T00:23:05.901997710Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 24 00:23:05.902058 containerd[1577]: time="2025-11-24T00:23:05.902006917Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 24 00:23:05.902058 containerd[1577]: time="2025-11-24T00:23:05.902051691Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for 
snapshotter \"overlayfs\"" Nov 24 00:23:05.902129 containerd[1577]: time="2025-11-24T00:23:05.902063734Z" level=info msg="Start snapshots syncer" Nov 24 00:23:05.902129 containerd[1577]: time="2025-11-24T00:23:05.902101224Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 24 00:23:05.902575 containerd[1577]: time="2025-11-24T00:23:05.902483261Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\
",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 24 00:23:05.902706 containerd[1577]: time="2025-11-24T00:23:05.902576295Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 24 00:23:05.902706 containerd[1577]: time="2025-11-24T00:23:05.902631589Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 24 00:23:05.902849 containerd[1577]: time="2025-11-24T00:23:05.902801928Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 24 00:23:05.902849 containerd[1577]: time="2025-11-24T00:23:05.902843316Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 24 00:23:05.902901 containerd[1577]: time="2025-11-24T00:23:05.902854026Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 24 00:23:05.902901 containerd[1577]: time="2025-11-24T00:23:05.902863634Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 24 00:23:05.902901 containerd[1577]: time="2025-11-24T00:23:05.902873753Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 24 00:23:05.902901 containerd[1577]: time="2025-11-24T00:23:05.902889683Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 24 00:23:05.902901 containerd[1577]: time="2025-11-24T00:23:05.902900573Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 24 00:23:05.903001 containerd[1577]: time="2025-11-24T00:23:05.902936320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 24 00:23:05.903001 containerd[1577]: 
time="2025-11-24T00:23:05.902947902Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 24 00:23:05.903001 containerd[1577]: time="2025-11-24T00:23:05.902957500Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 24 00:23:05.903001 containerd[1577]: time="2025-11-24T00:23:05.902999098Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 00:23:05.903082 containerd[1577]: time="2025-11-24T00:23:05.903010650Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 00:23:05.903082 containerd[1577]: time="2025-11-24T00:23:05.903019917Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 00:23:05.903082 containerd[1577]: time="2025-11-24T00:23:05.903028473Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 00:23:05.903211 containerd[1577]: time="2025-11-24T00:23:05.903035717Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 24 00:23:05.903211 containerd[1577]: time="2025-11-24T00:23:05.903107060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 24 00:23:05.903211 containerd[1577]: time="2025-11-24T00:23:05.903124914Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 24 00:23:05.903211 containerd[1577]: time="2025-11-24T00:23:05.903140784Z" level=info msg="runtime interface created" Nov 24 00:23:05.903211 containerd[1577]: time="2025-11-24T00:23:05.903145983Z" level=info msg="created NRI interface" Nov 24 00:23:05.903211 containerd[1577]: 
time="2025-11-24T00:23:05.903154159Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 24 00:23:05.903211 containerd[1577]: time="2025-11-24T00:23:05.903165049Z" level=info msg="Connect containerd service" Nov 24 00:23:05.903211 containerd[1577]: time="2025-11-24T00:23:05.903185207Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 24 00:23:05.904425 containerd[1577]: time="2025-11-24T00:23:05.904377303Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 24 00:23:05.934114 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 24 00:23:05.950939 kernel: EDAC MC: Ver: 3.0.0 Nov 24 00:23:05.976707 systemd[1]: issuegen.service: Deactivated successfully. Nov 24 00:23:05.977004 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 24 00:23:05.986246 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 24 00:23:06.000505 containerd[1577]: time="2025-11-24T00:23:06.000453019Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 24 00:23:06.000590 containerd[1577]: time="2025-11-24T00:23:06.000528601Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Nov 24 00:23:06.000590 containerd[1577]: time="2025-11-24T00:23:06.000566442Z" level=info msg="Start subscribing containerd event" Nov 24 00:23:06.000659 containerd[1577]: time="2025-11-24T00:23:06.000593883Z" level=info msg="Start recovering state" Nov 24 00:23:06.000710 containerd[1577]: time="2025-11-24T00:23:06.000689873Z" level=info msg="Start event monitor" Nov 24 00:23:06.000737 containerd[1577]: time="2025-11-24T00:23:06.000710702Z" level=info msg="Start cni network conf syncer for default" Nov 24 00:23:06.000737 containerd[1577]: time="2025-11-24T00:23:06.000721893Z" level=info msg="Start streaming server" Nov 24 00:23:06.000737 containerd[1577]: time="2025-11-24T00:23:06.000736440Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 24 00:23:06.000790 containerd[1577]: time="2025-11-24T00:23:06.000744225Z" level=info msg="runtime interface starting up..." Nov 24 00:23:06.000790 containerd[1577]: time="2025-11-24T00:23:06.000753583Z" level=info msg="starting plugins..." Nov 24 00:23:06.000790 containerd[1577]: time="2025-11-24T00:23:06.000769733Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 24 00:23:06.000943 containerd[1577]: time="2025-11-24T00:23:06.000899596Z" level=info msg="containerd successfully booted in 0.121838s" Nov 24 00:23:06.003896 systemd[1]: Started containerd.service - containerd container runtime. Nov 24 00:23:06.018869 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 24 00:23:06.023560 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:23:06.027965 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 24 00:23:06.030840 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 24 00:23:06.032743 systemd[1]: Reached target getty.target - Login Prompts. 
Nov 24 00:23:06.112184 tar[1547]: linux-amd64/README.md Nov 24 00:23:06.135626 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 24 00:23:06.784113 systemd-networkd[1470]: eth0: Gained IPv6LL Nov 24 00:23:06.787698 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 24 00:23:06.790446 systemd[1]: Reached target network-online.target - Network is Online. Nov 24 00:23:06.793797 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 24 00:23:06.796835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:23:06.810470 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 24 00:23:06.836892 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 24 00:23:06.839476 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 24 00:23:06.839809 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 24 00:23:06.843126 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 24 00:23:07.536645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:23:07.539309 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 24 00:23:07.541624 (kubelet)[1676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:23:07.543018 systemd[1]: Startup finished in 3.233s (kernel) + 9.332s (initrd) + 4.446s (userspace) = 17.012s. 
Nov 24 00:23:07.972615 kubelet[1676]: E1124 00:23:07.972549 1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:23:07.976764 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:23:07.976992 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:23:07.977382 systemd[1]: kubelet.service: Consumed 999ms CPU time, 266.8M memory peak. Nov 24 00:23:08.144403 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 24 00:23:08.145740 systemd[1]: Started sshd@0-10.0.0.104:22-10.0.0.1:35414.service - OpenSSH per-connection server daemon (10.0.0.1:35414). Nov 24 00:23:08.232022 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 35414 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:23:08.233665 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:23:08.240172 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 24 00:23:08.241265 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 24 00:23:08.247565 systemd-logind[1539]: New session 1 of user core. Nov 24 00:23:08.263246 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 24 00:23:08.266725 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 24 00:23:08.280463 (systemd)[1694]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 24 00:23:08.282975 systemd-logind[1539]: New session c1 of user core. Nov 24 00:23:08.441691 systemd[1694]: Queued start job for default target default.target. 
Nov 24 00:23:08.465266 systemd[1694]: Created slice app.slice - User Application Slice. Nov 24 00:23:08.465292 systemd[1694]: Reached target paths.target - Paths. Nov 24 00:23:08.465334 systemd[1694]: Reached target timers.target - Timers. Nov 24 00:23:08.466912 systemd[1694]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 24 00:23:08.478544 systemd[1694]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 24 00:23:08.478673 systemd[1694]: Reached target sockets.target - Sockets. Nov 24 00:23:08.478715 systemd[1694]: Reached target basic.target - Basic System. Nov 24 00:23:08.478756 systemd[1694]: Reached target default.target - Main User Target. Nov 24 00:23:08.478792 systemd[1694]: Startup finished in 188ms. Nov 24 00:23:08.479266 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 24 00:23:08.481097 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 24 00:23:08.541838 systemd[1]: Started sshd@1-10.0.0.104:22-10.0.0.1:35426.service - OpenSSH per-connection server daemon (10.0.0.1:35426). Nov 24 00:23:08.590437 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 35426 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:23:08.591724 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:23:08.596434 systemd-logind[1539]: New session 2 of user core. Nov 24 00:23:08.618066 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 24 00:23:08.670711 sshd[1708]: Connection closed by 10.0.0.1 port 35426 Nov 24 00:23:08.671056 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Nov 24 00:23:08.679619 systemd[1]: sshd@1-10.0.0.104:22-10.0.0.1:35426.service: Deactivated successfully. Nov 24 00:23:08.681583 systemd[1]: session-2.scope: Deactivated successfully. Nov 24 00:23:08.682323 systemd-logind[1539]: Session 2 logged out. Waiting for processes to exit. 
Nov 24 00:23:08.685316 systemd[1]: Started sshd@2-10.0.0.104:22-10.0.0.1:35430.service - OpenSSH per-connection server daemon (10.0.0.1:35430). Nov 24 00:23:08.685943 systemd-logind[1539]: Removed session 2. Nov 24 00:23:08.747196 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 35430 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:23:08.748681 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:23:08.753356 systemd-logind[1539]: New session 3 of user core. Nov 24 00:23:08.771050 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 24 00:23:08.819852 sshd[1717]: Connection closed by 10.0.0.1 port 35430 Nov 24 00:23:08.820337 sshd-session[1714]: pam_unix(sshd:session): session closed for user core Nov 24 00:23:08.837684 systemd[1]: sshd@2-10.0.0.104:22-10.0.0.1:35430.service: Deactivated successfully. Nov 24 00:23:08.840247 systemd[1]: session-3.scope: Deactivated successfully. Nov 24 00:23:08.841121 systemd-logind[1539]: Session 3 logged out. Waiting for processes to exit. Nov 24 00:23:08.844779 systemd[1]: Started sshd@3-10.0.0.104:22-10.0.0.1:35446.service - OpenSSH per-connection server daemon (10.0.0.1:35446). Nov 24 00:23:08.845530 systemd-logind[1539]: Removed session 3. Nov 24 00:23:08.888822 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 35446 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:23:08.890083 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:23:08.894781 systemd-logind[1539]: New session 4 of user core. Nov 24 00:23:08.906064 systemd[1]: Started session-4.scope - Session 4 of User core. 
Nov 24 00:23:08.960134 sshd[1726]: Connection closed by 10.0.0.1 port 35446 Nov 24 00:23:08.960468 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Nov 24 00:23:08.969726 systemd[1]: sshd@3-10.0.0.104:22-10.0.0.1:35446.service: Deactivated successfully. Nov 24 00:23:08.971745 systemd[1]: session-4.scope: Deactivated successfully. Nov 24 00:23:08.972591 systemd-logind[1539]: Session 4 logged out. Waiting for processes to exit. Nov 24 00:23:08.975421 systemd[1]: Started sshd@4-10.0.0.104:22-10.0.0.1:35460.service - OpenSSH per-connection server daemon (10.0.0.1:35460). Nov 24 00:23:08.976228 systemd-logind[1539]: Removed session 4. Nov 24 00:23:09.041000 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 35460 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:23:09.042394 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:23:09.046790 systemd-logind[1539]: New session 5 of user core. Nov 24 00:23:09.060071 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 24 00:23:09.118763 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 24 00:23:09.119180 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:23:09.136638 sudo[1736]: pam_unix(sudo:session): session closed for user root Nov 24 00:23:09.138464 sshd[1735]: Connection closed by 10.0.0.1 port 35460 Nov 24 00:23:09.138847 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Nov 24 00:23:09.164291 systemd[1]: sshd@4-10.0.0.104:22-10.0.0.1:35460.service: Deactivated successfully. Nov 24 00:23:09.166458 systemd[1]: session-5.scope: Deactivated successfully. Nov 24 00:23:09.167357 systemd-logind[1539]: Session 5 logged out. Waiting for processes to exit. Nov 24 00:23:09.170509 systemd[1]: Started sshd@5-10.0.0.104:22-10.0.0.1:35472.service - OpenSSH per-connection server daemon (10.0.0.1:35472). 
Nov 24 00:23:09.171272 systemd-logind[1539]: Removed session 5. Nov 24 00:23:09.229294 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 35472 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:23:09.230566 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:23:09.235171 systemd-logind[1539]: New session 6 of user core. Nov 24 00:23:09.245046 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 24 00:23:09.298710 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 24 00:23:09.299034 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:23:09.307874 sudo[1747]: pam_unix(sudo:session): session closed for user root Nov 24 00:23:09.314513 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 24 00:23:09.314867 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:23:09.325996 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 00:23:09.373691 augenrules[1769]: No rules Nov 24 00:23:09.375396 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 00:23:09.375694 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 00:23:09.376820 sudo[1746]: pam_unix(sudo:session): session closed for user root Nov 24 00:23:09.378278 sshd[1745]: Connection closed by 10.0.0.1 port 35472 Nov 24 00:23:09.378653 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Nov 24 00:23:09.392070 systemd[1]: sshd@5-10.0.0.104:22-10.0.0.1:35472.service: Deactivated successfully. Nov 24 00:23:09.394495 systemd[1]: session-6.scope: Deactivated successfully. Nov 24 00:23:09.395519 systemd-logind[1539]: Session 6 logged out. Waiting for processes to exit. 
Nov 24 00:23:09.398721 systemd[1]: Started sshd@6-10.0.0.104:22-10.0.0.1:35488.service - OpenSSH per-connection server daemon (10.0.0.1:35488). Nov 24 00:23:09.399529 systemd-logind[1539]: Removed session 6. Nov 24 00:23:09.455892 sshd[1778]: Accepted publickey for core from 10.0.0.1 port 35488 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:23:09.457343 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:23:09.462287 systemd-logind[1539]: New session 7 of user core. Nov 24 00:23:09.472242 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 24 00:23:09.525750 sudo[1782]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 24 00:23:09.526069 sudo[1782]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:23:09.834288 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 24 00:23:09.858319 (dockerd)[1803]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 24 00:23:10.086420 dockerd[1803]: time="2025-11-24T00:23:10.086264985Z" level=info msg="Starting up" Nov 24 00:23:10.087112 dockerd[1803]: time="2025-11-24T00:23:10.087080544Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 24 00:23:10.107698 dockerd[1803]: time="2025-11-24T00:23:10.107656582Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 24 00:23:11.422114 dockerd[1803]: time="2025-11-24T00:23:11.422050560Z" level=info msg="Loading containers: start." Nov 24 00:23:11.432972 kernel: Initializing XFRM netlink socket Nov 24 00:23:11.715723 systemd-networkd[1470]: docker0: Link UP Nov 24 00:23:11.722752 dockerd[1803]: time="2025-11-24T00:23:11.722692771Z" level=info msg="Loading containers: done." 
Nov 24 00:23:11.736872 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1662294878-merged.mount: Deactivated successfully. Nov 24 00:23:11.737559 dockerd[1803]: time="2025-11-24T00:23:11.737525428Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 24 00:23:11.737627 dockerd[1803]: time="2025-11-24T00:23:11.737606510Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 24 00:23:11.737707 dockerd[1803]: time="2025-11-24T00:23:11.737691409Z" level=info msg="Initializing buildkit" Nov 24 00:23:11.767262 dockerd[1803]: time="2025-11-24T00:23:11.767217821Z" level=info msg="Completed buildkit initialization" Nov 24 00:23:11.773061 dockerd[1803]: time="2025-11-24T00:23:11.773018850Z" level=info msg="Daemon has completed initialization" Nov 24 00:23:11.773243 dockerd[1803]: time="2025-11-24T00:23:11.773177207Z" level=info msg="API listen on /run/docker.sock" Nov 24 00:23:11.773279 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 24 00:23:12.492940 containerd[1577]: time="2025-11-24T00:23:12.492872072Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\"" Nov 24 00:23:13.187729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount693176604.mount: Deactivated successfully. 
Nov 24 00:23:15.655426 containerd[1577]: time="2025-11-24T00:23:15.655350237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:23:15.656214 containerd[1577]: time="2025-11-24T00:23:15.656148414Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.6: active requests=0, bytes read=30113213" Nov 24 00:23:15.657584 containerd[1577]: time="2025-11-24T00:23:15.657515658Z" level=info msg="ImageCreate event name:\"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:23:15.660360 containerd[1577]: time="2025-11-24T00:23:15.660311241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:23:15.661359 containerd[1577]: time="2025-11-24T00:23:15.661318801Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.6\" with image id \"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\", size \"30109812\" in 3.168407395s" Nov 24 00:23:15.661359 containerd[1577]: time="2025-11-24T00:23:15.661359096Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\" returns image reference \"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\"" Nov 24 00:23:15.662059 containerd[1577]: time="2025-11-24T00:23:15.662008314Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\"" Nov 24 00:23:18.038313 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Nov 24 00:23:18.039967 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:23:18.251887 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:23:18.267232 (kubelet)[2093]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:23:18.397347 kubelet[2093]: E1124 00:23:18.397188 2093 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:23:18.404499 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:23:18.404739 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:23:18.405255 systemd[1]: kubelet.service: Consumed 236ms CPU time, 109.8M memory peak. 
Nov 24 00:23:18.548279 containerd[1577]: time="2025-11-24T00:23:18.548189053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:23:18.549040 containerd[1577]: time="2025-11-24T00:23:18.548990947Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.6: active requests=0, bytes read=26018107" Nov 24 00:23:18.550604 containerd[1577]: time="2025-11-24T00:23:18.550547406Z" level=info msg="ImageCreate event name:\"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:23:18.553635 containerd[1577]: time="2025-11-24T00:23:18.553607695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:23:18.554824 containerd[1577]: time="2025-11-24T00:23:18.554781927Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.6\" with image id \"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\", size \"27675143\" in 2.892732597s" Nov 24 00:23:18.554824 containerd[1577]: time="2025-11-24T00:23:18.554809298Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\" returns image reference \"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\"" Nov 24 00:23:18.555364 containerd[1577]: time="2025-11-24T00:23:18.555321940Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\"" Nov 24 00:23:19.906630 containerd[1577]: time="2025-11-24T00:23:19.906558460Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:23:19.907460 containerd[1577]: time="2025-11-24T00:23:19.907400660Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.6: active requests=0, bytes read=20156482" Nov 24 00:23:19.909797 containerd[1577]: time="2025-11-24T00:23:19.909745477Z" level=info msg="ImageCreate event name:\"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:23:19.912342 containerd[1577]: time="2025-11-24T00:23:19.912307542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:23:19.913180 containerd[1577]: time="2025-11-24T00:23:19.913152386Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.6\" with image id \"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\", size \"21813536\" in 1.357796393s" Nov 24 00:23:19.913180 containerd[1577]: time="2025-11-24T00:23:19.913178636Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\" returns image reference \"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\"" Nov 24 00:23:19.913642 containerd[1577]: time="2025-11-24T00:23:19.913618060Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\"" Nov 24 00:23:22.074867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1399717985.mount: Deactivated successfully. 
Nov 24 00:23:23.304186 containerd[1577]: time="2025-11-24T00:23:23.304098491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:23:23.328008 containerd[1577]: time="2025-11-24T00:23:23.327958757Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.6: active requests=0, bytes read=31929138" Nov 24 00:23:23.336147 containerd[1577]: time="2025-11-24T00:23:23.336090257Z" level=info msg="ImageCreate event name:\"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:23:23.355705 containerd[1577]: time="2025-11-24T00:23:23.355644287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:23:23.356240 containerd[1577]: time="2025-11-24T00:23:23.356197505Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.6\" with image id \"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\", size \"31928157\" in 3.442546483s" Nov 24 00:23:23.356276 containerd[1577]: time="2025-11-24T00:23:23.356241938Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\" returns image reference \"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\"" Nov 24 00:23:23.356734 containerd[1577]: time="2025-11-24T00:23:23.356706660Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 24 00:23:25.040721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2517450104.mount: Deactivated successfully. 
Nov 24 00:23:26.178496 containerd[1577]: time="2025-11-24T00:23:26.178404480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:23:26.179331 containerd[1577]: time="2025-11-24T00:23:26.179289670Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Nov 24 00:23:26.180981 containerd[1577]: time="2025-11-24T00:23:26.180953801Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:23:26.183797 containerd[1577]: time="2025-11-24T00:23:26.183725639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:23:26.184997 containerd[1577]: time="2025-11-24T00:23:26.184955666Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.828215123s" Nov 24 00:23:26.185045 containerd[1577]: time="2025-11-24T00:23:26.184995140Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 24 00:23:26.185443 containerd[1577]: time="2025-11-24T00:23:26.185410439Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 24 00:23:27.384905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3207150105.mount: Deactivated successfully. 
Nov 24 00:23:28.205347 containerd[1577]: time="2025-11-24T00:23:28.205251071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:23:28.337582 containerd[1577]: time="2025-11-24T00:23:28.337471762Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 24 00:23:28.391285 containerd[1577]: time="2025-11-24T00:23:28.391203869Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:23:28.538311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 24 00:23:28.540046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:23:28.594719 containerd[1577]: time="2025-11-24T00:23:28.594627130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:23:28.595551 containerd[1577]: time="2025-11-24T00:23:28.595517320Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.41007476s" Nov 24 00:23:28.595617 containerd[1577]: time="2025-11-24T00:23:28.595551434Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 24 00:23:28.596118 containerd[1577]: 
time="2025-11-24T00:23:28.596072351Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 24 00:23:28.765283 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:23:28.781334 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:23:28.847424 kubelet[2178]: E1124 00:23:28.847238 2178 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:23:28.852905 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:23:28.853153 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:23:28.853612 systemd[1]: kubelet.service: Consumed 247ms CPU time, 111.1M memory peak. Nov 24 00:23:31.186572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount471508257.mount: Deactivated successfully. 
Nov 24 00:23:33.161656 containerd[1577]: time="2025-11-24T00:23:33.161567574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:23:33.162587 containerd[1577]: time="2025-11-24T00:23:33.162506044Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Nov 24 00:23:33.164029 containerd[1577]: time="2025-11-24T00:23:33.163984176Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:23:33.167217 containerd[1577]: time="2025-11-24T00:23:33.167152978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:23:33.168442 containerd[1577]: time="2025-11-24T00:23:33.168394607Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.572232067s" Nov 24 00:23:33.168442 containerd[1577]: time="2025-11-24T00:23:33.168432418Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 24 00:23:36.880225 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:23:36.880415 systemd[1]: kubelet.service: Consumed 247ms CPU time, 111.1M memory peak. Nov 24 00:23:36.883266 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:23:36.911559 systemd[1]: Reload requested from client PID 2272 ('systemctl') (unit session-7.scope)... 
Nov 24 00:23:36.911591 systemd[1]: Reloading... Nov 24 00:23:37.025962 zram_generator::config[2323]: No configuration found. Nov 24 00:23:37.345390 systemd[1]: Reloading finished in 433 ms. Nov 24 00:23:37.417803 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 24 00:23:37.417946 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 24 00:23:37.418346 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:23:37.418404 systemd[1]: kubelet.service: Consumed 175ms CPU time, 98.2M memory peak. Nov 24 00:23:37.420147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:23:37.595050 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:23:37.600231 (kubelet)[2362]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 00:23:37.644286 kubelet[2362]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:23:37.644286 kubelet[2362]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 24 00:23:37.644286 kubelet[2362]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 24 00:23:37.644857 kubelet[2362]: I1124 00:23:37.644324 2362 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 00:23:38.117143 kubelet[2362]: I1124 00:23:38.117076 2362 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 24 00:23:38.117143 kubelet[2362]: I1124 00:23:38.117116 2362 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 00:23:38.117381 kubelet[2362]: I1124 00:23:38.117358 2362 server.go:956] "Client rotation is on, will bootstrap in background" Nov 24 00:23:38.146078 kubelet[2362]: E1124 00:23:38.145996 2362 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 24 00:23:38.149966 kubelet[2362]: I1124 00:23:38.149716 2362 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 00:23:38.157678 kubelet[2362]: I1124 00:23:38.157638 2362 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 00:23:38.686638 kubelet[2362]: I1124 00:23:38.686565 2362 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 24 00:23:38.687380 kubelet[2362]: I1124 00:23:38.687013 2362 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 00:23:38.687380 kubelet[2362]: I1124 00:23:38.687057 2362 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 00:23:38.687380 kubelet[2362]: I1124 00:23:38.687258 2362 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 00:23:38.687380 
kubelet[2362]: I1124 00:23:38.687272 2362 container_manager_linux.go:303] "Creating device plugin manager" Nov 24 00:23:38.688581 kubelet[2362]: I1124 00:23:38.688543 2362 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:23:38.692226 kubelet[2362]: I1124 00:23:38.692177 2362 kubelet.go:480] "Attempting to sync node with API server" Nov 24 00:23:38.692226 kubelet[2362]: I1124 00:23:38.692201 2362 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 00:23:38.692226 kubelet[2362]: I1124 00:23:38.692225 2362 kubelet.go:386] "Adding apiserver pod source" Nov 24 00:23:38.692226 kubelet[2362]: I1124 00:23:38.692240 2362 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 00:23:38.698213 kubelet[2362]: I1124 00:23:38.698157 2362 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 00:23:38.698995 kubelet[2362]: I1124 00:23:38.698841 2362 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 24 00:23:38.699352 kubelet[2362]: E1124 00:23:38.699304 2362 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 24 00:23:38.699708 kubelet[2362]: E1124 00:23:38.699676 2362 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 24 00:23:38.700523 kubelet[2362]: W1124 
00:23:38.700491 2362 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 24 00:23:38.708684 kubelet[2362]: I1124 00:23:38.708634 2362 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 24 00:23:38.708840 kubelet[2362]: I1124 00:23:38.708816 2362 server.go:1289] "Started kubelet" Nov 24 00:23:38.710464 kubelet[2362]: I1124 00:23:38.710386 2362 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 00:23:38.710944 kubelet[2362]: I1124 00:23:38.710871 2362 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 00:23:38.711369 kubelet[2362]: I1124 00:23:38.711336 2362 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 00:23:38.715525 kubelet[2362]: I1124 00:23:38.715503 2362 server.go:317] "Adding debug handlers to kubelet server" Nov 24 00:23:38.715735 kubelet[2362]: I1124 00:23:38.715703 2362 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 00:23:38.716113 kubelet[2362]: I1124 00:23:38.716081 2362 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 00:23:38.716952 kubelet[2362]: E1124 00:23:38.716891 2362 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 24 00:23:38.717309 kubelet[2362]: I1124 00:23:38.717284 2362 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 24 00:23:38.717467 kubelet[2362]: I1124 00:23:38.717443 2362 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 24 00:23:38.717467 kubelet[2362]: E1124 00:23:38.715844 2362 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.104:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.104:6443: connect: connection 
refused" event="&Event{ObjectMeta:{localhost.187ac98eade0888f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-24 00:23:38.708682895 +0000 UTC m=+1.104197686,LastTimestamp:2025-11-24 00:23:38.708682895 +0000 UTC m=+1.104197686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 24 00:23:38.717749 kubelet[2362]: I1124 00:23:38.717735 2362 reconciler.go:26] "Reconciler: start to sync state" Nov 24 00:23:38.718211 kubelet[2362]: E1124 00:23:38.718188 2362 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 24 00:23:38.718989 kubelet[2362]: E1124 00:23:38.718970 2362 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 00:23:38.719459 kubelet[2362]: E1124 00:23:38.719397 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="200ms" Nov 24 00:23:38.719995 kubelet[2362]: I1124 00:23:38.719973 2362 factory.go:223] Registration of the systemd container factory successfully Nov 24 00:23:38.720064 kubelet[2362]: I1124 00:23:38.720048 2362 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 00:23:38.720867 kubelet[2362]: I1124 00:23:38.720850 2362 factory.go:223] Registration of the containerd container factory successfully Nov 24 00:23:38.724834 kubelet[2362]: I1124 00:23:38.724779 2362 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 24 00:23:38.734033 kubelet[2362]: I1124 00:23:38.734000 2362 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 00:23:38.734033 kubelet[2362]: I1124 00:23:38.734013 2362 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 00:23:38.734033 kubelet[2362]: I1124 00:23:38.734031 2362 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:23:38.738260 kubelet[2362]: I1124 00:23:38.738228 2362 policy_none.go:49] "None policy: Start" Nov 24 00:23:38.738260 kubelet[2362]: I1124 00:23:38.738245 2362 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 24 00:23:38.738260 kubelet[2362]: I1124 00:23:38.738255 2362 state_mem.go:35] "Initializing new in-memory state store" Nov 24 00:23:38.745124 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Nov 24 00:23:38.747387 kubelet[2362]: I1124 00:23:38.747354 2362 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 24 00:23:38.747479 kubelet[2362]: I1124 00:23:38.747443 2362 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 24 00:23:38.747519 kubelet[2362]: I1124 00:23:38.747501 2362 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 24 00:23:38.747552 kubelet[2362]: I1124 00:23:38.747523 2362 kubelet.go:2436] "Starting kubelet main sync loop" Nov 24 00:23:38.747634 kubelet[2362]: E1124 00:23:38.747595 2362 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 00:23:38.748863 kubelet[2362]: E1124 00:23:38.748799 2362 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 24 00:23:38.757204 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 24 00:23:38.760576 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 24 00:23:38.775890 kubelet[2362]: E1124 00:23:38.775855 2362 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 24 00:23:38.776163 kubelet[2362]: I1124 00:23:38.776130 2362 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 00:23:38.776243 kubelet[2362]: I1124 00:23:38.776156 2362 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 00:23:38.776647 kubelet[2362]: I1124 00:23:38.776388 2362 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 00:23:38.777114 kubelet[2362]: E1124 00:23:38.777084 2362 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 24 00:23:38.777186 kubelet[2362]: E1124 00:23:38.777154 2362 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 24 00:23:38.860793 systemd[1]: Created slice kubepods-burstable-pod3f3787fdfabc4721a99ce1ff4612ffda.slice - libcontainer container kubepods-burstable-pod3f3787fdfabc4721a99ce1ff4612ffda.slice. 
Nov 24 00:23:38.877075 kubelet[2362]: E1124 00:23:38.876782 2362 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 00:23:38.877395 kubelet[2362]: I1124 00:23:38.877375 2362 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 24 00:23:38.877882 kubelet[2362]: E1124 00:23:38.877841 2362 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost" Nov 24 00:23:38.881218 systemd[1]: Created slice kubepods-burstable-pod1d5832191310254249cf17c2353d71ec.slice - libcontainer container kubepods-burstable-pod1d5832191310254249cf17c2353d71ec.slice. Nov 24 00:23:38.883505 kubelet[2362]: E1124 00:23:38.883465 2362 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 00:23:38.885100 systemd[1]: Created slice kubepods-burstable-pode51b49401d7e125d16957469facd7352.slice - libcontainer container kubepods-burstable-pode51b49401d7e125d16957469facd7352.slice. 
Nov 24 00:23:38.886959 kubelet[2362]: E1124 00:23:38.886910 2362 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 00:23:38.920621 kubelet[2362]: E1124 00:23:38.920571 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="400ms" Nov 24 00:23:39.019506 kubelet[2362]: I1124 00:23:39.019312 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f3787fdfabc4721a99ce1ff4612ffda-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f3787fdfabc4721a99ce1ff4612ffda\") " pod="kube-system/kube-apiserver-localhost" Nov 24 00:23:39.019506 kubelet[2362]: I1124 00:23:39.019372 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 00:23:39.019506 kubelet[2362]: I1124 00:23:39.019400 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 00:23:39.019506 kubelet[2362]: I1124 00:23:39.019420 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e51b49401d7e125d16957469facd7352-kubeconfig\") pod 
\"kube-scheduler-localhost\" (UID: \"e51b49401d7e125d16957469facd7352\") " pod="kube-system/kube-scheduler-localhost" Nov 24 00:23:39.019506 kubelet[2362]: I1124 00:23:39.019440 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f3787fdfabc4721a99ce1ff4612ffda-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f3787fdfabc4721a99ce1ff4612ffda\") " pod="kube-system/kube-apiserver-localhost" Nov 24 00:23:39.020818 kubelet[2362]: I1124 00:23:39.019473 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f3787fdfabc4721a99ce1ff4612ffda-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3f3787fdfabc4721a99ce1ff4612ffda\") " pod="kube-system/kube-apiserver-localhost" Nov 24 00:23:39.020818 kubelet[2362]: I1124 00:23:39.019492 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 00:23:39.020818 kubelet[2362]: I1124 00:23:39.019524 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 00:23:39.020818 kubelet[2362]: I1124 00:23:39.019558 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-usr-share-ca-certificates\") 
pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 00:23:39.080065 kubelet[2362]: I1124 00:23:39.080030 2362 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 24 00:23:39.080541 kubelet[2362]: E1124 00:23:39.080489 2362 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost" Nov 24 00:23:39.178016 kubelet[2362]: E1124 00:23:39.177962 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:23:39.178644 containerd[1577]: time="2025-11-24T00:23:39.178609880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3f3787fdfabc4721a99ce1ff4612ffda,Namespace:kube-system,Attempt:0,}" Nov 24 00:23:39.184909 kubelet[2362]: E1124 00:23:39.184875 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:23:39.187751 containerd[1577]: time="2025-11-24T00:23:39.187682698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1d5832191310254249cf17c2353d71ec,Namespace:kube-system,Attempt:0,}" Nov 24 00:23:39.187869 kubelet[2362]: E1124 00:23:39.187808 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:23:39.188411 containerd[1577]: time="2025-11-24T00:23:39.188379547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e51b49401d7e125d16957469facd7352,Namespace:kube-system,Attempt:0,}" Nov 24 00:23:39.210305 
containerd[1577]: time="2025-11-24T00:23:39.210260202Z" level=info msg="connecting to shim 6e68e139caffc5679f8c463a282ca6da8b5f6c7ccb74ff918bc3bb669b1a7133" address="unix:///run/containerd/s/131d255ad6a746738b304a6941f272053b2727864ceb2267ad4d7b978fbd8e60" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:23:39.232630 containerd[1577]: time="2025-11-24T00:23:39.232377641Z" level=info msg="connecting to shim 117c804dce65675d53be5a4ff5162ec0c5107e197cc21862902201e61accd40c" address="unix:///run/containerd/s/004c573cb087f5bf24d59366a2b531bade267ed7c344bf3439190719c5935d30" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:23:39.234288 containerd[1577]: time="2025-11-24T00:23:39.234247565Z" level=info msg="connecting to shim 1bd45c87b0a2185e96ecee867dd8a3047a022bd60ce2b855475da14091b9a18a" address="unix:///run/containerd/s/94922fa92134c07339afe14006182f60fa671600f05dcfab7ec87ce65a52df31" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:23:39.257070 systemd[1]: Started cri-containerd-6e68e139caffc5679f8c463a282ca6da8b5f6c7ccb74ff918bc3bb669b1a7133.scope - libcontainer container 6e68e139caffc5679f8c463a282ca6da8b5f6c7ccb74ff918bc3bb669b1a7133. Nov 24 00:23:39.261665 systemd[1]: Started cri-containerd-117c804dce65675d53be5a4ff5162ec0c5107e197cc21862902201e61accd40c.scope - libcontainer container 117c804dce65675d53be5a4ff5162ec0c5107e197cc21862902201e61accd40c. Nov 24 00:23:39.263260 systemd[1]: Started cri-containerd-1bd45c87b0a2185e96ecee867dd8a3047a022bd60ce2b855475da14091b9a18a.scope - libcontainer container 1bd45c87b0a2185e96ecee867dd8a3047a022bd60ce2b855475da14091b9a18a. 
Nov 24 00:23:39.317516 containerd[1577]: time="2025-11-24T00:23:39.317453095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3f3787fdfabc4721a99ce1ff4612ffda,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e68e139caffc5679f8c463a282ca6da8b5f6c7ccb74ff918bc3bb669b1a7133\""
Nov 24 00:23:39.320398 kubelet[2362]: E1124 00:23:39.320365 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:39.321323 kubelet[2362]: E1124 00:23:39.321246 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="800ms"
Nov 24 00:23:39.328051 containerd[1577]: time="2025-11-24T00:23:39.327998814Z" level=info msg="CreateContainer within sandbox \"6e68e139caffc5679f8c463a282ca6da8b5f6c7ccb74ff918bc3bb669b1a7133\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 24 00:23:39.328991 containerd[1577]: time="2025-11-24T00:23:39.328961104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1d5832191310254249cf17c2353d71ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bd45c87b0a2185e96ecee867dd8a3047a022bd60ce2b855475da14091b9a18a\""
Nov 24 00:23:39.329622 kubelet[2362]: E1124 00:23:39.329598 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:39.330999 containerd[1577]: time="2025-11-24T00:23:39.330891374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e51b49401d7e125d16957469facd7352,Namespace:kube-system,Attempt:0,} returns sandbox id \"117c804dce65675d53be5a4ff5162ec0c5107e197cc21862902201e61accd40c\""
Nov 24 00:23:39.331727 kubelet[2362]: E1124 00:23:39.331705 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:39.334535 containerd[1577]: time="2025-11-24T00:23:39.334496513Z" level=info msg="CreateContainer within sandbox \"1bd45c87b0a2185e96ecee867dd8a3047a022bd60ce2b855475da14091b9a18a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 24 00:23:39.337653 containerd[1577]: time="2025-11-24T00:23:39.337619776Z" level=info msg="CreateContainer within sandbox \"117c804dce65675d53be5a4ff5162ec0c5107e197cc21862902201e61accd40c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 24 00:23:39.345962 containerd[1577]: time="2025-11-24T00:23:39.345558424Z" level=info msg="Container c70ed67bec3f59c65264a9da7eebe864506618583dfb8bf2de40f7d4b11f2d56: CDI devices from CRI Config.CDIDevices: []"
Nov 24 00:23:39.348421 containerd[1577]: time="2025-11-24T00:23:39.348374847Z" level=info msg="Container 55822aa8639464607357df9398ba34f3dc2b1e32964cf3a67bad0100bdde934d: CDI devices from CRI Config.CDIDevices: []"
Nov 24 00:23:39.355931 containerd[1577]: time="2025-11-24T00:23:39.355881014Z" level=info msg="CreateContainer within sandbox \"6e68e139caffc5679f8c463a282ca6da8b5f6c7ccb74ff918bc3bb669b1a7133\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c70ed67bec3f59c65264a9da7eebe864506618583dfb8bf2de40f7d4b11f2d56\""
Nov 24 00:23:39.356725 containerd[1577]: time="2025-11-24T00:23:39.356693135Z" level=info msg="StartContainer for \"c70ed67bec3f59c65264a9da7eebe864506618583dfb8bf2de40f7d4b11f2d56\""
Nov 24 00:23:39.358446 containerd[1577]: time="2025-11-24T00:23:39.358398994Z" level=info msg="connecting to shim c70ed67bec3f59c65264a9da7eebe864506618583dfb8bf2de40f7d4b11f2d56" address="unix:///run/containerd/s/131d255ad6a746738b304a6941f272053b2727864ceb2267ad4d7b978fbd8e60" protocol=ttrpc version=3
Nov 24 00:23:39.359001 containerd[1577]: time="2025-11-24T00:23:39.358977656Z" level=info msg="Container 78bfb74cdbea1e2ea8a0bfc1ed0d760b245d22995f45aaab867aa50f7f49c17c: CDI devices from CRI Config.CDIDevices: []"
Nov 24 00:23:39.360714 containerd[1577]: time="2025-11-24T00:23:39.360669428Z" level=info msg="CreateContainer within sandbox \"1bd45c87b0a2185e96ecee867dd8a3047a022bd60ce2b855475da14091b9a18a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"55822aa8639464607357df9398ba34f3dc2b1e32964cf3a67bad0100bdde934d\""
Nov 24 00:23:39.361224 containerd[1577]: time="2025-11-24T00:23:39.361172264Z" level=info msg="StartContainer for \"55822aa8639464607357df9398ba34f3dc2b1e32964cf3a67bad0100bdde934d\""
Nov 24 00:23:39.362513 containerd[1577]: time="2025-11-24T00:23:39.362485057Z" level=info msg="connecting to shim 55822aa8639464607357df9398ba34f3dc2b1e32964cf3a67bad0100bdde934d" address="unix:///run/containerd/s/94922fa92134c07339afe14006182f60fa671600f05dcfab7ec87ce65a52df31" protocol=ttrpc version=3
Nov 24 00:23:39.366584 containerd[1577]: time="2025-11-24T00:23:39.366530994Z" level=info msg="CreateContainer within sandbox \"117c804dce65675d53be5a4ff5162ec0c5107e197cc21862902201e61accd40c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"78bfb74cdbea1e2ea8a0bfc1ed0d760b245d22995f45aaab867aa50f7f49c17c\""
Nov 24 00:23:39.367301 containerd[1577]: time="2025-11-24T00:23:39.367230648Z" level=info msg="StartContainer for \"78bfb74cdbea1e2ea8a0bfc1ed0d760b245d22995f45aaab867aa50f7f49c17c\""
Nov 24 00:23:39.368745 containerd[1577]: time="2025-11-24T00:23:39.368715292Z" level=info msg="connecting to shim 78bfb74cdbea1e2ea8a0bfc1ed0d760b245d22995f45aaab867aa50f7f49c17c" address="unix:///run/containerd/s/004c573cb087f5bf24d59366a2b531bade267ed7c344bf3439190719c5935d30" protocol=ttrpc version=3
Nov 24 00:23:39.381213 systemd[1]: Started cri-containerd-c70ed67bec3f59c65264a9da7eebe864506618583dfb8bf2de40f7d4b11f2d56.scope - libcontainer container c70ed67bec3f59c65264a9da7eebe864506618583dfb8bf2de40f7d4b11f2d56.
Nov 24 00:23:39.385647 systemd[1]: Started cri-containerd-55822aa8639464607357df9398ba34f3dc2b1e32964cf3a67bad0100bdde934d.scope - libcontainer container 55822aa8639464607357df9398ba34f3dc2b1e32964cf3a67bad0100bdde934d.
Nov 24 00:23:39.405080 systemd[1]: Started cri-containerd-78bfb74cdbea1e2ea8a0bfc1ed0d760b245d22995f45aaab867aa50f7f49c17c.scope - libcontainer container 78bfb74cdbea1e2ea8a0bfc1ed0d760b245d22995f45aaab867aa50f7f49c17c.
Nov 24 00:23:39.456224 containerd[1577]: time="2025-11-24T00:23:39.456097931Z" level=info msg="StartContainer for \"c70ed67bec3f59c65264a9da7eebe864506618583dfb8bf2de40f7d4b11f2d56\" returns successfully"
Nov 24 00:23:39.465747 containerd[1577]: time="2025-11-24T00:23:39.465674437Z" level=info msg="StartContainer for \"55822aa8639464607357df9398ba34f3dc2b1e32964cf3a67bad0100bdde934d\" returns successfully"
Nov 24 00:23:39.482547 kubelet[2362]: I1124 00:23:39.482502 2362 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 24 00:23:39.482851 kubelet[2362]: E1124 00:23:39.482807 2362 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost"
Nov 24 00:23:39.491204 containerd[1577]: time="2025-11-24T00:23:39.491173997Z" level=info msg="StartContainer for \"78bfb74cdbea1e2ea8a0bfc1ed0d760b245d22995f45aaab867aa50f7f49c17c\" returns successfully"
Nov 24 00:23:39.757528 kubelet[2362]: E1124 00:23:39.757374 2362 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 24 00:23:39.757979 kubelet[2362]: E1124 00:23:39.757716 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:39.758731 kubelet[2362]: E1124 00:23:39.758704 2362 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 24 00:23:39.758823 kubelet[2362]: E1124 00:23:39.758790 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:39.762827 kubelet[2362]: E1124 00:23:39.762801 2362 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 24 00:23:39.762904 kubelet[2362]: E1124 00:23:39.762883 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:40.285087 kubelet[2362]: I1124 00:23:40.285037 2362 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 24 00:23:40.738193 kubelet[2362]: E1124 00:23:40.738135 2362 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Nov 24 00:23:40.763759 kubelet[2362]: E1124 00:23:40.763546 2362 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 24 00:23:40.763759 kubelet[2362]: E1124 00:23:40.763694 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:40.764208 kubelet[2362]: E1124 00:23:40.764184 2362 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 24 00:23:40.764330 kubelet[2362]: E1124 00:23:40.764313 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:41.144951 kubelet[2362]: I1124 00:23:41.144759 2362 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 24 00:23:41.144951 kubelet[2362]: E1124 00:23:41.144803 2362 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Nov 24 00:23:41.219354 kubelet[2362]: I1124 00:23:41.219285 2362 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 24 00:23:41.513353 kubelet[2362]: E1124 00:23:41.513143 2362 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.187ac98eade0888f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-24 00:23:38.708682895 +0000 UTC m=+1.104197686,LastTimestamp:2025-11-24 00:23:38.708682895 +0000 UTC m=+1.104197686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 24 00:23:41.513801 kubelet[2362]: E1124 00:23:41.513512 2362 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Nov 24 00:23:41.513801 kubelet[2362]: I1124 00:23:41.513547 2362 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 24 00:23:41.514907 kubelet[2362]: E1124 00:23:41.514880 2362 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Nov 24 00:23:41.514907 kubelet[2362]: I1124 00:23:41.514900 2362 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 24 00:23:41.515868 kubelet[2362]: E1124 00:23:41.515842 2362 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Nov 24 00:23:41.698796 kubelet[2362]: I1124 00:23:41.698751 2362 apiserver.go:52] "Watching apiserver"
Nov 24 00:23:41.718485 kubelet[2362]: I1124 00:23:41.718465 2362 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 24 00:23:41.763989 kubelet[2362]: I1124 00:23:41.763852 2362 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 24 00:23:41.765840 kubelet[2362]: E1124 00:23:41.765812 2362 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Nov 24 00:23:41.766027 kubelet[2362]: E1124 00:23:41.766003 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:44.723192 kubelet[2362]: I1124 00:23:44.723134 2362 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 24 00:23:44.902404 kubelet[2362]: E1124 00:23:44.902346 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:45.769564 kubelet[2362]: E1124 00:23:45.769517 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:47.056782 systemd[1]: Reload requested from client PID 2649 ('systemctl') (unit session-7.scope)...
Nov 24 00:23:47.056807 systemd[1]: Reloading...
Nov 24 00:23:47.157964 zram_generator::config[2692]: No configuration found.
Nov 24 00:23:47.357380 kubelet[2362]: I1124 00:23:47.357216 2362 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 24 00:23:47.363040 kubelet[2362]: E1124 00:23:47.363005 2362 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:47.396531 systemd[1]: Reloading finished in 339 ms.
Nov 24 00:23:47.430961 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 24 00:23:47.450975 systemd[1]: kubelet.service: Deactivated successfully.
Nov 24 00:23:47.451504 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 24 00:23:47.451604 systemd[1]: kubelet.service: Consumed 1.187s CPU time, 135.3M memory peak.
Nov 24 00:23:47.454519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 24 00:23:47.697504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 24 00:23:47.714478 (kubelet)[2737]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 24 00:23:47.757262 kubelet[2737]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 24 00:23:47.757262 kubelet[2737]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 24 00:23:47.757262 kubelet[2737]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 24 00:23:47.757746 kubelet[2737]: I1124 00:23:47.757293 2737 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 24 00:23:47.764943 kubelet[2737]: I1124 00:23:47.763591 2737 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 24 00:23:47.764943 kubelet[2737]: I1124 00:23:47.763621 2737 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 24 00:23:47.764943 kubelet[2737]: I1124 00:23:47.764301 2737 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 24 00:23:47.766284 kubelet[2737]: I1124 00:23:47.766255 2737 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Nov 24 00:23:47.768579 kubelet[2737]: I1124 00:23:47.768476 2737 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 24 00:23:47.772337 kubelet[2737]: I1124 00:23:47.772309 2737 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 24 00:23:47.778700 kubelet[2737]: I1124 00:23:47.778679 2737 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 24 00:23:47.779030 kubelet[2737]: I1124 00:23:47.778988 2737 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 24 00:23:47.779146 kubelet[2737]: I1124 00:23:47.779012 2737 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 24 00:23:47.779251 kubelet[2737]: I1124 00:23:47.779150 2737 topology_manager.go:138] "Creating topology manager with none policy"
Nov 24 00:23:47.779251 kubelet[2737]: I1124 00:23:47.779159 2737 container_manager_linux.go:303] "Creating device plugin manager"
Nov 24 00:23:47.779251 kubelet[2737]: I1124 00:23:47.779214 2737 state_mem.go:36] "Initialized new in-memory state store"
Nov 24 00:23:47.779407 kubelet[2737]: I1124 00:23:47.779384 2737 kubelet.go:480] "Attempting to sync node with API server"
Nov 24 00:23:47.779451 kubelet[2737]: I1124 00:23:47.779413 2737 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 24 00:23:47.779451 kubelet[2737]: I1124 00:23:47.779446 2737 kubelet.go:386] "Adding apiserver pod source"
Nov 24 00:23:47.779508 kubelet[2737]: I1124 00:23:47.779465 2737 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 24 00:23:47.782357 kubelet[2737]: I1124 00:23:47.782298 2737 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Nov 24 00:23:47.783473 kubelet[2737]: I1124 00:23:47.783452 2737 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 24 00:23:47.791433 kubelet[2737]: I1124 00:23:47.791400 2737 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 24 00:23:47.791530 kubelet[2737]: I1124 00:23:47.791460 2737 server.go:1289] "Started kubelet"
Nov 24 00:23:47.791897 kubelet[2737]: I1124 00:23:47.791848 2737 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 24 00:23:47.792099 kubelet[2737]: I1124 00:23:47.792040 2737 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 24 00:23:47.792501 kubelet[2737]: I1124 00:23:47.792478 2737 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 24 00:23:47.792846 kubelet[2737]: I1124 00:23:47.792820 2737 server.go:317] "Adding debug handlers to kubelet server"
Nov 24 00:23:47.798479 kubelet[2737]: I1124 00:23:47.798441 2737 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 24 00:23:47.798642 kubelet[2737]: E1124 00:23:47.798438 2737 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 24 00:23:47.798748 kubelet[2737]: I1124 00:23:47.798727 2737 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 24 00:23:47.799245 kubelet[2737]: I1124 00:23:47.799086 2737 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 24 00:23:47.799245 kubelet[2737]: I1124 00:23:47.799175 2737 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 24 00:23:47.799365 kubelet[2737]: I1124 00:23:47.799316 2737 reconciler.go:26] "Reconciler: start to sync state"
Nov 24 00:23:47.801901 kubelet[2737]: I1124 00:23:47.801874 2737 factory.go:223] Registration of the systemd container factory successfully
Nov 24 00:23:47.802041 kubelet[2737]: I1124 00:23:47.801991 2737 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 24 00:23:47.805305 kubelet[2737]: I1124 00:23:47.805280 2737 factory.go:223] Registration of the containerd container factory successfully
Nov 24 00:23:47.816291 kubelet[2737]: I1124 00:23:47.816228 2737 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 24 00:23:47.817519 kubelet[2737]: I1124 00:23:47.817489 2737 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 24 00:23:47.817519 kubelet[2737]: I1124 00:23:47.817510 2737 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 24 00:23:47.817613 kubelet[2737]: I1124 00:23:47.817531 2737 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 24 00:23:47.817613 kubelet[2737]: I1124 00:23:47.817539 2737 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 24 00:23:47.817613 kubelet[2737]: E1124 00:23:47.817599 2737 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 24 00:23:47.847381 kubelet[2737]: I1124 00:23:47.847350 2737 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 24 00:23:47.847381 kubelet[2737]: I1124 00:23:47.847368 2737 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 24 00:23:47.847381 kubelet[2737]: I1124 00:23:47.847388 2737 state_mem.go:36] "Initialized new in-memory state store"
Nov 24 00:23:47.847684 kubelet[2737]: I1124 00:23:47.847522 2737 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 24 00:23:47.847684 kubelet[2737]: I1124 00:23:47.847533 2737 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 24 00:23:47.847684 kubelet[2737]: I1124 00:23:47.847563 2737 policy_none.go:49] "None policy: Start"
Nov 24 00:23:47.847684 kubelet[2737]: I1124 00:23:47.847574 2737 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 24 00:23:47.847684 kubelet[2737]: I1124 00:23:47.847585 2737 state_mem.go:35] "Initializing new in-memory state store"
Nov 24 00:23:47.847684 kubelet[2737]: I1124 00:23:47.847675 2737 state_mem.go:75] "Updated machine memory state"
Nov 24 00:23:47.851573 kubelet[2737]: E1124 00:23:47.851531 2737 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 24 00:23:47.851760 kubelet[2737]: I1124 00:23:47.851738 2737 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 24 00:23:47.851810 kubelet[2737]: I1124 00:23:47.851751 2737 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 24 00:23:47.851956 kubelet[2737]: I1124 00:23:47.851933 2737 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 24 00:23:47.853116 kubelet[2737]: E1124 00:23:47.853085 2737 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 24 00:23:47.919180 kubelet[2737]: I1124 00:23:47.919118 2737 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 24 00:23:47.919398 kubelet[2737]: I1124 00:23:47.919376 2737 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 24 00:23:47.919460 kubelet[2737]: I1124 00:23:47.919252 2737 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 24 00:23:47.925015 kubelet[2737]: E1124 00:23:47.924965 2737 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Nov 24 00:23:47.925394 kubelet[2737]: E1124 00:23:47.925354 2737 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Nov 24 00:23:47.962699 kubelet[2737]: I1124 00:23:47.962587 2737 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 24 00:23:47.969605 kubelet[2737]: I1124 00:23:47.969576 2737 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Nov 24 00:23:47.969680 kubelet[2737]: I1124 00:23:47.969651 2737 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 24 00:23:48.000595 kubelet[2737]: I1124 00:23:48.000397 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost"
Nov 24 00:23:48.000595 kubelet[2737]: I1124 00:23:48.000453 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost"
Nov 24 00:23:48.000595 kubelet[2737]: I1124 00:23:48.000479 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost"
Nov 24 00:23:48.000595 kubelet[2737]: I1124 00:23:48.000506 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e51b49401d7e125d16957469facd7352-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e51b49401d7e125d16957469facd7352\") " pod="kube-system/kube-scheduler-localhost"
Nov 24 00:23:48.000595 kubelet[2737]: I1124 00:23:48.000526 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f3787fdfabc4721a99ce1ff4612ffda-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f3787fdfabc4721a99ce1ff4612ffda\") " pod="kube-system/kube-apiserver-localhost"
Nov 24 00:23:48.000909 kubelet[2737]: I1124 00:23:48.000558 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost"
Nov 24 00:23:48.000909 kubelet[2737]: I1124 00:23:48.000581 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f3787fdfabc4721a99ce1ff4612ffda-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f3787fdfabc4721a99ce1ff4612ffda\") " pod="kube-system/kube-apiserver-localhost"
Nov 24 00:23:48.000909 kubelet[2737]: I1124 00:23:48.000601 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f3787fdfabc4721a99ce1ff4612ffda-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3f3787fdfabc4721a99ce1ff4612ffda\") " pod="kube-system/kube-apiserver-localhost"
Nov 24 00:23:48.000909 kubelet[2737]: I1124 00:23:48.000619 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost"
Nov 24 00:23:48.225489 kubelet[2737]: E1124 00:23:48.225254 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:48.225489 kubelet[2737]: E1124 00:23:48.225341 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:48.225489 kubelet[2737]: E1124 00:23:48.225472 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:48.781176 kubelet[2737]: I1124 00:23:48.781104 2737 apiserver.go:52] "Watching apiserver"
Nov 24 00:23:48.800053 kubelet[2737]: I1124 00:23:48.799994 2737 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 24 00:23:48.831213 kubelet[2737]: I1124 00:23:48.831182 2737 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 24 00:23:48.831360 kubelet[2737]: I1124 00:23:48.831338 2737 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 24 00:23:48.831440 kubelet[2737]: I1124 00:23:48.831420 2737 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 24 00:23:49.193325 kubelet[2737]: E1124 00:23:49.193265 2737 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Nov 24 00:23:49.193506 kubelet[2737]: E1124 00:23:49.193443 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:49.193821 kubelet[2737]: E1124 00:23:49.193801 2737 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Nov 24 00:23:49.193907 kubelet[2737]: E1124 00:23:49.193888 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:49.194581 kubelet[2737]: E1124 00:23:49.194513 2737 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Nov 24 00:23:49.194700 kubelet[2737]: E1124 00:23:49.194653 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:49.301977 kubelet[2737]: I1124 00:23:49.301858 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.301837298 podStartE2EDuration="2.301837298s" podCreationTimestamp="2025-11-24 00:23:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:23:49.282751879 +0000 UTC m=+1.559791648" watchObservedRunningTime="2025-11-24 00:23:49.301837298 +0000 UTC m=+1.578877057"
Nov 24 00:23:49.313613 kubelet[2737]: I1124 00:23:49.313517 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.313475698 podStartE2EDuration="2.313475698s" podCreationTimestamp="2025-11-24 00:23:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:23:49.302473236 +0000 UTC m=+1.579513005" watchObservedRunningTime="2025-11-24 00:23:49.313475698 +0000 UTC m=+1.590515458"
Nov 24 00:23:49.326825 kubelet[2737]: I1124 00:23:49.326736 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.326717105 podStartE2EDuration="5.326717105s" podCreationTimestamp="2025-11-24 00:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:23:49.313707229 +0000 UTC m=+1.590746978" watchObservedRunningTime="2025-11-24 00:23:49.326717105 +0000 UTC m=+1.603756854"
Nov 24 00:23:49.832722 kubelet[2737]: E1124 00:23:49.832633 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:49.832722 kubelet[2737]: E1124 00:23:49.832672 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:49.833320 kubelet[2737]: E1124 00:23:49.833279 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:50.533810 update_engine[1540]: I20251124 00:23:50.533681 1540 update_attempter.cc:509] Updating boot flags...
Nov 24 00:23:50.833991 kubelet[2737]: E1124 00:23:50.833954 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:50.934994 kubelet[2737]: E1124 00:23:50.934905 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:51.836212 kubelet[2737]: E1124 00:23:51.835517 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:51.836997 kubelet[2737]: E1124 00:23:51.836768 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:52.800037 kubelet[2737]: I1124 00:23:52.799996 2737 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 24 00:23:52.800402 containerd[1577]: time="2025-11-24T00:23:52.800337919Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 24 00:23:52.801176 kubelet[2737]: I1124 00:23:52.800597 2737 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 24 00:23:52.836585 kubelet[2737]: E1124 00:23:52.836517 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:53.476975 systemd[1]: Created slice kubepods-besteffort-podb2550983_24b7_4c5c_a2da_bc9731f3542b.slice - libcontainer container kubepods-besteffort-podb2550983_24b7_4c5c_a2da_bc9731f3542b.slice.
Nov 24 00:23:53.533886 kubelet[2737]: I1124 00:23:53.533829 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b2550983-24b7-4c5c-a2da-bc9731f3542b-kube-proxy\") pod \"kube-proxy-t2nrf\" (UID: \"b2550983-24b7-4c5c-a2da-bc9731f3542b\") " pod="kube-system/kube-proxy-t2nrf"
Nov 24 00:23:53.533886 kubelet[2737]: I1124 00:23:53.533875 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2550983-24b7-4c5c-a2da-bc9731f3542b-xtables-lock\") pod \"kube-proxy-t2nrf\" (UID: \"b2550983-24b7-4c5c-a2da-bc9731f3542b\") " pod="kube-system/kube-proxy-t2nrf"
Nov 24 00:23:53.534131 kubelet[2737]: I1124 00:23:53.533982 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2550983-24b7-4c5c-a2da-bc9731f3542b-lib-modules\") pod \"kube-proxy-t2nrf\" (UID: \"b2550983-24b7-4c5c-a2da-bc9731f3542b\") " pod="kube-system/kube-proxy-t2nrf"
Nov 24 00:23:53.534131 kubelet[2737]: I1124 00:23:53.534038 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc24r\" (UniqueName: \"kubernetes.io/projected/b2550983-24b7-4c5c-a2da-bc9731f3542b-kube-api-access-mc24r\") pod \"kube-proxy-t2nrf\" (UID: \"b2550983-24b7-4c5c-a2da-bc9731f3542b\") " pod="kube-system/kube-proxy-t2nrf"
Nov 24 00:23:53.793425 kubelet[2737]: E1124 00:23:53.793372 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:23:53.794458 containerd[1577]: time="2025-11-24T00:23:53.794081634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t2nrf,Uid:b2550983-24b7-4c5c-a2da-bc9731f3542b,Namespace:kube-system,Attempt:0,}"
Nov 24 00:23:53.815670 containerd[1577]: time="2025-11-24T00:23:53.815617138Z" level=info msg="connecting to shim cfd9d1ed24ce7f2fb3be1f0ae650bdf9b97557072a9e3ce9354cacab910b1a71" address="unix:///run/containerd/s/18c110be2d11318a304a4d8fce2abfbf3b9befafc8c80b05d4b809c85cecd538" namespace=k8s.io protocol=ttrpc version=3
Nov 24 00:23:53.861156 systemd[1]: Started cri-containerd-cfd9d1ed24ce7f2fb3be1f0ae650bdf9b97557072a9e3ce9354cacab910b1a71.scope - libcontainer container cfd9d1ed24ce7f2fb3be1f0ae650bdf9b97557072a9e3ce9354cacab910b1a71.
Nov 24 00:23:53.887810 containerd[1577]: time="2025-11-24T00:23:53.887768591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t2nrf,Uid:b2550983-24b7-4c5c-a2da-bc9731f3542b,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfd9d1ed24ce7f2fb3be1f0ae650bdf9b97557072a9e3ce9354cacab910b1a71\"" Nov 24 00:23:53.890423 kubelet[2737]: E1124 00:23:53.890383 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:23:53.895896 containerd[1577]: time="2025-11-24T00:23:53.895853436Z" level=info msg="CreateContainer within sandbox \"cfd9d1ed24ce7f2fb3be1f0ae650bdf9b97557072a9e3ce9354cacab910b1a71\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 24 00:23:53.908468 containerd[1577]: time="2025-11-24T00:23:53.908435349Z" level=info msg="Container 227854f04114d2db3ea2d04fc472535574dcbc2ba55d546d74644093f83f52c3: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:23:53.917657 containerd[1577]: time="2025-11-24T00:23:53.917614076Z" level=info msg="CreateContainer within sandbox \"cfd9d1ed24ce7f2fb3be1f0ae650bdf9b97557072a9e3ce9354cacab910b1a71\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"227854f04114d2db3ea2d04fc472535574dcbc2ba55d546d74644093f83f52c3\"" Nov 24 00:23:53.919857 containerd[1577]: time="2025-11-24T00:23:53.918176180Z" level=info msg="StartContainer for \"227854f04114d2db3ea2d04fc472535574dcbc2ba55d546d74644093f83f52c3\"" Nov 24 00:23:53.919857 containerd[1577]: time="2025-11-24T00:23:53.919785599Z" level=info msg="connecting to shim 227854f04114d2db3ea2d04fc472535574dcbc2ba55d546d74644093f83f52c3" address="unix:///run/containerd/s/18c110be2d11318a304a4d8fce2abfbf3b9befafc8c80b05d4b809c85cecd538" protocol=ttrpc version=3 Nov 24 00:23:53.943069 systemd[1]: Started cri-containerd-227854f04114d2db3ea2d04fc472535574dcbc2ba55d546d74644093f83f52c3.scope - libcontainer 
container 227854f04114d2db3ea2d04fc472535574dcbc2ba55d546d74644093f83f52c3. Nov 24 00:23:53.997359 systemd[1]: Created slice kubepods-besteffort-podac7f2a02_2d8f_4b3e_a018_016a0afb1f09.slice - libcontainer container kubepods-besteffort-podac7f2a02_2d8f_4b3e_a018_016a0afb1f09.slice. Nov 24 00:23:54.022748 containerd[1577]: time="2025-11-24T00:23:54.022688507Z" level=info msg="StartContainer for \"227854f04114d2db3ea2d04fc472535574dcbc2ba55d546d74644093f83f52c3\" returns successfully" Nov 24 00:23:54.038169 kubelet[2737]: I1124 00:23:54.038119 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ac7f2a02-2d8f-4b3e-a018-016a0afb1f09-var-lib-calico\") pod \"tigera-operator-7dcd859c48-4b46l\" (UID: \"ac7f2a02-2d8f-4b3e-a018-016a0afb1f09\") " pod="tigera-operator/tigera-operator-7dcd859c48-4b46l" Nov 24 00:23:54.038169 kubelet[2737]: I1124 00:23:54.038162 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjk5v\" (UniqueName: \"kubernetes.io/projected/ac7f2a02-2d8f-4b3e-a018-016a0afb1f09-kube-api-access-zjk5v\") pod \"tigera-operator-7dcd859c48-4b46l\" (UID: \"ac7f2a02-2d8f-4b3e-a018-016a0afb1f09\") " pod="tigera-operator/tigera-operator-7dcd859c48-4b46l" Nov 24 00:23:54.301229 containerd[1577]: time="2025-11-24T00:23:54.301181024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-4b46l,Uid:ac7f2a02-2d8f-4b3e-a018-016a0afb1f09,Namespace:tigera-operator,Attempt:0,}" Nov 24 00:23:54.321136 containerd[1577]: time="2025-11-24T00:23:54.321088871Z" level=info msg="connecting to shim e21f73b88254f54e2e1ea2b666657b82d2fc1b21d64d5503489e373ac156a028" address="unix:///run/containerd/s/4706e8cc59a1b4742c4a0f9f8b22127b22ea5b462e2a4377e88ad68e82894722" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:23:54.348117 systemd[1]: Started 
cri-containerd-e21f73b88254f54e2e1ea2b666657b82d2fc1b21d64d5503489e373ac156a028.scope - libcontainer container e21f73b88254f54e2e1ea2b666657b82d2fc1b21d64d5503489e373ac156a028. Nov 24 00:23:54.402558 containerd[1577]: time="2025-11-24T00:23:54.402490931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-4b46l,Uid:ac7f2a02-2d8f-4b3e-a018-016a0afb1f09,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e21f73b88254f54e2e1ea2b666657b82d2fc1b21d64d5503489e373ac156a028\"" Nov 24 00:23:54.404543 containerd[1577]: time="2025-11-24T00:23:54.404502999Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 24 00:23:54.842398 kubelet[2737]: E1124 00:23:54.842356 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:23:55.486296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3302006153.mount: Deactivated successfully. 
Nov 24 00:23:55.702854 kubelet[2737]: E1124 00:23:55.702792 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:23:55.719513 kubelet[2737]: I1124 00:23:55.719433 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t2nrf" podStartSLOduration=2.719410819 podStartE2EDuration="2.719410819s" podCreationTimestamp="2025-11-24 00:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:23:54.852410701 +0000 UTC m=+7.129450460" watchObservedRunningTime="2025-11-24 00:23:55.719410819 +0000 UTC m=+7.996450598" Nov 24 00:23:55.843789 kubelet[2737]: E1124 00:23:55.843748 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:23:56.758221 containerd[1577]: time="2025-11-24T00:23:56.758161544Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:23:56.758941 containerd[1577]: time="2025-11-24T00:23:56.758903807Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 24 00:23:56.760055 containerd[1577]: time="2025-11-24T00:23:56.760034315Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:23:56.762210 containerd[1577]: time="2025-11-24T00:23:56.762183919Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:23:56.762851 containerd[1577]: 
time="2025-11-24T00:23:56.762761772Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.358107465s" Nov 24 00:23:56.762851 containerd[1577]: time="2025-11-24T00:23:56.762809182Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 24 00:23:56.767484 containerd[1577]: time="2025-11-24T00:23:56.767436591Z" level=info msg="CreateContainer within sandbox \"e21f73b88254f54e2e1ea2b666657b82d2fc1b21d64d5503489e373ac156a028\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 24 00:23:56.775883 containerd[1577]: time="2025-11-24T00:23:56.775843127Z" level=info msg="Container 8947ee35b67cb6f0770d45f6ab48a7410fdd655c3fd094efda3e5374e039987e: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:23:56.782153 containerd[1577]: time="2025-11-24T00:23:56.782100911Z" level=info msg="CreateContainer within sandbox \"e21f73b88254f54e2e1ea2b666657b82d2fc1b21d64d5503489e373ac156a028\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8947ee35b67cb6f0770d45f6ab48a7410fdd655c3fd094efda3e5374e039987e\"" Nov 24 00:23:56.782632 containerd[1577]: time="2025-11-24T00:23:56.782595607Z" level=info msg="StartContainer for \"8947ee35b67cb6f0770d45f6ab48a7410fdd655c3fd094efda3e5374e039987e\"" Nov 24 00:23:56.783447 containerd[1577]: time="2025-11-24T00:23:56.783418492Z" level=info msg="connecting to shim 8947ee35b67cb6f0770d45f6ab48a7410fdd655c3fd094efda3e5374e039987e" address="unix:///run/containerd/s/4706e8cc59a1b4742c4a0f9f8b22127b22ea5b462e2a4377e88ad68e82894722" protocol=ttrpc version=3 Nov 24 00:23:56.831068 systemd[1]: Started 
cri-containerd-8947ee35b67cb6f0770d45f6ab48a7410fdd655c3fd094efda3e5374e039987e.scope - libcontainer container 8947ee35b67cb6f0770d45f6ab48a7410fdd655c3fd094efda3e5374e039987e. Nov 24 00:23:56.866310 containerd[1577]: time="2025-11-24T00:23:56.866265440Z" level=info msg="StartContainer for \"8947ee35b67cb6f0770d45f6ab48a7410fdd655c3fd094efda3e5374e039987e\" returns successfully" Nov 24 00:24:02.592289 sudo[1782]: pam_unix(sudo:session): session closed for user root Nov 24 00:24:02.593772 sshd[1781]: Connection closed by 10.0.0.1 port 35488 Nov 24 00:24:02.594352 sshd-session[1778]: pam_unix(sshd:session): session closed for user core Nov 24 00:24:02.600556 systemd[1]: sshd@6-10.0.0.104:22-10.0.0.1:35488.service: Deactivated successfully. Nov 24 00:24:02.609015 systemd[1]: session-7.scope: Deactivated successfully. Nov 24 00:24:02.609324 systemd[1]: session-7.scope: Consumed 5.837s CPU time, 224.5M memory peak. Nov 24 00:24:02.611912 systemd-logind[1539]: Session 7 logged out. Waiting for processes to exit. Nov 24 00:24:02.613957 systemd-logind[1539]: Removed session 7. Nov 24 00:24:06.986683 kubelet[2737]: I1124 00:24:06.986601 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-4b46l" podStartSLOduration=11.627045949 podStartE2EDuration="13.986540359s" podCreationTimestamp="2025-11-24 00:23:53 +0000 UTC" firstStartedPulling="2025-11-24 00:23:54.404108352 +0000 UTC m=+6.681148111" lastFinishedPulling="2025-11-24 00:23:56.763602762 +0000 UTC m=+9.040642521" observedRunningTime="2025-11-24 00:23:57.86068085 +0000 UTC m=+10.137720609" watchObservedRunningTime="2025-11-24 00:24:06.986540359 +0000 UTC m=+19.263580109" Nov 24 00:24:07.063252 systemd[1]: Created slice kubepods-besteffort-pod874e6e4c_2d58_4a5e_a40c_dff189a4fd82.slice - libcontainer container kubepods-besteffort-pod874e6e4c_2d58_4a5e_a40c_dff189a4fd82.slice. 
Nov 24 00:24:07.122541 kubelet[2737]: I1124 00:24:07.122489 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/874e6e4c-2d58-4a5e-a40c-dff189a4fd82-typha-certs\") pod \"calico-typha-94564fcb9-4cmsw\" (UID: \"874e6e4c-2d58-4a5e-a40c-dff189a4fd82\") " pod="calico-system/calico-typha-94564fcb9-4cmsw" Nov 24 00:24:07.122541 kubelet[2737]: I1124 00:24:07.122532 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/874e6e4c-2d58-4a5e-a40c-dff189a4fd82-tigera-ca-bundle\") pod \"calico-typha-94564fcb9-4cmsw\" (UID: \"874e6e4c-2d58-4a5e-a40c-dff189a4fd82\") " pod="calico-system/calico-typha-94564fcb9-4cmsw" Nov 24 00:24:07.122541 kubelet[2737]: I1124 00:24:07.122557 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f78d2\" (UniqueName: \"kubernetes.io/projected/874e6e4c-2d58-4a5e-a40c-dff189a4fd82-kube-api-access-f78d2\") pod \"calico-typha-94564fcb9-4cmsw\" (UID: \"874e6e4c-2d58-4a5e-a40c-dff189a4fd82\") " pod="calico-system/calico-typha-94564fcb9-4cmsw" Nov 24 00:24:07.158887 systemd[1]: Created slice kubepods-besteffort-pod4d56c4f5_849f_4494_93e4_3a2a92624348.slice - libcontainer container kubepods-besteffort-pod4d56c4f5_849f_4494_93e4_3a2a92624348.slice. 
Nov 24 00:24:07.224704 kubelet[2737]: I1124 00:24:07.223513 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4d56c4f5-849f-4494-93e4-3a2a92624348-cni-bin-dir\") pod \"calico-node-mtstl\" (UID: \"4d56c4f5-849f-4494-93e4-3a2a92624348\") " pod="calico-system/calico-node-mtstl" Nov 24 00:24:07.224704 kubelet[2737]: I1124 00:24:07.223558 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4d56c4f5-849f-4494-93e4-3a2a92624348-var-run-calico\") pod \"calico-node-mtstl\" (UID: \"4d56c4f5-849f-4494-93e4-3a2a92624348\") " pod="calico-system/calico-node-mtstl" Nov 24 00:24:07.224704 kubelet[2737]: I1124 00:24:07.223590 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d56c4f5-849f-4494-93e4-3a2a92624348-xtables-lock\") pod \"calico-node-mtstl\" (UID: \"4d56c4f5-849f-4494-93e4-3a2a92624348\") " pod="calico-system/calico-node-mtstl" Nov 24 00:24:07.224704 kubelet[2737]: I1124 00:24:07.223606 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v25zt\" (UniqueName: \"kubernetes.io/projected/4d56c4f5-849f-4494-93e4-3a2a92624348-kube-api-access-v25zt\") pod \"calico-node-mtstl\" (UID: \"4d56c4f5-849f-4494-93e4-3a2a92624348\") " pod="calico-system/calico-node-mtstl" Nov 24 00:24:07.224704 kubelet[2737]: I1124 00:24:07.223628 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4d56c4f5-849f-4494-93e4-3a2a92624348-cni-net-dir\") pod \"calico-node-mtstl\" (UID: \"4d56c4f5-849f-4494-93e4-3a2a92624348\") " pod="calico-system/calico-node-mtstl" Nov 24 00:24:07.224965 kubelet[2737]: I1124 00:24:07.223653 2737 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4d56c4f5-849f-4494-93e4-3a2a92624348-cni-log-dir\") pod \"calico-node-mtstl\" (UID: \"4d56c4f5-849f-4494-93e4-3a2a92624348\") " pod="calico-system/calico-node-mtstl" Nov 24 00:24:07.224965 kubelet[2737]: I1124 00:24:07.223674 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4d56c4f5-849f-4494-93e4-3a2a92624348-flexvol-driver-host\") pod \"calico-node-mtstl\" (UID: \"4d56c4f5-849f-4494-93e4-3a2a92624348\") " pod="calico-system/calico-node-mtstl" Nov 24 00:24:07.224965 kubelet[2737]: I1124 00:24:07.223688 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4d56c4f5-849f-4494-93e4-3a2a92624348-policysync\") pod \"calico-node-mtstl\" (UID: \"4d56c4f5-849f-4494-93e4-3a2a92624348\") " pod="calico-system/calico-node-mtstl" Nov 24 00:24:07.224965 kubelet[2737]: I1124 00:24:07.223704 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d56c4f5-849f-4494-93e4-3a2a92624348-lib-modules\") pod \"calico-node-mtstl\" (UID: \"4d56c4f5-849f-4494-93e4-3a2a92624348\") " pod="calico-system/calico-node-mtstl" Nov 24 00:24:07.224965 kubelet[2737]: I1124 00:24:07.223726 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4d56c4f5-849f-4494-93e4-3a2a92624348-node-certs\") pod \"calico-node-mtstl\" (UID: \"4d56c4f5-849f-4494-93e4-3a2a92624348\") " pod="calico-system/calico-node-mtstl" Nov 24 00:24:07.225102 kubelet[2737]: I1124 00:24:07.223739 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d56c4f5-849f-4494-93e4-3a2a92624348-tigera-ca-bundle\") pod \"calico-node-mtstl\" (UID: \"4d56c4f5-849f-4494-93e4-3a2a92624348\") " pod="calico-system/calico-node-mtstl" Nov 24 00:24:07.225102 kubelet[2737]: I1124 00:24:07.223753 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4d56c4f5-849f-4494-93e4-3a2a92624348-var-lib-calico\") pod \"calico-node-mtstl\" (UID: \"4d56c4f5-849f-4494-93e4-3a2a92624348\") " pod="calico-system/calico-node-mtstl" Nov 24 00:24:07.327857 kubelet[2737]: E1124 00:24:07.327803 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.327857 kubelet[2737]: W1124 00:24:07.327830 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.327857 kubelet[2737]: E1124 00:24:07.327853 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.328894 kubelet[2737]: E1124 00:24:07.328774 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.328894 kubelet[2737]: W1124 00:24:07.328787 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.328894 kubelet[2737]: E1124 00:24:07.328800 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.334475 kubelet[2737]: E1124 00:24:07.334447 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.334475 kubelet[2737]: W1124 00:24:07.334461 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.334475 kubelet[2737]: E1124 00:24:07.334471 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.372472 kubelet[2737]: E1124 00:24:07.372208 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:07.373578 containerd[1577]: time="2025-11-24T00:24:07.373529192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-94564fcb9-4cmsw,Uid:874e6e4c-2d58-4a5e-a40c-dff189a4fd82,Namespace:calico-system,Attempt:0,}" Nov 24 00:24:07.400199 kubelet[2737]: E1124 00:24:07.400156 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hmjpm" podUID="50e0737c-0da4-4ca3-bede-949a700e86ed" Nov 24 00:24:07.406114 kubelet[2737]: E1124 00:24:07.406086 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.406114 kubelet[2737]: W1124 00:24:07.406109 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not 
found in $PATH, output: "" Nov 24 00:24:07.406245 kubelet[2737]: E1124 00:24:07.406130 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.406410 kubelet[2737]: E1124 00:24:07.406396 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.406410 kubelet[2737]: W1124 00:24:07.406407 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.406500 kubelet[2737]: E1124 00:24:07.406417 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.406584 kubelet[2737]: E1124 00:24:07.406572 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.406584 kubelet[2737]: W1124 00:24:07.406582 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.406584 kubelet[2737]: E1124 00:24:07.406590 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.406864 kubelet[2737]: E1124 00:24:07.406851 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.406864 kubelet[2737]: W1124 00:24:07.406861 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.406937 kubelet[2737]: E1124 00:24:07.406872 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.407104 kubelet[2737]: E1124 00:24:07.407091 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.407104 kubelet[2737]: W1124 00:24:07.407101 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.407187 kubelet[2737]: E1124 00:24:07.407109 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.407313 kubelet[2737]: E1124 00:24:07.407278 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.407313 kubelet[2737]: W1124 00:24:07.407289 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.407313 kubelet[2737]: E1124 00:24:07.407297 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.407484 kubelet[2737]: E1124 00:24:07.407469 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.407484 kubelet[2737]: W1124 00:24:07.407481 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.407555 kubelet[2737]: E1124 00:24:07.407489 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.407679 kubelet[2737]: E1124 00:24:07.407650 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.407679 kubelet[2737]: W1124 00:24:07.407663 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.407679 kubelet[2737]: E1124 00:24:07.407671 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.407841 kubelet[2737]: E1124 00:24:07.407827 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.407841 kubelet[2737]: W1124 00:24:07.407839 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.407938 kubelet[2737]: E1124 00:24:07.407847 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.408024 kubelet[2737]: E1124 00:24:07.408011 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.408024 kubelet[2737]: W1124 00:24:07.408021 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.408092 kubelet[2737]: E1124 00:24:07.408031 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.408207 kubelet[2737]: E1124 00:24:07.408191 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.408207 kubelet[2737]: W1124 00:24:07.408201 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.408258 kubelet[2737]: E1124 00:24:07.408209 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.408381 kubelet[2737]: E1124 00:24:07.408368 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.408381 kubelet[2737]: W1124 00:24:07.408378 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.408452 kubelet[2737]: E1124 00:24:07.408387 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.408659 kubelet[2737]: E1124 00:24:07.408604 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.408659 kubelet[2737]: W1124 00:24:07.408646 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.408725 kubelet[2737]: E1124 00:24:07.408657 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.408905 kubelet[2737]: E1124 00:24:07.408891 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.408905 kubelet[2737]: W1124 00:24:07.408901 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.408905 kubelet[2737]: E1124 00:24:07.408909 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.409216 kubelet[2737]: E1124 00:24:07.409184 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.409216 kubelet[2737]: W1124 00:24:07.409211 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.409285 kubelet[2737]: E1124 00:24:07.409241 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.409528 kubelet[2737]: E1124 00:24:07.409509 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.409528 kubelet[2737]: W1124 00:24:07.409523 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.409606 kubelet[2737]: E1124 00:24:07.409534 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.409787 kubelet[2737]: E1124 00:24:07.409768 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.409787 kubelet[2737]: W1124 00:24:07.409782 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.409867 kubelet[2737]: E1124 00:24:07.409794 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.410047 kubelet[2737]: E1124 00:24:07.410033 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.410047 kubelet[2737]: W1124 00:24:07.410046 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.410113 kubelet[2737]: E1124 00:24:07.410057 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.410257 kubelet[2737]: E1124 00:24:07.410236 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.410257 kubelet[2737]: W1124 00:24:07.410247 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.410257 kubelet[2737]: E1124 00:24:07.410255 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 24 00:24:07.410441 kubelet[2737]: E1124 00:24:07.410430 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 00:24:07.410473 kubelet[2737]: W1124 00:24:07.410441 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 00:24:07.410473 kubelet[2737]: E1124 00:24:07.410449 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 00:24:07.420494 containerd[1577]: time="2025-11-24T00:24:07.420339216Z" level=info msg="connecting to shim 5af6307ce9e6bc07d5983af80c60c42f8cd4b46a5f1f886fedd49c23c6527d8e" address="unix:///run/containerd/s/ee166c997f5f80c39464bb262f515ea6244e53a792026d35c5b11f9ea5dadbc2" namespace=k8s.io protocol=ttrpc version=3
Nov 24 00:24:07.426453 kubelet[2737]: E1124 00:24:07.426355 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 00:24:07.426453 kubelet[2737]: W1124 00:24:07.426387 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 00:24:07.426453 kubelet[2737]: E1124 00:24:07.426412 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.426453 kubelet[2737]: I1124 00:24:07.426452 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/50e0737c-0da4-4ca3-bede-949a700e86ed-varrun\") pod \"csi-node-driver-hmjpm\" (UID: \"50e0737c-0da4-4ca3-bede-949a700e86ed\") " pod="calico-system/csi-node-driver-hmjpm" Nov 24 00:24:07.426952 kubelet[2737]: E1124 00:24:07.426895 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.426952 kubelet[2737]: W1124 00:24:07.426938 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.427143 kubelet[2737]: E1124 00:24:07.426968 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.427143 kubelet[2737]: I1124 00:24:07.427028 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/50e0737c-0da4-4ca3-bede-949a700e86ed-socket-dir\") pod \"csi-node-driver-hmjpm\" (UID: \"50e0737c-0da4-4ca3-bede-949a700e86ed\") " pod="calico-system/csi-node-driver-hmjpm" Nov 24 00:24:07.427465 kubelet[2737]: E1124 00:24:07.427417 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.427465 kubelet[2737]: W1124 00:24:07.427441 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.427465 kubelet[2737]: E1124 00:24:07.427452 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.427759 kubelet[2737]: E1124 00:24:07.427741 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.427759 kubelet[2737]: W1124 00:24:07.427755 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.427954 kubelet[2737]: E1124 00:24:07.427764 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.428025 kubelet[2737]: E1124 00:24:07.428012 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.428025 kubelet[2737]: W1124 00:24:07.428024 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.428184 kubelet[2737]: E1124 00:24:07.428032 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.428277 kubelet[2737]: E1124 00:24:07.428257 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.428277 kubelet[2737]: W1124 00:24:07.428270 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.428277 kubelet[2737]: E1124 00:24:07.428279 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.428624 kubelet[2737]: E1124 00:24:07.428579 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.428624 kubelet[2737]: W1124 00:24:07.428590 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.428624 kubelet[2737]: E1124 00:24:07.428598 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.428814 kubelet[2737]: I1124 00:24:07.428628 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50e0737c-0da4-4ca3-bede-949a700e86ed-kubelet-dir\") pod \"csi-node-driver-hmjpm\" (UID: \"50e0737c-0da4-4ca3-bede-949a700e86ed\") " pod="calico-system/csi-node-driver-hmjpm" Nov 24 00:24:07.428975 kubelet[2737]: E1124 00:24:07.428884 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.428975 kubelet[2737]: W1124 00:24:07.428899 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.428975 kubelet[2737]: E1124 00:24:07.428955 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.428975 kubelet[2737]: I1124 00:24:07.428974 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/50e0737c-0da4-4ca3-bede-949a700e86ed-registration-dir\") pod \"csi-node-driver-hmjpm\" (UID: \"50e0737c-0da4-4ca3-bede-949a700e86ed\") " pod="calico-system/csi-node-driver-hmjpm" Nov 24 00:24:07.429307 kubelet[2737]: E1124 00:24:07.429275 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.429307 kubelet[2737]: W1124 00:24:07.429289 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.429307 kubelet[2737]: E1124 00:24:07.429299 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.429626 kubelet[2737]: E1124 00:24:07.429540 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.429626 kubelet[2737]: W1124 00:24:07.429551 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.429626 kubelet[2737]: E1124 00:24:07.429560 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.429798 kubelet[2737]: E1124 00:24:07.429764 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.429798 kubelet[2737]: W1124 00:24:07.429775 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.429798 kubelet[2737]: E1124 00:24:07.429783 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.430113 kubelet[2737]: E1124 00:24:07.430081 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.430113 kubelet[2737]: W1124 00:24:07.430110 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.430277 kubelet[2737]: E1124 00:24:07.430137 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.430427 kubelet[2737]: E1124 00:24:07.430406 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.430427 kubelet[2737]: W1124 00:24:07.430424 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.430599 kubelet[2737]: E1124 00:24:07.430436 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.430599 kubelet[2737]: I1124 00:24:07.430482 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljg47\" (UniqueName: \"kubernetes.io/projected/50e0737c-0da4-4ca3-bede-949a700e86ed-kube-api-access-ljg47\") pod \"csi-node-driver-hmjpm\" (UID: \"50e0737c-0da4-4ca3-bede-949a700e86ed\") " pod="calico-system/csi-node-driver-hmjpm" Nov 24 00:24:07.430751 kubelet[2737]: E1124 00:24:07.430725 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.430751 kubelet[2737]: W1124 00:24:07.430747 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.430800 kubelet[2737]: E1124 00:24:07.430763 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 24 00:24:07.431028 kubelet[2737]: E1124 00:24:07.430990 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 00:24:07.431028 kubelet[2737]: W1124 00:24:07.431005 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 00:24:07.431028 kubelet[2737]: E1124 00:24:07.431014 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 00:24:07.456229 systemd[1]: Started cri-containerd-5af6307ce9e6bc07d5983af80c60c42f8cd4b46a5f1f886fedd49c23c6527d8e.scope - libcontainer container 5af6307ce9e6bc07d5983af80c60c42f8cd4b46a5f1f886fedd49c23c6527d8e.
Nov 24 00:24:07.463039 kubelet[2737]: E1124 00:24:07.463006 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:24:07.463761 containerd[1577]: time="2025-11-24T00:24:07.463729269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mtstl,Uid:4d56c4f5-849f-4494-93e4-3a2a92624348,Namespace:calico-system,Attempt:0,}"
Nov 24 00:24:07.494205 containerd[1577]: time="2025-11-24T00:24:07.494151275Z" level=info msg="connecting to shim 28aeca98303edc7c0e9e40ba04cc76ca44b1308b48087bf74d1ab9a0d03da771" address="unix:///run/containerd/s/727a91548e38ea55d0e11b5d697007ebe900657695d1d9e1c35552d95b425ce2" namespace=k8s.io protocol=ttrpc version=3
Nov 24 00:24:07.532009 kubelet[2737]: E1124 00:24:07.531952 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 00:24:07.532009 kubelet[2737]: W1124 00:24:07.531977 2737 driver-call.go:149] 
FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.532009 kubelet[2737]: E1124 00:24:07.531998 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.532269 kubelet[2737]: E1124 00:24:07.532233 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.532269 kubelet[2737]: W1124 00:24:07.532242 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.532269 kubelet[2737]: E1124 00:24:07.532250 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.532507 kubelet[2737]: E1124 00:24:07.532457 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.532507 kubelet[2737]: W1124 00:24:07.532466 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.532507 kubelet[2737]: E1124 00:24:07.532474 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.532677 kubelet[2737]: E1124 00:24:07.532653 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.532677 kubelet[2737]: W1124 00:24:07.532666 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.532677 kubelet[2737]: E1124 00:24:07.532674 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.532949 kubelet[2737]: E1124 00:24:07.532884 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.532949 kubelet[2737]: W1124 00:24:07.532898 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.532949 kubelet[2737]: E1124 00:24:07.532906 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.533170 kubelet[2737]: E1124 00:24:07.533140 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.533170 kubelet[2737]: W1124 00:24:07.533154 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.533170 kubelet[2737]: E1124 00:24:07.533162 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.533390 kubelet[2737]: E1124 00:24:07.533356 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.533390 kubelet[2737]: W1124 00:24:07.533374 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.533390 kubelet[2737]: E1124 00:24:07.533382 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.533589 kubelet[2737]: E1124 00:24:07.533567 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.533589 kubelet[2737]: W1124 00:24:07.533578 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.533589 kubelet[2737]: E1124 00:24:07.533587 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.533756 kubelet[2737]: E1124 00:24:07.533737 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.533756 kubelet[2737]: W1124 00:24:07.533749 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.533756 kubelet[2737]: E1124 00:24:07.533756 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 24 00:24:07.534099 kubelet[2737]: E1124 00:24:07.534067 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 00:24:07.534099 kubelet[2737]: W1124 00:24:07.534080 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 00:24:07.534099 kubelet[2737]: E1124 00:24:07.534089 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 00:24:07.534297 kubelet[2737]: E1124 00:24:07.534289 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 00:24:07.534320 kubelet[2737]: W1124 00:24:07.534297 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 00:24:07.534320 kubelet[2737]: E1124 00:24:07.534305 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 00:24:07.534521 systemd[1]: Started cri-containerd-28aeca98303edc7c0e9e40ba04cc76ca44b1308b48087bf74d1ab9a0d03da771.scope - libcontainer container 28aeca98303edc7c0e9e40ba04cc76ca44b1308b48087bf74d1ab9a0d03da771.
Nov 24 00:24:07.534620 kubelet[2737]: E1124 00:24:07.534536 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.534620 kubelet[2737]: W1124 00:24:07.534544 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.534620 kubelet[2737]: E1124 00:24:07.534552 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.534822 kubelet[2737]: E1124 00:24:07.534808 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.534822 kubelet[2737]: W1124 00:24:07.534821 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.534886 kubelet[2737]: E1124 00:24:07.534830 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.535059 kubelet[2737]: E1124 00:24:07.535045 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.535059 kubelet[2737]: W1124 00:24:07.535055 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.535174 kubelet[2737]: E1124 00:24:07.535063 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.535257 kubelet[2737]: E1124 00:24:07.535243 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.535257 kubelet[2737]: W1124 00:24:07.535253 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.535333 kubelet[2737]: E1124 00:24:07.535261 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.535490 kubelet[2737]: E1124 00:24:07.535477 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.535490 kubelet[2737]: W1124 00:24:07.535487 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.535537 kubelet[2737]: E1124 00:24:07.535496 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.536025 kubelet[2737]: E1124 00:24:07.535977 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.536025 kubelet[2737]: W1124 00:24:07.535995 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.536025 kubelet[2737]: E1124 00:24:07.536004 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.536423 kubelet[2737]: E1124 00:24:07.536407 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.536423 kubelet[2737]: W1124 00:24:07.536419 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.536485 kubelet[2737]: E1124 00:24:07.536429 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.536907 kubelet[2737]: E1124 00:24:07.536892 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.536907 kubelet[2737]: W1124 00:24:07.536903 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.536986 kubelet[2737]: E1124 00:24:07.536912 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.537983 kubelet[2737]: E1124 00:24:07.537555 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.537983 kubelet[2737]: W1124 00:24:07.537567 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.537983 kubelet[2737]: E1124 00:24:07.537576 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.537983 kubelet[2737]: E1124 00:24:07.537904 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.537983 kubelet[2737]: W1124 00:24:07.537942 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.537983 kubelet[2737]: E1124 00:24:07.537953 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.538675 kubelet[2737]: E1124 00:24:07.538655 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.538675 kubelet[2737]: W1124 00:24:07.538668 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.538675 kubelet[2737]: E1124 00:24:07.538677 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.539433 kubelet[2737]: E1124 00:24:07.539125 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.539433 kubelet[2737]: W1124 00:24:07.539161 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.539433 kubelet[2737]: E1124 00:24:07.539196 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.539683 kubelet[2737]: E1124 00:24:07.539667 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.539817 kubelet[2737]: W1124 00:24:07.539743 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.539893 kubelet[2737]: E1124 00:24:07.539878 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.540289 kubelet[2737]: E1124 00:24:07.540272 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.540377 kubelet[2737]: W1124 00:24:07.540363 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.540445 kubelet[2737]: E1124 00:24:07.540431 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:07.552813 kubelet[2737]: E1124 00:24:07.552768 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:07.552813 kubelet[2737]: W1124 00:24:07.552789 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:07.552813 kubelet[2737]: E1124 00:24:07.552808 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:07.562898 containerd[1577]: time="2025-11-24T00:24:07.562794801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-94564fcb9-4cmsw,Uid:874e6e4c-2d58-4a5e-a40c-dff189a4fd82,Namespace:calico-system,Attempt:0,} returns sandbox id \"5af6307ce9e6bc07d5983af80c60c42f8cd4b46a5f1f886fedd49c23c6527d8e\"" Nov 24 00:24:07.563731 kubelet[2737]: E1124 00:24:07.563703 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:07.565646 containerd[1577]: time="2025-11-24T00:24:07.565549227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 24 00:24:07.571615 containerd[1577]: time="2025-11-24T00:24:07.571559615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mtstl,Uid:4d56c4f5-849f-4494-93e4-3a2a92624348,Namespace:calico-system,Attempt:0,} returns sandbox id \"28aeca98303edc7c0e9e40ba04cc76ca44b1308b48087bf74d1ab9a0d03da771\"" Nov 24 00:24:07.572465 kubelet[2737]: E1124 00:24:07.572446 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:08.818295 kubelet[2737]: E1124 00:24:08.818205 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hmjpm" podUID="50e0737c-0da4-4ca3-bede-949a700e86ed" Nov 24 00:24:08.973479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1871502336.mount: Deactivated successfully. 
Nov 24 00:24:09.988751 containerd[1577]: time="2025-11-24T00:24:09.988700379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:24:09.989551 containerd[1577]: time="2025-11-24T00:24:09.989522616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 24 00:24:09.990845 containerd[1577]: time="2025-11-24T00:24:09.990810810Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:24:09.992806 containerd[1577]: time="2025-11-24T00:24:09.992783292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:24:09.993366 containerd[1577]: time="2025-11-24T00:24:09.993339269Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.427746559s" Nov 24 00:24:09.993404 containerd[1577]: time="2025-11-24T00:24:09.993364937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 24 00:24:09.994545 containerd[1577]: time="2025-11-24T00:24:09.994525401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 24 00:24:10.008638 containerd[1577]: time="2025-11-24T00:24:10.008591551Z" level=info msg="CreateContainer within sandbox \"5af6307ce9e6bc07d5983af80c60c42f8cd4b46a5f1f886fedd49c23c6527d8e\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 24 00:24:10.019233 containerd[1577]: time="2025-11-24T00:24:10.019193711Z" level=info msg="Container 17c11997bc46c3e1f505f7289ad03e3d959e99a4a1544d475eeaa72ff9acacd8: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:24:10.022174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2721271794.mount: Deactivated successfully. Nov 24 00:24:10.028550 containerd[1577]: time="2025-11-24T00:24:10.028513969Z" level=info msg="CreateContainer within sandbox \"5af6307ce9e6bc07d5983af80c60c42f8cd4b46a5f1f886fedd49c23c6527d8e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"17c11997bc46c3e1f505f7289ad03e3d959e99a4a1544d475eeaa72ff9acacd8\"" Nov 24 00:24:10.029017 containerd[1577]: time="2025-11-24T00:24:10.028995164Z" level=info msg="StartContainer for \"17c11997bc46c3e1f505f7289ad03e3d959e99a4a1544d475eeaa72ff9acacd8\"" Nov 24 00:24:10.030072 containerd[1577]: time="2025-11-24T00:24:10.030037476Z" level=info msg="connecting to shim 17c11997bc46c3e1f505f7289ad03e3d959e99a4a1544d475eeaa72ff9acacd8" address="unix:///run/containerd/s/ee166c997f5f80c39464bb262f515ea6244e53a792026d35c5b11f9ea5dadbc2" protocol=ttrpc version=3 Nov 24 00:24:10.055066 systemd[1]: Started cri-containerd-17c11997bc46c3e1f505f7289ad03e3d959e99a4a1544d475eeaa72ff9acacd8.scope - libcontainer container 17c11997bc46c3e1f505f7289ad03e3d959e99a4a1544d475eeaa72ff9acacd8. 
Nov 24 00:24:10.108826 containerd[1577]: time="2025-11-24T00:24:10.108785794Z" level=info msg="StartContainer for \"17c11997bc46c3e1f505f7289ad03e3d959e99a4a1544d475eeaa72ff9acacd8\" returns successfully" Nov 24 00:24:10.818256 kubelet[2737]: E1124 00:24:10.818189 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hmjpm" podUID="50e0737c-0da4-4ca3-bede-949a700e86ed" Nov 24 00:24:10.881201 kubelet[2737]: E1124 00:24:10.881148 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:10.890418 kubelet[2737]: I1124 00:24:10.890363 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-94564fcb9-4cmsw" podStartSLOduration=2.461418365 podStartE2EDuration="4.890351899s" podCreationTimestamp="2025-11-24 00:24:06 +0000 UTC" firstStartedPulling="2025-11-24 00:24:07.565147742 +0000 UTC m=+19.842187501" lastFinishedPulling="2025-11-24 00:24:09.994081276 +0000 UTC m=+22.271121035" observedRunningTime="2025-11-24 00:24:10.889726763 +0000 UTC m=+23.166766522" watchObservedRunningTime="2025-11-24 00:24:10.890351899 +0000 UTC m=+23.167391658" Nov 24 00:24:10.933631 kubelet[2737]: E1124 00:24:10.933589 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.933631 kubelet[2737]: W1124 00:24:10.933613 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.933631 kubelet[2737]: E1124 00:24:10.933635 2737 plugins.go:703] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:10.933848 kubelet[2737]: E1124 00:24:10.933832 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.933848 kubelet[2737]: W1124 00:24:10.933845 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.933902 kubelet[2737]: E1124 00:24:10.933853 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:10.934130 kubelet[2737]: E1124 00:24:10.934113 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.934130 kubelet[2737]: W1124 00:24:10.934125 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.934207 kubelet[2737]: E1124 00:24:10.934135 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:10.934394 kubelet[2737]: E1124 00:24:10.934375 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.934394 kubelet[2737]: W1124 00:24:10.934389 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.934474 kubelet[2737]: E1124 00:24:10.934401 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:10.934679 kubelet[2737]: E1124 00:24:10.934640 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.934679 kubelet[2737]: W1124 00:24:10.934660 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.934679 kubelet[2737]: E1124 00:24:10.934681 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:10.934939 kubelet[2737]: E1124 00:24:10.934899 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.934939 kubelet[2737]: W1124 00:24:10.934913 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.934939 kubelet[2737]: E1124 00:24:10.934942 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:10.935142 kubelet[2737]: E1124 00:24:10.935123 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.935142 kubelet[2737]: W1124 00:24:10.935134 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.935142 kubelet[2737]: E1124 00:24:10.935141 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:10.935321 kubelet[2737]: E1124 00:24:10.935305 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.935355 kubelet[2737]: W1124 00:24:10.935326 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.935355 kubelet[2737]: E1124 00:24:10.935335 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:10.935516 kubelet[2737]: E1124 00:24:10.935491 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.935516 kubelet[2737]: W1124 00:24:10.935501 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.935516 kubelet[2737]: E1124 00:24:10.935508 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:10.935662 kubelet[2737]: E1124 00:24:10.935645 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.935662 kubelet[2737]: W1124 00:24:10.935655 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.935662 kubelet[2737]: E1124 00:24:10.935661 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:10.935812 kubelet[2737]: E1124 00:24:10.935796 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.935812 kubelet[2737]: W1124 00:24:10.935805 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.935812 kubelet[2737]: E1124 00:24:10.935813 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:10.935982 kubelet[2737]: E1124 00:24:10.935965 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.935982 kubelet[2737]: W1124 00:24:10.935975 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.935982 kubelet[2737]: E1124 00:24:10.935983 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:10.936141 kubelet[2737]: E1124 00:24:10.936124 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.936141 kubelet[2737]: W1124 00:24:10.936134 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.936141 kubelet[2737]: E1124 00:24:10.936141 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:10.936302 kubelet[2737]: E1124 00:24:10.936284 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.936302 kubelet[2737]: W1124 00:24:10.936295 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.936302 kubelet[2737]: E1124 00:24:10.936303 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:10.936466 kubelet[2737]: E1124 00:24:10.936449 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.936466 kubelet[2737]: W1124 00:24:10.936459 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.936466 kubelet[2737]: E1124 00:24:10.936467 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:10.955824 kubelet[2737]: E1124 00:24:10.955794 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.955824 kubelet[2737]: W1124 00:24:10.955807 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.955824 kubelet[2737]: E1124 00:24:10.955817 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:10.956096 kubelet[2737]: E1124 00:24:10.956076 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.956096 kubelet[2737]: W1124 00:24:10.956094 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.956196 kubelet[2737]: E1124 00:24:10.956107 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:10.956372 kubelet[2737]: E1124 00:24:10.956358 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.956372 kubelet[2737]: W1124 00:24:10.956367 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.956452 kubelet[2737]: E1124 00:24:10.956376 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:10.956616 kubelet[2737]: E1124 00:24:10.956595 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.956616 kubelet[2737]: W1124 00:24:10.956608 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.956616 kubelet[2737]: E1124 00:24:10.956618 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:10.956825 kubelet[2737]: E1124 00:24:10.956807 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.956825 kubelet[2737]: W1124 00:24:10.956817 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.956825 kubelet[2737]: E1124 00:24:10.956825 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:10.957047 kubelet[2737]: E1124 00:24:10.957026 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.957047 kubelet[2737]: W1124 00:24:10.957040 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.957288 kubelet[2737]: E1124 00:24:10.957052 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:10.957288 kubelet[2737]: E1124 00:24:10.957268 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.957288 kubelet[2737]: W1124 00:24:10.957276 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.957288 kubelet[2737]: E1124 00:24:10.957286 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:10.957575 kubelet[2737]: E1124 00:24:10.957556 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.957575 kubelet[2737]: W1124 00:24:10.957570 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.957643 kubelet[2737]: E1124 00:24:10.957580 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:10.957756 kubelet[2737]: E1124 00:24:10.957741 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.957756 kubelet[2737]: W1124 00:24:10.957751 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.957803 kubelet[2737]: E1124 00:24:10.957758 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:10.957963 kubelet[2737]: E1124 00:24:10.957946 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.957963 kubelet[2737]: W1124 00:24:10.957958 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.958022 kubelet[2737]: E1124 00:24:10.957966 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:10.958127 kubelet[2737]: E1124 00:24:10.958111 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.958127 kubelet[2737]: W1124 00:24:10.958122 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.958177 kubelet[2737]: E1124 00:24:10.958129 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:10.958361 kubelet[2737]: E1124 00:24:10.958328 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.958361 kubelet[2737]: W1124 00:24:10.958341 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.958361 kubelet[2737]: E1124 00:24:10.958352 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:10.958563 kubelet[2737]: E1124 00:24:10.958543 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.958563 kubelet[2737]: W1124 00:24:10.958556 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.958563 kubelet[2737]: E1124 00:24:10.958565 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:10.959011 kubelet[2737]: E1124 00:24:10.958982 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.959011 kubelet[2737]: W1124 00:24:10.959001 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.959011 kubelet[2737]: E1124 00:24:10.959012 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:10.959381 kubelet[2737]: E1124 00:24:10.959267 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.959381 kubelet[2737]: W1124 00:24:10.959281 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.959381 kubelet[2737]: E1124 00:24:10.959293 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:10.959526 kubelet[2737]: E1124 00:24:10.959507 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.959563 kubelet[2737]: W1124 00:24:10.959523 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.959563 kubelet[2737]: E1124 00:24:10.959543 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:10.959795 kubelet[2737]: E1124 00:24:10.959781 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.959795 kubelet[2737]: W1124 00:24:10.959793 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.959844 kubelet[2737]: E1124 00:24:10.959804 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:24:10.960175 kubelet[2737]: E1124 00:24:10.960162 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:24:10.960175 kubelet[2737]: W1124 00:24:10.960172 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:24:10.960242 kubelet[2737]: E1124 00:24:10.960180 2737 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:24:11.321940 containerd[1577]: time="2025-11-24T00:24:11.321874453Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:24:11.322632 containerd[1577]: time="2025-11-24T00:24:11.322600789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 24 00:24:11.323696 containerd[1577]: time="2025-11-24T00:24:11.323663248Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:24:11.325554 containerd[1577]: time="2025-11-24T00:24:11.325523908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:24:11.326064 containerd[1577]: time="2025-11-24T00:24:11.326029269Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.331478009s" Nov 24 00:24:11.326093 containerd[1577]: time="2025-11-24T00:24:11.326064746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 24 00:24:11.330179 containerd[1577]: time="2025-11-24T00:24:11.330152847Z" level=info msg="CreateContainer within sandbox \"28aeca98303edc7c0e9e40ba04cc76ca44b1308b48087bf74d1ab9a0d03da771\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 24 00:24:11.339014 containerd[1577]: time="2025-11-24T00:24:11.338980143Z" level=info msg="Container 96856cffa30908b1e7cd2aa1f500a517520046bbb4df7baaa97a2d48e75e0515: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:24:11.347994 containerd[1577]: time="2025-11-24T00:24:11.347960317Z" level=info msg="CreateContainer within sandbox \"28aeca98303edc7c0e9e40ba04cc76ca44b1308b48087bf74d1ab9a0d03da771\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"96856cffa30908b1e7cd2aa1f500a517520046bbb4df7baaa97a2d48e75e0515\"" Nov 24 00:24:11.348416 containerd[1577]: time="2025-11-24T00:24:11.348386089Z" level=info msg="StartContainer for \"96856cffa30908b1e7cd2aa1f500a517520046bbb4df7baaa97a2d48e75e0515\"" Nov 24 00:24:11.349731 containerd[1577]: time="2025-11-24T00:24:11.349701012Z" level=info msg="connecting to shim 96856cffa30908b1e7cd2aa1f500a517520046bbb4df7baaa97a2d48e75e0515" address="unix:///run/containerd/s/727a91548e38ea55d0e11b5d697007ebe900657695d1d9e1c35552d95b425ce2" protocol=ttrpc version=3 Nov 24 00:24:11.371105 systemd[1]: Started cri-containerd-96856cffa30908b1e7cd2aa1f500a517520046bbb4df7baaa97a2d48e75e0515.scope - libcontainer container 96856cffa30908b1e7cd2aa1f500a517520046bbb4df7baaa97a2d48e75e0515. Nov 24 00:24:11.458742 containerd[1577]: time="2025-11-24T00:24:11.458693126Z" level=info msg="StartContainer for \"96856cffa30908b1e7cd2aa1f500a517520046bbb4df7baaa97a2d48e75e0515\" returns successfully" Nov 24 00:24:11.474062 systemd[1]: cri-containerd-96856cffa30908b1e7cd2aa1f500a517520046bbb4df7baaa97a2d48e75e0515.scope: Deactivated successfully. Nov 24 00:24:11.474544 systemd[1]: cri-containerd-96856cffa30908b1e7cd2aa1f500a517520046bbb4df7baaa97a2d48e75e0515.scope: Consumed 44ms CPU time, 6.2M memory peak, 4.6M written to disk. 
Nov 24 00:24:11.477592 containerd[1577]: time="2025-11-24T00:24:11.477554519Z" level=info msg="received container exit event container_id:\"96856cffa30908b1e7cd2aa1f500a517520046bbb4df7baaa97a2d48e75e0515\" id:\"96856cffa30908b1e7cd2aa1f500a517520046bbb4df7baaa97a2d48e75e0515\" pid:3455 exited_at:{seconds:1763943851 nanos:477144066}" Nov 24 00:24:11.500776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96856cffa30908b1e7cd2aa1f500a517520046bbb4df7baaa97a2d48e75e0515-rootfs.mount: Deactivated successfully. Nov 24 00:24:11.883200 kubelet[2737]: I1124 00:24:11.883152 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 00:24:11.883762 kubelet[2737]: E1124 00:24:11.883407 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:11.883762 kubelet[2737]: E1124 00:24:11.883540 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:12.818589 kubelet[2737]: E1124 00:24:12.818528 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hmjpm" podUID="50e0737c-0da4-4ca3-bede-949a700e86ed" Nov 24 00:24:12.886055 kubelet[2737]: E1124 00:24:12.886013 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:12.886536 containerd[1577]: time="2025-11-24T00:24:12.886478429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 24 00:24:14.818949 kubelet[2737]: E1124 00:24:14.818459 2737 pod_workers.go:1301] "Error 
syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hmjpm" podUID="50e0737c-0da4-4ca3-bede-949a700e86ed" Nov 24 00:24:15.544763 containerd[1577]: time="2025-11-24T00:24:15.544689323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:24:15.545452 containerd[1577]: time="2025-11-24T00:24:15.545419986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 24 00:24:15.546633 containerd[1577]: time="2025-11-24T00:24:15.546573295Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:24:15.548537 containerd[1577]: time="2025-11-24T00:24:15.548494386Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:24:15.549006 containerd[1577]: time="2025-11-24T00:24:15.548973688Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.662468628s" Nov 24 00:24:15.549006 containerd[1577]: time="2025-11-24T00:24:15.548999236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 24 00:24:15.553195 containerd[1577]: time="2025-11-24T00:24:15.553151491Z" 
level=info msg="CreateContainer within sandbox \"28aeca98303edc7c0e9e40ba04cc76ca44b1308b48087bf74d1ab9a0d03da771\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 24 00:24:15.562025 containerd[1577]: time="2025-11-24T00:24:15.561975289Z" level=info msg="Container 1297b292ecffc292f607a3b5ae75d988fa1830003c208134c4d76a3fee4d2d05: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:24:15.571717 containerd[1577]: time="2025-11-24T00:24:15.571652131Z" level=info msg="CreateContainer within sandbox \"28aeca98303edc7c0e9e40ba04cc76ca44b1308b48087bf74d1ab9a0d03da771\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1297b292ecffc292f607a3b5ae75d988fa1830003c208134c4d76a3fee4d2d05\"" Nov 24 00:24:15.572275 containerd[1577]: time="2025-11-24T00:24:15.572233975Z" level=info msg="StartContainer for \"1297b292ecffc292f607a3b5ae75d988fa1830003c208134c4d76a3fee4d2d05\"" Nov 24 00:24:15.574036 containerd[1577]: time="2025-11-24T00:24:15.574008921Z" level=info msg="connecting to shim 1297b292ecffc292f607a3b5ae75d988fa1830003c208134c4d76a3fee4d2d05" address="unix:///run/containerd/s/727a91548e38ea55d0e11b5d697007ebe900657695d1d9e1c35552d95b425ce2" protocol=ttrpc version=3 Nov 24 00:24:15.595057 systemd[1]: Started cri-containerd-1297b292ecffc292f607a3b5ae75d988fa1830003c208134c4d76a3fee4d2d05.scope - libcontainer container 1297b292ecffc292f607a3b5ae75d988fa1830003c208134c4d76a3fee4d2d05. 
Nov 24 00:24:15.977390 containerd[1577]: time="2025-11-24T00:24:15.976994184Z" level=info msg="StartContainer for \"1297b292ecffc292f607a3b5ae75d988fa1830003c208134c4d76a3fee4d2d05\" returns successfully" Nov 24 00:24:16.818044 kubelet[2737]: E1124 00:24:16.817963 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hmjpm" podUID="50e0737c-0da4-4ca3-bede-949a700e86ed" Nov 24 00:24:16.980201 kubelet[2737]: E1124 00:24:16.979912 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:17.025532 containerd[1577]: time="2025-11-24T00:24:17.025474199Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 24 00:24:17.028605 systemd[1]: cri-containerd-1297b292ecffc292f607a3b5ae75d988fa1830003c208134c4d76a3fee4d2d05.scope: Deactivated successfully. Nov 24 00:24:17.029069 systemd[1]: cri-containerd-1297b292ecffc292f607a3b5ae75d988fa1830003c208134c4d76a3fee4d2d05.scope: Consumed 616ms CPU time, 178.2M memory peak, 8K read from disk, 171.3M written to disk. 
Nov 24 00:24:17.030877 containerd[1577]: time="2025-11-24T00:24:17.030843780Z" level=info msg="received container exit event container_id:\"1297b292ecffc292f607a3b5ae75d988fa1830003c208134c4d76a3fee4d2d05\" id:\"1297b292ecffc292f607a3b5ae75d988fa1830003c208134c4d76a3fee4d2d05\" pid:3515 exited_at:{seconds:1763943857 nanos:30617094}" Nov 24 00:24:17.056457 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1297b292ecffc292f607a3b5ae75d988fa1830003c208134c4d76a3fee4d2d05-rootfs.mount: Deactivated successfully. Nov 24 00:24:17.056870 kubelet[2737]: I1124 00:24:17.056797 2737 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 24 00:24:17.351085 systemd[1]: Created slice kubepods-besteffort-pod61ec7850_13cb_49a4_9e27_b7daa1c44ddf.slice - libcontainer container kubepods-besteffort-pod61ec7850_13cb_49a4_9e27_b7daa1c44ddf.slice. Nov 24 00:24:17.360590 systemd[1]: Created slice kubepods-besteffort-pod326ce385_d248_47a6_abc0_0a06e5c39a9a.slice - libcontainer container kubepods-besteffort-pod326ce385_d248_47a6_abc0_0a06e5c39a9a.slice. Nov 24 00:24:17.373001 systemd[1]: Created slice kubepods-besteffort-pod7060cdca_9b38_45cc_ad88_a15dcab99e92.slice - libcontainer container kubepods-besteffort-pod7060cdca_9b38_45cc_ad88_a15dcab99e92.slice. Nov 24 00:24:17.381651 systemd[1]: Created slice kubepods-besteffort-pod0db5d7d1_b0e4_4b2b_9a4d_5b198091ab3c.slice - libcontainer container kubepods-besteffort-pod0db5d7d1_b0e4_4b2b_9a4d_5b198091ab3c.slice. Nov 24 00:24:17.387039 systemd[1]: Created slice kubepods-besteffort-poddaf1acba_9d5d_4e4f_ade2_53de43ab5a20.slice - libcontainer container kubepods-besteffort-poddaf1acba_9d5d_4e4f_ade2_53de43ab5a20.slice. Nov 24 00:24:17.397126 systemd[1]: Created slice kubepods-burstable-pod4f15220c_2dc0_450b_9030_97efbcb4ef00.slice - libcontainer container kubepods-burstable-pod4f15220c_2dc0_450b_9030_97efbcb4ef00.slice. 
Nov 24 00:24:17.399126 kubelet[2737]: I1124 00:24:17.399078 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b085997e-b05a-40ec-affb-73b810925afb-config-volume\") pod \"coredns-674b8bbfcf-mhn4k\" (UID: \"b085997e-b05a-40ec-affb-73b810925afb\") " pod="kube-system/coredns-674b8bbfcf-mhn4k" Nov 24 00:24:17.399336 kubelet[2737]: I1124 00:24:17.399132 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c-config\") pod \"goldmane-666569f655-ds9ng\" (UID: \"0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c\") " pod="calico-system/goldmane-666569f655-ds9ng" Nov 24 00:24:17.399336 kubelet[2737]: I1124 00:24:17.399158 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj22n\" (UniqueName: \"kubernetes.io/projected/7060cdca-9b38-45cc-ad88-a15dcab99e92-kube-api-access-gj22n\") pod \"calico-kube-controllers-6b4597d695-rxcnh\" (UID: \"7060cdca-9b38-45cc-ad88-a15dcab99e92\") " pod="calico-system/calico-kube-controllers-6b4597d695-rxcnh" Nov 24 00:24:17.399336 kubelet[2737]: I1124 00:24:17.399181 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wj2q\" (UniqueName: \"kubernetes.io/projected/61ec7850-13cb-49a4-9e27-b7daa1c44ddf-kube-api-access-7wj2q\") pod \"whisker-86454d8f74-mx7sc\" (UID: \"61ec7850-13cb-49a4-9e27-b7daa1c44ddf\") " pod="calico-system/whisker-86454d8f74-mx7sc" Nov 24 00:24:17.399336 kubelet[2737]: I1124 00:24:17.399201 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g6mn\" (UniqueName: \"kubernetes.io/projected/4f15220c-2dc0-450b-9030-97efbcb4ef00-kube-api-access-4g6mn\") pod \"coredns-674b8bbfcf-8kx96\" (UID: 
\"4f15220c-2dc0-450b-9030-97efbcb4ef00\") " pod="kube-system/coredns-674b8bbfcf-8kx96" Nov 24 00:24:17.399336 kubelet[2737]: I1124 00:24:17.399227 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/daf1acba-9d5d-4e4f-ade2-53de43ab5a20-calico-apiserver-certs\") pod \"calico-apiserver-76968cf4d5-nq647\" (UID: \"daf1acba-9d5d-4e4f-ade2-53de43ab5a20\") " pod="calico-apiserver/calico-apiserver-76968cf4d5-nq647" Nov 24 00:24:17.399485 kubelet[2737]: I1124 00:24:17.399248 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/61ec7850-13cb-49a4-9e27-b7daa1c44ddf-whisker-backend-key-pair\") pod \"whisker-86454d8f74-mx7sc\" (UID: \"61ec7850-13cb-49a4-9e27-b7daa1c44ddf\") " pod="calico-system/whisker-86454d8f74-mx7sc" Nov 24 00:24:17.399485 kubelet[2737]: I1124 00:24:17.399268 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f15220c-2dc0-450b-9030-97efbcb4ef00-config-volume\") pod \"coredns-674b8bbfcf-8kx96\" (UID: \"4f15220c-2dc0-450b-9030-97efbcb4ef00\") " pod="kube-system/coredns-674b8bbfcf-8kx96" Nov 24 00:24:17.399485 kubelet[2737]: I1124 00:24:17.399288 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whjj4\" (UniqueName: \"kubernetes.io/projected/b085997e-b05a-40ec-affb-73b810925afb-kube-api-access-whjj4\") pod \"coredns-674b8bbfcf-mhn4k\" (UID: \"b085997e-b05a-40ec-affb-73b810925afb\") " pod="kube-system/coredns-674b8bbfcf-mhn4k" Nov 24 00:24:17.399485 kubelet[2737]: I1124 00:24:17.399311 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/326ce385-d248-47a6-abc0-0a06e5c39a9a-calico-apiserver-certs\") pod \"calico-apiserver-76968cf4d5-hffj2\" (UID: \"326ce385-d248-47a6-abc0-0a06e5c39a9a\") " pod="calico-apiserver/calico-apiserver-76968cf4d5-hffj2" Nov 24 00:24:17.399485 kubelet[2737]: I1124 00:24:17.399333 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7060cdca-9b38-45cc-ad88-a15dcab99e92-tigera-ca-bundle\") pod \"calico-kube-controllers-6b4597d695-rxcnh\" (UID: \"7060cdca-9b38-45cc-ad88-a15dcab99e92\") " pod="calico-system/calico-kube-controllers-6b4597d695-rxcnh" Nov 24 00:24:17.399636 kubelet[2737]: I1124 00:24:17.399354 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c-goldmane-key-pair\") pod \"goldmane-666569f655-ds9ng\" (UID: \"0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c\") " pod="calico-system/goldmane-666569f655-ds9ng" Nov 24 00:24:17.399636 kubelet[2737]: I1124 00:24:17.399374 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w2k7\" (UniqueName: \"kubernetes.io/projected/daf1acba-9d5d-4e4f-ade2-53de43ab5a20-kube-api-access-9w2k7\") pod \"calico-apiserver-76968cf4d5-nq647\" (UID: \"daf1acba-9d5d-4e4f-ade2-53de43ab5a20\") " pod="calico-apiserver/calico-apiserver-76968cf4d5-nq647" Nov 24 00:24:17.399636 kubelet[2737]: I1124 00:24:17.399393 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61ec7850-13cb-49a4-9e27-b7daa1c44ddf-whisker-ca-bundle\") pod \"whisker-86454d8f74-mx7sc\" (UID: \"61ec7850-13cb-49a4-9e27-b7daa1c44ddf\") " pod="calico-system/whisker-86454d8f74-mx7sc" Nov 24 00:24:17.399636 kubelet[2737]: I1124 00:24:17.399417 2737 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfdvt\" (UniqueName: \"kubernetes.io/projected/0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c-kube-api-access-zfdvt\") pod \"goldmane-666569f655-ds9ng\" (UID: \"0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c\") " pod="calico-system/goldmane-666569f655-ds9ng" Nov 24 00:24:17.399636 kubelet[2737]: I1124 00:24:17.399438 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c-goldmane-ca-bundle\") pod \"goldmane-666569f655-ds9ng\" (UID: \"0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c\") " pod="calico-system/goldmane-666569f655-ds9ng" Nov 24 00:24:17.399778 kubelet[2737]: I1124 00:24:17.399457 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llxvw\" (UniqueName: \"kubernetes.io/projected/326ce385-d248-47a6-abc0-0a06e5c39a9a-kube-api-access-llxvw\") pod \"calico-apiserver-76968cf4d5-hffj2\" (UID: \"326ce385-d248-47a6-abc0-0a06e5c39a9a\") " pod="calico-apiserver/calico-apiserver-76968cf4d5-hffj2" Nov 24 00:24:17.406133 systemd[1]: Created slice kubepods-burstable-podb085997e_b05a_40ec_affb_73b810925afb.slice - libcontainer container kubepods-burstable-podb085997e_b05a_40ec_affb_73b810925afb.slice. 
Nov 24 00:24:17.654809 containerd[1577]: time="2025-11-24T00:24:17.654666925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86454d8f74-mx7sc,Uid:61ec7850-13cb-49a4-9e27-b7daa1c44ddf,Namespace:calico-system,Attempt:0,}" Nov 24 00:24:17.668637 containerd[1577]: time="2025-11-24T00:24:17.668590904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76968cf4d5-hffj2,Uid:326ce385-d248-47a6-abc0-0a06e5c39a9a,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:24:17.680522 containerd[1577]: time="2025-11-24T00:24:17.680469488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b4597d695-rxcnh,Uid:7060cdca-9b38-45cc-ad88-a15dcab99e92,Namespace:calico-system,Attempt:0,}" Nov 24 00:24:17.687010 containerd[1577]: time="2025-11-24T00:24:17.686289315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ds9ng,Uid:0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c,Namespace:calico-system,Attempt:0,}" Nov 24 00:24:17.696107 containerd[1577]: time="2025-11-24T00:24:17.696061251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76968cf4d5-nq647,Uid:daf1acba-9d5d-4e4f-ade2-53de43ab5a20,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:24:17.705608 kubelet[2737]: E1124 00:24:17.705550 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:17.709805 kubelet[2737]: E1124 00:24:17.709412 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:17.710163 containerd[1577]: time="2025-11-24T00:24:17.710122988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mhn4k,Uid:b085997e-b05a-40ec-affb-73b810925afb,Namespace:kube-system,Attempt:0,}" Nov 24 00:24:17.712788 containerd[1577]: 
time="2025-11-24T00:24:17.711700774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8kx96,Uid:4f15220c-2dc0-450b-9030-97efbcb4ef00,Namespace:kube-system,Attempt:0,}" Nov 24 00:24:17.843174 containerd[1577]: time="2025-11-24T00:24:17.843084732Z" level=error msg="Failed to destroy network for sandbox \"ab4525521ef6f305d94d9f6a60e13ae062a1ef36b0596fffc344d364cb3ab3d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:24:17.866122 containerd[1577]: time="2025-11-24T00:24:17.866055053Z" level=error msg="Failed to destroy network for sandbox \"b64e577e4f8d5379fa3dbc8a5d0e7989e39f0172dff2501ff1f7e9b262995006\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:24:17.870805 containerd[1577]: time="2025-11-24T00:24:17.870735568Z" level=error msg="Failed to destroy network for sandbox \"5d1c3e96ee13f634d3621b0e42430f677bac2c4020721417575d4a3489958335\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:24:17.890268 containerd[1577]: time="2025-11-24T00:24:17.890198958Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86454d8f74-mx7sc,Uid:61ec7850-13cb-49a4-9e27-b7daa1c44ddf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab4525521ef6f305d94d9f6a60e13ae062a1ef36b0596fffc344d364cb3ab3d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:24:17.890635 containerd[1577]: 
time="2025-11-24T00:24:17.890592687Z" level=error msg="Failed to destroy network for sandbox \"a84617e06265c132a2db3ecd0aaa3b03b762765f01a5f5888c059eda1e6e23be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:24:17.894325 containerd[1577]: time="2025-11-24T00:24:17.890205090Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76968cf4d5-nq647,Uid:daf1acba-9d5d-4e4f-ade2-53de43ab5a20,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b64e577e4f8d5379fa3dbc8a5d0e7989e39f0172dff2501ff1f7e9b262995006\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:24:17.894455 containerd[1577]: time="2025-11-24T00:24:17.890219326Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76968cf4d5-hffj2,Uid:326ce385-d248-47a6-abc0-0a06e5c39a9a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d1c3e96ee13f634d3621b0e42430f677bac2c4020721417575d4a3489958335\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:24:17.894591 containerd[1577]: time="2025-11-24T00:24:17.894548843Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b4597d695-rxcnh,Uid:7060cdca-9b38-45cc-ad88-a15dcab99e92,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a84617e06265c132a2db3ecd0aaa3b03b762765f01a5f5888c059eda1e6e23be\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:24:17.900262 kubelet[2737]: E1124 00:24:17.900209 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a84617e06265c132a2db3ecd0aaa3b03b762765f01a5f5888c059eda1e6e23be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:24:17.900262 kubelet[2737]: E1124 00:24:17.900224 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab4525521ef6f305d94d9f6a60e13ae062a1ef36b0596fffc344d364cb3ab3d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:24:17.900684 kubelet[2737]: E1124 00:24:17.900292 2737 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a84617e06265c132a2db3ecd0aaa3b03b762765f01a5f5888c059eda1e6e23be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b4597d695-rxcnh" Nov 24 00:24:17.900684 kubelet[2737]: E1124 00:24:17.900302 2737 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab4525521ef6f305d94d9f6a60e13ae062a1ef36b0596fffc344d364cb3ab3d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-86454d8f74-mx7sc" Nov 24 00:24:17.900684 kubelet[2737]: E1124 00:24:17.900318 2737 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a84617e06265c132a2db3ecd0aaa3b03b762765f01a5f5888c059eda1e6e23be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b4597d695-rxcnh" Nov 24 00:24:17.900684 kubelet[2737]: E1124 00:24:17.900328 2737 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab4525521ef6f305d94d9f6a60e13ae062a1ef36b0596fffc344d364cb3ab3d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-86454d8f74-mx7sc" Nov 24 00:24:17.900788 kubelet[2737]: E1124 00:24:17.900368 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b4597d695-rxcnh_calico-system(7060cdca-9b38-45cc-ad88-a15dcab99e92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b4597d695-rxcnh_calico-system(7060cdca-9b38-45cc-ad88-a15dcab99e92)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a84617e06265c132a2db3ecd0aaa3b03b762765f01a5f5888c059eda1e6e23be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b4597d695-rxcnh" podUID="7060cdca-9b38-45cc-ad88-a15dcab99e92" Nov 24 00:24:17.900788 kubelet[2737]: E1124 00:24:17.900385 2737 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-86454d8f74-mx7sc_calico-system(61ec7850-13cb-49a4-9e27-b7daa1c44ddf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-86454d8f74-mx7sc_calico-system(61ec7850-13cb-49a4-9e27-b7daa1c44ddf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab4525521ef6f305d94d9f6a60e13ae062a1ef36b0596fffc344d364cb3ab3d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-86454d8f74-mx7sc" podUID="61ec7850-13cb-49a4-9e27-b7daa1c44ddf" Nov 24 00:24:17.900788 kubelet[2737]: E1124 00:24:17.900435 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d1c3e96ee13f634d3621b0e42430f677bac2c4020721417575d4a3489958335\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:24:17.900960 kubelet[2737]: E1124 00:24:17.900458 2737 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d1c3e96ee13f634d3621b0e42430f677bac2c4020721417575d4a3489958335\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76968cf4d5-hffj2" Nov 24 00:24:17.900960 kubelet[2737]: E1124 00:24:17.900476 2737 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d1c3e96ee13f634d3621b0e42430f677bac2c4020721417575d4a3489958335\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76968cf4d5-hffj2" Nov 24 00:24:17.900960 kubelet[2737]: E1124 00:24:17.900514 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76968cf4d5-hffj2_calico-apiserver(326ce385-d248-47a6-abc0-0a06e5c39a9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76968cf4d5-hffj2_calico-apiserver(326ce385-d248-47a6-abc0-0a06e5c39a9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d1c3e96ee13f634d3621b0e42430f677bac2c4020721417575d4a3489958335\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76968cf4d5-hffj2" podUID="326ce385-d248-47a6-abc0-0a06e5c39a9a" Nov 24 00:24:17.902128 kubelet[2737]: E1124 00:24:17.901038 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b64e577e4f8d5379fa3dbc8a5d0e7989e39f0172dff2501ff1f7e9b262995006\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:24:17.902128 kubelet[2737]: E1124 00:24:17.901171 2737 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b64e577e4f8d5379fa3dbc8a5d0e7989e39f0172dff2501ff1f7e9b262995006\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76968cf4d5-nq647" Nov 24 00:24:17.902128 kubelet[2737]: E1124 
00:24:17.901202 2737 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b64e577e4f8d5379fa3dbc8a5d0e7989e39f0172dff2501ff1f7e9b262995006\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76968cf4d5-nq647" Nov 24 00:24:17.902209 kubelet[2737]: E1124 00:24:17.901298 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76968cf4d5-nq647_calico-apiserver(daf1acba-9d5d-4e4f-ade2-53de43ab5a20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76968cf4d5-nq647_calico-apiserver(daf1acba-9d5d-4e4f-ade2-53de43ab5a20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b64e577e4f8d5379fa3dbc8a5d0e7989e39f0172dff2501ff1f7e9b262995006\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76968cf4d5-nq647" podUID="daf1acba-9d5d-4e4f-ade2-53de43ab5a20" Nov 24 00:24:17.906026 containerd[1577]: time="2025-11-24T00:24:17.905876041Z" level=error msg="Failed to destroy network for sandbox \"9963138dcdbaf7023e335545b4f223e290e4547b6c18d07e5297964910010ad1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:24:17.908435 containerd[1577]: time="2025-11-24T00:24:17.908371010Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ds9ng,Uid:0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"9963138dcdbaf7023e335545b4f223e290e4547b6c18d07e5297964910010ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:24:17.908794 kubelet[2737]: E1124 00:24:17.908720 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9963138dcdbaf7023e335545b4f223e290e4547b6c18d07e5297964910010ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:24:17.908870 kubelet[2737]: E1124 00:24:17.908852 2737 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9963138dcdbaf7023e335545b4f223e290e4547b6c18d07e5297964910010ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ds9ng" Nov 24 00:24:17.908900 kubelet[2737]: E1124 00:24:17.908873 2737 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9963138dcdbaf7023e335545b4f223e290e4547b6c18d07e5297964910010ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ds9ng" Nov 24 00:24:17.908983 kubelet[2737]: E1124 00:24:17.908958 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-ds9ng_calico-system(0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"goldmane-666569f655-ds9ng_calico-system(0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9963138dcdbaf7023e335545b4f223e290e4547b6c18d07e5297964910010ad1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-ds9ng" podUID="0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c" Nov 24 00:24:17.916978 containerd[1577]: time="2025-11-24T00:24:17.916943861Z" level=error msg="Failed to destroy network for sandbox \"2000d1a76ec910418cee91b6b9d255fbf8cd7e1b453cc7b939af229d2355d2a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:24:17.918406 containerd[1577]: time="2025-11-24T00:24:17.918380471Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mhn4k,Uid:b085997e-b05a-40ec-affb-73b810925afb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2000d1a76ec910418cee91b6b9d255fbf8cd7e1b453cc7b939af229d2355d2a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:24:17.918803 kubelet[2737]: E1124 00:24:17.918759 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2000d1a76ec910418cee91b6b9d255fbf8cd7e1b453cc7b939af229d2355d2a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:24:17.918803 kubelet[2737]: E1124 00:24:17.918815 2737 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2000d1a76ec910418cee91b6b9d255fbf8cd7e1b453cc7b939af229d2355d2a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mhn4k" Nov 24 00:24:17.918987 kubelet[2737]: E1124 00:24:17.918835 2737 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2000d1a76ec910418cee91b6b9d255fbf8cd7e1b453cc7b939af229d2355d2a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mhn4k" Nov 24 00:24:17.918987 kubelet[2737]: E1124 00:24:17.918894 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mhn4k_kube-system(b085997e-b05a-40ec-affb-73b810925afb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mhn4k_kube-system(b085997e-b05a-40ec-affb-73b810925afb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2000d1a76ec910418cee91b6b9d255fbf8cd7e1b453cc7b939af229d2355d2a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mhn4k" podUID="b085997e-b05a-40ec-affb-73b810925afb" Nov 24 00:24:17.931268 containerd[1577]: time="2025-11-24T00:24:17.931091740Z" level=error msg="Failed to destroy network for sandbox \"d10d8501977cef72b19a6bf2609bbab0e88740a8742471c3e6e25e47f4e895cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:24:17.932612 containerd[1577]: time="2025-11-24T00:24:17.932580608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8kx96,Uid:4f15220c-2dc0-450b-9030-97efbcb4ef00,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d10d8501977cef72b19a6bf2609bbab0e88740a8742471c3e6e25e47f4e895cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:24:17.932961 kubelet[2737]: E1124 00:24:17.932900 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d10d8501977cef72b19a6bf2609bbab0e88740a8742471c3e6e25e47f4e895cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:24:17.933032 kubelet[2737]: E1124 00:24:17.932985 2737 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d10d8501977cef72b19a6bf2609bbab0e88740a8742471c3e6e25e47f4e895cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8kx96" Nov 24 00:24:17.933032 kubelet[2737]: E1124 00:24:17.933009 2737 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d10d8501977cef72b19a6bf2609bbab0e88740a8742471c3e6e25e47f4e895cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8kx96" Nov 24 00:24:17.933131 kubelet[2737]: E1124 00:24:17.933068 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-8kx96_kube-system(4f15220c-2dc0-450b-9030-97efbcb4ef00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-8kx96_kube-system(4f15220c-2dc0-450b-9030-97efbcb4ef00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d10d8501977cef72b19a6bf2609bbab0e88740a8742471c3e6e25e47f4e895cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8kx96" podUID="4f15220c-2dc0-450b-9030-97efbcb4ef00" Nov 24 00:24:17.984042 kubelet[2737]: E1124 00:24:17.984006 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:17.985215 containerd[1577]: time="2025-11-24T00:24:17.984887662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 24 00:24:18.824133 systemd[1]: Created slice kubepods-besteffort-pod50e0737c_0da4_4ca3_bede_949a700e86ed.slice - libcontainer container kubepods-besteffort-pod50e0737c_0da4_4ca3_bede_949a700e86ed.slice. 
Nov 24 00:24:18.826636 containerd[1577]: time="2025-11-24T00:24:18.826591660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hmjpm,Uid:50e0737c-0da4-4ca3-bede-949a700e86ed,Namespace:calico-system,Attempt:0,}" Nov 24 00:24:18.882981 containerd[1577]: time="2025-11-24T00:24:18.882891926Z" level=error msg="Failed to destroy network for sandbox \"375734bf7bb5ec120fe4a14869e85a008a719c05ee98dfe7e46088ca72015582\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:24:18.884196 containerd[1577]: time="2025-11-24T00:24:18.884159438Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hmjpm,Uid:50e0737c-0da4-4ca3-bede-949a700e86ed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"375734bf7bb5ec120fe4a14869e85a008a719c05ee98dfe7e46088ca72015582\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:24:18.884486 kubelet[2737]: E1124 00:24:18.884418 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"375734bf7bb5ec120fe4a14869e85a008a719c05ee98dfe7e46088ca72015582\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:24:18.884568 kubelet[2737]: E1124 00:24:18.884483 2737 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"375734bf7bb5ec120fe4a14869e85a008a719c05ee98dfe7e46088ca72015582\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hmjpm" Nov 24 00:24:18.884568 kubelet[2737]: E1124 00:24:18.884507 2737 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"375734bf7bb5ec120fe4a14869e85a008a719c05ee98dfe7e46088ca72015582\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hmjpm" Nov 24 00:24:18.884568 kubelet[2737]: E1124 00:24:18.884550 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hmjpm_calico-system(50e0737c-0da4-4ca3-bede-949a700e86ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hmjpm_calico-system(50e0737c-0da4-4ca3-bede-949a700e86ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"375734bf7bb5ec120fe4a14869e85a008a719c05ee98dfe7e46088ca72015582\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hmjpm" podUID="50e0737c-0da4-4ca3-bede-949a700e86ed" Nov 24 00:24:18.885801 systemd[1]: run-netns-cni\x2d2e89d36d\x2dc241\x2d0349\x2dac8c\x2d9fcbb6113bad.mount: Deactivated successfully. Nov 24 00:24:22.908345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount865353228.mount: Deactivated successfully. 
Nov 24 00:24:24.201521 containerd[1577]: time="2025-11-24T00:24:24.201461670Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:24:24.204944 containerd[1577]: time="2025-11-24T00:24:24.204002461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 24 00:24:24.207216 containerd[1577]: time="2025-11-24T00:24:24.207170350Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:24:24.211402 containerd[1577]: time="2025-11-24T00:24:24.211345090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:24:24.211940 containerd[1577]: time="2025-11-24T00:24:24.211875356Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.226933071s" Nov 24 00:24:24.213193 containerd[1577]: time="2025-11-24T00:24:24.213162123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 24 00:24:24.239532 containerd[1577]: time="2025-11-24T00:24:24.239484594Z" level=info msg="CreateContainer within sandbox \"28aeca98303edc7c0e9e40ba04cc76ca44b1308b48087bf74d1ab9a0d03da771\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 24 00:24:24.261768 containerd[1577]: time="2025-11-24T00:24:24.261714570Z" level=info msg="Container 
8581c2de31cb5afcd7a59861b41e6b56f83c04294f6057448642db2222fcd7e4: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:24:24.274213 containerd[1577]: time="2025-11-24T00:24:24.274168147Z" level=info msg="CreateContainer within sandbox \"28aeca98303edc7c0e9e40ba04cc76ca44b1308b48087bf74d1ab9a0d03da771\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8581c2de31cb5afcd7a59861b41e6b56f83c04294f6057448642db2222fcd7e4\"" Nov 24 00:24:24.274671 containerd[1577]: time="2025-11-24T00:24:24.274645693Z" level=info msg="StartContainer for \"8581c2de31cb5afcd7a59861b41e6b56f83c04294f6057448642db2222fcd7e4\"" Nov 24 00:24:24.276135 containerd[1577]: time="2025-11-24T00:24:24.276102820Z" level=info msg="connecting to shim 8581c2de31cb5afcd7a59861b41e6b56f83c04294f6057448642db2222fcd7e4" address="unix:///run/containerd/s/727a91548e38ea55d0e11b5d697007ebe900657695d1d9e1c35552d95b425ce2" protocol=ttrpc version=3 Nov 24 00:24:24.305123 systemd[1]: Started cri-containerd-8581c2de31cb5afcd7a59861b41e6b56f83c04294f6057448642db2222fcd7e4.scope - libcontainer container 8581c2de31cb5afcd7a59861b41e6b56f83c04294f6057448642db2222fcd7e4. Nov 24 00:24:24.404707 containerd[1577]: time="2025-11-24T00:24:24.404657304Z" level=info msg="StartContainer for \"8581c2de31cb5afcd7a59861b41e6b56f83c04294f6057448642db2222fcd7e4\" returns successfully" Nov 24 00:24:24.476832 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 24 00:24:24.477525 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 24 00:24:24.645381 kubelet[2737]: I1124 00:24:24.644663 2737 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61ec7850-13cb-49a4-9e27-b7daa1c44ddf-whisker-ca-bundle\") pod \"61ec7850-13cb-49a4-9e27-b7daa1c44ddf\" (UID: \"61ec7850-13cb-49a4-9e27-b7daa1c44ddf\") " Nov 24 00:24:24.645381 kubelet[2737]: I1124 00:24:24.645115 2737 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wj2q\" (UniqueName: \"kubernetes.io/projected/61ec7850-13cb-49a4-9e27-b7daa1c44ddf-kube-api-access-7wj2q\") pod \"61ec7850-13cb-49a4-9e27-b7daa1c44ddf\" (UID: \"61ec7850-13cb-49a4-9e27-b7daa1c44ddf\") " Nov 24 00:24:24.645381 kubelet[2737]: I1124 00:24:24.645154 2737 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/61ec7850-13cb-49a4-9e27-b7daa1c44ddf-whisker-backend-key-pair\") pod \"61ec7850-13cb-49a4-9e27-b7daa1c44ddf\" (UID: \"61ec7850-13cb-49a4-9e27-b7daa1c44ddf\") " Nov 24 00:24:24.647399 kubelet[2737]: I1124 00:24:24.647359 2737 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61ec7850-13cb-49a4-9e27-b7daa1c44ddf-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "61ec7850-13cb-49a4-9e27-b7daa1c44ddf" (UID: "61ec7850-13cb-49a4-9e27-b7daa1c44ddf"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 24 00:24:24.651063 kubelet[2737]: I1124 00:24:24.650860 2737 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61ec7850-13cb-49a4-9e27-b7daa1c44ddf-kube-api-access-7wj2q" (OuterVolumeSpecName: "kube-api-access-7wj2q") pod "61ec7850-13cb-49a4-9e27-b7daa1c44ddf" (UID: "61ec7850-13cb-49a4-9e27-b7daa1c44ddf"). InnerVolumeSpecName "kube-api-access-7wj2q". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 24 00:24:24.651063 kubelet[2737]: I1124 00:24:24.651018 2737 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61ec7850-13cb-49a4-9e27-b7daa1c44ddf-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "61ec7850-13cb-49a4-9e27-b7daa1c44ddf" (UID: "61ec7850-13cb-49a4-9e27-b7daa1c44ddf"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 24 00:24:24.745736 kubelet[2737]: I1124 00:24:24.745684 2737 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61ec7850-13cb-49a4-9e27-b7daa1c44ddf-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 24 00:24:24.745736 kubelet[2737]: I1124 00:24:24.745719 2737 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7wj2q\" (UniqueName: \"kubernetes.io/projected/61ec7850-13cb-49a4-9e27-b7daa1c44ddf-kube-api-access-7wj2q\") on node \"localhost\" DevicePath \"\"" Nov 24 00:24:24.745736 kubelet[2737]: I1124 00:24:24.745729 2737 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/61ec7850-13cb-49a4-9e27-b7daa1c44ddf-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 24 00:24:25.001899 kubelet[2737]: E1124 00:24:25.001003 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:25.007280 systemd[1]: Removed slice kubepods-besteffort-pod61ec7850_13cb_49a4_9e27_b7daa1c44ddf.slice - libcontainer container kubepods-besteffort-pod61ec7850_13cb_49a4_9e27_b7daa1c44ddf.slice. 
Nov 24 00:24:25.017115 kubelet[2737]: I1124 00:24:25.017057 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mtstl" podStartSLOduration=1.375838435 podStartE2EDuration="18.017040062s" podCreationTimestamp="2025-11-24 00:24:07 +0000 UTC" firstStartedPulling="2025-11-24 00:24:07.572944884 +0000 UTC m=+19.849984643" lastFinishedPulling="2025-11-24 00:24:24.214146511 +0000 UTC m=+36.491186270" observedRunningTime="2025-11-24 00:24:25.016064991 +0000 UTC m=+37.293104750" watchObservedRunningTime="2025-11-24 00:24:25.017040062 +0000 UTC m=+37.294079821" Nov 24 00:24:25.073811 systemd[1]: Created slice kubepods-besteffort-pod5e0a34e6_44a3_40cc_a842_058cfa585cfe.slice - libcontainer container kubepods-besteffort-pod5e0a34e6_44a3_40cc_a842_058cfa585cfe.slice. Nov 24 00:24:25.149077 kubelet[2737]: I1124 00:24:25.149018 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sxm8\" (UniqueName: \"kubernetes.io/projected/5e0a34e6-44a3-40cc-a842-058cfa585cfe-kube-api-access-4sxm8\") pod \"whisker-cff675d48-d76lw\" (UID: \"5e0a34e6-44a3-40cc-a842-058cfa585cfe\") " pod="calico-system/whisker-cff675d48-d76lw" Nov 24 00:24:25.149077 kubelet[2737]: I1124 00:24:25.149067 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e0a34e6-44a3-40cc-a842-058cfa585cfe-whisker-ca-bundle\") pod \"whisker-cff675d48-d76lw\" (UID: \"5e0a34e6-44a3-40cc-a842-058cfa585cfe\") " pod="calico-system/whisker-cff675d48-d76lw" Nov 24 00:24:25.149260 kubelet[2737]: I1124 00:24:25.149098 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5e0a34e6-44a3-40cc-a842-058cfa585cfe-whisker-backend-key-pair\") pod \"whisker-cff675d48-d76lw\" (UID: 
\"5e0a34e6-44a3-40cc-a842-058cfa585cfe\") " pod="calico-system/whisker-cff675d48-d76lw" Nov 24 00:24:25.222363 systemd[1]: var-lib-kubelet-pods-61ec7850\x2d13cb\x2d49a4\x2d9e27\x2db7daa1c44ddf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7wj2q.mount: Deactivated successfully. Nov 24 00:24:25.222494 systemd[1]: var-lib-kubelet-pods-61ec7850\x2d13cb\x2d49a4\x2d9e27\x2db7daa1c44ddf-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 24 00:24:25.378046 containerd[1577]: time="2025-11-24T00:24:25.377986461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cff675d48-d76lw,Uid:5e0a34e6-44a3-40cc-a842-058cfa585cfe,Namespace:calico-system,Attempt:0,}" Nov 24 00:24:25.531081 systemd-networkd[1470]: calidb6dc29e9e5: Link UP Nov 24 00:24:25.531459 systemd-networkd[1470]: calidb6dc29e9e5: Gained carrier Nov 24 00:24:25.546506 containerd[1577]: 2025-11-24 00:24:25.406 [INFO][3895] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 00:24:25.546506 containerd[1577]: 2025-11-24 00:24:25.426 [INFO][3895] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--cff675d48--d76lw-eth0 whisker-cff675d48- calico-system 5e0a34e6-44a3-40cc-a842-058cfa585cfe 957 0 2025-11-24 00:24:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:cff675d48 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-cff675d48-d76lw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidb6dc29e9e5 [] [] }} ContainerID="702225a0ec184f8bfe946ff902b434076d231db32117f866e1400bb6ba807bfe" Namespace="calico-system" Pod="whisker-cff675d48-d76lw" WorkloadEndpoint="localhost-k8s-whisker--cff675d48--d76lw-" Nov 24 00:24:25.546506 containerd[1577]: 2025-11-24 00:24:25.426 [INFO][3895] cni-plugin/k8s.go 
74: Extracted identifiers for CmdAddK8s ContainerID="702225a0ec184f8bfe946ff902b434076d231db32117f866e1400bb6ba807bfe" Namespace="calico-system" Pod="whisker-cff675d48-d76lw" WorkloadEndpoint="localhost-k8s-whisker--cff675d48--d76lw-eth0" Nov 24 00:24:25.546506 containerd[1577]: 2025-11-24 00:24:25.490 [INFO][3909] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="702225a0ec184f8bfe946ff902b434076d231db32117f866e1400bb6ba807bfe" HandleID="k8s-pod-network.702225a0ec184f8bfe946ff902b434076d231db32117f866e1400bb6ba807bfe" Workload="localhost-k8s-whisker--cff675d48--d76lw-eth0" Nov 24 00:24:25.546890 containerd[1577]: 2025-11-24 00:24:25.491 [INFO][3909] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="702225a0ec184f8bfe946ff902b434076d231db32117f866e1400bb6ba807bfe" HandleID="k8s-pod-network.702225a0ec184f8bfe946ff902b434076d231db32117f866e1400bb6ba807bfe" Workload="localhost-k8s-whisker--cff675d48--d76lw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00053e780), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-cff675d48-d76lw", "timestamp":"2025-11-24 00:24:25.490126432 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:24:25.546890 containerd[1577]: 2025-11-24 00:24:25.491 [INFO][3909] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:24:25.546890 containerd[1577]: 2025-11-24 00:24:25.491 [INFO][3909] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:24:25.546890 containerd[1577]: 2025-11-24 00:24:25.491 [INFO][3909] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 24 00:24:25.546890 containerd[1577]: 2025-11-24 00:24:25.499 [INFO][3909] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.702225a0ec184f8bfe946ff902b434076d231db32117f866e1400bb6ba807bfe" host="localhost" Nov 24 00:24:25.546890 containerd[1577]: 2025-11-24 00:24:25.503 [INFO][3909] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 24 00:24:25.546890 containerd[1577]: 2025-11-24 00:24:25.507 [INFO][3909] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 24 00:24:25.546890 containerd[1577]: 2025-11-24 00:24:25.509 [INFO][3909] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 24 00:24:25.546890 containerd[1577]: 2025-11-24 00:24:25.510 [INFO][3909] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 24 00:24:25.546890 containerd[1577]: 2025-11-24 00:24:25.510 [INFO][3909] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.702225a0ec184f8bfe946ff902b434076d231db32117f866e1400bb6ba807bfe" host="localhost" Nov 24 00:24:25.547176 containerd[1577]: 2025-11-24 00:24:25.512 [INFO][3909] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.702225a0ec184f8bfe946ff902b434076d231db32117f866e1400bb6ba807bfe Nov 24 00:24:25.547176 containerd[1577]: 2025-11-24 00:24:25.515 [INFO][3909] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.702225a0ec184f8bfe946ff902b434076d231db32117f866e1400bb6ba807bfe" host="localhost" Nov 24 00:24:25.547176 containerd[1577]: 2025-11-24 00:24:25.519 [INFO][3909] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.702225a0ec184f8bfe946ff902b434076d231db32117f866e1400bb6ba807bfe" host="localhost" Nov 24 00:24:25.547176 containerd[1577]: 2025-11-24 00:24:25.519 [INFO][3909] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.702225a0ec184f8bfe946ff902b434076d231db32117f866e1400bb6ba807bfe" host="localhost" Nov 24 00:24:25.547176 containerd[1577]: 2025-11-24 00:24:25.519 [INFO][3909] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 00:24:25.547176 containerd[1577]: 2025-11-24 00:24:25.519 [INFO][3909] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="702225a0ec184f8bfe946ff902b434076d231db32117f866e1400bb6ba807bfe" HandleID="k8s-pod-network.702225a0ec184f8bfe946ff902b434076d231db32117f866e1400bb6ba807bfe" Workload="localhost-k8s-whisker--cff675d48--d76lw-eth0" Nov 24 00:24:25.547309 containerd[1577]: 2025-11-24 00:24:25.522 [INFO][3895] cni-plugin/k8s.go 418: Populated endpoint ContainerID="702225a0ec184f8bfe946ff902b434076d231db32117f866e1400bb6ba807bfe" Namespace="calico-system" Pod="whisker-cff675d48-d76lw" WorkloadEndpoint="localhost-k8s-whisker--cff675d48--d76lw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--cff675d48--d76lw-eth0", GenerateName:"whisker-cff675d48-", Namespace:"calico-system", SelfLink:"", UID:"5e0a34e6-44a3-40cc-a842-058cfa585cfe", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 24, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cff675d48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-cff675d48-d76lw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidb6dc29e9e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:24:25.547309 containerd[1577]: 2025-11-24 00:24:25.523 [INFO][3895] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="702225a0ec184f8bfe946ff902b434076d231db32117f866e1400bb6ba807bfe" Namespace="calico-system" Pod="whisker-cff675d48-d76lw" WorkloadEndpoint="localhost-k8s-whisker--cff675d48--d76lw-eth0" Nov 24 00:24:25.547382 containerd[1577]: 2025-11-24 00:24:25.523 [INFO][3895] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb6dc29e9e5 ContainerID="702225a0ec184f8bfe946ff902b434076d231db32117f866e1400bb6ba807bfe" Namespace="calico-system" Pod="whisker-cff675d48-d76lw" WorkloadEndpoint="localhost-k8s-whisker--cff675d48--d76lw-eth0" Nov 24 00:24:25.547382 containerd[1577]: 2025-11-24 00:24:25.532 [INFO][3895] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="702225a0ec184f8bfe946ff902b434076d231db32117f866e1400bb6ba807bfe" Namespace="calico-system" Pod="whisker-cff675d48-d76lw" WorkloadEndpoint="localhost-k8s-whisker--cff675d48--d76lw-eth0" Nov 24 00:24:25.547423 containerd[1577]: 2025-11-24 00:24:25.532 [INFO][3895] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="702225a0ec184f8bfe946ff902b434076d231db32117f866e1400bb6ba807bfe" Namespace="calico-system" Pod="whisker-cff675d48-d76lw" 
WorkloadEndpoint="localhost-k8s-whisker--cff675d48--d76lw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--cff675d48--d76lw-eth0", GenerateName:"whisker-cff675d48-", Namespace:"calico-system", SelfLink:"", UID:"5e0a34e6-44a3-40cc-a842-058cfa585cfe", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 24, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cff675d48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"702225a0ec184f8bfe946ff902b434076d231db32117f866e1400bb6ba807bfe", Pod:"whisker-cff675d48-d76lw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidb6dc29e9e5", MAC:"26:57:8e:46:f5:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:24:25.547473 containerd[1577]: 2025-11-24 00:24:25.542 [INFO][3895] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="702225a0ec184f8bfe946ff902b434076d231db32117f866e1400bb6ba807bfe" Namespace="calico-system" Pod="whisker-cff675d48-d76lw" WorkloadEndpoint="localhost-k8s-whisker--cff675d48--d76lw-eth0" Nov 24 00:24:25.597141 containerd[1577]: time="2025-11-24T00:24:25.597075586Z" level=info msg="connecting to shim 
702225a0ec184f8bfe946ff902b434076d231db32117f866e1400bb6ba807bfe" address="unix:///run/containerd/s/93869344e0e900de0dfa929de078787f5c19c2c8df6dfde88f7235c8b0cb6cde" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:24:25.630107 systemd[1]: Started cri-containerd-702225a0ec184f8bfe946ff902b434076d231db32117f866e1400bb6ba807bfe.scope - libcontainer container 702225a0ec184f8bfe946ff902b434076d231db32117f866e1400bb6ba807bfe. Nov 24 00:24:25.644158 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 00:24:25.675622 containerd[1577]: time="2025-11-24T00:24:25.675542177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cff675d48-d76lw,Uid:5e0a34e6-44a3-40cc-a842-058cfa585cfe,Namespace:calico-system,Attempt:0,} returns sandbox id \"702225a0ec184f8bfe946ff902b434076d231db32117f866e1400bb6ba807bfe\"" Nov 24 00:24:25.680037 containerd[1577]: time="2025-11-24T00:24:25.679942859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:24:25.825982 kubelet[2737]: I1124 00:24:25.824582 2737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61ec7850-13cb-49a4-9e27-b7daa1c44ddf" path="/var/lib/kubelet/pods/61ec7850-13cb-49a4-9e27-b7daa1c44ddf/volumes" Nov 24 00:24:26.004376 kubelet[2737]: I1124 00:24:26.004255 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 00:24:26.004695 kubelet[2737]: E1124 00:24:26.004671 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:26.098336 containerd[1577]: time="2025-11-24T00:24:26.098264119Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:24:26.764381 containerd[1577]: time="2025-11-24T00:24:26.764317703Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, 
bytes read=73" Nov 24 00:24:26.773095 containerd[1577]: time="2025-11-24T00:24:26.773028408Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:24:26.773369 kubelet[2737]: E1124 00:24:26.773317 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:24:26.773438 kubelet[2737]: E1124 00:24:26.773384 2737 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:24:26.779122 kubelet[2737]: E1124 00:24:26.779038 2737 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:efeb101ec7c04178bfd3e17e39617b7e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4sxm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cff675d48-d76lw_calico-system(5e0a34e6-44a3-40cc-a842-058cfa585cfe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:24:26.781172 containerd[1577]: time="2025-11-24T00:24:26.781138345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 
00:24:26.848165 systemd-networkd[1470]: calidb6dc29e9e5: Gained IPv6LL Nov 24 00:24:27.133806 containerd[1577]: time="2025-11-24T00:24:27.133739200Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:24:27.256595 containerd[1577]: time="2025-11-24T00:24:27.256518124Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:24:27.256791 containerd[1577]: time="2025-11-24T00:24:27.256594157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:24:27.256821 kubelet[2737]: E1124 00:24:27.256742 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:24:27.256821 kubelet[2737]: E1124 00:24:27.256788 2737 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:24:27.257306 kubelet[2737]: E1124 00:24:27.256958 2737 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sxm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cff675d48-d76lw_calico-system(5e0a34e6-44a3-40cc-a842-058cfa585cfe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:24:27.258223 kubelet[2737]: E1124 00:24:27.258178 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cff675d48-d76lw" podUID="5e0a34e6-44a3-40cc-a842-058cfa585cfe" Nov 24 00:24:27.357813 kubelet[2737]: I1124 00:24:27.357754 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 00:24:27.358159 kubelet[2737]: E1124 00:24:27.358124 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:28.008828 kubelet[2737]: E1124 00:24:28.008781 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:28.010318 kubelet[2737]: E1124 00:24:28.010268 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cff675d48-d76lw" podUID="5e0a34e6-44a3-40cc-a842-058cfa585cfe" Nov 24 00:24:28.818660 containerd[1577]: time="2025-11-24T00:24:28.818599545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76968cf4d5-hffj2,Uid:326ce385-d248-47a6-abc0-0a06e5c39a9a,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:24:28.819101 containerd[1577]: time="2025-11-24T00:24:28.818778963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b4597d695-rxcnh,Uid:7060cdca-9b38-45cc-ad88-a15dcab99e92,Namespace:calico-system,Attempt:0,}" Nov 24 00:24:29.044771 systemd-networkd[1470]: cali25a60f0ea3e: Link UP Nov 24 00:24:29.045905 systemd-networkd[1470]: cali25a60f0ea3e: Gained carrier Nov 24 00:24:29.059260 containerd[1577]: 2025-11-24 00:24:28.975 [INFO][4154] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76968cf4d5--hffj2-eth0 calico-apiserver-76968cf4d5- calico-apiserver 326ce385-d248-47a6-abc0-0a06e5c39a9a 886 0 2025-11-24 00:24:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76968cf4d5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76968cf4d5-hffj2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali25a60f0ea3e [] [] }} ContainerID="336272a5aab86f6575ce5b25b8e0e12b57199b4456595bff15b1efb193f21d21" Namespace="calico-apiserver" Pod="calico-apiserver-76968cf4d5-hffj2" WorkloadEndpoint="localhost-k8s-calico--apiserver--76968cf4d5--hffj2-" Nov 24 00:24:29.059260 containerd[1577]: 2025-11-24 00:24:28.976 [INFO][4154] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="336272a5aab86f6575ce5b25b8e0e12b57199b4456595bff15b1efb193f21d21" Namespace="calico-apiserver" Pod="calico-apiserver-76968cf4d5-hffj2" WorkloadEndpoint="localhost-k8s-calico--apiserver--76968cf4d5--hffj2-eth0" Nov 24 00:24:29.059260 containerd[1577]: 2025-11-24 00:24:29.004 [INFO][4193] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="336272a5aab86f6575ce5b25b8e0e12b57199b4456595bff15b1efb193f21d21" HandleID="k8s-pod-network.336272a5aab86f6575ce5b25b8e0e12b57199b4456595bff15b1efb193f21d21" Workload="localhost-k8s-calico--apiserver--76968cf4d5--hffj2-eth0" Nov 24 00:24:29.059489 containerd[1577]: 2025-11-24 00:24:29.005 [INFO][4193] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="336272a5aab86f6575ce5b25b8e0e12b57199b4456595bff15b1efb193f21d21" HandleID="k8s-pod-network.336272a5aab86f6575ce5b25b8e0e12b57199b4456595bff15b1efb193f21d21" Workload="localhost-k8s-calico--apiserver--76968cf4d5--hffj2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c20f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76968cf4d5-hffj2", "timestamp":"2025-11-24 00:24:29.004792301 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:24:29.059489 containerd[1577]: 2025-11-24 00:24:29.005 [INFO][4193] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:24:29.059489 containerd[1577]: 2025-11-24 00:24:29.005 [INFO][4193] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 00:24:29.059489 containerd[1577]: 2025-11-24 00:24:29.005 [INFO][4193] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 24 00:24:29.059489 containerd[1577]: 2025-11-24 00:24:29.011 [INFO][4193] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.336272a5aab86f6575ce5b25b8e0e12b57199b4456595bff15b1efb193f21d21" host="localhost" Nov 24 00:24:29.059489 containerd[1577]: 2025-11-24 00:24:29.023 [INFO][4193] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 24 00:24:29.059489 containerd[1577]: 2025-11-24 00:24:29.026 [INFO][4193] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 24 00:24:29.059489 containerd[1577]: 2025-11-24 00:24:29.027 [INFO][4193] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 24 00:24:29.059489 containerd[1577]: 2025-11-24 00:24:29.029 [INFO][4193] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 24 00:24:29.059489 containerd[1577]: 2025-11-24 00:24:29.029 [INFO][4193] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.336272a5aab86f6575ce5b25b8e0e12b57199b4456595bff15b1efb193f21d21" host="localhost" Nov 24 00:24:29.059954 containerd[1577]: 2025-11-24 00:24:29.030 [INFO][4193] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.336272a5aab86f6575ce5b25b8e0e12b57199b4456595bff15b1efb193f21d21 Nov 24 00:24:29.059954 containerd[1577]: 2025-11-24 00:24:29.033 [INFO][4193] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.336272a5aab86f6575ce5b25b8e0e12b57199b4456595bff15b1efb193f21d21" host="localhost" Nov 24 00:24:29.059954 containerd[1577]: 2025-11-24 00:24:29.038 [INFO][4193] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.336272a5aab86f6575ce5b25b8e0e12b57199b4456595bff15b1efb193f21d21" host="localhost" Nov 24 00:24:29.059954 containerd[1577]: 2025-11-24 00:24:29.038 [INFO][4193] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.336272a5aab86f6575ce5b25b8e0e12b57199b4456595bff15b1efb193f21d21" host="localhost" Nov 24 00:24:29.059954 containerd[1577]: 2025-11-24 00:24:29.038 [INFO][4193] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 00:24:29.059954 containerd[1577]: 2025-11-24 00:24:29.038 [INFO][4193] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="336272a5aab86f6575ce5b25b8e0e12b57199b4456595bff15b1efb193f21d21" HandleID="k8s-pod-network.336272a5aab86f6575ce5b25b8e0e12b57199b4456595bff15b1efb193f21d21" Workload="localhost-k8s-calico--apiserver--76968cf4d5--hffj2-eth0" Nov 24 00:24:29.060100 containerd[1577]: 2025-11-24 00:24:29.041 [INFO][4154] cni-plugin/k8s.go 418: Populated endpoint ContainerID="336272a5aab86f6575ce5b25b8e0e12b57199b4456595bff15b1efb193f21d21" Namespace="calico-apiserver" Pod="calico-apiserver-76968cf4d5-hffj2" WorkloadEndpoint="localhost-k8s-calico--apiserver--76968cf4d5--hffj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76968cf4d5--hffj2-eth0", GenerateName:"calico-apiserver-76968cf4d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"326ce385-d248-47a6-abc0-0a06e5c39a9a", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 24, 2, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76968cf4d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76968cf4d5-hffj2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali25a60f0ea3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:24:29.060160 containerd[1577]: 2025-11-24 00:24:29.042 [INFO][4154] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="336272a5aab86f6575ce5b25b8e0e12b57199b4456595bff15b1efb193f21d21" Namespace="calico-apiserver" Pod="calico-apiserver-76968cf4d5-hffj2" WorkloadEndpoint="localhost-k8s-calico--apiserver--76968cf4d5--hffj2-eth0" Nov 24 00:24:29.060160 containerd[1577]: 2025-11-24 00:24:29.042 [INFO][4154] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25a60f0ea3e ContainerID="336272a5aab86f6575ce5b25b8e0e12b57199b4456595bff15b1efb193f21d21" Namespace="calico-apiserver" Pod="calico-apiserver-76968cf4d5-hffj2" WorkloadEndpoint="localhost-k8s-calico--apiserver--76968cf4d5--hffj2-eth0" Nov 24 00:24:29.060160 containerd[1577]: 2025-11-24 00:24:29.046 [INFO][4154] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="336272a5aab86f6575ce5b25b8e0e12b57199b4456595bff15b1efb193f21d21" Namespace="calico-apiserver" Pod="calico-apiserver-76968cf4d5-hffj2" WorkloadEndpoint="localhost-k8s-calico--apiserver--76968cf4d5--hffj2-eth0" Nov 24 00:24:29.060223 containerd[1577]: 2025-11-24 00:24:29.046 [INFO][4154] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="336272a5aab86f6575ce5b25b8e0e12b57199b4456595bff15b1efb193f21d21" Namespace="calico-apiserver" Pod="calico-apiserver-76968cf4d5-hffj2" WorkloadEndpoint="localhost-k8s-calico--apiserver--76968cf4d5--hffj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76968cf4d5--hffj2-eth0", GenerateName:"calico-apiserver-76968cf4d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"326ce385-d248-47a6-abc0-0a06e5c39a9a", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 24, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76968cf4d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"336272a5aab86f6575ce5b25b8e0e12b57199b4456595bff15b1efb193f21d21", Pod:"calico-apiserver-76968cf4d5-hffj2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali25a60f0ea3e", MAC:"82:b7:8f:c8:e4:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:24:29.060278 containerd[1577]: 2025-11-24 00:24:29.056 [INFO][4154] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="336272a5aab86f6575ce5b25b8e0e12b57199b4456595bff15b1efb193f21d21" Namespace="calico-apiserver" Pod="calico-apiserver-76968cf4d5-hffj2" WorkloadEndpoint="localhost-k8s-calico--apiserver--76968cf4d5--hffj2-eth0" Nov 24 00:24:29.107019 containerd[1577]: time="2025-11-24T00:24:29.105247636Z" level=info msg="connecting to shim 336272a5aab86f6575ce5b25b8e0e12b57199b4456595bff15b1efb193f21d21" address="unix:///run/containerd/s/1e5654dd9e11d6d8915330db95aab241dec6276f79e6038b4c2dd5487d71bca5" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:24:29.139126 systemd[1]: Started cri-containerd-336272a5aab86f6575ce5b25b8e0e12b57199b4456595bff15b1efb193f21d21.scope - libcontainer container 336272a5aab86f6575ce5b25b8e0e12b57199b4456595bff15b1efb193f21d21. 
Nov 24 00:24:29.149565 systemd-networkd[1470]: cali038dbf57121: Link UP Nov 24 00:24:29.150140 systemd-networkd[1470]: cali038dbf57121: Gained carrier Nov 24 00:24:29.160516 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 00:24:29.164816 containerd[1577]: 2025-11-24 00:24:28.975 [INFO][4165] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6b4597d695--rxcnh-eth0 calico-kube-controllers-6b4597d695- calico-system 7060cdca-9b38-45cc-ad88-a15dcab99e92 887 0 2025-11-24 00:24:07 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b4597d695 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6b4597d695-rxcnh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali038dbf57121 [] [] }} ContainerID="0bae9a86e1509acd6387f5dc72a58c9a28b609d9233b3eee62043105548b599a" Namespace="calico-system" Pod="calico-kube-controllers-6b4597d695-rxcnh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b4597d695--rxcnh-" Nov 24 00:24:29.164816 containerd[1577]: 2025-11-24 00:24:28.975 [INFO][4165] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0bae9a86e1509acd6387f5dc72a58c9a28b609d9233b3eee62043105548b599a" Namespace="calico-system" Pod="calico-kube-controllers-6b4597d695-rxcnh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b4597d695--rxcnh-eth0" Nov 24 00:24:29.164816 containerd[1577]: 2025-11-24 00:24:29.005 [INFO][4195] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0bae9a86e1509acd6387f5dc72a58c9a28b609d9233b3eee62043105548b599a" 
HandleID="k8s-pod-network.0bae9a86e1509acd6387f5dc72a58c9a28b609d9233b3eee62043105548b599a" Workload="localhost-k8s-calico--kube--controllers--6b4597d695--rxcnh-eth0" Nov 24 00:24:29.165011 containerd[1577]: 2025-11-24 00:24:29.005 [INFO][4195] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0bae9a86e1509acd6387f5dc72a58c9a28b609d9233b3eee62043105548b599a" HandleID="k8s-pod-network.0bae9a86e1509acd6387f5dc72a58c9a28b609d9233b3eee62043105548b599a" Workload="localhost-k8s-calico--kube--controllers--6b4597d695--rxcnh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b3480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6b4597d695-rxcnh", "timestamp":"2025-11-24 00:24:29.005352011 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:24:29.165011 containerd[1577]: 2025-11-24 00:24:29.005 [INFO][4195] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:24:29.165011 containerd[1577]: 2025-11-24 00:24:29.038 [INFO][4195] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:24:29.165011 containerd[1577]: 2025-11-24 00:24:29.038 [INFO][4195] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 24 00:24:29.165011 containerd[1577]: 2025-11-24 00:24:29.112 [INFO][4195] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0bae9a86e1509acd6387f5dc72a58c9a28b609d9233b3eee62043105548b599a" host="localhost" Nov 24 00:24:29.165011 containerd[1577]: 2025-11-24 00:24:29.124 [INFO][4195] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 24 00:24:29.165011 containerd[1577]: 2025-11-24 00:24:29.130 [INFO][4195] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 24 00:24:29.165011 containerd[1577]: 2025-11-24 00:24:29.133 [INFO][4195] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 24 00:24:29.165011 containerd[1577]: 2025-11-24 00:24:29.134 [INFO][4195] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 24 00:24:29.165011 containerd[1577]: 2025-11-24 00:24:29.134 [INFO][4195] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0bae9a86e1509acd6387f5dc72a58c9a28b609d9233b3eee62043105548b599a" host="localhost" Nov 24 00:24:29.165264 containerd[1577]: 2025-11-24 00:24:29.136 [INFO][4195] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0bae9a86e1509acd6387f5dc72a58c9a28b609d9233b3eee62043105548b599a Nov 24 00:24:29.165264 containerd[1577]: 2025-11-24 00:24:29.138 [INFO][4195] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0bae9a86e1509acd6387f5dc72a58c9a28b609d9233b3eee62043105548b599a" host="localhost" Nov 24 00:24:29.165264 containerd[1577]: 2025-11-24 00:24:29.143 [INFO][4195] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.0bae9a86e1509acd6387f5dc72a58c9a28b609d9233b3eee62043105548b599a" host="localhost" Nov 24 00:24:29.165264 containerd[1577]: 2025-11-24 00:24:29.143 [INFO][4195] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0bae9a86e1509acd6387f5dc72a58c9a28b609d9233b3eee62043105548b599a" host="localhost" Nov 24 00:24:29.165264 containerd[1577]: 2025-11-24 00:24:29.143 [INFO][4195] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 00:24:29.165264 containerd[1577]: 2025-11-24 00:24:29.143 [INFO][4195] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0bae9a86e1509acd6387f5dc72a58c9a28b609d9233b3eee62043105548b599a" HandleID="k8s-pod-network.0bae9a86e1509acd6387f5dc72a58c9a28b609d9233b3eee62043105548b599a" Workload="localhost-k8s-calico--kube--controllers--6b4597d695--rxcnh-eth0" Nov 24 00:24:29.165421 containerd[1577]: 2025-11-24 00:24:29.146 [INFO][4165] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0bae9a86e1509acd6387f5dc72a58c9a28b609d9233b3eee62043105548b599a" Namespace="calico-system" Pod="calico-kube-controllers-6b4597d695-rxcnh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b4597d695--rxcnh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6b4597d695--rxcnh-eth0", GenerateName:"calico-kube-controllers-6b4597d695-", Namespace:"calico-system", SelfLink:"", UID:"7060cdca-9b38-45cc-ad88-a15dcab99e92", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 24, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b4597d695", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6b4597d695-rxcnh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali038dbf57121", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:24:29.165475 containerd[1577]: 2025-11-24 00:24:29.147 [INFO][4165] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0bae9a86e1509acd6387f5dc72a58c9a28b609d9233b3eee62043105548b599a" Namespace="calico-system" Pod="calico-kube-controllers-6b4597d695-rxcnh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b4597d695--rxcnh-eth0" Nov 24 00:24:29.165475 containerd[1577]: 2025-11-24 00:24:29.147 [INFO][4165] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali038dbf57121 ContainerID="0bae9a86e1509acd6387f5dc72a58c9a28b609d9233b3eee62043105548b599a" Namespace="calico-system" Pod="calico-kube-controllers-6b4597d695-rxcnh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b4597d695--rxcnh-eth0" Nov 24 00:24:29.165475 containerd[1577]: 2025-11-24 00:24:29.150 [INFO][4165] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0bae9a86e1509acd6387f5dc72a58c9a28b609d9233b3eee62043105548b599a" Namespace="calico-system" Pod="calico-kube-controllers-6b4597d695-rxcnh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b4597d695--rxcnh-eth0" Nov 24 00:24:29.165542 containerd[1577]: 
2025-11-24 00:24:29.151 [INFO][4165] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0bae9a86e1509acd6387f5dc72a58c9a28b609d9233b3eee62043105548b599a" Namespace="calico-system" Pod="calico-kube-controllers-6b4597d695-rxcnh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b4597d695--rxcnh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6b4597d695--rxcnh-eth0", GenerateName:"calico-kube-controllers-6b4597d695-", Namespace:"calico-system", SelfLink:"", UID:"7060cdca-9b38-45cc-ad88-a15dcab99e92", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 24, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b4597d695", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0bae9a86e1509acd6387f5dc72a58c9a28b609d9233b3eee62043105548b599a", Pod:"calico-kube-controllers-6b4597d695-rxcnh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali038dbf57121", MAC:"0e:b0:75:3a:30:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:24:29.165594 containerd[1577]: 
2025-11-24 00:24:29.161 [INFO][4165] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0bae9a86e1509acd6387f5dc72a58c9a28b609d9233b3eee62043105548b599a" Namespace="calico-system" Pod="calico-kube-controllers-6b4597d695-rxcnh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b4597d695--rxcnh-eth0" Nov 24 00:24:29.188292 containerd[1577]: time="2025-11-24T00:24:29.188211496Z" level=info msg="connecting to shim 0bae9a86e1509acd6387f5dc72a58c9a28b609d9233b3eee62043105548b599a" address="unix:///run/containerd/s/c33db579caf14cb0a9bb592cb82ed060880cf5e6f5c199f74111c57cc8eca6b8" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:24:29.206681 containerd[1577]: time="2025-11-24T00:24:29.206621285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76968cf4d5-hffj2,Uid:326ce385-d248-47a6-abc0-0a06e5c39a9a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"336272a5aab86f6575ce5b25b8e0e12b57199b4456595bff15b1efb193f21d21\"" Nov 24 00:24:29.210043 containerd[1577]: time="2025-11-24T00:24:29.209565873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:24:29.219146 systemd[1]: Started cri-containerd-0bae9a86e1509acd6387f5dc72a58c9a28b609d9233b3eee62043105548b599a.scope - libcontainer container 0bae9a86e1509acd6387f5dc72a58c9a28b609d9233b3eee62043105548b599a. Nov 24 00:24:29.229327 systemd[1]: Started sshd@7-10.0.0.104:22-10.0.0.1:35398.service - OpenSSH per-connection server daemon (10.0.0.1:35398). 
Nov 24 00:24:29.235239 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 00:24:29.293432 containerd[1577]: time="2025-11-24T00:24:29.293370852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b4597d695-rxcnh,Uid:7060cdca-9b38-45cc-ad88-a15dcab99e92,Namespace:calico-system,Attempt:0,} returns sandbox id \"0bae9a86e1509acd6387f5dc72a58c9a28b609d9233b3eee62043105548b599a\"" Nov 24 00:24:29.296898 sshd[4313]: Accepted publickey for core from 10.0.0.1 port 35398 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:24:29.298737 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:24:29.306082 systemd-logind[1539]: New session 8 of user core. Nov 24 00:24:29.310078 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 24 00:24:29.321068 systemd-networkd[1470]: vxlan.calico: Link UP Nov 24 00:24:29.321078 systemd-networkd[1470]: vxlan.calico: Gained carrier Nov 24 00:24:29.455106 sshd[4340]: Connection closed by 10.0.0.1 port 35398 Nov 24 00:24:29.456864 sshd-session[4313]: pam_unix(sshd:session): session closed for user core Nov 24 00:24:29.461278 systemd[1]: sshd@7-10.0.0.104:22-10.0.0.1:35398.service: Deactivated successfully. Nov 24 00:24:29.463540 systemd[1]: session-8.scope: Deactivated successfully. Nov 24 00:24:29.464392 systemd-logind[1539]: Session 8 logged out. Waiting for processes to exit. Nov 24 00:24:29.465750 systemd-logind[1539]: Removed session 8. 
Nov 24 00:24:29.586904 containerd[1577]: time="2025-11-24T00:24:29.586859101Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:24:29.589060 containerd[1577]: time="2025-11-24T00:24:29.588954103Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:24:29.589060 containerd[1577]: time="2025-11-24T00:24:29.589022973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:24:29.589367 kubelet[2737]: E1124 00:24:29.589328 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:24:29.590722 kubelet[2737]: E1124 00:24:29.589749 2737 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:24:29.590722 kubelet[2737]: E1124 00:24:29.590003 2737 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-llxvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76968cf4d5-hffj2_calico-apiserver(326ce385-d248-47a6-abc0-0a06e5c39a9a): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:24:29.591188 containerd[1577]: time="2025-11-24T00:24:29.591053485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:24:29.591231 kubelet[2737]: E1124 00:24:29.591147 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76968cf4d5-hffj2" podUID="326ce385-d248-47a6-abc0-0a06e5c39a9a" Nov 24 00:24:29.819253 containerd[1577]: time="2025-11-24T00:24:29.819204231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ds9ng,Uid:0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c,Namespace:calico-system,Attempt:0,}" Nov 24 00:24:29.920370 systemd-networkd[1470]: cali200be2eab7f: Link UP Nov 24 00:24:29.920557 systemd-networkd[1470]: cali200be2eab7f: Gained carrier Nov 24 00:24:29.933258 containerd[1577]: 2025-11-24 00:24:29.856 [INFO][4414] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--ds9ng-eth0 goldmane-666569f655- calico-system 0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c 891 0 2025-11-24 00:24:05 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-ds9ng eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] 
cali200be2eab7f [] [] }} ContainerID="19bf1f1e2c885e482ddc63ed10611f1420c5163c4ceca534f1bcfb96922ae93e" Namespace="calico-system" Pod="goldmane-666569f655-ds9ng" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ds9ng-" Nov 24 00:24:29.933258 containerd[1577]: 2025-11-24 00:24:29.857 [INFO][4414] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="19bf1f1e2c885e482ddc63ed10611f1420c5163c4ceca534f1bcfb96922ae93e" Namespace="calico-system" Pod="goldmane-666569f655-ds9ng" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ds9ng-eth0" Nov 24 00:24:29.933258 containerd[1577]: 2025-11-24 00:24:29.884 [INFO][4429] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19bf1f1e2c885e482ddc63ed10611f1420c5163c4ceca534f1bcfb96922ae93e" HandleID="k8s-pod-network.19bf1f1e2c885e482ddc63ed10611f1420c5163c4ceca534f1bcfb96922ae93e" Workload="localhost-k8s-goldmane--666569f655--ds9ng-eth0" Nov 24 00:24:29.933467 containerd[1577]: 2025-11-24 00:24:29.884 [INFO][4429] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="19bf1f1e2c885e482ddc63ed10611f1420c5163c4ceca534f1bcfb96922ae93e" HandleID="k8s-pod-network.19bf1f1e2c885e482ddc63ed10611f1420c5163c4ceca534f1bcfb96922ae93e" Workload="localhost-k8s-goldmane--666569f655--ds9ng-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00050fee0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-ds9ng", "timestamp":"2025-11-24 00:24:29.88411418 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:24:29.933467 containerd[1577]: 2025-11-24 00:24:29.884 [INFO][4429] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 24 00:24:29.933467 containerd[1577]: 2025-11-24 00:24:29.884 [INFO][4429] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 00:24:29.933467 containerd[1577]: 2025-11-24 00:24:29.884 [INFO][4429] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 24 00:24:29.933467 containerd[1577]: 2025-11-24 00:24:29.890 [INFO][4429] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.19bf1f1e2c885e482ddc63ed10611f1420c5163c4ceca534f1bcfb96922ae93e" host="localhost" Nov 24 00:24:29.933467 containerd[1577]: 2025-11-24 00:24:29.894 [INFO][4429] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 24 00:24:29.933467 containerd[1577]: 2025-11-24 00:24:29.898 [INFO][4429] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 24 00:24:29.933467 containerd[1577]: 2025-11-24 00:24:29.900 [INFO][4429] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 24 00:24:29.933467 containerd[1577]: 2025-11-24 00:24:29.902 [INFO][4429] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 24 00:24:29.933467 containerd[1577]: 2025-11-24 00:24:29.902 [INFO][4429] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.19bf1f1e2c885e482ddc63ed10611f1420c5163c4ceca534f1bcfb96922ae93e" host="localhost" Nov 24 00:24:29.933681 containerd[1577]: 2025-11-24 00:24:29.904 [INFO][4429] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.19bf1f1e2c885e482ddc63ed10611f1420c5163c4ceca534f1bcfb96922ae93e Nov 24 00:24:29.933681 containerd[1577]: 2025-11-24 00:24:29.907 [INFO][4429] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.19bf1f1e2c885e482ddc63ed10611f1420c5163c4ceca534f1bcfb96922ae93e" host="localhost" Nov 24 00:24:29.933681 containerd[1577]: 2025-11-24 00:24:29.911 [INFO][4429] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.19bf1f1e2c885e482ddc63ed10611f1420c5163c4ceca534f1bcfb96922ae93e" host="localhost" Nov 24 00:24:29.933681 containerd[1577]: 2025-11-24 00:24:29.911 [INFO][4429] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.19bf1f1e2c885e482ddc63ed10611f1420c5163c4ceca534f1bcfb96922ae93e" host="localhost" Nov 24 00:24:29.933681 containerd[1577]: 2025-11-24 00:24:29.911 [INFO][4429] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 00:24:29.933681 containerd[1577]: 2025-11-24 00:24:29.911 [INFO][4429] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="19bf1f1e2c885e482ddc63ed10611f1420c5163c4ceca534f1bcfb96922ae93e" HandleID="k8s-pod-network.19bf1f1e2c885e482ddc63ed10611f1420c5163c4ceca534f1bcfb96922ae93e" Workload="localhost-k8s-goldmane--666569f655--ds9ng-eth0" Nov 24 00:24:29.933805 containerd[1577]: 2025-11-24 00:24:29.918 [INFO][4414] cni-plugin/k8s.go 418: Populated endpoint ContainerID="19bf1f1e2c885e482ddc63ed10611f1420c5163c4ceca534f1bcfb96922ae93e" Namespace="calico-system" Pod="goldmane-666569f655-ds9ng" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ds9ng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--ds9ng-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 24, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-ds9ng", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali200be2eab7f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:24:29.933805 containerd[1577]: 2025-11-24 00:24:29.918 [INFO][4414] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="19bf1f1e2c885e482ddc63ed10611f1420c5163c4ceca534f1bcfb96922ae93e" Namespace="calico-system" Pod="goldmane-666569f655-ds9ng" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ds9ng-eth0" Nov 24 00:24:29.933963 containerd[1577]: 2025-11-24 00:24:29.918 [INFO][4414] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali200be2eab7f ContainerID="19bf1f1e2c885e482ddc63ed10611f1420c5163c4ceca534f1bcfb96922ae93e" Namespace="calico-system" Pod="goldmane-666569f655-ds9ng" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ds9ng-eth0" Nov 24 00:24:29.933963 containerd[1577]: 2025-11-24 00:24:29.920 [INFO][4414] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19bf1f1e2c885e482ddc63ed10611f1420c5163c4ceca534f1bcfb96922ae93e" Namespace="calico-system" Pod="goldmane-666569f655-ds9ng" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ds9ng-eth0" Nov 24 00:24:29.934014 containerd[1577]: 2025-11-24 00:24:29.921 [INFO][4414] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="19bf1f1e2c885e482ddc63ed10611f1420c5163c4ceca534f1bcfb96922ae93e" Namespace="calico-system" Pod="goldmane-666569f655-ds9ng" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ds9ng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--ds9ng-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 24, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"19bf1f1e2c885e482ddc63ed10611f1420c5163c4ceca534f1bcfb96922ae93e", Pod:"goldmane-666569f655-ds9ng", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali200be2eab7f", MAC:"b6:e5:47:b2:50:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:24:29.934067 containerd[1577]: 2025-11-24 00:24:29.929 [INFO][4414] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="19bf1f1e2c885e482ddc63ed10611f1420c5163c4ceca534f1bcfb96922ae93e" Namespace="calico-system" Pod="goldmane-666569f655-ds9ng" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ds9ng-eth0" Nov 24 00:24:29.949880 containerd[1577]: time="2025-11-24T00:24:29.949813461Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:24:29.951679 containerd[1577]: time="2025-11-24T00:24:29.951636033Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:24:29.951756 containerd[1577]: time="2025-11-24T00:24:29.951700755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:24:29.951946 kubelet[2737]: E1124 00:24:29.951890 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:24:29.952031 kubelet[2737]: E1124 00:24:29.951966 2737 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:24:29.952151 kubelet[2737]: E1124 00:24:29.952110 2737 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gj22n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6b4597d695-rxcnh_calico-system(7060cdca-9b38-45cc-ad88-a15dcab99e92): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:24:29.953318 kubelet[2737]: E1124 00:24:29.953284 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b4597d695-rxcnh" podUID="7060cdca-9b38-45cc-ad88-a15dcab99e92" Nov 24 00:24:29.972511 containerd[1577]: time="2025-11-24T00:24:29.972469481Z" level=info msg="connecting to shim 19bf1f1e2c885e482ddc63ed10611f1420c5163c4ceca534f1bcfb96922ae93e" 
address="unix:///run/containerd/s/9c6a7fd129792f1778a373d2f3ed5c8d344f8f66509127196d7e94024dd11432" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:24:29.997062 systemd[1]: Started cri-containerd-19bf1f1e2c885e482ddc63ed10611f1420c5163c4ceca534f1bcfb96922ae93e.scope - libcontainer container 19bf1f1e2c885e482ddc63ed10611f1420c5163c4ceca534f1bcfb96922ae93e. Nov 24 00:24:30.010396 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 00:24:30.016426 kubelet[2737]: E1124 00:24:30.015887 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b4597d695-rxcnh" podUID="7060cdca-9b38-45cc-ad88-a15dcab99e92" Nov 24 00:24:30.019197 kubelet[2737]: E1124 00:24:30.019170 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76968cf4d5-hffj2" podUID="326ce385-d248-47a6-abc0-0a06e5c39a9a" Nov 24 00:24:30.046851 containerd[1577]: time="2025-11-24T00:24:30.046760096Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-ds9ng,Uid:0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c,Namespace:calico-system,Attempt:0,} returns sandbox id \"19bf1f1e2c885e482ddc63ed10611f1420c5163c4ceca534f1bcfb96922ae93e\"" Nov 24 00:24:30.048226 containerd[1577]: time="2025-11-24T00:24:30.048200799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:24:30.240134 systemd-networkd[1470]: cali25a60f0ea3e: Gained IPv6LL Nov 24 00:24:30.363711 containerd[1577]: time="2025-11-24T00:24:30.363639697Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:24:30.366487 containerd[1577]: time="2025-11-24T00:24:30.366440524Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:24:30.366523 containerd[1577]: time="2025-11-24T00:24:30.366512479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:24:30.366722 kubelet[2737]: E1124 00:24:30.366665 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:24:30.366791 kubelet[2737]: E1124 00:24:30.366726 2737 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 
00:24:30.366985 kubelet[2737]: E1124 00:24:30.366895 2737 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfdvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ds9ng_calico-system(0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:24:30.368119 kubelet[2737]: E1124 00:24:30.368086 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ds9ng" podUID="0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c" Nov 24 00:24:30.560983 systemd-networkd[1470]: vxlan.calico: Gained IPv6LL Nov 24 00:24:30.818589 kubelet[2737]: E1124 00:24:30.818402 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:30.819210 containerd[1577]: time="2025-11-24T00:24:30.818821515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8kx96,Uid:4f15220c-2dc0-450b-9030-97efbcb4ef00,Namespace:kube-system,Attempt:0,}" Nov 24 00:24:30.881450 systemd-networkd[1470]: cali038dbf57121: Gained IPv6LL Nov 24 00:24:30.935179 systemd-networkd[1470]: calic32367820ee: Link UP Nov 24 00:24:30.935445 systemd-networkd[1470]: calic32367820ee: Gained carrier Nov 24 00:24:30.948008 containerd[1577]: 2025-11-24 00:24:30.868 [INFO][4495] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--8kx96-eth0 coredns-674b8bbfcf- kube-system 4f15220c-2dc0-450b-9030-97efbcb4ef00 888 0 2025-11-24 00:23:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-8kx96 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic32367820ee [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9" Namespace="kube-system" Pod="coredns-674b8bbfcf-8kx96" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8kx96-" Nov 24 00:24:30.948008 containerd[1577]: 2025-11-24 00:24:30.868 [INFO][4495] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9" Namespace="kube-system" Pod="coredns-674b8bbfcf-8kx96" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8kx96-eth0" Nov 24 00:24:30.948008 containerd[1577]: 2025-11-24 00:24:30.897 [INFO][4508] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9" 
HandleID="k8s-pod-network.3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9" Workload="localhost-k8s-coredns--674b8bbfcf--8kx96-eth0" Nov 24 00:24:30.948582 containerd[1577]: 2025-11-24 00:24:30.897 [INFO][4508] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9" HandleID="k8s-pod-network.3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9" Workload="localhost-k8s-coredns--674b8bbfcf--8kx96-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c78d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-8kx96", "timestamp":"2025-11-24 00:24:30.897527512 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:24:30.948582 containerd[1577]: 2025-11-24 00:24:30.897 [INFO][4508] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:24:30.948582 containerd[1577]: 2025-11-24 00:24:30.897 [INFO][4508] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:24:30.948582 containerd[1577]: 2025-11-24 00:24:30.897 [INFO][4508] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 24 00:24:30.948582 containerd[1577]: 2025-11-24 00:24:30.903 [INFO][4508] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9" host="localhost" Nov 24 00:24:30.948582 containerd[1577]: 2025-11-24 00:24:30.907 [INFO][4508] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 24 00:24:30.948582 containerd[1577]: 2025-11-24 00:24:30.912 [INFO][4508] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 24 00:24:30.948582 containerd[1577]: 2025-11-24 00:24:30.915 [INFO][4508] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 24 00:24:30.948582 containerd[1577]: 2025-11-24 00:24:30.917 [INFO][4508] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 24 00:24:30.948582 containerd[1577]: 2025-11-24 00:24:30.917 [INFO][4508] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9" host="localhost" Nov 24 00:24:30.948973 containerd[1577]: 2025-11-24 00:24:30.919 [INFO][4508] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9 Nov 24 00:24:30.948973 containerd[1577]: 2025-11-24 00:24:30.923 [INFO][4508] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9" host="localhost" Nov 24 00:24:30.948973 containerd[1577]: 2025-11-24 00:24:30.929 [INFO][4508] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9" host="localhost" Nov 24 00:24:30.948973 containerd[1577]: 2025-11-24 00:24:30.929 [INFO][4508] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9" host="localhost" Nov 24 00:24:30.948973 containerd[1577]: 2025-11-24 00:24:30.929 [INFO][4508] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 00:24:30.948973 containerd[1577]: 2025-11-24 00:24:30.929 [INFO][4508] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9" HandleID="k8s-pod-network.3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9" Workload="localhost-k8s-coredns--674b8bbfcf--8kx96-eth0" Nov 24 00:24:30.950580 containerd[1577]: 2025-11-24 00:24:30.933 [INFO][4495] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9" Namespace="kube-system" Pod="coredns-674b8bbfcf-8kx96" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8kx96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--8kx96-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4f15220c-2dc0-450b-9030-97efbcb4ef00", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-8kx96", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic32367820ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:24:30.950678 containerd[1577]: 2025-11-24 00:24:30.933 [INFO][4495] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9" Namespace="kube-system" Pod="coredns-674b8bbfcf-8kx96" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8kx96-eth0" Nov 24 00:24:30.950678 containerd[1577]: 2025-11-24 00:24:30.933 [INFO][4495] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic32367820ee ContainerID="3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9" Namespace="kube-system" Pod="coredns-674b8bbfcf-8kx96" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8kx96-eth0" Nov 24 00:24:30.950678 containerd[1577]: 2025-11-24 00:24:30.935 [INFO][4495] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9" Namespace="kube-system" Pod="coredns-674b8bbfcf-8kx96" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8kx96-eth0" Nov 24 00:24:30.950782 containerd[1577]: 2025-11-24 00:24:30.936 [INFO][4495] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9" Namespace="kube-system" Pod="coredns-674b8bbfcf-8kx96" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8kx96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--8kx96-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4f15220c-2dc0-450b-9030-97efbcb4ef00", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9", Pod:"coredns-674b8bbfcf-8kx96", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic32367820ee", MAC:"66:77:99:9f:c0:8c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:24:30.950782 containerd[1577]: 2025-11-24 00:24:30.943 [INFO][4495] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9" Namespace="kube-system" Pod="coredns-674b8bbfcf-8kx96" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8kx96-eth0" Nov 24 00:24:30.994869 containerd[1577]: time="2025-11-24T00:24:30.994816323Z" level=info msg="connecting to shim 3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9" address="unix:///run/containerd/s/1bc93194ee1c0b3e179579ab11cc1e378802234670ae8e410d9b2efd42492992" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:24:31.026481 kubelet[2737]: E1124 00:24:31.026420 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ds9ng" podUID="0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c" Nov 24 00:24:31.026757 systemd[1]: Started cri-containerd-3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9.scope - libcontainer container 3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9. 
Nov 24 00:24:31.027288 kubelet[2737]: E1124 00:24:31.026950 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76968cf4d5-hffj2" podUID="326ce385-d248-47a6-abc0-0a06e5c39a9a" Nov 24 00:24:31.027667 kubelet[2737]: E1124 00:24:31.027389 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b4597d695-rxcnh" podUID="7060cdca-9b38-45cc-ad88-a15dcab99e92" Nov 24 00:24:31.051814 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 00:24:31.085695 containerd[1577]: time="2025-11-24T00:24:31.085573613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8kx96,Uid:4f15220c-2dc0-450b-9030-97efbcb4ef00,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9\"" Nov 24 00:24:31.086497 kubelet[2737]: E1124 00:24:31.086460 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:31.098066 containerd[1577]: time="2025-11-24T00:24:31.098005998Z" level=info msg="CreateContainer within sandbox \"3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 00:24:31.111202 containerd[1577]: time="2025-11-24T00:24:31.111122749Z" level=info msg="Container 816b5c3b3af0ddbfec078c964dd189419fc82617c468dda98398e5259801f7ea: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:24:31.113639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2743877121.mount: Deactivated successfully. Nov 24 00:24:31.119392 containerd[1577]: time="2025-11-24T00:24:31.119352095Z" level=info msg="CreateContainer within sandbox \"3f7d090f33fd9d4b0ad5d16dbadadc8c190e06d2acce695070b0bb8693c142a9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"816b5c3b3af0ddbfec078c964dd189419fc82617c468dda98398e5259801f7ea\"" Nov 24 00:24:31.119930 containerd[1577]: time="2025-11-24T00:24:31.119895164Z" level=info msg="StartContainer for \"816b5c3b3af0ddbfec078c964dd189419fc82617c468dda98398e5259801f7ea\"" Nov 24 00:24:31.120729 containerd[1577]: time="2025-11-24T00:24:31.120703481Z" level=info msg="connecting to shim 816b5c3b3af0ddbfec078c964dd189419fc82617c468dda98398e5259801f7ea" address="unix:///run/containerd/s/1bc93194ee1c0b3e179579ab11cc1e378802234670ae8e410d9b2efd42492992" protocol=ttrpc version=3 Nov 24 00:24:31.143048 systemd[1]: Started cri-containerd-816b5c3b3af0ddbfec078c964dd189419fc82617c468dda98398e5259801f7ea.scope - libcontainer container 816b5c3b3af0ddbfec078c964dd189419fc82617c468dda98398e5259801f7ea. 
Nov 24 00:24:31.177812 containerd[1577]: time="2025-11-24T00:24:31.177766297Z" level=info msg="StartContainer for \"816b5c3b3af0ddbfec078c964dd189419fc82617c468dda98398e5259801f7ea\" returns successfully" Nov 24 00:24:31.457105 systemd-networkd[1470]: cali200be2eab7f: Gained IPv6LL Nov 24 00:24:31.819240 containerd[1577]: time="2025-11-24T00:24:31.819198274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76968cf4d5-nq647,Uid:daf1acba-9d5d-4e4f-ade2-53de43ab5a20,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:24:31.937745 systemd-networkd[1470]: cali5f6888e8139: Link UP Nov 24 00:24:31.940107 systemd-networkd[1470]: cali5f6888e8139: Gained carrier Nov 24 00:24:31.958601 containerd[1577]: 2025-11-24 00:24:31.853 [INFO][4608] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76968cf4d5--nq647-eth0 calico-apiserver-76968cf4d5- calico-apiserver daf1acba-9d5d-4e4f-ade2-53de43ab5a20 889 0 2025-11-24 00:24:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76968cf4d5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76968cf4d5-nq647 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5f6888e8139 [] [] }} ContainerID="9714ef2d13b22f7d8579f98b844fa26f68ca13e677555381082a1ccb846b665a" Namespace="calico-apiserver" Pod="calico-apiserver-76968cf4d5-nq647" WorkloadEndpoint="localhost-k8s-calico--apiserver--76968cf4d5--nq647-" Nov 24 00:24:31.958601 containerd[1577]: 2025-11-24 00:24:31.853 [INFO][4608] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9714ef2d13b22f7d8579f98b844fa26f68ca13e677555381082a1ccb846b665a" Namespace="calico-apiserver" Pod="calico-apiserver-76968cf4d5-nq647" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--76968cf4d5--nq647-eth0" Nov 24 00:24:31.958601 containerd[1577]: 2025-11-24 00:24:31.878 [INFO][4621] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9714ef2d13b22f7d8579f98b844fa26f68ca13e677555381082a1ccb846b665a" HandleID="k8s-pod-network.9714ef2d13b22f7d8579f98b844fa26f68ca13e677555381082a1ccb846b665a" Workload="localhost-k8s-calico--apiserver--76968cf4d5--nq647-eth0" Nov 24 00:24:31.958601 containerd[1577]: 2025-11-24 00:24:31.878 [INFO][4621] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9714ef2d13b22f7d8579f98b844fa26f68ca13e677555381082a1ccb846b665a" HandleID="k8s-pod-network.9714ef2d13b22f7d8579f98b844fa26f68ca13e677555381082a1ccb846b665a" Workload="localhost-k8s-calico--apiserver--76968cf4d5--nq647-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7000), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76968cf4d5-nq647", "timestamp":"2025-11-24 00:24:31.878586134 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:24:31.958601 containerd[1577]: 2025-11-24 00:24:31.878 [INFO][4621] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:24:31.958601 containerd[1577]: 2025-11-24 00:24:31.878 [INFO][4621] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:24:31.958601 containerd[1577]: 2025-11-24 00:24:31.878 [INFO][4621] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 24 00:24:31.958601 containerd[1577]: 2025-11-24 00:24:31.884 [INFO][4621] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9714ef2d13b22f7d8579f98b844fa26f68ca13e677555381082a1ccb846b665a" host="localhost" Nov 24 00:24:31.958601 containerd[1577]: 2025-11-24 00:24:31.889 [INFO][4621] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 24 00:24:31.958601 containerd[1577]: 2025-11-24 00:24:31.892 [INFO][4621] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 24 00:24:31.958601 containerd[1577]: 2025-11-24 00:24:31.893 [INFO][4621] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 24 00:24:31.958601 containerd[1577]: 2025-11-24 00:24:31.895 [INFO][4621] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 24 00:24:31.958601 containerd[1577]: 2025-11-24 00:24:31.895 [INFO][4621] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9714ef2d13b22f7d8579f98b844fa26f68ca13e677555381082a1ccb846b665a" host="localhost" Nov 24 00:24:31.958601 containerd[1577]: 2025-11-24 00:24:31.896 [INFO][4621] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9714ef2d13b22f7d8579f98b844fa26f68ca13e677555381082a1ccb846b665a Nov 24 00:24:31.958601 containerd[1577]: 2025-11-24 00:24:31.902 [INFO][4621] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9714ef2d13b22f7d8579f98b844fa26f68ca13e677555381082a1ccb846b665a" host="localhost" Nov 24 00:24:31.958601 containerd[1577]: 2025-11-24 00:24:31.908 [INFO][4621] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.9714ef2d13b22f7d8579f98b844fa26f68ca13e677555381082a1ccb846b665a" host="localhost" Nov 24 00:24:31.958601 containerd[1577]: 2025-11-24 00:24:31.908 [INFO][4621] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.9714ef2d13b22f7d8579f98b844fa26f68ca13e677555381082a1ccb846b665a" host="localhost" Nov 24 00:24:31.958601 containerd[1577]: 2025-11-24 00:24:31.908 [INFO][4621] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 00:24:31.958601 containerd[1577]: 2025-11-24 00:24:31.908 [INFO][4621] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="9714ef2d13b22f7d8579f98b844fa26f68ca13e677555381082a1ccb846b665a" HandleID="k8s-pod-network.9714ef2d13b22f7d8579f98b844fa26f68ca13e677555381082a1ccb846b665a" Workload="localhost-k8s-calico--apiserver--76968cf4d5--nq647-eth0" Nov 24 00:24:31.959704 containerd[1577]: 2025-11-24 00:24:31.919 [INFO][4608] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9714ef2d13b22f7d8579f98b844fa26f68ca13e677555381082a1ccb846b665a" Namespace="calico-apiserver" Pod="calico-apiserver-76968cf4d5-nq647" WorkloadEndpoint="localhost-k8s-calico--apiserver--76968cf4d5--nq647-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76968cf4d5--nq647-eth0", GenerateName:"calico-apiserver-76968cf4d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"daf1acba-9d5d-4e4f-ade2-53de43ab5a20", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 24, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76968cf4d5", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76968cf4d5-nq647", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f6888e8139", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:24:31.959704 containerd[1577]: 2025-11-24 00:24:31.919 [INFO][4608] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="9714ef2d13b22f7d8579f98b844fa26f68ca13e677555381082a1ccb846b665a" Namespace="calico-apiserver" Pod="calico-apiserver-76968cf4d5-nq647" WorkloadEndpoint="localhost-k8s-calico--apiserver--76968cf4d5--nq647-eth0" Nov 24 00:24:31.959704 containerd[1577]: 2025-11-24 00:24:31.919 [INFO][4608] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f6888e8139 ContainerID="9714ef2d13b22f7d8579f98b844fa26f68ca13e677555381082a1ccb846b665a" Namespace="calico-apiserver" Pod="calico-apiserver-76968cf4d5-nq647" WorkloadEndpoint="localhost-k8s-calico--apiserver--76968cf4d5--nq647-eth0" Nov 24 00:24:31.959704 containerd[1577]: 2025-11-24 00:24:31.930 [INFO][4608] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9714ef2d13b22f7d8579f98b844fa26f68ca13e677555381082a1ccb846b665a" Namespace="calico-apiserver" Pod="calico-apiserver-76968cf4d5-nq647" WorkloadEndpoint="localhost-k8s-calico--apiserver--76968cf4d5--nq647-eth0" Nov 24 00:24:31.959704 containerd[1577]: 2025-11-24 00:24:31.930 [INFO][4608] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="9714ef2d13b22f7d8579f98b844fa26f68ca13e677555381082a1ccb846b665a" Namespace="calico-apiserver" Pod="calico-apiserver-76968cf4d5-nq647" WorkloadEndpoint="localhost-k8s-calico--apiserver--76968cf4d5--nq647-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76968cf4d5--nq647-eth0", GenerateName:"calico-apiserver-76968cf4d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"daf1acba-9d5d-4e4f-ade2-53de43ab5a20", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 24, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76968cf4d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9714ef2d13b22f7d8579f98b844fa26f68ca13e677555381082a1ccb846b665a", Pod:"calico-apiserver-76968cf4d5-nq647", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f6888e8139", MAC:"26:e2:38:20:42:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:24:31.959704 containerd[1577]: 2025-11-24 00:24:31.948 [INFO][4608] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9714ef2d13b22f7d8579f98b844fa26f68ca13e677555381082a1ccb846b665a" Namespace="calico-apiserver" Pod="calico-apiserver-76968cf4d5-nq647" WorkloadEndpoint="localhost-k8s-calico--apiserver--76968cf4d5--nq647-eth0" Nov 24 00:24:31.981119 containerd[1577]: time="2025-11-24T00:24:31.981058523Z" level=info msg="connecting to shim 9714ef2d13b22f7d8579f98b844fa26f68ca13e677555381082a1ccb846b665a" address="unix:///run/containerd/s/a0d9a21a34292af176e3bb760622b6fb7afc5cd3910ab6b16792f9a184ad48f2" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:24:32.013058 systemd[1]: Started cri-containerd-9714ef2d13b22f7d8579f98b844fa26f68ca13e677555381082a1ccb846b665a.scope - libcontainer container 9714ef2d13b22f7d8579f98b844fa26f68ca13e677555381082a1ccb846b665a. Nov 24 00:24:32.027079 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 00:24:32.029673 kubelet[2737]: E1124 00:24:32.029623 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:32.030569 kubelet[2737]: E1124 00:24:32.030532 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ds9ng" podUID="0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c" Nov 24 00:24:32.064141 containerd[1577]: time="2025-11-24T00:24:32.064066953Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-76968cf4d5-nq647,Uid:daf1acba-9d5d-4e4f-ade2-53de43ab5a20,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9714ef2d13b22f7d8579f98b844fa26f68ca13e677555381082a1ccb846b665a\"" Nov 24 00:24:32.065518 containerd[1577]: time="2025-11-24T00:24:32.065487909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:24:32.160404 systemd-networkd[1470]: calic32367820ee: Gained IPv6LL Nov 24 00:24:32.389643 containerd[1577]: time="2025-11-24T00:24:32.389583528Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:24:32.390725 containerd[1577]: time="2025-11-24T00:24:32.390673014Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:24:32.390725 containerd[1577]: time="2025-11-24T00:24:32.390740871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:24:32.390966 kubelet[2737]: E1124 00:24:32.390900 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:24:32.391014 kubelet[2737]: E1124 00:24:32.390978 2737 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:24:32.391172 kubelet[2737]: E1124 00:24:32.391133 2737 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9w2k7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76968cf4d5-nq647_calico-apiserver(daf1acba-9d5d-4e4f-ade2-53de43ab5a20): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:24:32.392604 kubelet[2737]: E1124 00:24:32.392555 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76968cf4d5-nq647" podUID="daf1acba-9d5d-4e4f-ade2-53de43ab5a20" Nov 24 00:24:32.818381 kubelet[2737]: E1124 00:24:32.818334 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:32.818812 containerd[1577]: time="2025-11-24T00:24:32.818774780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mhn4k,Uid:b085997e-b05a-40ec-affb-73b810925afb,Namespace:kube-system,Attempt:0,}" Nov 24 00:24:33.032260 kubelet[2737]: E1124 00:24:33.032226 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:33.032866 kubelet[2737]: E1124 00:24:33.032791 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76968cf4d5-nq647" podUID="daf1acba-9d5d-4e4f-ade2-53de43ab5a20" Nov 24 00:24:33.056092 systemd-networkd[1470]: cali5f6888e8139: Gained IPv6LL Nov 24 00:24:33.577953 kubelet[2737]: I1124 00:24:33.577070 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8kx96" podStartSLOduration=40.577054998 podStartE2EDuration="40.577054998s" podCreationTimestamp="2025-11-24 00:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:24:32.052155406 +0000 UTC m=+44.329195165" watchObservedRunningTime="2025-11-24 00:24:33.577054998 +0000 UTC m=+45.854094757" Nov 24 00:24:33.667381 systemd-networkd[1470]: cali6babfe76a9b: Link UP Nov 24 00:24:33.667630 systemd-networkd[1470]: cali6babfe76a9b: Gained carrier Nov 24 00:24:33.683661 containerd[1577]: 2025-11-24 00:24:33.587 
[INFO][4689] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--mhn4k-eth0 coredns-674b8bbfcf- kube-system b085997e-b05a-40ec-affb-73b810925afb 890 0 2025-11-24 00:23:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-mhn4k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6babfe76a9b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839" Namespace="kube-system" Pod="coredns-674b8bbfcf-mhn4k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mhn4k-" Nov 24 00:24:33.683661 containerd[1577]: 2025-11-24 00:24:33.588 [INFO][4689] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839" Namespace="kube-system" Pod="coredns-674b8bbfcf-mhn4k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mhn4k-eth0" Nov 24 00:24:33.683661 containerd[1577]: 2025-11-24 00:24:33.623 [INFO][4705] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839" HandleID="k8s-pod-network.bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839" Workload="localhost-k8s-coredns--674b8bbfcf--mhn4k-eth0" Nov 24 00:24:33.683661 containerd[1577]: 2025-11-24 00:24:33.624 [INFO][4705] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839" HandleID="k8s-pod-network.bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839" Workload="localhost-k8s-coredns--674b8bbfcf--mhn4k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123700), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-mhn4k", "timestamp":"2025-11-24 00:24:33.623799688 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:24:33.683661 containerd[1577]: 2025-11-24 00:24:33.624 [INFO][4705] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:24:33.683661 containerd[1577]: 2025-11-24 00:24:33.624 [INFO][4705] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 00:24:33.683661 containerd[1577]: 2025-11-24 00:24:33.624 [INFO][4705] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 24 00:24:33.683661 containerd[1577]: 2025-11-24 00:24:33.633 [INFO][4705] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839" host="localhost" Nov 24 00:24:33.683661 containerd[1577]: 2025-11-24 00:24:33.640 [INFO][4705] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 24 00:24:33.683661 containerd[1577]: 2025-11-24 00:24:33.644 [INFO][4705] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 24 00:24:33.683661 containerd[1577]: 2025-11-24 00:24:33.646 [INFO][4705] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 24 00:24:33.683661 containerd[1577]: 2025-11-24 00:24:33.649 [INFO][4705] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 24 00:24:33.683661 containerd[1577]: 2025-11-24 00:24:33.649 [INFO][4705] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839" host="localhost" Nov 24 00:24:33.683661 
containerd[1577]: 2025-11-24 00:24:33.650 [INFO][4705] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839 Nov 24 00:24:33.683661 containerd[1577]: 2025-11-24 00:24:33.654 [INFO][4705] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839" host="localhost" Nov 24 00:24:33.683661 containerd[1577]: 2025-11-24 00:24:33.660 [INFO][4705] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839" host="localhost" Nov 24 00:24:33.683661 containerd[1577]: 2025-11-24 00:24:33.660 [INFO][4705] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839" host="localhost" Nov 24 00:24:33.683661 containerd[1577]: 2025-11-24 00:24:33.660 [INFO][4705] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:24:33.683661 containerd[1577]: 2025-11-24 00:24:33.660 [INFO][4705] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839" HandleID="k8s-pod-network.bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839" Workload="localhost-k8s-coredns--674b8bbfcf--mhn4k-eth0" Nov 24 00:24:33.686114 containerd[1577]: 2025-11-24 00:24:33.664 [INFO][4689] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839" Namespace="kube-system" Pod="coredns-674b8bbfcf-mhn4k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mhn4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mhn4k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b085997e-b05a-40ec-affb-73b810925afb", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-mhn4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6babfe76a9b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:24:33.686114 containerd[1577]: 2025-11-24 00:24:33.665 [INFO][4689] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839" Namespace="kube-system" Pod="coredns-674b8bbfcf-mhn4k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mhn4k-eth0" Nov 24 00:24:33.686114 containerd[1577]: 2025-11-24 00:24:33.665 [INFO][4689] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6babfe76a9b ContainerID="bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839" Namespace="kube-system" Pod="coredns-674b8bbfcf-mhn4k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mhn4k-eth0" Nov 24 00:24:33.686114 containerd[1577]: 2025-11-24 00:24:33.667 [INFO][4689] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839" Namespace="kube-system" Pod="coredns-674b8bbfcf-mhn4k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mhn4k-eth0" Nov 24 00:24:33.686114 containerd[1577]: 2025-11-24 00:24:33.668 [INFO][4689] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839" Namespace="kube-system" Pod="coredns-674b8bbfcf-mhn4k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mhn4k-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mhn4k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b085997e-b05a-40ec-affb-73b810925afb", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839", Pod:"coredns-674b8bbfcf-mhn4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6babfe76a9b", MAC:"12:51:b3:6b:6d:8f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:24:33.686114 containerd[1577]: 2025-11-24 00:24:33.677 [INFO][4689] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839" Namespace="kube-system" Pod="coredns-674b8bbfcf-mhn4k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mhn4k-eth0" Nov 24 00:24:33.723716 containerd[1577]: time="2025-11-24T00:24:33.723630249Z" level=info msg="connecting to shim bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839" address="unix:///run/containerd/s/5dfffd809f6bad917e915d59fb7585edfd1d418bf6ce36238fa1aec9fb598931" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:24:33.753109 systemd[1]: Started cri-containerd-bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839.scope - libcontainer container bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839. Nov 24 00:24:33.771401 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 00:24:33.803795 containerd[1577]: time="2025-11-24T00:24:33.803751730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mhn4k,Uid:b085997e-b05a-40ec-affb-73b810925afb,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839\"" Nov 24 00:24:33.804557 kubelet[2737]: E1124 00:24:33.804521 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:33.809524 containerd[1577]: time="2025-11-24T00:24:33.809495650Z" level=info msg="CreateContainer within sandbox \"bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 00:24:33.818803 containerd[1577]: time="2025-11-24T00:24:33.818482336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hmjpm,Uid:50e0737c-0da4-4ca3-bede-949a700e86ed,Namespace:calico-system,Attempt:0,}" Nov 24 00:24:33.821222 containerd[1577]: 
time="2025-11-24T00:24:33.821195789Z" level=info msg="Container 14c0af60fa8914fed3936cfab9bdecc4c94556d8ebd9cd6ee2142da2a02087a4: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:24:33.832482 containerd[1577]: time="2025-11-24T00:24:33.832354341Z" level=info msg="CreateContainer within sandbox \"bfa8f16c59ebc7bd05896872544b3ad3eb5ecd4844bdaeed0b9cc93986dd7839\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"14c0af60fa8914fed3936cfab9bdecc4c94556d8ebd9cd6ee2142da2a02087a4\"" Nov 24 00:24:33.834706 containerd[1577]: time="2025-11-24T00:24:33.834396123Z" level=info msg="StartContainer for \"14c0af60fa8914fed3936cfab9bdecc4c94556d8ebd9cd6ee2142da2a02087a4\"" Nov 24 00:24:33.835398 containerd[1577]: time="2025-11-24T00:24:33.835336477Z" level=info msg="connecting to shim 14c0af60fa8914fed3936cfab9bdecc4c94556d8ebd9cd6ee2142da2a02087a4" address="unix:///run/containerd/s/5dfffd809f6bad917e915d59fb7585edfd1d418bf6ce36238fa1aec9fb598931" protocol=ttrpc version=3 Nov 24 00:24:33.863193 systemd[1]: Started cri-containerd-14c0af60fa8914fed3936cfab9bdecc4c94556d8ebd9cd6ee2142da2a02087a4.scope - libcontainer container 14c0af60fa8914fed3936cfab9bdecc4c94556d8ebd9cd6ee2142da2a02087a4. 
Nov 24 00:24:33.911944 containerd[1577]: time="2025-11-24T00:24:33.911834759Z" level=info msg="StartContainer for \"14c0af60fa8914fed3936cfab9bdecc4c94556d8ebd9cd6ee2142da2a02087a4\" returns successfully" Nov 24 00:24:33.932574 systemd-networkd[1470]: cali364a6d1ea96: Link UP Nov 24 00:24:33.933561 systemd-networkd[1470]: cali364a6d1ea96: Gained carrier Nov 24 00:24:33.949513 containerd[1577]: 2025-11-24 00:24:33.854 [INFO][4776] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--hmjpm-eth0 csi-node-driver- calico-system 50e0737c-0da4-4ca3-bede-949a700e86ed 773 0 2025-11-24 00:24:07 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-hmjpm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali364a6d1ea96 [] [] }} ContainerID="6860a58f57def77f69748d6e0c2fbb0448105547f461d334242a801e528a7102" Namespace="calico-system" Pod="csi-node-driver-hmjpm" WorkloadEndpoint="localhost-k8s-csi--node--driver--hmjpm-" Nov 24 00:24:33.949513 containerd[1577]: 2025-11-24 00:24:33.854 [INFO][4776] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6860a58f57def77f69748d6e0c2fbb0448105547f461d334242a801e528a7102" Namespace="calico-system" Pod="csi-node-driver-hmjpm" WorkloadEndpoint="localhost-k8s-csi--node--driver--hmjpm-eth0" Nov 24 00:24:33.949513 containerd[1577]: 2025-11-24 00:24:33.889 [INFO][4804] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6860a58f57def77f69748d6e0c2fbb0448105547f461d334242a801e528a7102" HandleID="k8s-pod-network.6860a58f57def77f69748d6e0c2fbb0448105547f461d334242a801e528a7102" 
Workload="localhost-k8s-csi--node--driver--hmjpm-eth0" Nov 24 00:24:33.949513 containerd[1577]: 2025-11-24 00:24:33.890 [INFO][4804] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6860a58f57def77f69748d6e0c2fbb0448105547f461d334242a801e528a7102" HandleID="k8s-pod-network.6860a58f57def77f69748d6e0c2fbb0448105547f461d334242a801e528a7102" Workload="localhost-k8s-csi--node--driver--hmjpm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6420), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-hmjpm", "timestamp":"2025-11-24 00:24:33.88933459 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:24:33.949513 containerd[1577]: 2025-11-24 00:24:33.890 [INFO][4804] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:24:33.949513 containerd[1577]: 2025-11-24 00:24:33.890 [INFO][4804] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:24:33.949513 containerd[1577]: 2025-11-24 00:24:33.890 [INFO][4804] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 24 00:24:33.949513 containerd[1577]: 2025-11-24 00:24:33.898 [INFO][4804] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6860a58f57def77f69748d6e0c2fbb0448105547f461d334242a801e528a7102" host="localhost" Nov 24 00:24:33.949513 containerd[1577]: 2025-11-24 00:24:33.902 [INFO][4804] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 24 00:24:33.949513 containerd[1577]: 2025-11-24 00:24:33.906 [INFO][4804] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 24 00:24:33.949513 containerd[1577]: 2025-11-24 00:24:33.908 [INFO][4804] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 24 00:24:33.949513 containerd[1577]: 2025-11-24 00:24:33.910 [INFO][4804] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 24 00:24:33.949513 containerd[1577]: 2025-11-24 00:24:33.910 [INFO][4804] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6860a58f57def77f69748d6e0c2fbb0448105547f461d334242a801e528a7102" host="localhost" Nov 24 00:24:33.949513 containerd[1577]: 2025-11-24 00:24:33.915 [INFO][4804] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6860a58f57def77f69748d6e0c2fbb0448105547f461d334242a801e528a7102 Nov 24 00:24:33.949513 containerd[1577]: 2025-11-24 00:24:33.920 [INFO][4804] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6860a58f57def77f69748d6e0c2fbb0448105547f461d334242a801e528a7102" host="localhost" Nov 24 00:24:33.949513 containerd[1577]: 2025-11-24 00:24:33.926 [INFO][4804] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.6860a58f57def77f69748d6e0c2fbb0448105547f461d334242a801e528a7102" host="localhost" Nov 24 00:24:33.949513 containerd[1577]: 2025-11-24 00:24:33.926 [INFO][4804] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.6860a58f57def77f69748d6e0c2fbb0448105547f461d334242a801e528a7102" host="localhost" Nov 24 00:24:33.949513 containerd[1577]: 2025-11-24 00:24:33.926 [INFO][4804] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 00:24:33.949513 containerd[1577]: 2025-11-24 00:24:33.926 [INFO][4804] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="6860a58f57def77f69748d6e0c2fbb0448105547f461d334242a801e528a7102" HandleID="k8s-pod-network.6860a58f57def77f69748d6e0c2fbb0448105547f461d334242a801e528a7102" Workload="localhost-k8s-csi--node--driver--hmjpm-eth0" Nov 24 00:24:33.951242 containerd[1577]: 2025-11-24 00:24:33.929 [INFO][4776] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6860a58f57def77f69748d6e0c2fbb0448105547f461d334242a801e528a7102" Namespace="calico-system" Pod="csi-node-driver-hmjpm" WorkloadEndpoint="localhost-k8s-csi--node--driver--hmjpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hmjpm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"50e0737c-0da4-4ca3-bede-949a700e86ed", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 24, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-hmjpm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali364a6d1ea96", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:24:33.951242 containerd[1577]: 2025-11-24 00:24:33.930 [INFO][4776] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="6860a58f57def77f69748d6e0c2fbb0448105547f461d334242a801e528a7102" Namespace="calico-system" Pod="csi-node-driver-hmjpm" WorkloadEndpoint="localhost-k8s-csi--node--driver--hmjpm-eth0" Nov 24 00:24:33.951242 containerd[1577]: 2025-11-24 00:24:33.930 [INFO][4776] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali364a6d1ea96 ContainerID="6860a58f57def77f69748d6e0c2fbb0448105547f461d334242a801e528a7102" Namespace="calico-system" Pod="csi-node-driver-hmjpm" WorkloadEndpoint="localhost-k8s-csi--node--driver--hmjpm-eth0" Nov 24 00:24:33.951242 containerd[1577]: 2025-11-24 00:24:33.934 [INFO][4776] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6860a58f57def77f69748d6e0c2fbb0448105547f461d334242a801e528a7102" Namespace="calico-system" Pod="csi-node-driver-hmjpm" WorkloadEndpoint="localhost-k8s-csi--node--driver--hmjpm-eth0" Nov 24 00:24:33.951242 containerd[1577]: 2025-11-24 00:24:33.934 [INFO][4776] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6860a58f57def77f69748d6e0c2fbb0448105547f461d334242a801e528a7102" 
Namespace="calico-system" Pod="csi-node-driver-hmjpm" WorkloadEndpoint="localhost-k8s-csi--node--driver--hmjpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hmjpm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"50e0737c-0da4-4ca3-bede-949a700e86ed", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 24, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6860a58f57def77f69748d6e0c2fbb0448105547f461d334242a801e528a7102", Pod:"csi-node-driver-hmjpm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali364a6d1ea96", MAC:"42:8b:2e:f2:d2:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:24:33.951242 containerd[1577]: 2025-11-24 00:24:33.942 [INFO][4776] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6860a58f57def77f69748d6e0c2fbb0448105547f461d334242a801e528a7102" Namespace="calico-system" Pod="csi-node-driver-hmjpm" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--hmjpm-eth0" Nov 24 00:24:33.972585 containerd[1577]: time="2025-11-24T00:24:33.972543326Z" level=info msg="connecting to shim 6860a58f57def77f69748d6e0c2fbb0448105547f461d334242a801e528a7102" address="unix:///run/containerd/s/e55214081f3e74281303f1ca8d885485a551f811bdce17f9adf2abfd05750efc" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:24:33.994120 systemd[1]: Started cri-containerd-6860a58f57def77f69748d6e0c2fbb0448105547f461d334242a801e528a7102.scope - libcontainer container 6860a58f57def77f69748d6e0c2fbb0448105547f461d334242a801e528a7102. Nov 24 00:24:34.014292 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 00:24:34.034413 containerd[1577]: time="2025-11-24T00:24:34.034353467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hmjpm,Uid:50e0737c-0da4-4ca3-bede-949a700e86ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"6860a58f57def77f69748d6e0c2fbb0448105547f461d334242a801e528a7102\"" Nov 24 00:24:34.040394 containerd[1577]: time="2025-11-24T00:24:34.040288425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:24:34.042198 kubelet[2737]: E1124 00:24:34.042155 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:34.042656 kubelet[2737]: E1124 00:24:34.042280 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:34.043736 kubelet[2737]: E1124 00:24:34.043120 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76968cf4d5-nq647" podUID="daf1acba-9d5d-4e4f-ade2-53de43ab5a20" Nov 24 00:24:34.065955 kubelet[2737]: I1124 00:24:34.065034 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mhn4k" podStartSLOduration=41.064980244 podStartE2EDuration="41.064980244s" podCreationTimestamp="2025-11-24 00:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:24:34.062880012 +0000 UTC m=+46.339919771" watchObservedRunningTime="2025-11-24 00:24:34.064980244 +0000 UTC m=+46.342020003" Nov 24 00:24:34.285601 kubelet[2737]: I1124 00:24:34.285392 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 00:24:34.287802 kubelet[2737]: E1124 00:24:34.287464 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:34.356340 containerd[1577]: time="2025-11-24T00:24:34.356284924Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:24:34.357373 containerd[1577]: time="2025-11-24T00:24:34.357339464Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:24:34.357429 containerd[1577]: time="2025-11-24T00:24:34.357394958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 
00:24:34.357590 kubelet[2737]: E1124 00:24:34.357545 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:24:34.357642 kubelet[2737]: E1124 00:24:34.357595 2737 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:24:34.357780 kubelet[2737]: E1124 00:24:34.357738 2737 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljg47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recurs
iveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hmjpm_calico-system(50e0737c-0da4-4ca3-bede-949a700e86ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:24:34.359897 containerd[1577]: time="2025-11-24T00:24:34.359863560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:24:34.470047 systemd[1]: Started sshd@8-10.0.0.104:22-10.0.0.1:41230.service - OpenSSH per-connection server daemon (10.0.0.1:41230). Nov 24 00:24:34.543600 sshd[4940]: Accepted publickey for core from 10.0.0.1 port 41230 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:24:34.545183 sshd-session[4940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:24:34.549564 systemd-logind[1539]: New session 9 of user core. Nov 24 00:24:34.562049 systemd[1]: Started session-9.scope - Session 9 of User core. 
Nov 24 00:24:34.677386 containerd[1577]: time="2025-11-24T00:24:34.677331963Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:24:34.678533 containerd[1577]: time="2025-11-24T00:24:34.678465500Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:24:34.678636 containerd[1577]: time="2025-11-24T00:24:34.678586337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:24:34.679004 kubelet[2737]: E1124 00:24:34.678892 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:24:34.679056 kubelet[2737]: E1124 00:24:34.679002 2737 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:24:34.679513 kubelet[2737]: E1124 00:24:34.679439 2737 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljg47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hmjpm_calico-system(50e0737c-0da4-4ca3-bede-949a700e86ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:24:34.681170 kubelet[2737]: E1124 00:24:34.681089 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hmjpm" podUID="50e0737c-0da4-4ca3-bede-949a700e86ed" Nov 24 00:24:34.692958 sshd[4944]: Connection closed by 10.0.0.1 port 41230 Nov 24 00:24:34.692496 sshd-session[4940]: pam_unix(sshd:session): session closed for user core Nov 24 00:24:34.697025 systemd[1]: sshd@8-10.0.0.104:22-10.0.0.1:41230.service: Deactivated successfully. Nov 24 00:24:34.700742 systemd[1]: session-9.scope: Deactivated successfully. Nov 24 00:24:34.702036 systemd-logind[1539]: Session 9 logged out. Waiting for processes to exit. Nov 24 00:24:34.703646 systemd-logind[1539]: Removed session 9. 
Nov 24 00:24:35.044761 kubelet[2737]: E1124 00:24:35.044718 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:35.045937 kubelet[2737]: E1124 00:24:35.045865 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hmjpm" podUID="50e0737c-0da4-4ca3-bede-949a700e86ed" Nov 24 00:24:35.046055 kubelet[2737]: E1124 00:24:35.046030 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:35.616204 systemd-networkd[1470]: cali364a6d1ea96: Gained IPv6LL Nov 24 00:24:35.616779 systemd-networkd[1470]: cali6babfe76a9b: Gained IPv6LL Nov 24 00:24:36.046315 kubelet[2737]: E1124 00:24:36.046281 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:24:36.048434 kubelet[2737]: E1124 00:24:36.048027 2737 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hmjpm" podUID="50e0737c-0da4-4ca3-bede-949a700e86ed" Nov 24 00:24:39.704168 systemd[1]: Started sshd@9-10.0.0.104:22-10.0.0.1:41244.service - OpenSSH per-connection server daemon (10.0.0.1:41244). Nov 24 00:24:39.757949 sshd[4977]: Accepted publickey for core from 10.0.0.1 port 41244 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:24:39.759401 sshd-session[4977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:24:39.763998 systemd-logind[1539]: New session 10 of user core. Nov 24 00:24:39.772122 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 24 00:24:39.889541 sshd[4980]: Connection closed by 10.0.0.1 port 41244 Nov 24 00:24:39.890131 sshd-session[4977]: pam_unix(sshd:session): session closed for user core Nov 24 00:24:39.901410 systemd[1]: sshd@9-10.0.0.104:22-10.0.0.1:41244.service: Deactivated successfully. Nov 24 00:24:39.903655 systemd[1]: session-10.scope: Deactivated successfully. 
Nov 24 00:24:39.904460 systemd-logind[1539]: Session 10 logged out. Waiting for processes to exit. Nov 24 00:24:39.908131 systemd[1]: Started sshd@10-10.0.0.104:22-10.0.0.1:41260.service - OpenSSH per-connection server daemon (10.0.0.1:41260). Nov 24 00:24:39.908808 systemd-logind[1539]: Removed session 10. Nov 24 00:24:39.970129 sshd[4994]: Accepted publickey for core from 10.0.0.1 port 41260 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:24:39.971839 sshd-session[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:24:39.976770 systemd-logind[1539]: New session 11 of user core. Nov 24 00:24:39.984122 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 24 00:24:40.146438 sshd[4997]: Connection closed by 10.0.0.1 port 41260 Nov 24 00:24:40.147852 sshd-session[4994]: pam_unix(sshd:session): session closed for user core Nov 24 00:24:40.158044 systemd[1]: sshd@10-10.0.0.104:22-10.0.0.1:41260.service: Deactivated successfully. Nov 24 00:24:40.161106 systemd[1]: session-11.scope: Deactivated successfully. Nov 24 00:24:40.163263 systemd-logind[1539]: Session 11 logged out. Waiting for processes to exit. Nov 24 00:24:40.167130 systemd[1]: Started sshd@11-10.0.0.104:22-10.0.0.1:41262.service - OpenSSH per-connection server daemon (10.0.0.1:41262). Nov 24 00:24:40.167936 systemd-logind[1539]: Removed session 11. Nov 24 00:24:40.218786 sshd[5008]: Accepted publickey for core from 10.0.0.1 port 41262 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:24:40.220623 sshd-session[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:24:40.225469 systemd-logind[1539]: New session 12 of user core. Nov 24 00:24:40.236067 systemd[1]: Started session-12.scope - Session 12 of User core. 
Nov 24 00:24:40.345280 sshd[5011]: Connection closed by 10.0.0.1 port 41262 Nov 24 00:24:40.345635 sshd-session[5008]: pam_unix(sshd:session): session closed for user core Nov 24 00:24:40.350602 systemd[1]: sshd@11-10.0.0.104:22-10.0.0.1:41262.service: Deactivated successfully. Nov 24 00:24:40.353041 systemd[1]: session-12.scope: Deactivated successfully. Nov 24 00:24:40.353799 systemd-logind[1539]: Session 12 logged out. Waiting for processes to exit. Nov 24 00:24:40.355483 systemd-logind[1539]: Removed session 12. Nov 24 00:24:42.819841 containerd[1577]: time="2025-11-24T00:24:42.819557498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:24:43.137942 containerd[1577]: time="2025-11-24T00:24:43.137781830Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:24:43.165090 containerd[1577]: time="2025-11-24T00:24:43.165026852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:24:43.165215 containerd[1577]: time="2025-11-24T00:24:43.165096116Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:24:43.165401 kubelet[2737]: E1124 00:24:43.165349 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:24:43.165854 kubelet[2737]: E1124 00:24:43.165410 2737 kuberuntime_image.go:42] "Failed to 
pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:24:43.165854 kubelet[2737]: E1124 00:24:43.165705 2737 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gj22n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6b4597d695-rxcnh_calico-system(7060cdca-9b38-45cc-ad88-a15dcab99e92): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:24:43.166158 containerd[1577]: time="2025-11-24T00:24:43.166115977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:24:43.167423 kubelet[2737]: E1124 00:24:43.167304 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b4597d695-rxcnh" podUID="7060cdca-9b38-45cc-ad88-a15dcab99e92" Nov 24 00:24:43.590189 containerd[1577]: time="2025-11-24T00:24:43.590126592Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:24:43.618143 containerd[1577]: time="2025-11-24T00:24:43.618014086Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:24:43.618143 containerd[1577]: time="2025-11-24T00:24:43.618071218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:24:43.618398 kubelet[2737]: E1124 00:24:43.618331 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:24:43.618444 kubelet[2737]: E1124 00:24:43.618398 2737 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:24:43.618738 kubelet[2737]: E1124 00:24:43.618674 2737 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-llxvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76968cf4d5-hffj2_calico-apiserver(326ce385-d248-47a6-abc0-0a06e5c39a9a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:24:43.618841 containerd[1577]: time="2025-11-24T00:24:43.618723528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:24:43.620049 kubelet[2737]: E1124 00:24:43.619996 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76968cf4d5-hffj2" podUID="326ce385-d248-47a6-abc0-0a06e5c39a9a" Nov 24 00:24:44.010236 containerd[1577]: 
time="2025-11-24T00:24:44.010074880Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:24:44.077359 containerd[1577]: time="2025-11-24T00:24:44.077283053Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:24:44.077359 containerd[1577]: time="2025-11-24T00:24:44.077326497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:24:44.077595 kubelet[2737]: E1124 00:24:44.077450 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:24:44.077595 kubelet[2737]: E1124 00:24:44.077490 2737 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:24:44.077690 kubelet[2737]: E1124 00:24:44.077608 2737 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:efeb101ec7c04178bfd3e17e39617b7e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4sxm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cff675d48-d76lw_calico-system(5e0a34e6-44a3-40cc-a842-058cfa585cfe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:24:44.079682 containerd[1577]: time="2025-11-24T00:24:44.079647591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 
00:24:44.448193 containerd[1577]: time="2025-11-24T00:24:44.448121157Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:24:44.449347 containerd[1577]: time="2025-11-24T00:24:44.449314233Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:24:44.449424 containerd[1577]: time="2025-11-24T00:24:44.449356664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:24:44.449549 kubelet[2737]: E1124 00:24:44.449501 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:24:44.449894 kubelet[2737]: E1124 00:24:44.449553 2737 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:24:44.449894 kubelet[2737]: E1124 00:24:44.449684 2737 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sxm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cff675d48-d76lw_calico-system(5e0a34e6-44a3-40cc-a842-058cfa585cfe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:24:44.450851 kubelet[2737]: E1124 00:24:44.450808 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cff675d48-d76lw" podUID="5e0a34e6-44a3-40cc-a842-058cfa585cfe" Nov 24 00:24:45.365145 systemd[1]: Started sshd@12-10.0.0.104:22-10.0.0.1:46648.service - OpenSSH per-connection server daemon (10.0.0.1:46648). Nov 24 00:24:45.429238 sshd[5031]: Accepted publickey for core from 10.0.0.1 port 46648 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:24:45.430855 sshd-session[5031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:24:45.435728 systemd-logind[1539]: New session 13 of user core. Nov 24 00:24:45.446125 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 24 00:24:45.558084 sshd[5034]: Connection closed by 10.0.0.1 port 46648 Nov 24 00:24:45.558429 sshd-session[5031]: pam_unix(sshd:session): session closed for user core Nov 24 00:24:45.562845 systemd[1]: sshd@12-10.0.0.104:22-10.0.0.1:46648.service: Deactivated successfully. 
Nov 24 00:24:45.565318 systemd[1]: session-13.scope: Deactivated successfully. Nov 24 00:24:45.566251 systemd-logind[1539]: Session 13 logged out. Waiting for processes to exit. Nov 24 00:24:45.567784 systemd-logind[1539]: Removed session 13. Nov 24 00:24:46.819723 containerd[1577]: time="2025-11-24T00:24:46.819466474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:24:47.319402 containerd[1577]: time="2025-11-24T00:24:47.319349637Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:24:47.445093 containerd[1577]: time="2025-11-24T00:24:47.445017268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:24:47.445093 containerd[1577]: time="2025-11-24T00:24:47.445062455Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:24:47.445344 kubelet[2737]: E1124 00:24:47.445286 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:24:47.445745 kubelet[2737]: E1124 00:24:47.445346 2737 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:24:47.445745 kubelet[2737]: E1124 00:24:47.445615 
2737 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfdvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ds9ng_calico-system(0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:24:47.445900 containerd[1577]: time="2025-11-24T00:24:47.445791450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:24:47.447250 kubelet[2737]: E1124 00:24:47.447212 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ds9ng" podUID="0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c" Nov 24 00:24:47.992012 containerd[1577]: time="2025-11-24T00:24:47.991899652Z" level=info msg="fetch failed after status: 404 Not 
Found" host=ghcr.io Nov 24 00:24:47.993270 containerd[1577]: time="2025-11-24T00:24:47.993227500Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:24:47.993315 containerd[1577]: time="2025-11-24T00:24:47.993291865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:24:47.993490 kubelet[2737]: E1124 00:24:47.993419 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:24:47.993490 kubelet[2737]: E1124 00:24:47.993465 2737 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:24:47.993706 kubelet[2737]: E1124 00:24:47.993660 2737 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljg47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hmjpm_calico-system(50e0737c-0da4-4ca3-bede-949a700e86ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:24:47.993933 containerd[1577]: time="2025-11-24T00:24:47.993796668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:24:48.336123 containerd[1577]: time="2025-11-24T00:24:48.336041712Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:24:48.337411 containerd[1577]: time="2025-11-24T00:24:48.337366333Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:24:48.337476 containerd[1577]: time="2025-11-24T00:24:48.337456035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:24:48.337621 kubelet[2737]: E1124 00:24:48.337578 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:24:48.337663 kubelet[2737]: E1124 00:24:48.337634 2737 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:24:48.337911 kubelet[2737]: E1124 00:24:48.337850 2737 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9w2k7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76968cf4d5-nq647_calico-apiserver(daf1acba-9d5d-4e4f-ade2-53de43ab5a20): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:24:48.338322 containerd[1577]: time="2025-11-24T00:24:48.338294050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:24:48.339449 kubelet[2737]: E1124 00:24:48.339401 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76968cf4d5-nq647" podUID="daf1acba-9d5d-4e4f-ade2-53de43ab5a20" Nov 24 00:24:48.650816 containerd[1577]: 
time="2025-11-24T00:24:48.650660085Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:24:48.801020 containerd[1577]: time="2025-11-24T00:24:48.800916028Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:24:48.801196 containerd[1577]: time="2025-11-24T00:24:48.800958600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:24:48.801225 kubelet[2737]: E1124 00:24:48.801193 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:24:48.801552 kubelet[2737]: E1124 00:24:48.801236 2737 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:24:48.801552 kubelet[2737]: E1124 00:24:48.801350 2737 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljg47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hmjpm_calico-system(50e0737c-0da4-4ca3-bede-949a700e86ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:24:48.802545 kubelet[2737]: E1124 00:24:48.802509 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hmjpm" podUID="50e0737c-0da4-4ca3-bede-949a700e86ed" Nov 24 00:24:50.572658 systemd[1]: Started sshd@13-10.0.0.104:22-10.0.0.1:58074.service - OpenSSH per-connection server daemon (10.0.0.1:58074). Nov 24 00:24:50.626797 sshd[5055]: Accepted publickey for core from 10.0.0.1 port 58074 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:24:50.627128 sshd-session[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:24:50.632038 systemd-logind[1539]: New session 14 of user core. Nov 24 00:24:50.638096 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 24 00:24:50.747864 sshd[5058]: Connection closed by 10.0.0.1 port 58074 Nov 24 00:24:50.748196 sshd-session[5055]: pam_unix(sshd:session): session closed for user core Nov 24 00:24:50.752052 systemd[1]: sshd@13-10.0.0.104:22-10.0.0.1:58074.service: Deactivated successfully. Nov 24 00:24:50.753996 systemd[1]: session-14.scope: Deactivated successfully. 
Nov 24 00:24:50.754871 systemd-logind[1539]: Session 14 logged out. Waiting for processes to exit. Nov 24 00:24:50.756054 systemd-logind[1539]: Removed session 14. Nov 24 00:24:55.766810 systemd[1]: Started sshd@14-10.0.0.104:22-10.0.0.1:58084.service - OpenSSH per-connection server daemon (10.0.0.1:58084). Nov 24 00:24:55.820212 kubelet[2737]: E1124 00:24:55.819582 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76968cf4d5-hffj2" podUID="326ce385-d248-47a6-abc0-0a06e5c39a9a" Nov 24 00:24:55.822225 kubelet[2737]: E1124 00:24:55.822172 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cff675d48-d76lw" podUID="5e0a34e6-44a3-40cc-a842-058cfa585cfe" 
Nov 24 00:24:55.838471 sshd[5076]: Accepted publickey for core from 10.0.0.1 port 58084 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:24:55.840997 sshd-session[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:24:55.848066 systemd-logind[1539]: New session 15 of user core. Nov 24 00:24:55.856090 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 24 00:24:55.978003 sshd[5079]: Connection closed by 10.0.0.1 port 58084 Nov 24 00:24:55.978418 sshd-session[5076]: pam_unix(sshd:session): session closed for user core Nov 24 00:24:55.982861 systemd[1]: sshd@14-10.0.0.104:22-10.0.0.1:58084.service: Deactivated successfully. Nov 24 00:24:55.985136 systemd[1]: session-15.scope: Deactivated successfully. Nov 24 00:24:55.986097 systemd-logind[1539]: Session 15 logged out. Waiting for processes to exit. Nov 24 00:24:55.987347 systemd-logind[1539]: Removed session 15. Nov 24 00:24:56.819627 kubelet[2737]: E1124 00:24:56.819563 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b4597d695-rxcnh" podUID="7060cdca-9b38-45cc-ad88-a15dcab99e92" Nov 24 00:24:59.819416 kubelet[2737]: E1124 00:24:59.819365 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76968cf4d5-nq647" podUID="daf1acba-9d5d-4e4f-ade2-53de43ab5a20" Nov 24 00:24:59.820091 kubelet[2737]: E1124 00:24:59.819872 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ds9ng" podUID="0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c" Nov 24 00:25:00.819791 kubelet[2737]: E1124 00:25:00.819739 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hmjpm" podUID="50e0737c-0da4-4ca3-bede-949a700e86ed" Nov 24 00:25:01.000011 
systemd[1]: Started sshd@15-10.0.0.104:22-10.0.0.1:48736.service - OpenSSH per-connection server daemon (10.0.0.1:48736). Nov 24 00:25:01.065379 sshd[5093]: Accepted publickey for core from 10.0.0.1 port 48736 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:25:01.066888 sshd-session[5093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:25:01.071483 systemd-logind[1539]: New session 16 of user core. Nov 24 00:25:01.085098 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 24 00:25:01.214230 sshd[5096]: Connection closed by 10.0.0.1 port 48736 Nov 24 00:25:01.214680 sshd-session[5093]: pam_unix(sshd:session): session closed for user core Nov 24 00:25:01.225817 systemd[1]: sshd@15-10.0.0.104:22-10.0.0.1:48736.service: Deactivated successfully. Nov 24 00:25:01.227893 systemd[1]: session-16.scope: Deactivated successfully. Nov 24 00:25:01.228631 systemd-logind[1539]: Session 16 logged out. Waiting for processes to exit. Nov 24 00:25:01.231545 systemd[1]: Started sshd@16-10.0.0.104:22-10.0.0.1:48746.service - OpenSSH per-connection server daemon (10.0.0.1:48746). Nov 24 00:25:01.232261 systemd-logind[1539]: Removed session 16. Nov 24 00:25:01.283856 sshd[5109]: Accepted publickey for core from 10.0.0.1 port 48746 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:25:01.285564 sshd-session[5109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:25:01.289977 systemd-logind[1539]: New session 17 of user core. Nov 24 00:25:01.303072 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 24 00:25:01.604438 sshd[5112]: Connection closed by 10.0.0.1 port 48746 Nov 24 00:25:01.604874 sshd-session[5109]: pam_unix(sshd:session): session closed for user core Nov 24 00:25:01.616626 systemd[1]: sshd@16-10.0.0.104:22-10.0.0.1:48746.service: Deactivated successfully. 
Nov 24 00:25:01.619378 systemd[1]: session-17.scope: Deactivated successfully. Nov 24 00:25:01.620281 systemd-logind[1539]: Session 17 logged out. Waiting for processes to exit. Nov 24 00:25:01.624072 systemd[1]: Started sshd@17-10.0.0.104:22-10.0.0.1:48752.service - OpenSSH per-connection server daemon (10.0.0.1:48752). Nov 24 00:25:01.624702 systemd-logind[1539]: Removed session 17. Nov 24 00:25:01.685965 sshd[5124]: Accepted publickey for core from 10.0.0.1 port 48752 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:25:01.687363 sshd-session[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:25:01.692328 systemd-logind[1539]: New session 18 of user core. Nov 24 00:25:01.703081 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 24 00:25:01.820215 kubelet[2737]: E1124 00:25:01.820135 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:25:02.221210 sshd[5127]: Connection closed by 10.0.0.1 port 48752 Nov 24 00:25:02.221565 sshd-session[5124]: pam_unix(sshd:session): session closed for user core Nov 24 00:25:02.240945 systemd[1]: sshd@17-10.0.0.104:22-10.0.0.1:48752.service: Deactivated successfully. Nov 24 00:25:02.243138 systemd[1]: session-18.scope: Deactivated successfully. Nov 24 00:25:02.244124 systemd-logind[1539]: Session 18 logged out. Waiting for processes to exit. Nov 24 00:25:02.248007 systemd[1]: Started sshd@18-10.0.0.104:22-10.0.0.1:48764.service - OpenSSH per-connection server daemon (10.0.0.1:48764). Nov 24 00:25:02.248676 systemd-logind[1539]: Removed session 18. 
Nov 24 00:25:02.304541 sshd[5146]: Accepted publickey for core from 10.0.0.1 port 48764 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:25:02.306383 sshd-session[5146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:25:02.310982 systemd-logind[1539]: New session 19 of user core. Nov 24 00:25:02.328069 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 24 00:25:02.551964 sshd[5149]: Connection closed by 10.0.0.1 port 48764 Nov 24 00:25:02.553163 sshd-session[5146]: pam_unix(sshd:session): session closed for user core Nov 24 00:25:02.563233 systemd[1]: sshd@18-10.0.0.104:22-10.0.0.1:48764.service: Deactivated successfully. Nov 24 00:25:02.565692 systemd[1]: session-19.scope: Deactivated successfully. Nov 24 00:25:02.566590 systemd-logind[1539]: Session 19 logged out. Waiting for processes to exit. Nov 24 00:25:02.569628 systemd[1]: Started sshd@19-10.0.0.104:22-10.0.0.1:48780.service - OpenSSH per-connection server daemon (10.0.0.1:48780). Nov 24 00:25:02.570531 systemd-logind[1539]: Removed session 19. Nov 24 00:25:02.633791 sshd[5161]: Accepted publickey for core from 10.0.0.1 port 48780 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:25:02.635162 sshd-session[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:25:02.639711 systemd-logind[1539]: New session 20 of user core. Nov 24 00:25:02.654054 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 24 00:25:02.760307 sshd[5164]: Connection closed by 10.0.0.1 port 48780 Nov 24 00:25:02.760670 sshd-session[5161]: pam_unix(sshd:session): session closed for user core Nov 24 00:25:02.765625 systemd[1]: sshd@19-10.0.0.104:22-10.0.0.1:48780.service: Deactivated successfully. Nov 24 00:25:02.767748 systemd[1]: session-20.scope: Deactivated successfully. Nov 24 00:25:02.768602 systemd-logind[1539]: Session 20 logged out. Waiting for processes to exit. 
Nov 24 00:25:02.769700 systemd-logind[1539]: Removed session 20. Nov 24 00:25:06.821459 containerd[1577]: time="2025-11-24T00:25:06.821403355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:25:07.159080 containerd[1577]: time="2025-11-24T00:25:07.158864015Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:25:07.160029 containerd[1577]: time="2025-11-24T00:25:07.159998478Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:25:07.160147 containerd[1577]: time="2025-11-24T00:25:07.160098889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:25:07.160253 kubelet[2737]: E1124 00:25:07.160210 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:25:07.160611 kubelet[2737]: E1124 00:25:07.160268 2737 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:25:07.160611 kubelet[2737]: E1124 00:25:07.160399 2737 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:efeb101ec7c04178bfd3e17e39617b7e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4sxm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cff675d48-d76lw_calico-system(5e0a34e6-44a3-40cc-a842-058cfa585cfe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:25:07.162443 containerd[1577]: time="2025-11-24T00:25:07.162377592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 
00:25:07.464222 containerd[1577]: time="2025-11-24T00:25:07.464072927Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:25:07.465200 containerd[1577]: time="2025-11-24T00:25:07.465142546Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:25:07.465200 containerd[1577]: time="2025-11-24T00:25:07.465192141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:25:07.465414 kubelet[2737]: E1124 00:25:07.465355 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:25:07.465456 kubelet[2737]: E1124 00:25:07.465410 2737 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:25:07.465577 kubelet[2737]: E1124 00:25:07.465539 2737 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sxm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cff675d48-d76lw_calico-system(5e0a34e6-44a3-40cc-a842-058cfa585cfe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:25:07.466886 kubelet[2737]: E1124 00:25:07.466822 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cff675d48-d76lw" podUID="5e0a34e6-44a3-40cc-a842-058cfa585cfe" Nov 24 00:25:07.774322 systemd[1]: Started sshd@20-10.0.0.104:22-10.0.0.1:48784.service - OpenSSH per-connection server daemon (10.0.0.1:48784). Nov 24 00:25:07.834034 sshd[5204]: Accepted publickey for core from 10.0.0.1 port 48784 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:25:07.835741 sshd-session[5204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:25:07.840802 systemd-logind[1539]: New session 21 of user core. Nov 24 00:25:07.850064 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 24 00:25:07.977138 sshd[5207]: Connection closed by 10.0.0.1 port 48784 Nov 24 00:25:07.977568 sshd-session[5204]: pam_unix(sshd:session): session closed for user core Nov 24 00:25:07.982117 systemd[1]: sshd@20-10.0.0.104:22-10.0.0.1:48784.service: Deactivated successfully. 
Nov 24 00:25:07.984318 systemd[1]: session-21.scope: Deactivated successfully. Nov 24 00:25:07.985160 systemd-logind[1539]: Session 21 logged out. Waiting for processes to exit. Nov 24 00:25:07.986591 systemd-logind[1539]: Removed session 21. Nov 24 00:25:08.839949 containerd[1577]: time="2025-11-24T00:25:08.839883299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:25:09.370948 containerd[1577]: time="2025-11-24T00:25:09.370890649Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:25:09.372159 containerd[1577]: time="2025-11-24T00:25:09.372112086Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:25:09.372240 containerd[1577]: time="2025-11-24T00:25:09.372141171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:25:09.372554 kubelet[2737]: E1124 00:25:09.372281 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:25:09.372554 kubelet[2737]: E1124 00:25:09.372323 2737 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:25:09.373312 kubelet[2737]: E1124 00:25:09.373269 2737 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gj22n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6b4597d695-rxcnh_calico-system(7060cdca-9b38-45cc-ad88-a15dcab99e92): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:25:09.374438 kubelet[2737]: E1124 00:25:09.374394 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b4597d695-rxcnh" podUID="7060cdca-9b38-45cc-ad88-a15dcab99e92" Nov 24 00:25:10.819653 containerd[1577]: time="2025-11-24T00:25:10.819609046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:25:11.176997 containerd[1577]: 
time="2025-11-24T00:25:11.176832968Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:25:11.178186 containerd[1577]: time="2025-11-24T00:25:11.178124966Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:25:11.178186 containerd[1577]: time="2025-11-24T00:25:11.178161766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:25:11.178352 kubelet[2737]: E1124 00:25:11.178282 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:25:11.178352 kubelet[2737]: E1124 00:25:11.178325 2737 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:25:11.178722 kubelet[2737]: E1124 00:25:11.178458 2737 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-llxvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76968cf4d5-hffj2_calico-apiserver(326ce385-d248-47a6-abc0-0a06e5c39a9a): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:25:11.179614 kubelet[2737]: E1124 00:25:11.179583 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76968cf4d5-hffj2" podUID="326ce385-d248-47a6-abc0-0a06e5c39a9a" Nov 24 00:25:11.820825 kubelet[2737]: E1124 00:25:11.820755 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 24 00:25:13.000176 systemd[1]: Started sshd@21-10.0.0.104:22-10.0.0.1:56512.service - OpenSSH per-connection server daemon (10.0.0.1:56512). Nov 24 00:25:13.086452 sshd[5226]: Accepted publickey for core from 10.0.0.1 port 56512 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA Nov 24 00:25:13.088643 sshd-session[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:25:13.094132 systemd-logind[1539]: New session 22 of user core. Nov 24 00:25:13.104062 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 24 00:25:13.240090 sshd[5229]: Connection closed by 10.0.0.1 port 56512 Nov 24 00:25:13.240492 sshd-session[5226]: pam_unix(sshd:session): session closed for user core Nov 24 00:25:13.245704 systemd[1]: sshd@21-10.0.0.104:22-10.0.0.1:56512.service: Deactivated successfully. Nov 24 00:25:13.248849 systemd[1]: session-22.scope: Deactivated successfully. 
Nov 24 00:25:13.250357 systemd-logind[1539]: Session 22 logged out. Waiting for processes to exit. Nov 24 00:25:13.252569 systemd-logind[1539]: Removed session 22. Nov 24 00:25:14.820394 containerd[1577]: time="2025-11-24T00:25:14.820288459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:25:15.207749 containerd[1577]: time="2025-11-24T00:25:15.207577146Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:25:15.339873 containerd[1577]: time="2025-11-24T00:25:15.339793470Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:25:15.339873 containerd[1577]: time="2025-11-24T00:25:15.339846500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:25:15.340149 kubelet[2737]: E1124 00:25:15.340093 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:25:15.340617 kubelet[2737]: E1124 00:25:15.340162 2737 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:25:15.340688 kubelet[2737]: E1124 00:25:15.340600 2737 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9w2k7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76968cf4d5-nq647_calico-apiserver(daf1acba-9d5d-4e4f-ade2-53de43ab5a20): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 24 00:25:15.340861 containerd[1577]: time="2025-11-24T00:25:15.340678020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 24 00:25:15.342146 kubelet[2737]: E1124 00:25:15.342099 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76968cf4d5-nq647" podUID="daf1acba-9d5d-4e4f-ade2-53de43ab5a20"
Nov 24 00:25:15.775965 containerd[1577]: time="2025-11-24T00:25:15.775798077Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 24 00:25:15.776941 containerd[1577]: time="2025-11-24T00:25:15.776878851Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 24 00:25:15.777083 containerd[1577]: time="2025-11-24T00:25:15.776975745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 24 00:25:15.777234 kubelet[2737]: E1124 00:25:15.777164 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 24 00:25:15.777289 kubelet[2737]: E1124 00:25:15.777249 2737 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 24 00:25:15.777492 kubelet[2737]: E1124 00:25:15.777419 2737 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfdvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ds9ng_calico-system(0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 24 00:25:15.778682 kubelet[2737]: E1124 00:25:15.778613 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ds9ng" podUID="0db5d7d1-b0e4-4b2b-9a4d-5b198091ab3c"
Nov 24 00:25:15.820172 containerd[1577]: time="2025-11-24T00:25:15.820115815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 24 00:25:16.182208 containerd[1577]: time="2025-11-24T00:25:16.182149571Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 24 00:25:16.183331 containerd[1577]: time="2025-11-24T00:25:16.183285540Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 24 00:25:16.183411 containerd[1577]: time="2025-11-24T00:25:16.183362726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 24 00:25:16.183576 kubelet[2737]: E1124 00:25:16.183526 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 24 00:25:16.183635 kubelet[2737]: E1124 00:25:16.183579 2737 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 24 00:25:16.183745 kubelet[2737]: E1124 00:25:16.183706 2737 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljg47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hmjpm_calico-system(50e0737c-0da4-4ca3-bede-949a700e86ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 24 00:25:16.185526 containerd[1577]: time="2025-11-24T00:25:16.185501991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 24 00:25:16.528478 containerd[1577]: time="2025-11-24T00:25:16.528326556Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 24 00:25:16.529517 containerd[1577]: time="2025-11-24T00:25:16.529477904Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 24 00:25:16.529628 containerd[1577]: time="2025-11-24T00:25:16.529518772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 24 00:25:16.529787 kubelet[2737]: E1124 00:25:16.529727 2737 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 24 00:25:16.530212 kubelet[2737]: E1124 00:25:16.529788 2737 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 24 00:25:16.530212 kubelet[2737]: E1124 00:25:16.529972 2737 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljg47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hmjpm_calico-system(50e0737c-0da4-4ca3-bede-949a700e86ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 24 00:25:16.531245 kubelet[2737]: E1124 00:25:16.531192 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hmjpm" podUID="50e0737c-0da4-4ca3-bede-949a700e86ed"
Nov 24 00:25:18.254142 systemd[1]: Started sshd@22-10.0.0.104:22-10.0.0.1:56520.service - OpenSSH per-connection server daemon (10.0.0.1:56520).
Nov 24 00:25:18.316243 sshd[5244]: Accepted publickey for core from 10.0.0.1 port 56520 ssh2: RSA SHA256:BLlmoJVEAwNVcsQWPOPwU0WJtaKUh0hefjY8k+s4MOA
Nov 24 00:25:18.317482 sshd-session[5244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:25:18.322084 systemd-logind[1539]: New session 23 of user core.
Nov 24 00:25:18.332078 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 24 00:25:18.451718 sshd[5247]: Connection closed by 10.0.0.1 port 56520
Nov 24 00:25:18.453613 sshd-session[5244]: pam_unix(sshd:session): session closed for user core
Nov 24 00:25:18.458852 systemd[1]: sshd@22-10.0.0.104:22-10.0.0.1:56520.service: Deactivated successfully.
Nov 24 00:25:18.461047 systemd[1]: session-23.scope: Deactivated successfully.
Nov 24 00:25:18.461822 systemd-logind[1539]: Session 23 logged out. Waiting for processes to exit.
Nov 24 00:25:18.463287 systemd-logind[1539]: Removed session 23.
Nov 24 00:25:18.555328 update_engine[1540]: I20251124 00:25:18.555260 1540 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Nov 24 00:25:18.555328 update_engine[1540]: I20251124 00:25:18.555316 1540 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Nov 24 00:25:18.557050 update_engine[1540]: I20251124 00:25:18.556991 1540 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Nov 24 00:25:18.557725 update_engine[1540]: I20251124 00:25:18.557689 1540 omaha_request_params.cc:62] Current group set to beta
Nov 24 00:25:18.557827 update_engine[1540]: I20251124 00:25:18.557807 1540 update_attempter.cc:499] Already updated boot flags. Skipping.
Nov 24 00:25:18.557827 update_engine[1540]: I20251124 00:25:18.557819 1540 update_attempter.cc:643] Scheduling an action processor start.
Nov 24 00:25:18.557890 update_engine[1540]: I20251124 00:25:18.557842 1540 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Nov 24 00:25:18.557933 update_engine[1540]: I20251124 00:25:18.557895 1540 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Nov 24 00:25:18.558010 update_engine[1540]: I20251124 00:25:18.557987 1540 omaha_request_action.cc:271] Posting an Omaha request to disabled
Nov 24 00:25:18.558010 update_engine[1540]: I20251124 00:25:18.558002 1540 omaha_request_action.cc:272] Request:
Nov 24 00:25:18.558010 update_engine[1540]:
Nov 24 00:25:18.558010 update_engine[1540]:
Nov 24 00:25:18.558010 update_engine[1540]:
Nov 24 00:25:18.558010 update_engine[1540]:
Nov 24 00:25:18.558010 update_engine[1540]:
Nov 24 00:25:18.558010 update_engine[1540]:
Nov 24 00:25:18.558010 update_engine[1540]:
Nov 24 00:25:18.558010 update_engine[1540]:
Nov 24 00:25:18.558304 update_engine[1540]: I20251124 00:25:18.558010 1540 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 24 00:25:18.566807 locksmithd[1584]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Nov 24 00:25:18.568756 update_engine[1540]: I20251124 00:25:18.568711 1540 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 24 00:25:18.569435 update_engine[1540]: I20251124 00:25:18.569376 1540 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 24 00:25:18.577095 update_engine[1540]: E20251124 00:25:18.577032 1540 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Nov 24 00:25:18.577095 update_engine[1540]: I20251124 00:25:18.577102 1540 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Nov 24 00:25:19.818756 kubelet[2737]: E1124 00:25:19.818306 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 24 00:25:19.827581 kubelet[2737]: E1124 00:25:19.827502 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cff675d48-d76lw" podUID="5e0a34e6-44a3-40cc-a842-058cfa585cfe"