Dec 12 18:38:20.935212 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 12 18:38:20.935239 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:38:20.935248 kernel: BIOS-provided physical RAM map:
Dec 12 18:38:20.935255 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Dec 12 18:38:20.935261 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Dec 12 18:38:20.935267 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 12 18:38:20.935276 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Dec 12 18:38:20.935282 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Dec 12 18:38:20.935289 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 12 18:38:20.935295 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 12 18:38:20.935302 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 12 18:38:20.935308 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 12 18:38:20.935314 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Dec 12 18:38:20.935320 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 12 18:38:20.935329 kernel: NX (Execute Disable) protection: active
Dec 12 18:38:20.935336 kernel: APIC: Static calls initialized
Dec 12 18:38:20.935342 kernel: SMBIOS 2.8 present.
Dec 12 18:38:20.935349 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Dec 12 18:38:20.935355 kernel: DMI: Memory slots populated: 1/1
Dec 12 18:38:20.935362 kernel: Hypervisor detected: KVM
Dec 12 18:38:20.935370 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 12 18:38:20.935377 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 12 18:38:20.935383 kernel: kvm-clock: using sched offset of 6957681980 cycles
Dec 12 18:38:20.935390 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 12 18:38:20.935397 kernel: tsc: Detected 2000.000 MHz processor
Dec 12 18:38:20.935403 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 12 18:38:20.935410 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 12 18:38:20.935417 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Dec 12 18:38:20.935424 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 12 18:38:20.935431 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 12 18:38:20.935439 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 12 18:38:20.935446 kernel: Using GB pages for direct mapping
Dec 12 18:38:20.935453 kernel: ACPI: Early table checksum verification disabled
Dec 12 18:38:20.936096 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Dec 12 18:38:20.936106 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:38:20.936114 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:38:20.936121 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:38:20.936127 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 12 18:38:20.936134 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:38:20.936145 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:38:20.936156 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:38:20.936163 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:38:20.936170 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Dec 12 18:38:20.936177 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Dec 12 18:38:20.936186 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 12 18:38:20.936193 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Dec 12 18:38:20.936200 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Dec 12 18:38:20.936207 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Dec 12 18:38:20.936214 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Dec 12 18:38:20.936220 kernel: No NUMA configuration found
Dec 12 18:38:20.936227 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Dec 12 18:38:20.936234 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Dec 12 18:38:20.936242 kernel: Zone ranges:
Dec 12 18:38:20.936251 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 12 18:38:20.936258 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 12 18:38:20.936265 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Dec 12 18:38:20.936273 kernel: Device empty
Dec 12 18:38:20.936285 kernel: Movable zone start for each node
Dec 12 18:38:20.936296 kernel: Early memory node ranges
Dec 12 18:38:20.936304 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 12 18:38:20.936310 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Dec 12 18:38:20.936318 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Dec 12 18:38:20.936324 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Dec 12 18:38:20.936334 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 12 18:38:20.936341 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 12 18:38:20.936348 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Dec 12 18:38:20.936355 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 12 18:38:20.936362 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 12 18:38:20.936369 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 12 18:38:20.936376 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 12 18:38:20.936382 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 12 18:38:20.936389 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 12 18:38:20.936398 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 12 18:38:20.936405 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 12 18:38:20.936413 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 12 18:38:20.936419 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 12 18:38:20.936426 kernel: TSC deadline timer available
Dec 12 18:38:20.936433 kernel: CPU topo: Max. logical packages: 1
Dec 12 18:38:20.936440 kernel: CPU topo: Max. logical dies: 1
Dec 12 18:38:20.936447 kernel: CPU topo: Max. dies per package: 1
Dec 12 18:38:20.936454 kernel: CPU topo: Max. threads per core: 1
Dec 12 18:38:20.936468 kernel: CPU topo: Num. cores per package: 2
Dec 12 18:38:20.936476 kernel: CPU topo: Num. threads per package: 2
Dec 12 18:38:20.936483 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 12 18:38:20.936490 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 12 18:38:20.936497 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 12 18:38:20.936504 kernel: kvm-guest: setup PV sched yield
Dec 12 18:38:20.936511 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 12 18:38:20.936518 kernel: Booting paravirtualized kernel on KVM
Dec 12 18:38:20.936525 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 12 18:38:20.936534 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 12 18:38:20.936541 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 12 18:38:20.936548 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 12 18:38:20.936555 kernel: pcpu-alloc: [0] 0 1
Dec 12 18:38:20.936561 kernel: kvm-guest: PV spinlocks enabled
Dec 12 18:38:20.936568 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 12 18:38:20.936577 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:38:20.936584 kernel: random: crng init done
Dec 12 18:38:20.936593 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 18:38:20.936601 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 18:38:20.936607 kernel: Fallback order for Node 0: 0
Dec 12 18:38:20.936614 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Dec 12 18:38:20.936621 kernel: Policy zone: Normal
Dec 12 18:38:20.936628 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 18:38:20.936635 kernel: software IO TLB: area num 2.
Dec 12 18:38:20.936642 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 12 18:38:20.936649 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 12 18:38:20.936658 kernel: ftrace: allocated 157 pages with 5 groups
Dec 12 18:38:20.936680 kernel: Dynamic Preempt: voluntary
Dec 12 18:38:20.936727 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 18:38:20.936735 kernel: rcu: RCU event tracing is enabled.
Dec 12 18:38:20.936743 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 12 18:38:20.936764 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 18:38:20.936771 kernel: Rude variant of Tasks RCU enabled.
Dec 12 18:38:20.936778 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 18:38:20.936785 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 18:38:20.936792 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 12 18:38:20.936803 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:38:20.936817 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:38:20.936827 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:38:20.936835 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 12 18:38:20.936842 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 18:38:20.936850 kernel: Console: colour VGA+ 80x25
Dec 12 18:38:20.936857 kernel: printk: legacy console [tty0] enabled
Dec 12 18:38:20.936864 kernel: printk: legacy console [ttyS0] enabled
Dec 12 18:38:20.936871 kernel: ACPI: Core revision 20240827
Dec 12 18:38:20.936881 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 12 18:38:20.936888 kernel: APIC: Switch to symmetric I/O mode setup
Dec 12 18:38:20.936896 kernel: x2apic enabled
Dec 12 18:38:20.936903 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 12 18:38:20.936911 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 12 18:38:20.936918 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 12 18:38:20.936925 kernel: kvm-guest: setup PV IPIs
Dec 12 18:38:20.936935 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 12 18:38:20.936942 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Dec 12 18:38:20.936949 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Dec 12 18:38:20.936957 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 12 18:38:20.936964 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 12 18:38:20.936971 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 12 18:38:20.936979 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 12 18:38:20.936986 kernel: Spectre V2 : Mitigation: Retpolines
Dec 12 18:38:20.936993 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 12 18:38:20.937003 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 12 18:38:20.937010 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 12 18:38:20.938745 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 12 18:38:20.938758 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 12 18:38:20.938767 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 12 18:38:20.938775 kernel: active return thunk: srso_alias_return_thunk
Dec 12 18:38:20.938783 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 12 18:38:20.938790 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Dec 12 18:38:20.938801 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 12 18:38:20.938809 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 12 18:38:20.938816 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 12 18:38:20.938823 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 12 18:38:20.938831 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 12 18:38:20.938838 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 12 18:38:20.938845 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Dec 12 18:38:20.938852 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Dec 12 18:38:20.938860 kernel: Freeing SMP alternatives memory: 32K
Dec 12 18:38:20.938869 kernel: pid_max: default: 32768 minimum: 301
Dec 12 18:38:20.938876 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 18:38:20.938883 kernel: landlock: Up and running.
Dec 12 18:38:20.938891 kernel: SELinux: Initializing.
Dec 12 18:38:20.938898 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 18:38:20.938906 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 18:38:20.938913 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Dec 12 18:38:20.938920 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 12 18:38:20.938927 kernel: ... version: 0
Dec 12 18:38:20.938937 kernel: ... bit width: 48
Dec 12 18:38:20.938944 kernel: ... generic registers: 6
Dec 12 18:38:20.938951 kernel: ... value mask: 0000ffffffffffff
Dec 12 18:38:20.938958 kernel: ... max period: 00007fffffffffff
Dec 12 18:38:20.938965 kernel: ... fixed-purpose events: 0
Dec 12 18:38:20.938972 kernel: ... event mask: 000000000000003f
Dec 12 18:38:20.938980 kernel: signal: max sigframe size: 3376
Dec 12 18:38:20.938987 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 18:38:20.938995 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 18:38:20.939006 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 18:38:20.939013 kernel: smp: Bringing up secondary CPUs ...
Dec 12 18:38:20.939020 kernel: smpboot: x86: Booting SMP configuration:
Dec 12 18:38:20.939027 kernel: .... node #0, CPUs: #1
Dec 12 18:38:20.939034 kernel: smp: Brought up 1 node, 2 CPUs
Dec 12 18:38:20.939041 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Dec 12 18:38:20.939049 kernel: Memory: 3953616K/4193772K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 235480K reserved, 0K cma-reserved)
Dec 12 18:38:20.939056 kernel: devtmpfs: initialized
Dec 12 18:38:20.939063 kernel: x86/mm: Memory block size: 128MB
Dec 12 18:38:20.939072 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 18:38:20.939080 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 12 18:38:20.939087 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 18:38:20.939094 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 18:38:20.939101 kernel: audit: initializing netlink subsys (disabled)
Dec 12 18:38:20.939108 kernel: audit: type=2000 audit(1765564698.138:1): state=initialized audit_enabled=0 res=1
Dec 12 18:38:20.939115 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 18:38:20.939122 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 12 18:38:20.939129 kernel: cpuidle: using governor menu
Dec 12 18:38:20.939139 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 18:38:20.939146 kernel: dca service started, version 1.12.1
Dec 12 18:38:20.939153 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Dec 12 18:38:20.939160 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 12 18:38:20.939167 kernel: PCI: Using configuration type 1 for base access
Dec 12 18:38:20.939175 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 12 18:38:20.939182 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 18:38:20.939189 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 18:38:20.939196 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 18:38:20.939206 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 18:38:20.939213 kernel: ACPI: Added _OSI(Module Device)
Dec 12 18:38:20.939220 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 18:38:20.939227 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 18:38:20.939234 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 18:38:20.939241 kernel: ACPI: Interpreter enabled
Dec 12 18:38:20.939248 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 12 18:38:20.939255 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 12 18:38:20.939262 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 12 18:38:20.939272 kernel: PCI: Using E820 reservations for host bridge windows
Dec 12 18:38:20.939279 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 12 18:38:20.939286 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 18:38:20.939502 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 18:38:20.939634 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 12 18:38:20.940811 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 12 18:38:20.940825 kernel: PCI host bridge to bus 0000:00
Dec 12 18:38:20.940972 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 12 18:38:20.941097 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 12 18:38:20.941215 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 12 18:38:20.941326 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 12 18:38:20.941436 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 12 18:38:20.941546 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Dec 12 18:38:20.941656 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 18:38:20.942006 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 12 18:38:20.942157 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 12 18:38:20.942289 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Dec 12 18:38:20.942426 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Dec 12 18:38:20.942549 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Dec 12 18:38:20.942670 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 12 18:38:20.943861 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Dec 12 18:38:20.944001 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Dec 12 18:38:20.944125 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Dec 12 18:38:20.944246 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 12 18:38:20.944381 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 12 18:38:20.944503 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Dec 12 18:38:20.944623 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Dec 12 18:38:20.944792 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 12 18:38:20.944918 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Dec 12 18:38:20.945053 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 12 18:38:20.945175 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 12 18:38:20.945330 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 12 18:38:20.945477 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Dec 12 18:38:20.945928 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Dec 12 18:38:20.946073 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 12 18:38:20.946203 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Dec 12 18:38:20.946214 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 12 18:38:20.946222 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 12 18:38:20.946229 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 12 18:38:20.946240 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 12 18:38:20.946252 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 12 18:38:20.946263 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 12 18:38:20.946280 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 12 18:38:20.946287 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 12 18:38:20.946296 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 12 18:38:20.946307 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 12 18:38:20.946319 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 12 18:38:20.946331 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 12 18:38:20.946342 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 12 18:38:20.946349 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 12 18:38:20.946357 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 12 18:38:20.946371 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 12 18:38:20.946381 kernel: iommu: Default domain type: Translated
Dec 12 18:38:20.946388 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 12 18:38:20.946395 kernel: PCI: Using ACPI for IRQ routing
Dec 12 18:38:20.946403 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 12 18:38:20.946410 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Dec 12 18:38:20.946417 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Dec 12 18:38:20.946548 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 12 18:38:20.946674 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 12 18:38:20.946816 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 12 18:38:20.946827 kernel: vgaarb: loaded
Dec 12 18:38:20.946835 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 12 18:38:20.946842 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 12 18:38:20.946850 kernel: clocksource: Switched to clocksource kvm-clock
Dec 12 18:38:20.946857 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 18:38:20.946865 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 18:38:20.946872 kernel: pnp: PnP ACPI init
Dec 12 18:38:20.947260 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 12 18:38:20.947275 kernel: pnp: PnP ACPI: found 5 devices
Dec 12 18:38:20.947287 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 12 18:38:20.947300 kernel: NET: Registered PF_INET protocol family
Dec 12 18:38:20.947312 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 12 18:38:20.947319 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 12 18:38:20.947327 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 18:38:20.947335 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 18:38:20.947346 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 12 18:38:20.947354 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 12 18:38:20.947362 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 18:38:20.947369 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 18:38:20.947376 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 18:38:20.947383 kernel: NET: Registered PF_XDP protocol family
Dec 12 18:38:20.947700 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 12 18:38:20.947848 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 12 18:38:20.947961 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 12 18:38:20.948077 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 12 18:38:20.948187 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 12 18:38:20.948298 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Dec 12 18:38:20.948307 kernel: PCI: CLS 0 bytes, default 64
Dec 12 18:38:20.948315 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 12 18:38:20.948322 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Dec 12 18:38:20.948330 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Dec 12 18:38:20.948337 kernel: Initialise system trusted keyrings
Dec 12 18:38:20.948347 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 12 18:38:20.948354 kernel: Key type asymmetric registered
Dec 12 18:38:20.948361 kernel: Asymmetric key parser 'x509' registered
Dec 12 18:38:20.948369 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 12 18:38:20.948376 kernel: io scheduler mq-deadline registered
Dec 12 18:38:20.948383 kernel: io scheduler kyber registered
Dec 12 18:38:20.948390 kernel: io scheduler bfq registered
Dec 12 18:38:20.948397 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 12 18:38:20.948405 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 12 18:38:20.948415 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 12 18:38:20.948422 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 18:38:20.948430 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 12 18:38:20.948437 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 12 18:38:20.948444 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 12 18:38:20.948451 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 12 18:38:20.948459 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 12 18:38:20.948616 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 12 18:38:20.948756 kernel: rtc_cmos 00:03: registered as rtc0
Dec 12 18:38:20.948879 kernel: rtc_cmos 00:03: setting system clock to 2025-12-12T18:38:20 UTC (1765564700)
Dec 12 18:38:20.948994 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 12 18:38:20.949003 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 12 18:38:20.949011 kernel: NET: Registered PF_INET6 protocol family
Dec 12 18:38:20.949018 kernel: Segment Routing with IPv6
Dec 12 18:38:20.949026 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 18:38:20.949033 kernel: NET: Registered PF_PACKET protocol family
Dec 12 18:38:20.949040 kernel: Key type dns_resolver registered
Dec 12 18:38:20.949051 kernel: IPI shorthand broadcast: enabled
Dec 12 18:38:20.949058 kernel: sched_clock: Marking stable (2841004850, 340762340)->(3265574770, -83807580)
Dec 12 18:38:20.949065 kernel: registered taskstats version 1
Dec 12 18:38:20.949074 kernel: Loading compiled-in X.509 certificates
Dec 12 18:38:20.949086 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 12 18:38:20.949096 kernel: Demotion targets for Node 0: null
Dec 12 18:38:20.949103 kernel: Key type .fscrypt registered
Dec 12 18:38:20.949110 kernel: Key type fscrypt-provisioning registered
Dec 12 18:38:20.949118 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 18:38:20.949128 kernel: ima: Allocated hash algorithm: sha1
Dec 12 18:38:20.949135 kernel: ima: No architecture policies found
Dec 12 18:38:20.951443 kernel: clk: Disabling unused clocks
Dec 12 18:38:20.951460 kernel: Warning: unable to open an initial console.
Dec 12 18:38:20.951470 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 12 18:38:20.951479 kernel: Write protecting the kernel read-only data: 40960k
Dec 12 18:38:20.951486 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 12 18:38:20.951494 kernel: Run /init as init process
Dec 12 18:38:20.951501 kernel: with arguments:
Dec 12 18:38:20.951513 kernel: /init
Dec 12 18:38:20.951521 kernel: with environment:
Dec 12 18:38:20.951545 kernel: HOME=/
Dec 12 18:38:20.951555 kernel: TERM=linux
Dec 12 18:38:20.951564 systemd[1]: Successfully made /usr/ read-only.
Dec 12 18:38:20.951574 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:38:20.951583 systemd[1]: Detected virtualization kvm.
Dec 12 18:38:20.951594 systemd[1]: Detected architecture x86-64.
Dec 12 18:38:20.951601 systemd[1]: Running in initrd.
Dec 12 18:38:20.951609 systemd[1]: No hostname configured, using default hostname.
Dec 12 18:38:20.951617 systemd[1]: Hostname set to .
Dec 12 18:38:20.951625 systemd[1]: Initializing machine ID from random generator.
Dec 12 18:38:20.951633 systemd[1]: Queued start job for default target initrd.target.
Dec 12 18:38:20.951641 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:38:20.951650 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:38:20.951660 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 18:38:20.951669 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:38:20.951677 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 18:38:20.951685 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 18:38:20.951694 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 12 18:38:20.951702 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 12 18:38:20.951727 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:38:20.951739 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:38:20.951747 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:38:20.951755 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:38:20.951763 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:38:20.951771 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:38:20.951779 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:38:20.951787 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:38:20.951795 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 18:38:20.951803 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 18:38:20.951814 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:38:20.951825 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:38:20.951835 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:38:20.951843 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 18:38:20.951851 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 18:38:20.951862 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 18:38:20.951870 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 18:38:20.951879 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 18:38:20.951887 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 18:38:20.951895 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 18:38:20.951904 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 18:38:20.951912 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:38:20.951920 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 18:38:20.951931 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:38:20.951969 systemd-journald[187]: Collecting audit messages is disabled.
Dec 12 18:38:20.951992 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 18:38:20.952001 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 18:38:20.952011 systemd-journald[187]: Journal started
Dec 12 18:38:20.952029 systemd-journald[187]: Runtime Journal (/run/log/journal/e6e14eae35b74cd9942dda053de1ec38) is 8M, max 78.2M, 70.2M free.
Dec 12 18:38:20.935581 systemd-modules-load[188]: Inserted module 'overlay'
Dec 12 18:38:20.969748 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 18:38:20.970484 systemd-modules-load[188]: Inserted module 'br_netfilter'
Dec 12 18:38:21.068470 kernel: Bridge firewalling registered
Dec 12 18:38:21.068498 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 18:38:21.070064 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:38:21.071132 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:38:21.072735 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 18:38:21.078422 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 18:38:21.081032 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:38:21.083845 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 18:38:21.090838 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:38:21.102534 systemd-tmpfiles[209]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 18:38:21.107515 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:38:21.110778 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:38:21.115274 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:38:21.120919 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 18:38:21.122884 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 18:38:21.125854 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 12 18:38:21.152799 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:38:21.166133 systemd-resolved[225]: Positive Trust Anchors:
Dec 12 18:38:21.167013 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 18:38:21.167043 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 18:38:21.172769 systemd-resolved[225]: Defaulting to hostname 'linux'.
Dec 12 18:38:21.173864 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 18:38:21.175005 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:38:21.236752 kernel: SCSI subsystem initialized
Dec 12 18:38:21.246734 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 18:38:21.256736 kernel: iscsi: registered transport (tcp)
Dec 12 18:38:21.276093 kernel: iscsi: registered transport (qla4xxx)
Dec 12 18:38:21.276127 kernel: QLogic iSCSI HBA Driver
Dec 12 18:38:21.295991 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 18:38:21.309455 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:38:21.311969 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 18:38:21.356663 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 18:38:21.359519 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 18:38:21.405738 kernel: raid6: avx2x4 gen() 30867 MB/s
Dec 12 18:38:21.423731 kernel: raid6: avx2x2 gen() 29676 MB/s
Dec 12 18:38:21.441790 kernel: raid6: avx2x1 gen() 21707 MB/s
Dec 12 18:38:21.441806 kernel: raid6: using algorithm avx2x4 gen() 30867 MB/s
Dec 12 18:38:21.463913 kernel: raid6: .... xor() 4421 MB/s, rmw enabled
Dec 12 18:38:21.463940 kernel: raid6: using avx2x2 recovery algorithm
Dec 12 18:38:21.483755 kernel: xor: automatically using best checksumming function avx
Dec 12 18:38:21.620757 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 12 18:38:21.628212 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 18:38:21.630431 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:38:21.656735 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Dec 12 18:38:21.662448 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:38:21.665844 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 12 18:38:21.688700 dracut-pre-trigger[445]: rd.md=0: removing MD RAID activation
Dec 12 18:38:21.713759 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 18:38:21.716109 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 18:38:21.794771 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:38:21.797857 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 12 18:38:21.855743 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Dec 12 18:38:21.861741 kernel: cryptd: max_cpu_qlen set to 1000
Dec 12 18:38:21.881764 kernel: AES CTR mode by8 optimization enabled
Dec 12 18:38:21.892785 kernel: libata version 3.00 loaded.
Dec 12 18:38:21.895764 kernel: scsi host0: Virtio SCSI HBA
Dec 12 18:38:21.901620 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Dec 12 18:38:21.911083 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Dec 12 18:38:21.928933 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:38:21.929052 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:38:22.076743 kernel: ahci 0000:00:1f.2: version 3.0
Dec 12 18:38:22.076978 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 12 18:38:22.076998 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Dec 12 18:38:22.077168 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Dec 12 18:38:22.077314 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 12 18:38:21.933118 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:38:21.935581 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:38:21.943989 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:38:22.084737 kernel: scsi host1: ahci
Dec 12 18:38:22.086735 kernel: scsi host2: ahci
Dec 12 18:38:22.088735 kernel: scsi host3: ahci
Dec 12 18:38:22.090734 kernel: scsi host4: ahci
Dec 12 18:38:22.092768 kernel: scsi host5: ahci
Dec 12 18:38:22.094256 kernel: scsi host6: ahci
Dec 12 18:38:22.106957 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 29 lpm-pol 1
Dec 12 18:38:22.106993 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 29 lpm-pol 1
Dec 12 18:38:22.107004 kernel: sd 0:0:0:0: Power-on or device reset occurred
Dec 12 18:38:22.107211 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 29 lpm-pol 1
Dec 12 18:38:22.107223 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Dec 12 18:38:22.107380 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 29 lpm-pol 1
Dec 12 18:38:22.107392 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 12 18:38:22.107541 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 29 lpm-pol 1
Dec 12 18:38:22.107552 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Dec 12 18:38:22.107698 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 29 lpm-pol 1
Dec 12 18:38:22.116841 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 12 18:38:22.135531 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 12 18:38:22.135558 kernel: GPT:9289727 != 167739391
Dec 12 18:38:22.135570 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 12 18:38:22.135580 kernel: GPT:9289727 != 167739391
Dec 12 18:38:22.135589 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 12 18:38:22.135599 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 12 18:38:22.135609 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 12 18:38:22.231491 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:38:22.427743 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 12 18:38:22.427806 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 12 18:38:22.432739 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 12 18:38:22.437738 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 12 18:38:22.437767 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 12 18:38:22.439734 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 12 18:38:22.502272 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Dec 12 18:38:22.511975 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Dec 12 18:38:22.512995 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 12 18:38:22.521277 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Dec 12 18:38:22.522082 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Dec 12 18:38:22.532336 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 12 18:38:22.534079 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 18:38:22.534890 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:38:22.536838 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 18:38:22.539445 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 12 18:38:22.543812 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 12 18:38:22.558019 disk-uuid[615]: Primary Header is updated.
Dec 12 18:38:22.558019 disk-uuid[615]: Secondary Entries is updated.
Dec 12 18:38:22.558019 disk-uuid[615]: Secondary Header is updated.
Dec 12 18:38:22.566352 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 18:38:22.567284 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 12 18:38:22.577762 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 12 18:38:23.583778 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 12 18:38:23.583904 disk-uuid[618]: The operation has completed successfully.
Dec 12 18:38:23.640794 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 12 18:38:23.640938 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 12 18:38:23.669859 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 12 18:38:23.696635 sh[637]: Success
Dec 12 18:38:23.715766 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 12 18:38:23.715821 kernel: device-mapper: uevent: version 1.0.3
Dec 12 18:38:23.718329 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 12 18:38:23.731775 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Dec 12 18:38:23.779108 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 18:38:23.783704 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 12 18:38:23.796230 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 12 18:38:23.809244 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (649)
Dec 12 18:38:23.809317 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8
Dec 12 18:38:23.814820 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:38:23.825444 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 12 18:38:23.825480 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 12 18:38:23.825493 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 12 18:38:23.829829 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 12 18:38:23.832392 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 18:38:23.834666 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 12 18:38:23.835907 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 12 18:38:23.839072 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 12 18:38:23.870339 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (680)
Dec 12 18:38:23.870374 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:38:23.873771 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:38:23.883948 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 12 18:38:23.883973 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 18:38:23.883985 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 18:38:23.895155 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:38:23.895398 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 12 18:38:23.899875 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 12 18:38:23.970417 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 18:38:23.973812 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 18:38:24.031444 ignition[747]: Ignition 2.22.0
Dec 12 18:38:24.032410 ignition[747]: Stage: fetch-offline
Dec 12 18:38:24.032461 ignition[747]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:38:24.036065 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 18:38:24.032473 ignition[747]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:38:24.032578 ignition[747]: parsed url from cmdline: ""
Dec 12 18:38:24.032583 ignition[747]: no config URL provided
Dec 12 18:38:24.032590 ignition[747]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 18:38:24.032599 ignition[747]: no config at "/usr/lib/ignition/user.ign"
Dec 12 18:38:24.032605 ignition[747]: failed to fetch config: resource requires networking
Dec 12 18:38:24.041324 systemd-networkd[818]: lo: Link UP
Dec 12 18:38:24.032806 ignition[747]: Ignition finished successfully
Dec 12 18:38:24.041329 systemd-networkd[818]: lo: Gained carrier
Dec 12 18:38:24.042940 systemd-networkd[818]: Enumeration completed
Dec 12 18:38:24.043061 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 18:38:24.043476 systemd-networkd[818]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:38:24.043481 systemd-networkd[818]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 18:38:24.045407 systemd-networkd[818]: eth0: Link UP
Dec 12 18:38:24.045645 systemd[1]: Reached target network.target - Network.
Dec 12 18:38:24.045911 systemd-networkd[818]: eth0: Gained carrier
Dec 12 18:38:24.045922 systemd-networkd[818]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:38:24.050073 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 12 18:38:24.078879 ignition[826]: Ignition 2.22.0
Dec 12 18:38:24.078895 ignition[826]: Stage: fetch
Dec 12 18:38:24.079206 ignition[826]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:38:24.079217 ignition[826]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:38:24.079289 ignition[826]: parsed url from cmdline: ""
Dec 12 18:38:24.079293 ignition[826]: no config URL provided
Dec 12 18:38:24.079299 ignition[826]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 18:38:24.079307 ignition[826]: no config at "/usr/lib/ignition/user.ign"
Dec 12 18:38:24.079340 ignition[826]: PUT http://169.254.169.254/v1/token: attempt #1
Dec 12 18:38:24.079482 ignition[826]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 12 18:38:24.280018 ignition[826]: PUT http://169.254.169.254/v1/token: attempt #2
Dec 12 18:38:24.280199 ignition[826]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 12 18:38:24.680687 ignition[826]: PUT http://169.254.169.254/v1/token: attempt #3
Dec 12 18:38:24.680885 ignition[826]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 12 18:38:24.784791 systemd-networkd[818]: eth0: DHCPv4 address 172.237.133.204/24, gateway 172.237.133.1 acquired from 23.192.120.14
Dec 12 18:38:25.186956 systemd-networkd[818]: eth0: Gained IPv6LL
Dec 12 18:38:25.481772 ignition[826]: PUT http://169.254.169.254/v1/token: attempt #4
Dec 12 18:38:25.573377 ignition[826]: PUT result: OK
Dec 12 18:38:25.573428 ignition[826]: GET http://169.254.169.254/v1/user-data: attempt #1
Dec 12 18:38:25.686369 ignition[826]: GET result: OK
Dec 12 18:38:25.686474 ignition[826]: parsing config with SHA512: 68b22404fbc4001fa5e36b2b167d0beebcaf82a8dad4d8c6b74b33c79bb35b323fd13ba35ce21a2b97cb15da89cda8e2d1020d5117b62d9878569967c7bf6f55
Dec 12 18:38:25.691163 unknown[826]: fetched base config from "system"
Dec 12 18:38:25.691498 ignition[826]: fetch: fetch complete
Dec 12 18:38:25.691170 unknown[826]: fetched base config from "system"
Dec 12 18:38:25.691503 ignition[826]: fetch: fetch passed
Dec 12 18:38:25.691181 unknown[826]: fetched user config from "akamai"
Dec 12 18:38:25.691546 ignition[826]: Ignition finished successfully
Dec 12 18:38:25.695132 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 12 18:38:25.699855 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 12 18:38:25.726970 ignition[834]: Ignition 2.22.0
Dec 12 18:38:25.726992 ignition[834]: Stage: kargs
Dec 12 18:38:25.727106 ignition[834]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:38:25.729611 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 12 18:38:25.727117 ignition[834]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:38:25.727641 ignition[834]: kargs: kargs passed
Dec 12 18:38:25.727679 ignition[834]: Ignition finished successfully
Dec 12 18:38:25.732860 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 12 18:38:25.754584 ignition[841]: Ignition 2.22.0
Dec 12 18:38:25.754598 ignition[841]: Stage: disks
Dec 12 18:38:25.754738 ignition[841]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:38:25.754749 ignition[841]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:38:25.755412 ignition[841]: disks: disks passed
Dec 12 18:38:25.758933 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 12 18:38:25.755451 ignition[841]: Ignition finished successfully
Dec 12 18:38:25.761531 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 12 18:38:25.762373 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 12 18:38:25.763757 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 18:38:25.765318 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 18:38:25.766913 systemd[1]: Reached target basic.target - Basic System.
Dec 12 18:38:25.769143 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 12 18:38:25.799248 systemd-fsck[849]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 12 18:38:25.802654 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 12 18:38:25.805804 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 12 18:38:25.908744 kernel: EXT4-fs (sda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none.
Dec 12 18:38:25.908946 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 12 18:38:25.910018 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 12 18:38:25.912201 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 18:38:25.915784 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 12 18:38:25.916762 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 12 18:38:25.916804 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 12 18:38:25.916826 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 18:38:25.925254 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 12 18:38:25.929817 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 12 18:38:25.945420 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (857)
Dec 12 18:38:25.945441 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:38:25.945452 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:38:25.945462 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 12 18:38:25.945472 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 18:38:25.945482 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 18:38:25.947292 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 18:38:25.986366 initrd-setup-root[881]: cut: /sysroot/etc/passwd: No such file or directory
Dec 12 18:38:25.991512 initrd-setup-root[888]: cut: /sysroot/etc/group: No such file or directory
Dec 12 18:38:25.996090 initrd-setup-root[895]: cut: /sysroot/etc/shadow: No such file or directory
Dec 12 18:38:26.000828 initrd-setup-root[902]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 12 18:38:26.085302 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 12 18:38:26.087532 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 12 18:38:26.089616 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 12 18:38:26.107120 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 12 18:38:26.110797 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:38:26.125301 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 12 18:38:26.135236 ignition[971]: INFO : Ignition 2.22.0
Dec 12 18:38:26.135236 ignition[971]: INFO : Stage: mount
Dec 12 18:38:26.137991 ignition[971]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:38:26.137991 ignition[971]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:38:26.137991 ignition[971]: INFO : mount: mount passed
Dec 12 18:38:26.137991 ignition[971]: INFO : Ignition finished successfully
Dec 12 18:38:26.138986 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 12 18:38:26.141821 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 12 18:38:26.910628 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 18:38:26.940759 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (981)
Dec 12 18:38:26.945127 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:38:26.945149 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:38:26.954362 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 12 18:38:26.954421 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 18:38:26.954433 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 18:38:26.958707 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
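The four "cut: ... No such file or directory" lines at the top of this stretch come from initrd-setup-root probing the still-empty root filesystem for existing account databases before seeding them. A Python sketch of that probe, assuming the script extracts the first colon-separated field (the user or group name) the way cut -d: -f1 would:

    from pathlib import Path

    SYSROOT = Path("/sysroot")

    def existing_names(db):
        # Equivalent of `cut -d: -f1 /sysroot/etc/<db>`: the first field of
        # each line is the user or group name. On a first boot the file does
        # not exist yet, which is exactly what the errors above report.
        path = SYSROOT / "etc" / db
        try:
            return [line.split(":", 1)[0] for line in path.read_text().splitlines()]
        except FileNotFoundError:
            print(f"cut: {path}: No such file or directory")
            return []

    for db in ("passwd", "group", "shadow", "gshadow"):
        existing_names(db)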
Dec 12 18:38:26.990291 ignition[998]: INFO : Ignition 2.22.0
Dec 12 18:38:26.990291 ignition[998]: INFO : Stage: files
Dec 12 18:38:26.992165 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:38:26.992165 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:38:26.992165 ignition[998]: DEBUG : files: compiled without relabeling support, skipping
Dec 12 18:38:26.992165 ignition[998]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 12 18:38:26.992165 ignition[998]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 12 18:38:26.997151 ignition[998]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 12 18:38:26.997151 ignition[998]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 12 18:38:26.997151 ignition[998]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 12 18:38:26.997151 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Dec 12 18:38:26.997151 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Dec 12 18:38:26.995451 unknown[998]: wrote ssh authorized keys file for user: core
Dec 12 18:38:27.175394 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 12 18:38:27.750134 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Dec 12 18:38:27.750134 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 12 18:38:27.753733 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 12 18:38:27.753733 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 18:38:27.753733 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 18:38:27.753733 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 18:38:27.753733 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 18:38:27.753733 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 18:38:27.753733 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 18:38:27.753733 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 18:38:27.769614 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 18:38:27.769614 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 12 18:38:27.769614 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 12 18:38:27.769614 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 12 18:38:27.769614 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Dec 12 18:38:28.234750 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 12 18:38:28.532810 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 12 18:38:28.532810 ignition[998]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 12 18:38:28.535831 ignition[998]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 18:38:28.535831 ignition[998]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 18:38:28.535831 ignition[998]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 12 18:38:28.535831 ignition[998]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 12 18:38:28.535831 ignition[998]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 12 18:38:28.535831 ignition[998]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 12 18:38:28.543861 ignition[998]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 12 18:38:28.543861 ignition[998]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Dec 12 18:38:28.543861 ignition[998]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Dec 12 18:38:28.543861 ignition[998]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 18:38:28.543861 ignition[998]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 18:38:28.543861 ignition[998]: INFO : files: files passed
Dec 12 18:38:28.543861 ignition[998]: INFO : Ignition finished successfully
Dec 12 18:38:28.540688 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 12 18:38:28.543189 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 12 18:38:28.547196 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 12 18:38:28.556964 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 12 18:38:28.557084 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 12 18:38:28.567140 initrd-setup-root-after-ignition[1028]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:38:28.567140 initrd-setup-root-after-ignition[1028]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:38:28.569537 initrd-setup-root-after-ignition[1032]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:38:28.571018 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 18:38:28.572325 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 12 18:38:28.574548 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 12 18:38:28.617932 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 12 18:38:28.618061 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 12 18:38:28.619800 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 12 18:38:28.621180 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 12 18:38:28.622858 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 12 18:38:28.623579 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 12 18:38:28.656936 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 18:38:28.659925 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 12 18:38:28.677249 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:38:28.679019 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:38:28.679869 systemd[1]: Stopped target timers.target - Timer Units.
Dec 12 18:38:28.681525 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 12 18:38:28.681627 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 18:38:28.683431 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 12 18:38:28.684464 systemd[1]: Stopped target basic.target - Basic System.
Dec 12 18:38:28.686119 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 12 18:38:28.687589 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 18:38:28.689061 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 12 18:38:28.690690 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 18:38:28.692357 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 12 18:38:28.693992 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 18:38:28.695649 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 12 18:38:28.697264 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 12 18:38:28.698903 systemd[1]: Stopped target swap.target - Swaps.
Dec 12 18:38:28.700466 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 12 18:38:28.700603 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 18:38:28.702364 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:38:28.703483 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:38:28.704921 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 12 18:38:28.705018 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:38:28.706563 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 12 18:38:28.706696 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 12 18:38:28.708806 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 12 18:38:28.708915 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 18:38:28.710003 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 12 18:38:28.710135 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 12 18:38:28.713801 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 12 18:38:28.721877 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 12 18:38:28.723305 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 12 18:38:28.723437 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:38:28.726771 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 12 18:38:28.726880 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 18:38:28.735978 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 12 18:38:28.738647 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 12 18:38:28.751913 ignition[1052]: INFO : Ignition 2.22.0
Dec 12 18:38:28.751913 ignition[1052]: INFO : Stage: umount
Dec 12 18:38:28.751913 ignition[1052]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:38:28.751913 ignition[1052]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:38:28.751913 ignition[1052]: INFO : umount: umount passed
Dec 12 18:38:28.751913 ignition[1052]: INFO : Ignition finished successfully
Dec 12 18:38:28.756990 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 12 18:38:28.757109 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 12 18:38:28.759454 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 12 18:38:28.762750 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 12 18:38:28.762866 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 12 18:38:28.765124 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 12 18:38:28.765177 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 12 18:38:28.765963 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 12 18:38:28.766014 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 12 18:38:28.767395 systemd[1]: Stopped target network.target - Network.
Dec 12 18:38:28.768822 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 12 18:38:28.768878 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 18:38:28.770301 systemd[1]: Stopped target paths.target - Path Units.
Dec 12 18:38:28.771688 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 12 18:38:28.775750 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:38:28.776545 systemd[1]: Stopped target slices.target - Slice Units.
Dec 12 18:38:28.778226 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 12 18:38:28.779735 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 12 18:38:28.779782 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:38:28.781159 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 12 18:38:28.781202 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:38:28.782566 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 12 18:38:28.782621 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 12 18:38:28.784030 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 12 18:38:28.784080 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 12 18:38:28.785703 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 12 18:38:28.787128 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 12 18:38:28.788960 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 12 18:38:28.789070 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 12 18:38:28.792787 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 12 18:38:28.792901 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 12 18:38:28.797926 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 12 18:38:28.798160 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 12 18:38:28.798280 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 12 18:38:28.800657 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 12 18:38:28.802260 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 12 18:38:28.803648 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 12 18:38:28.803699 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:38:28.805304 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 12 18:38:28.805360 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 12 18:38:28.807520 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 12 18:38:28.809470 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 12 18:38:28.809524 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 18:38:28.811915 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 12 18:38:28.811964 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:38:28.813911 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 12 18:38:28.813962 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:38:28.815931 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 12 18:38:28.815982 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:38:28.819267 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:38:28.823437 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 12 18:38:28.823503 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:38:28.834493 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 12 18:38:28.835405 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 12 18:38:28.841133 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 12 18:38:28.841325 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:38:28.843203 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 12 18:38:28.843275 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:38:28.844449 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 12 18:38:28.844489 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:38:28.846044 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 12 18:38:28.846095 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 18:38:28.848241 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 12 18:38:28.848289 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 12 18:38:28.849893 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 12 18:38:28.849949 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 18:38:28.852824 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 12 18:38:28.854088 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 12 18:38:28.854143 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:38:28.856686 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 12 18:38:28.856769 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:38:28.858800 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 12 18:38:28.858852 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 18:38:28.860274 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 12 18:38:28.860322 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:38:28.861752 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:38:28.861807 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:38:28.867019 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 12 18:38:28.867083 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Dec 12 18:38:28.867131 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 12 18:38:28.867177 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:38:28.870368 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 12 18:38:28.870472 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 12 18:38:28.872087 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 12 18:38:28.874268 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 12 18:38:28.889163 systemd[1]: Switching root.
Dec 12 18:38:28.929857 systemd-journald[187]: Journal stopped
Dec 12 18:38:30.119537 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Dec 12 18:38:30.119569 kernel: SELinux: policy capability network_peer_controls=1
Dec 12 18:38:30.119582 kernel: SELinux: policy capability open_perms=1
Dec 12 18:38:30.119591 kernel: SELinux: policy capability extended_socket_class=1
Dec 12 18:38:30.119601 kernel: SELinux: policy capability always_check_network=0
Dec 12 18:38:30.119612 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 12 18:38:30.119622 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 12 18:38:30.119631 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 12 18:38:30.119640 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 12 18:38:30.119649 kernel: SELinux: policy capability userspace_initial_context=0
Dec 12 18:38:30.119659 kernel: audit: type=1403 audit(1765564709.075:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 12 18:38:30.119669 systemd[1]: Successfully loaded SELinux policy in 74.593ms.
Dec 12 18:38:30.119681 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.733ms.
Dec 12 18:38:30.119693 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:38:30.119704 systemd[1]: Detected virtualization kvm.
Dec 12 18:38:30.120009 systemd[1]: Detected architecture x86-64.
Dec 12 18:38:30.120026 systemd[1]: Detected first boot.
Dec 12 18:38:30.120038 systemd[1]: Initializing machine ID from random generator.
Dec 12 18:38:30.120048 zram_generator::config[1096]: No configuration found.
Dec 12 18:38:30.120059 kernel: Guest personality initialized and is inactive
Dec 12 18:38:30.120069 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 12 18:38:30.120078 kernel: Initialized host personality
Dec 12 18:38:30.120088 kernel: NET: Registered PF_VSOCK protocol family
Dec 12 18:38:30.120098 systemd[1]: Populated /etc with preset unit settings.
Dec 12 18:38:30.120112 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 12 18:38:30.120123 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 12 18:38:30.120133 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 12 18:38:30.120143 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 12 18:38:30.120153 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 12 18:38:30.120163 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 12 18:38:30.120174 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 12 18:38:30.120188 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 12 18:38:30.120198 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 12 18:38:30.120209 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 12 18:38:30.120219 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 12 18:38:30.120229 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 12 18:38:30.120240 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
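"Initializing machine ID from random generator" above is the first-boot path: /etc/machine-id does not exist yet, so systemd generates one. The on-disk format is 32 lowercase hex characters plus a trailing newline; a sketch of generating a value in that format (a faithful generator would also set the UUID-v4 version and variant bits the way sd_id128_randomize() does, which is omitted here):

    import secrets

    # 16 random bytes rendered as 32 lowercase hex characters, the
    # format of /etc/machine-id (plus a trailing newline).
    machine_id = secrets.token_hex(16)
    with open("/etc/machine-id", "w") as f:
        f.write(machine_id + "\n")
    print(machine_id)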
Dec 12 18:38:30.120250 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:38:30.120260 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 12 18:38:30.120273 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 12 18:38:30.120286 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 12 18:38:30.120297 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:38:30.120308 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 12 18:38:30.120318 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:38:30.120328 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:38:30.120339 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 12 18:38:30.120352 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 12 18:38:30.120363 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 12 18:38:30.120373 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 12 18:38:30.120384 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:38:30.120394 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 18:38:30.120405 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:38:30.120416 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:38:30.120426 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 12 18:38:30.120436 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 12 18:38:30.120449 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 12 18:38:30.120460 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:38:30.120470 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:38:30.120481 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:38:30.120493 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 12 18:38:30.120504 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 12 18:38:30.120514 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 12 18:38:30.120525 systemd[1]: Mounting media.mount - External Media Directory...
Dec 12 18:38:30.120535 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:38:30.120546 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 12 18:38:30.120558 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 12 18:38:30.120568 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 12 18:38:30.120581 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 12 18:38:30.120591 systemd[1]: Reached target machines.target - Containers.
Dec 12 18:38:30.120602 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 12 18:38:30.120612 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:38:30.120623 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 18:38:30.120633 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 12 18:38:30.120644 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:38:30.120655 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 18:38:30.120666 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:38:30.120678 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 12 18:38:30.120689 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:38:30.120700 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 12 18:38:30.123146 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 12 18:38:30.123164 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 12 18:38:30.123176 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 12 18:38:30.123187 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 12 18:38:30.123198 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:38:30.123213 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 18:38:30.123224 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 18:38:30.123234 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 18:38:30.123245 kernel: loop: module loaded
Dec 12 18:38:30.123255 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 12 18:38:30.123266 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 12 18:38:30.123276 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 18:38:30.123287 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 12 18:38:30.123300 kernel: ACPI: bus type drm_connector registered
Dec 12 18:38:30.123310 systemd[1]: Stopped verity-setup.service.
Dec 12 18:38:30.123320 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:38:30.123331 kernel: fuse: init (API version 7.41)
Dec 12 18:38:30.123341 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 12 18:38:30.123351 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 12 18:38:30.123362 systemd[1]: Mounted media.mount - External Media Directory.
Dec 12 18:38:30.123397 systemd-journald[1187]: Collecting audit messages is disabled.
Dec 12 18:38:30.123422 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 12 18:38:30.123433 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 12 18:38:30.123443 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 12 18:38:30.123454 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
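The modprobe@ units above load configfs, dm_mod, drm, efi_pstore, fuse and loop; the kernel confirms two of them directly ("fuse: init", "loop: module loaded"). A quick sketch for checking the result via /proc/modules; note that modules built into the kernel never appear there, so a miss is not necessarily a failure:

    wanted = {"configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"}

    with open("/proc/modules") as f:
        # The first whitespace-separated field of each line is the module name.
        loaded = {line.split()[0] for line in f}

    for name in sorted(wanted):
        state = "loaded" if name in loaded else "not listed (possibly built-in)"
        print(f"{name}: {state}")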
Dec 12 18:38:30.123464 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:38:30.123477 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 12 18:38:30.123488 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 12 18:38:30.123498 systemd-journald[1187]: Journal started
Dec 12 18:38:30.123518 systemd-journald[1187]: Runtime Journal (/run/log/journal/fd9ad8092ed14e31ac45fccbac63e1b0) is 8M, max 78.2M, 70.2M free.
Dec 12 18:38:29.709661 systemd[1]: Queued start job for default target multi-user.target.
Dec 12 18:38:29.722536 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 12 18:38:29.723258 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 12 18:38:30.128463 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 18:38:30.130281 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:38:30.130513 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:38:30.131551 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 18:38:30.131894 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 18:38:30.132943 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:38:30.133284 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:38:30.134525 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 12 18:38:30.134858 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 12 18:38:30.135869 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 18:38:30.136067 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 18:38:30.137562 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:38:30.138868 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:38:30.139993 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 12 18:38:30.141113 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 12 18:38:30.156738 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 18:38:30.160781 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 12 18:38:30.164340 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 12 18:38:30.165426 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 12 18:38:30.165539 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 18:38:30.167557 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 12 18:38:30.171874 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 12 18:38:30.174883 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:38:30.179490 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 12 18:38:30.183815 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 12 18:38:30.186630 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 18:38:30.192003 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 12 18:38:30.194821 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 18:38:30.197799 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:38:30.200992 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 12 18:38:30.208803 systemd-journald[1187]: Time spent on flushing to /var/log/journal/fd9ad8092ed14e31ac45fccbac63e1b0 is 94.939ms for 1007 entries.
Dec 12 18:38:30.208803 systemd-journald[1187]: System Journal (/var/log/journal/fd9ad8092ed14e31ac45fccbac63e1b0) is 8M, max 195.6M, 187.6M free.
Dec 12 18:38:30.332550 systemd-journald[1187]: Received client request to flush runtime journal.
Dec 12 18:38:30.332586 kernel: loop0: detected capacity change from 0 to 224512
Dec 12 18:38:30.211751 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 18:38:30.226230 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 12 18:38:30.240935 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 12 18:38:30.250191 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 12 18:38:30.343814 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 12 18:38:30.252147 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 12 18:38:30.256449 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 12 18:38:30.290246 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:38:30.298170 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:38:30.306507 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 12 18:38:30.325105 systemd-tmpfiles[1222]: ACLs are not supported, ignoring.
Dec 12 18:38:30.325117 systemd-tmpfiles[1222]: ACLs are not supported, ignoring.
Dec 12 18:38:30.337846 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 12 18:38:30.349724 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 18:38:30.365990 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 12 18:38:30.377641 kernel: loop1: detected capacity change from 0 to 8
Dec 12 18:38:30.402969 kernel: loop2: detected capacity change from 0 to 128560
Dec 12 18:38:30.444269 kernel: loop3: detected capacity change from 0 to 110984
Dec 12 18:38:30.440200 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 12 18:38:30.444539 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 18:38:30.484281 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Dec 12 18:38:30.484579 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Dec 12 18:38:30.488839 kernel: loop4: detected capacity change from 0 to 224512
Dec 12 18:38:30.495611 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:38:30.521740 kernel: loop5: detected capacity change from 0 to 8
Dec 12 18:38:30.530747 kernel: loop6: detected capacity change from 0 to 128560
Dec 12 18:38:30.553999 kernel: loop7: detected capacity change from 0 to 110984
Dec 12 18:38:30.569520 (sd-merge)[1247]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Dec 12 18:38:30.570645 (sd-merge)[1247]: Merged extensions into '/usr'.
Dec 12 18:38:30.577967 systemd[1]: Reload requested from client PID 1221 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 12 18:38:30.577983 systemd[1]: Reloading...
Dec 12 18:38:30.708749 zram_generator::config[1274]: No configuration found.
Dec 12 18:38:30.747738 ldconfig[1216]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 12 18:38:30.910199 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 12 18:38:30.910670 systemd[1]: Reloading finished in 331 ms.
Dec 12 18:38:30.942962 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 12 18:38:30.944300 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 12 18:38:30.945427 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 12 18:38:30.962969 systemd[1]: Starting ensure-sysext.service...
Dec 12 18:38:30.964624 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:38:30.972360 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:38:30.985431 systemd[1]: Reload requested from client PID 1318 ('systemctl') (unit ensure-sysext.service)...
Dec 12 18:38:30.985457 systemd[1]: Reloading...
Dec 12 18:38:30.995212 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 12 18:38:30.996107 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 12 18:38:30.996461 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 12 18:38:30.997202 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 12 18:38:30.998400 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 12 18:38:31.001008 systemd-tmpfiles[1319]: ACLs are not supported, ignoring.
Dec 12 18:38:31.001081 systemd-tmpfiles[1319]: ACLs are not supported, ignoring.
Dec 12 18:38:31.009516 systemd-tmpfiles[1319]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 18:38:31.010778 systemd-tmpfiles[1319]: Skipping /boot
Dec 12 18:38:31.028706 systemd-tmpfiles[1319]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 18:38:31.029508 systemd-udevd[1320]: Using default interface naming scheme 'v255'.
Dec 12 18:38:31.029915 systemd-tmpfiles[1319]: Skipping /boot
Dec 12 18:38:31.114743 zram_generator::config[1359]: No configuration found.
Dec 12 18:38:31.372364 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 12 18:38:31.372671 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 12 18:38:31.372883 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 12 18:38:31.415242 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
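(sd-merge) above found four system extensions and overlaid them onto /usr; the kubernetes image is reachable through the /etc/extensions/kubernetes.raw symlink that the Ignition files stage created earlier. A sketch listing extension images from the directories systemd-sysext scans (paths as they appear in this log; the merge itself is an overlayfs mount performed by systemd-sysext, not something this sketch does):

    from pathlib import Path

    # Enumerate sysext images the way systemd-sysext discovers them.
    for ext_dir in (Path("/etc/extensions"), Path("/var/lib/extensions")):
        if not ext_dir.is_dir():
            continue
        for image in sorted(ext_dir.glob("*.raw")):
            # kubernetes.raw is a symlink to /opt/extensions/... (see above).
            print(f"{image.name} -> {image.resolve()}")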
Dec 12 18:38:31.415308 systemd[1]: Reloading finished in 429 ms.
Dec 12 18:38:31.419753 kernel: mousedev: PS/2 mouse device common for all mice
Dec 12 18:38:31.422430 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:38:31.423735 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:38:31.454745 kernel: ACPI: button: Power Button [PWRF]
Dec 12 18:38:31.482749 kernel: EDAC MC: Ver: 3.0.0
Dec 12 18:38:31.506910 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 12 18:38:31.513859 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:38:31.515243 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 18:38:31.520965 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 12 18:38:31.524076 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:38:31.526950 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:38:31.529742 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:38:31.535140 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:38:31.537017 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:38:31.546089 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 12 18:38:31.547772 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:38:31.553698 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 12 18:38:31.558784 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 18:38:31.564161 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 18:38:31.567297 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 12 18:38:31.568810 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:38:31.593954 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:38:31.594583 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:38:31.608775 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 18:38:31.609608 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:38:31.609820 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:38:31.610015 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:38:31.617614 systemd[1]: Finished ensure-sysext.service.
Dec 12 18:38:31.619613 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 18:38:31.620123 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 18:38:31.631262 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 12 18:38:31.654538 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 12 18:38:31.660200 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 12 18:38:31.662938 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:38:31.663172 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:38:31.664476 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:38:31.664700 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:38:31.667310 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 18:38:31.667531 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 18:38:31.670619 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 12 18:38:31.680596 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 18:38:31.680806 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 18:38:31.684028 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:38:31.704615 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 12 18:38:31.708702 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 12 18:38:31.721180 augenrules[1487]: No rules
Dec 12 18:38:31.725062 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 18:38:31.725315 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 18:38:31.731273 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 12 18:38:31.733280 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 12 18:38:31.736116 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 12 18:38:31.757236 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 12 18:38:31.879999 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:38:31.895094 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 12 18:38:31.895947 systemd[1]: Reached target time-set.target - System Time Set.
Dec 12 18:38:31.898979 systemd-networkd[1451]: lo: Link UP
Dec 12 18:38:31.898988 systemd-networkd[1451]: lo: Gained carrier
Dec 12 18:38:31.901128 systemd-timesyncd[1470]: No network connectivity, watching for changes.
Dec 12 18:38:31.901134 systemd-networkd[1451]: Enumeration completed
Dec 12 18:38:31.901202 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 18:38:31.903693 systemd-networkd[1451]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:38:31.903707 systemd-networkd[1451]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 18:38:31.904103 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 12 18:38:31.906247 systemd-resolved[1452]: Positive Trust Anchors:
Dec 12 18:38:31.906478 systemd-resolved[1452]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 18:38:31.906554 systemd-resolved[1452]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 18:38:31.908021 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 12 18:38:31.909024 systemd-networkd[1451]: eth0: Link UP
Dec 12 18:38:31.909340 systemd-networkd[1451]: eth0: Gained carrier
Dec 12 18:38:31.909354 systemd-networkd[1451]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:38:31.914564 systemd-resolved[1452]: Defaulting to hostname 'linux'.
Dec 12 18:38:31.916540 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 18:38:31.917340 systemd[1]: Reached target network.target - Network.
Dec 12 18:38:31.918764 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:38:31.919506 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 18:38:31.920886 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 12 18:38:31.921662 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 12 18:38:31.922423 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 12 18:38:31.931494 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 12 18:38:31.932321 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 12 18:38:31.933108 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 12 18:38:31.933883 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 12 18:38:31.933914 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:38:31.934576 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:38:31.936754 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 12 18:38:31.939427 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 12 18:38:31.942328 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 12 18:38:31.943234 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 12 18:38:31.944008 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 12 18:38:31.946920 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
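The positive trust anchor systemd-resolved logs above is the DNSSEC root DS record. Its fields, per RFC 4034, are key tag, algorithm, and digest type, followed by the digest; a small sketch pulling the record apart (values copied from the log):

    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

    owner, klass, rtype, key_tag, algorithm, digest_type, digest = ds.split()
    assert (klass, rtype) == ("IN", "DS")
    print(f"owner={owner!r}, key tag {key_tag} (the 2017 root KSK)")
    print(f"algorithm {algorithm} = RSA/SHA-256, digest type {digest_type} = SHA-256")
    print(f"digest length: {len(digest) * 4} bits")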
Dec 12 18:38:31.948185 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 12 18:38:31.950055 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 12 18:38:31.951134 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 12 18:38:31.953293 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 18:38:31.954333 systemd[1]: Reached target basic.target - Basic System.
Dec 12 18:38:31.955098 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 12 18:38:31.955137 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 12 18:38:31.956261 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 12 18:38:31.958836 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 12 18:38:31.962996 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 12 18:38:31.969782 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 12 18:38:31.973100 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 12 18:38:31.974913 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 12 18:38:31.976831 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 12 18:38:31.977944 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 12 18:38:31.980394 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 12 18:38:31.995647 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 12 18:38:31.999359 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 12 18:38:32.004980 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 12 18:38:32.016896 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 12 18:38:32.019102 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 12 18:38:32.021193 oslogin_cache_refresh[1519]: Refreshing passwd entry cache
Dec 12 18:38:32.022176 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Refreshing passwd entry cache
Dec 12 18:38:32.022456 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 12 18:38:32.024150 jq[1517]: false
Dec 12 18:38:32.024503 systemd[1]: Starting update-engine.service - Update Engine...
Dec 12 18:38:32.025182 oslogin_cache_refresh[1519]: Failure getting users, quitting
Dec 12 18:38:32.026171 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Failure getting users, quitting
Dec 12 18:38:32.026171 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 12 18:38:32.026171 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Refreshing group entry cache
Dec 12 18:38:32.026171 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Failure getting groups, quitting
Dec 12 18:38:32.026171 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 12 18:38:32.025197 oslogin_cache_refresh[1519]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 12 18:38:32.025237 oslogin_cache_refresh[1519]: Refreshing group entry cache
Dec 12 18:38:32.025669 oslogin_cache_refresh[1519]: Failure getting groups, quitting
Dec 12 18:38:32.025678 oslogin_cache_refresh[1519]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 12 18:38:32.029569 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 12 18:38:32.034962 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 12 18:38:32.039666 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 12 18:38:32.040498 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 12 18:38:32.040852 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Dec 12 18:38:32.041069 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Dec 12 18:38:32.048483 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 12 18:38:32.048942 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 12 18:38:32.058484 extend-filesystems[1518]: Found /dev/sda6
Dec 12 18:38:32.066010 extend-filesystems[1518]: Found /dev/sda9
Dec 12 18:38:32.070307 extend-filesystems[1518]: Checking size of /dev/sda9
Dec 12 18:38:32.077124 jq[1530]: true
Dec 12 18:38:32.096090 coreos-metadata[1514]: Dec 12 18:38:32.095 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Dec 12 18:38:32.097402 (ntainerd)[1549]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 12 18:38:32.107603 systemd[1]: motdgen.service: Deactivated successfully.
Dec 12 18:38:32.109056 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 12 18:38:32.113859 extend-filesystems[1518]: Resized partition /dev/sda9
Dec 12 18:38:32.117237 jq[1556]: true
Dec 12 18:38:32.123884 extend-filesystems[1562]: resize2fs 1.47.3 (8-Jul-2025)
Dec 12 18:38:32.154386 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Dec 12 18:38:32.159220 update_engine[1528]: I20251212 18:38:32.158439 1528 main.cc:92] Flatcar Update Engine starting
Dec 12 18:38:32.160481 tar[1534]: linux-amd64/LICENSE
Dec 12 18:38:32.161116 tar[1534]: linux-amd64/helm
Dec 12 18:38:32.166241 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 12 18:38:32.166037 dbus-daemon[1515]: [system] SELinux support is enabled
Dec 12 18:38:32.172520 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 12 18:38:32.172557 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 12 18:38:32.174893 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 12 18:38:32.174913 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 12 18:38:32.185969 systemd[1]: Started update-engine.service - Update Engine.
Dec 12 18:38:32.189120 update_engine[1528]: I20251212 18:38:32.188923 1528 update_check_scheduler.cc:74] Next update check in 4m47s
Dec 12 18:38:32.193890 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 12 18:38:32.320570 bash[1581]: Updated "/home/core/.ssh/authorized_keys"
Dec 12 18:38:32.323501 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 12 18:38:32.324160 systemd-logind[1526]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 12 18:38:32.324186 systemd-logind[1526]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 12 18:38:32.326280 systemd-logind[1526]: New seat seat0.
Dec 12 18:38:32.329242 systemd[1]: Starting sshkeys.service...
Dec 12 18:38:32.339898 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 12 18:38:32.393598 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 12 18:38:32.399776 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 12 18:38:32.437954 containerd[1549]: time="2025-12-12T18:38:32Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 12 18:38:32.437954 containerd[1549]: time="2025-12-12T18:38:32.436640270Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Dec 12 18:38:32.487275 containerd[1549]: time="2025-12-12T18:38:32.483693980Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.35µs"
Dec 12 18:38:32.487275 containerd[1549]: time="2025-12-12T18:38:32.483776830Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 12 18:38:32.487275 containerd[1549]: time="2025-12-12T18:38:32.483797690Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 12 18:38:32.487275 containerd[1549]: time="2025-12-12T18:38:32.483971460Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 12 18:38:32.487275 containerd[1549]: time="2025-12-12T18:38:32.483987210Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 12 18:38:32.487275 containerd[1549]: time="2025-12-12T18:38:32.484010470Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 12 18:38:32.487275 containerd[1549]: time="2025-12-12T18:38:32.484070600Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 12 18:38:32.487275 containerd[1549]: time="2025-12-12T18:38:32.484081550Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 12 18:38:32.487275 containerd[1549]: time="2025-12-12T18:38:32.484313430Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 12 18:38:32.487275 containerd[1549]: time="2025-12-12T18:38:32.484327080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 12 18:38:32.487275 containerd[1549]: time="2025-12-12T18:38:32.484337120Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 12 18:38:32.487275 containerd[1549]: time="2025-12-12T18:38:32.484344480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 12 18:38:32.487537 containerd[1549]: time="2025-12-12T18:38:32.484438360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 12 18:38:32.491610 containerd[1549]: time="2025-12-12T18:38:32.491334280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 12 18:38:32.491983 containerd[1549]: time="2025-12-12T18:38:32.491934480Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 12 18:38:32.492659 containerd[1549]: time="2025-12-12T18:38:32.492622300Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 12 18:38:32.493542 containerd[1549]: time="2025-12-12T18:38:32.493014460Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 12 18:38:32.497877 containerd[1549]: time="2025-12-12T18:38:32.496976840Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 12 18:38:32.497877 containerd[1549]: time="2025-12-12T18:38:32.497054240Z" level=info msg="metadata content store policy set" policy=shared
Dec 12 18:38:32.519571 containerd[1549]: time="2025-12-12T18:38:32.519493850Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 12 18:38:32.520310 containerd[1549]: time="2025-12-12T18:38:32.519678080Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 12 18:38:32.520310 containerd[1549]: time="2025-12-12T18:38:32.519706150Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 12 18:38:32.520310 containerd[1549]: time="2025-12-12T18:38:32.519790430Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 12 18:38:32.520310 containerd[1549]: time="2025-12-12T18:38:32.519808300Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 12 18:38:32.520310 containerd[1549]: time="2025-12-12T18:38:32.519823170Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 12 18:38:32.520310 containerd[1549]: time="2025-12-12T18:38:32.519847370Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 12 18:38:32.520310 containerd[1549]: time="2025-12-12T18:38:32.519864430Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 12 18:38:32.520310 containerd[1549]: time="2025-12-12T18:38:32.519883350Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 12 18:38:32.520310 containerd[1549]: time="2025-12-12T18:38:32.519899420Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 12 18:38:32.520310 containerd[1549]: time="2025-12-12T18:38:32.519912060Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 12 18:38:32.520310 containerd[1549]: time="2025-12-12T18:38:32.519928500Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 12 18:38:32.520310 containerd[1549]: time="2025-12-12T18:38:32.520058470Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 12 18:38:32.520310 containerd[1549]: time="2025-12-12T18:38:32.520078530Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 12 18:38:32.520310 containerd[1549]: time="2025-12-12T18:38:32.520097600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 12 18:38:32.520568 containerd[1549]: time="2025-12-12T18:38:32.520111080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 12 18:38:32.520568 containerd[1549]: time="2025-12-12T18:38:32.520120950Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 12 18:38:32.520568 containerd[1549]: time="2025-12-12T18:38:32.520130200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 12 18:38:32.520568 containerd[1549]: time="2025-12-12T18:38:32.520140220Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 12 18:38:32.520568 containerd[1549]: time="2025-12-12T18:38:32.520150100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 12 18:38:32.520568 containerd[1549]: time="2025-12-12T18:38:32.520162850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 12 18:38:32.520568 containerd[1549]: time="2025-12-12T18:38:32.520178100Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 12 18:38:32.520568 containerd[1549]: time="2025-12-12T18:38:32.520193390Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 12 18:38:32.520568 containerd[1549]: time="2025-12-12T18:38:32.520243760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 12 18:38:32.520568 containerd[1549]: time="2025-12-12T18:38:32.520255960Z" level=info msg="Start snapshots syncer"
Dec 12 18:38:32.524869 containerd[1549]: time="2025-12-12T18:38:32.521898900Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 12 18:38:32.524869 containerd[1549]: time="2025-12-12T18:38:32.522171740Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Dec 12 18:38:32.525001 containerd[1549]: time="2025-12-12T18:38:32.522214900Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Dec 12 18:38:32.525001 containerd[1549]: time="2025-12-12T18:38:32.522264080Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Dec 12 18:38:32.525001 containerd[1549]: time="2025-12-12T18:38:32.522376870Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Dec 12 18:38:32.525001 containerd[1549]: time="2025-12-12T18:38:32.522401830Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Dec 12 18:38:32.525001 containerd[1549]: time="2025-12-12T18:38:32.522417100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Dec 12 18:38:32.525001 containerd[1549]: time="2025-12-12T18:38:32.522426370Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Dec 12 18:38:32.525001 containerd[1549]: time="2025-12-12T18:38:32.522443100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Dec 12 18:38:32.525001 containerd[1549]: time="2025-12-12T18:38:32.522453440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Dec 12 18:38:32.525001 containerd[1549]: time="2025-12-12T18:38:32.522462430Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Dec 12 18:38:32.525001 containerd[1549]: time="2025-12-12T18:38:32.522481000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Dec 12 18:38:32.525001 containerd[1549]: time="2025-12-12T18:38:32.522490000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 12 18:38:32.525001 containerd[1549]: time="2025-12-12T18:38:32.522499150Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 12 18:38:32.525001 containerd[1549]: time="2025-12-12T18:38:32.522524940Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 12 18:38:32.525001 containerd[1549]: time="2025-12-12T18:38:32.522538400Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 12 18:38:32.525229 containerd[1549]: time="2025-12-12T18:38:32.522546580Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 12 18:38:32.525229 containerd[1549]: time="2025-12-12T18:38:32.522555450Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 12 18:38:32.525229 containerd[1549]: time="2025-12-12T18:38:32.522562450Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 12 18:38:32.525229 containerd[1549]: time="2025-12-12T18:38:32.522571210Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 12 18:38:32.525229 containerd[1549]: time="2025-12-12T18:38:32.522590610Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Dec 12 18:38:32.525229 containerd[1549]: time="2025-12-12T18:38:32.522606220Z" level=info msg="runtime interface created"
Dec 12 18:38:32.525229 containerd[1549]: time="2025-12-12T18:38:32.522611950Z" level=info msg="created NRI interface"
Dec 12 18:38:32.525229 containerd[1549]: time="2025-12-12T18:38:32.522619920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 12 18:38:32.525229 containerd[1549]: time="2025-12-12T18:38:32.522629990Z" level=info msg="Connect containerd service"
Dec 12 18:38:32.525229 containerd[1549]: time="2025-12-12T18:38:32.522645950Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 12 18:38:32.533400 containerd[1549]: time="2025-12-12T18:38:32.532849020Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 12 18:38:32.542784 kernel: EXT4-fs (sda9): resized filesystem to 20360187
Dec 12 18:38:32.560513 extend-filesystems[1562]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Dec 12 18:38:32.560513 extend-filesystems[1562]: old_desc_blocks = 1, new_desc_blocks = 10
Dec 12 18:38:32.560513 extend-filesystems[1562]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long.
Dec 12 18:38:32.566241 extend-filesystems[1518]: Resized filesystem in /dev/sda9
Dec 12 18:38:32.561726 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 12 18:38:32.563781 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 12 18:38:32.614890 locksmithd[1566]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 12 18:38:32.618842 coreos-metadata[1591]: Dec 12 18:38:32.618 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Dec 12 18:38:32.662971 systemd-networkd[1451]: eth0: DHCPv4 address 172.237.133.204/24, gateway 172.237.133.1 acquired from 23.192.120.14
Dec 12 18:38:32.664621 dbus-daemon[1515]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1451 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 12 18:38:32.669399 systemd-timesyncd[1470]: Network configuration changed, trying to establish connection.
Dec 12 18:38:32.670068 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Dec 12 18:38:32.695857 sshd_keygen[1548]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 12 18:38:32.715738 containerd[1549]: time="2025-12-12T18:38:32.714960480Z" level=info msg="Start subscribing containerd event"
Dec 12 18:38:32.715738 containerd[1549]: time="2025-12-12T18:38:32.715008340Z" level=info msg="Start recovering state"
Dec 12 18:38:32.715738 containerd[1549]: time="2025-12-12T18:38:32.715117570Z" level=info msg="Start event monitor"
Dec 12 18:38:32.715738 containerd[1549]: time="2025-12-12T18:38:32.715131510Z" level=info msg="Start cni network conf syncer for default"
Dec 12 18:38:32.715738 containerd[1549]: time="2025-12-12T18:38:32.715138140Z" level=info msg="Start streaming server"
Dec 12 18:38:32.715738 containerd[1549]: time="2025-12-12T18:38:32.715146630Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 12 18:38:32.715738 containerd[1549]: time="2025-12-12T18:38:32.715154250Z" level=info msg="runtime interface starting up..."
Dec 12 18:38:32.715738 containerd[1549]: time="2025-12-12T18:38:32.715160100Z" level=info msg="starting plugins..."
Dec 12 18:38:32.715738 containerd[1549]: time="2025-12-12T18:38:32.715172960Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 12 18:38:32.716993 containerd[1549]: time="2025-12-12T18:38:32.716949640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 12 18:38:32.717111 containerd[1549]: time="2025-12-12T18:38:32.717071520Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 12 18:38:32.717246 systemd[1]: Started containerd.service - containerd container runtime.
Dec 12 18:38:32.721730 containerd[1549]: time="2025-12-12T18:38:32.721692000Z" level=info msg="containerd successfully booted in 0.287285s"
Dec 12 18:38:32.727365 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 12 18:38:32.732785 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 12 18:38:32.755319 systemd[1]: issuegen.service: Deactivated successfully.
Dec 12 18:38:32.755901 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 12 18:38:32.760041 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 12 18:38:32.780282 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 12 18:38:32.784094 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 12 18:38:32.790013 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 12 18:38:32.790942 systemd[1]: Reached target getty.target - Login Prompts.
Dec 12 18:38:32.806452 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
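The containerd entries above show the daemon booting in roughly 0.29s and serving its gRPC API on /run/containerd/containerd.sock, with the "k8s.io" namespace registered through NRI. The earlier "failed to load cni during init" error is expected at this point in boot: /etc/cni/net.d is still empty until a network provider installs a CNI config. As an illustrative aside rather than part of the boot record, a minimal Go sketch against that socket could query the daemon (assuming the github.com/containerd/containerd v1-style client module is available and the process can read the socket):

// Minimal sketch only; assumes the github.com/containerd/containerd client
// module and read access to /run/containerd/containerd.sock (run as root).
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Dial the socket the log shows containerd serving on.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	// Scope requests to the "k8s.io" namespace registered above.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Ask the daemon for its version; this boot reports v2.0.7.
	v, err := client.Version(ctx)
	if err != nil {
		log.Fatalf("version: %v", err)
	}
	fmt.Printf("containerd %s (revision %s)\n", v.Version, v.Revision)
}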
Dec 12 18:38:32.808353 dbus-daemon[1515]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 12 18:38:32.809418 dbus-daemon[1515]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1612 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 12 18:38:32.816928 systemd[1]: Starting polkit.service - Authorization Manager...
Dec 12 18:38:33.534323 systemd-resolved[1452]: Clock change detected. Flushing caches.
Dec 12 18:38:33.534694 systemd-timesyncd[1470]: Contacted time server 5.78.62.36:123 (3.flatcar.pool.ntp.org).
Dec 12 18:38:33.534801 systemd-timesyncd[1470]: Initial clock synchronization to Fri 2025-12-12 18:38:33.534275 UTC.
Dec 12 18:38:33.559958 tar[1534]: linux-amd64/README.md
Dec 12 18:38:33.574678 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 12 18:38:33.596067 polkitd[1633]: Started polkitd version 126
Dec 12 18:38:33.599942 polkitd[1633]: Loading rules from directory /etc/polkit-1/rules.d
Dec 12 18:38:33.600203 polkitd[1633]: Loading rules from directory /run/polkit-1/rules.d
Dec 12 18:38:33.600250 polkitd[1633]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Dec 12 18:38:33.600441 polkitd[1633]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Dec 12 18:38:33.600468 polkitd[1633]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Dec 12 18:38:33.600502 polkitd[1633]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 12 18:38:33.600973 polkitd[1633]: Finished loading, compiling and executing 2 rules
Dec 12 18:38:33.601174 systemd[1]: Started polkit.service - Authorization Manager.
Dec 12 18:38:33.602063 dbus-daemon[1515]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 12 18:38:33.602704 polkitd[1633]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 12 18:38:33.610856 systemd-hostnamed[1612]: Hostname set to <172-237-133-204> (transient)
Dec 12 18:38:33.611124 systemd-resolved[1452]: System hostname changed to '172-237-133-204'.
Dec 12 18:38:33.694073 systemd-networkd[1451]: eth0: Gained IPv6LL
Dec 12 18:38:33.698717 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 12 18:38:33.700167 systemd[1]: Reached target network-online.target - Network is Online.
Dec 12 18:38:33.702682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:38:33.706133 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 12 18:38:33.727176 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 12 18:38:33.804704 coreos-metadata[1514]: Dec 12 18:38:33.804 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Dec 12 18:38:33.892907 coreos-metadata[1514]: Dec 12 18:38:33.892 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
Dec 12 18:38:34.079086 coreos-metadata[1514]: Dec 12 18:38:34.078 INFO Fetch successful
Dec 12 18:38:34.079206 coreos-metadata[1514]: Dec 12 18:38:34.079 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
Dec 12 18:38:34.327562 coreos-metadata[1591]: Dec 12 18:38:34.327 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Dec 12 18:38:34.339971 coreos-metadata[1514]: Dec 12 18:38:34.339 INFO Fetch successful
Dec 12 18:38:34.427845 coreos-metadata[1591]: Dec 12 18:38:34.427 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
Dec 12 18:38:34.458178 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 12 18:38:34.459452 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 12 18:38:34.568806 coreos-metadata[1591]: Dec 12 18:38:34.568 INFO Fetch successful
Dec 12 18:38:34.591477 update-ssh-keys[1681]: Updated "/home/core/.ssh/authorized_keys"
Dec 12 18:38:34.593062 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 12 18:38:34.596534 systemd[1]: Finished sshkeys.service.
Dec 12 18:38:34.622579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:38:34.623748 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 12 18:38:34.626159 systemd[1]: Startup finished in 2.911s (kernel) + 8.407s (initrd) + 4.923s (userspace) = 16.243s.
Dec 12 18:38:34.679849 (kubelet)[1689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 18:38:35.226460 kubelet[1689]: E1212 18:38:35.226371 1689 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 18:38:35.230341 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 18:38:35.230641 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 18:38:35.231755 systemd[1]: kubelet.service: Consumed 882ms CPU time, 265.6M memory peak.
Dec 12 18:38:36.100785 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 12 18:38:36.103102 systemd[1]: Started sshd@0-172.237.133.204:22-139.178.68.195:53460.service - OpenSSH per-connection server daemon (139.178.68.195:53460).
Dec 12 18:38:36.446913 sshd[1700]: Accepted publickey for core from 139.178.68.195 port 53460 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:38:36.448242 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:38:36.454389 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 12 18:38:36.455781 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 12 18:38:36.463235 systemd-logind[1526]: New session 1 of user core.
Dec 12 18:38:36.474248 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 12 18:38:36.477668 systemd[1]: Starting user@500.service - User Manager for UID 500...
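The kubelet exit recorded above (18:38:35) is caused by the missing /var/lib/kubelet/config.yaml; on a freshly provisioned node that file is typically written later by a provisioner such as kubeadm, so the early failure and the restarts that follow are expected. Purely as a hypothetical illustration, the precondition the kubelet trips over can be reproduced in a few lines of Go:

// Illustrative sketch only: mirrors the file check the kubelet fails on above.
package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml" // path taken from the log entry
	if _, err := os.Stat(path); err != nil {
		// On this boot the kubelet saw "open /var/lib/kubelet/config.yaml:
		// no such file or directory" and exited with status 1.
		fmt.Printf("kubelet would fail to start: %v\n", err)
		return
	}
	fmt.Println("kubelet config present")
}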
Dec 12 18:38:36.491475 (systemd)[1705]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 12 18:38:36.493823 systemd-logind[1526]: New session c1 of user core.
Dec 12 18:38:36.622252 systemd[1705]: Queued start job for default target default.target.
Dec 12 18:38:36.634118 systemd[1705]: Created slice app.slice - User Application Slice.
Dec 12 18:38:36.634146 systemd[1705]: Reached target paths.target - Paths.
Dec 12 18:38:36.634190 systemd[1705]: Reached target timers.target - Timers.
Dec 12 18:38:36.635583 systemd[1705]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 12 18:38:36.646749 systemd[1705]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 12 18:38:36.647055 systemd[1705]: Reached target sockets.target - Sockets.
Dec 12 18:38:36.647111 systemd[1705]: Reached target basic.target - Basic System.
Dec 12 18:38:36.647156 systemd[1705]: Reached target default.target - Main User Target.
Dec 12 18:38:36.647192 systemd[1705]: Startup finished in 147ms.
Dec 12 18:38:36.647313 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 12 18:38:36.654029 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 12 18:38:36.911007 systemd[1]: Started sshd@1-172.237.133.204:22-139.178.68.195:53464.service - OpenSSH per-connection server daemon (139.178.68.195:53464).
Dec 12 18:38:37.246339 sshd[1716]: Accepted publickey for core from 139.178.68.195 port 53464 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:38:37.248277 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:38:37.253658 systemd-logind[1526]: New session 2 of user core.
Dec 12 18:38:37.259032 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 12 18:38:37.491811 sshd[1719]: Connection closed by 139.178.68.195 port 53464
Dec 12 18:38:37.492368 sshd-session[1716]: pam_unix(sshd:session): session closed for user core
Dec 12 18:38:37.496474 systemd-logind[1526]: Session 2 logged out. Waiting for processes to exit.
Dec 12 18:38:37.497579 systemd[1]: sshd@1-172.237.133.204:22-139.178.68.195:53464.service: Deactivated successfully.
Dec 12 18:38:37.499495 systemd[1]: session-2.scope: Deactivated successfully.
Dec 12 18:38:37.501006 systemd-logind[1526]: Removed session 2.
Dec 12 18:38:37.552259 systemd[1]: Started sshd@2-172.237.133.204:22-139.178.68.195:53472.service - OpenSSH per-connection server daemon (139.178.68.195:53472).
Dec 12 18:38:37.896327 sshd[1725]: Accepted publickey for core from 139.178.68.195 port 53472 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:38:37.898060 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:38:37.903056 systemd-logind[1526]: New session 3 of user core.
Dec 12 18:38:37.913023 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 12 18:38:38.138627 sshd[1728]: Connection closed by 139.178.68.195 port 53472
Dec 12 18:38:38.139586 sshd-session[1725]: pam_unix(sshd:session): session closed for user core
Dec 12 18:38:38.144503 systemd-logind[1526]: Session 3 logged out. Waiting for processes to exit.
Dec 12 18:38:38.145200 systemd[1]: sshd@2-172.237.133.204:22-139.178.68.195:53472.service: Deactivated successfully.
Dec 12 18:38:38.147605 systemd[1]: session-3.scope: Deactivated successfully.
Dec 12 18:38:38.149999 systemd-logind[1526]: Removed session 3.
Dec 12 18:38:38.204103 systemd[1]: Started sshd@3-172.237.133.204:22-139.178.68.195:53474.service - OpenSSH per-connection server daemon (139.178.68.195:53474).
Dec 12 18:38:38.550873 sshd[1735]: Accepted publickey for core from 139.178.68.195 port 53474 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:38:38.552666 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:38:38.558062 systemd-logind[1526]: New session 4 of user core.
Dec 12 18:38:38.569063 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 12 18:38:38.803472 sshd[1738]: Connection closed by 139.178.68.195 port 53474
Dec 12 18:38:38.804003 sshd-session[1735]: pam_unix(sshd:session): session closed for user core
Dec 12 18:38:38.807779 systemd-logind[1526]: Session 4 logged out. Waiting for processes to exit.
Dec 12 18:38:38.808512 systemd[1]: sshd@3-172.237.133.204:22-139.178.68.195:53474.service: Deactivated successfully.
Dec 12 18:38:38.810370 systemd[1]: session-4.scope: Deactivated successfully.
Dec 12 18:38:38.811637 systemd-logind[1526]: Removed session 4.
Dec 12 18:38:38.866734 systemd[1]: Started sshd@4-172.237.133.204:22-139.178.68.195:53484.service - OpenSSH per-connection server daemon (139.178.68.195:53484).
Dec 12 18:38:39.207064 sshd[1744]: Accepted publickey for core from 139.178.68.195 port 53484 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:38:39.208585 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:38:39.213373 systemd-logind[1526]: New session 5 of user core.
Dec 12 18:38:39.221022 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 12 18:38:39.410163 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 12 18:38:39.410481 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:38:39.418826 sudo[1748]: pam_unix(sudo:session): session closed for user root
Dec 12 18:38:39.469874 sshd[1747]: Connection closed by 139.178.68.195 port 53484
Dec 12 18:38:39.470647 sshd-session[1744]: pam_unix(sshd:session): session closed for user core
Dec 12 18:38:39.476220 systemd[1]: sshd@4-172.237.133.204:22-139.178.68.195:53484.service: Deactivated successfully.
Dec 12 18:38:39.478720 systemd[1]: session-5.scope: Deactivated successfully.
Dec 12 18:38:39.479864 systemd-logind[1526]: Session 5 logged out. Waiting for processes to exit.
Dec 12 18:38:39.482401 systemd-logind[1526]: Removed session 5.
Dec 12 18:38:39.537214 systemd[1]: Started sshd@5-172.237.133.204:22-139.178.68.195:53486.service - OpenSSH per-connection server daemon (139.178.68.195:53486).
Dec 12 18:38:39.896541 sshd[1754]: Accepted publickey for core from 139.178.68.195 port 53486 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:38:39.898354 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:38:39.902981 systemd-logind[1526]: New session 6 of user core.
Dec 12 18:38:39.910033 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 12 18:38:40.102058 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 12 18:38:40.102392 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:38:40.108910 sudo[1759]: pam_unix(sudo:session): session closed for user root
Dec 12 18:38:40.115595 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 12 18:38:40.115895 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:38:40.126771 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 18:38:40.168415 augenrules[1781]: No rules
Dec 12 18:38:40.168888 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 18:38:40.169210 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 18:38:40.170055 sudo[1758]: pam_unix(sudo:session): session closed for user root
Dec 12 18:38:40.222635 sshd[1757]: Connection closed by 139.178.68.195 port 53486
Dec 12 18:38:40.223194 sshd-session[1754]: pam_unix(sshd:session): session closed for user core
Dec 12 18:38:40.232125 systemd[1]: sshd@5-172.237.133.204:22-139.178.68.195:53486.service: Deactivated successfully.
Dec 12 18:38:40.234830 systemd[1]: session-6.scope: Deactivated successfully.
Dec 12 18:38:40.235723 systemd-logind[1526]: Session 6 logged out. Waiting for processes to exit.
Dec 12 18:38:40.237594 systemd-logind[1526]: Removed session 6.
Dec 12 18:38:40.285313 systemd[1]: Started sshd@6-172.237.133.204:22-139.178.68.195:39432.service - OpenSSH per-connection server daemon (139.178.68.195:39432).
Dec 12 18:38:40.637106 sshd[1790]: Accepted publickey for core from 139.178.68.195 port 39432 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:38:40.638883 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:38:40.647261 systemd-logind[1526]: New session 7 of user core.
Dec 12 18:38:40.664043 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 12 18:38:40.840122 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 12 18:38:40.840497 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:38:41.197846 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 12 18:38:41.213442 (dockerd)[1812]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 12 18:38:41.467681 dockerd[1812]: time="2025-12-12T18:38:41.467497098Z" level=info msg="Starting up"
Dec 12 18:38:41.468545 dockerd[1812]: time="2025-12-12T18:38:41.468504328Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 12 18:38:41.485000 dockerd[1812]: time="2025-12-12T18:38:41.484852888Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 12 18:38:41.502613 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4226339629-merged.mount: Deactivated successfully.
Dec 12 18:38:41.527752 dockerd[1812]: time="2025-12-12T18:38:41.527704258Z" level=info msg="Loading containers: start."
Dec 12 18:38:41.540861 kernel: Initializing XFRM netlink socket
Dec 12 18:38:41.829690 systemd-networkd[1451]: docker0: Link UP
Dec 12 18:38:41.834298 dockerd[1812]: time="2025-12-12T18:38:41.834235388Z" level=info msg="Loading containers: done."
Dec 12 18:38:41.854879 dockerd[1812]: time="2025-12-12T18:38:41.854799828Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 12 18:38:41.855101 dockerd[1812]: time="2025-12-12T18:38:41.854941428Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 12 18:38:41.855101 dockerd[1812]: time="2025-12-12T18:38:41.855049798Z" level=info msg="Initializing buildkit"
Dec 12 18:38:41.880994 dockerd[1812]: time="2025-12-12T18:38:41.880841638Z" level=info msg="Completed buildkit initialization"
Dec 12 18:38:41.890308 dockerd[1812]: time="2025-12-12T18:38:41.890271438Z" level=info msg="Daemon has completed initialization"
Dec 12 18:38:41.890370 dockerd[1812]: time="2025-12-12T18:38:41.890326218Z" level=info msg="API listen on /run/docker.sock"
Dec 12 18:38:41.890538 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 12 18:38:42.464623 containerd[1549]: time="2025-12-12T18:38:42.464556558Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\""
Dec 12 18:38:43.199941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3380279331.mount: Deactivated successfully.
Dec 12 18:38:44.450900 containerd[1549]: time="2025-12-12T18:38:44.450832718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:38:44.451859 containerd[1549]: time="2025-12-12T18:38:44.451702328Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=29072183"
Dec 12 18:38:44.452363 containerd[1549]: time="2025-12-12T18:38:44.452331758Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:38:44.454434 containerd[1549]: time="2025-12-12T18:38:44.454402798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:38:44.455249 containerd[1549]: time="2025-12-12T18:38:44.455216458Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 1.99060951s"
Dec 12 18:38:44.455292 containerd[1549]: time="2025-12-12T18:38:44.455253138Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\""
Dec 12 18:38:44.456421 containerd[1549]: time="2025-12-12T18:38:44.456240288Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\""
Dec 12 18:38:45.481245 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
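Once dockerd logs "API listen on /run/docker.sock", the daemon is reachable over that Unix socket. As a sketch outside the log itself, assuming the github.com/docker/docker/client module and read access to the socket, a short Go program can confirm the daemon version this boot reports (28.0.4):

// Sketch only; assumes the github.com/docker/docker/client module and that
// DOCKER_HOST is unset, so the client dials the default local Docker socket.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatalf("client: %v", err)
	}
	defer cli.Close()

	// Queries the same HTTP-over-Unix-socket API the daemon just exposed.
	v, err := cli.ServerVersion(context.Background())
	if err != nil {
		log.Fatalf("server version: %v", err)
	}
	fmt.Printf("docker %s (API %s)\n", v.Version, v.APIVersion)
}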
Dec 12 18:38:45.483635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:38:45.696389 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:38:45.710783 (kubelet)[2091]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 18:38:45.760246 kubelet[2091]: E1212 18:38:45.757346 2091 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 18:38:45.763744 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 18:38:45.763980 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 18:38:45.764674 systemd[1]: kubelet.service: Consumed 219ms CPU time, 110.3M memory peak.
Dec 12 18:38:46.091853 containerd[1549]: time="2025-12-12T18:38:46.091525108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:38:46.092732 containerd[1549]: time="2025-12-12T18:38:46.092691858Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24992010"
Dec 12 18:38:46.093615 containerd[1549]: time="2025-12-12T18:38:46.093579178Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:38:46.096101 containerd[1549]: time="2025-12-12T18:38:46.096064828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:38:46.097034 containerd[1549]: time="2025-12-12T18:38:46.096994908Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 1.64070442s"
Dec 12 18:38:46.097297 containerd[1549]: time="2025-12-12T18:38:46.097268988Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\""
Dec 12 18:38:46.101828 containerd[1549]: time="2025-12-12T18:38:46.101772468Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\""
Dec 12 18:38:47.327674 containerd[1549]: time="2025-12-12T18:38:47.327567858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:38:47.329223 containerd[1549]: time="2025-12-12T18:38:47.328964468Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19404248"
Dec 12 18:38:47.329822 containerd[1549]: time="2025-12-12T18:38:47.329773278Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:38:47.332304 containerd[1549]: time="2025-12-12T18:38:47.332266908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:38:47.333302 containerd[1549]: time="2025-12-12T18:38:47.333241188Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 1.23141798s"
Dec 12 18:38:47.333363 containerd[1549]: time="2025-12-12T18:38:47.333303738Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\""
Dec 12 18:38:47.334492 containerd[1549]: time="2025-12-12T18:38:47.334442988Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\""
Dec 12 18:38:48.571313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount911630935.mount: Deactivated successfully.
Dec 12 18:38:49.005176 containerd[1549]: time="2025-12-12T18:38:49.004908698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:38:49.005841 containerd[1549]: time="2025-12-12T18:38:49.005807448Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31161423"
Dec 12 18:38:49.008206 containerd[1549]: time="2025-12-12T18:38:49.007178198Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:38:49.008903 containerd[1549]: time="2025-12-12T18:38:49.008862548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:38:49.009497 containerd[1549]: time="2025-12-12T18:38:49.009449138Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 1.67497361s"
Dec 12 18:38:49.009497 containerd[1549]: time="2025-12-12T18:38:49.009494118Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\""
Dec 12 18:38:49.010634 containerd[1549]: time="2025-12-12T18:38:49.010608418Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Dec 12 18:38:49.692747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1292311.mount: Deactivated successfully.
Dec 12 18:38:50.395030 containerd[1549]: time="2025-12-12T18:38:50.394969988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:38:50.396343 containerd[1549]: time="2025-12-12T18:38:50.396316008Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Dec 12 18:38:50.396955 containerd[1549]: time="2025-12-12T18:38:50.396898328Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:38:50.399939 containerd[1549]: time="2025-12-12T18:38:50.399404988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:38:50.400271 containerd[1549]: time="2025-12-12T18:38:50.400242488Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.3896048s"
Dec 12 18:38:50.400315 containerd[1549]: time="2025-12-12T18:38:50.400277358Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Dec 12 18:38:50.400823 containerd[1549]: time="2025-12-12T18:38:50.400798068Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 12 18:38:50.957643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4139745882.mount: Deactivated successfully.
Dec 12 18:38:50.961396 containerd[1549]: time="2025-12-12T18:38:50.961351398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 18:38:50.962065 containerd[1549]: time="2025-12-12T18:38:50.962040658Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Dec 12 18:38:50.962749 containerd[1549]: time="2025-12-12T18:38:50.962716038Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 18:38:50.964369 containerd[1549]: time="2025-12-12T18:38:50.964322658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 18:38:50.965051 containerd[1549]: time="2025-12-12T18:38:50.964864578Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 564.03823ms"
Dec 12 18:38:50.965051 containerd[1549]: time="2025-12-12T18:38:50.964893948Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Dec 12 18:38:50.965560 containerd[1549]: time="2025-12-12T18:38:50.965503338Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Dec 12 18:38:51.635494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount79196080.mount: Deactivated successfully.
Dec 12 18:38:53.106949 containerd[1549]: time="2025-12-12T18:38:53.106874388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:38:53.108154 containerd[1549]: time="2025-12-12T18:38:53.107904058Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Dec 12 18:38:53.108695 containerd[1549]: time="2025-12-12T18:38:53.108661898Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:38:53.111716 containerd[1549]: time="2025-12-12T18:38:53.111684908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:38:53.112677 containerd[1549]: time="2025-12-12T18:38:53.112653738Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.14655054s"
Dec 12 18:38:53.112754 containerd[1549]: time="2025-12-12T18:38:53.112739998Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Dec 12 18:38:54.996526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:38:54.996689 systemd[1]: kubelet.service: Consumed 219ms CPU time, 110.3M memory peak.
Dec 12 18:38:55.002237 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:38:55.031374 systemd[1]: Reload requested from client PID 2248 ('systemctl') (unit session-7.scope)...
Dec 12 18:38:55.031398 systemd[1]: Reloading...
Dec 12 18:38:55.230959 zram_generator::config[2321]: No configuration found.
Dec 12 18:38:55.410709 systemd[1]: Reloading finished in 378 ms.
Dec 12 18:38:55.467850 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 12 18:38:55.468188 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 12 18:38:55.468609 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:38:55.468721 systemd[1]: kubelet.service: Consumed 175ms CPU time, 98.2M memory peak.
Dec 12 18:38:55.470785 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:38:55.664180 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:38:55.671700 (kubelet)[2346]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 12 18:38:55.717181 kubelet[2346]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 18:38:55.718968 kubelet[2346]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 12 18:38:55.718968 kubelet[2346]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 18:38:55.718968 kubelet[2346]: I1212 18:38:55.717698 2346 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 12 18:38:55.981313 kubelet[2346]: I1212 18:38:55.980982 2346 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Dec 12 18:38:55.981313 kubelet[2346]: I1212 18:38:55.981026 2346 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 12 18:38:55.981689 kubelet[2346]: I1212 18:38:55.981657 2346 server.go:954] "Client rotation is on, will bootstrap in background"
Dec 12 18:38:56.013534 kubelet[2346]: E1212 18:38:56.013455 2346 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.237.133.204:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.237.133.204:6443: connect: connection refused" logger="UnhandledError"
Dec 12 18:38:56.015744 kubelet[2346]: I1212 18:38:56.014898 2346 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 12 18:38:56.030834 kubelet[2346]: I1212 18:38:56.030791 2346 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 12 18:38:56.041325 kubelet[2346]: I1212 18:38:56.041257 2346 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 12 18:38:56.044730 kubelet[2346]: I1212 18:38:56.044664 2346 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 12 18:38:56.044991 kubelet[2346]: I1212 18:38:56.044719 2346 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-133-204","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 12 18:38:56.045118 kubelet[2346]: I1212 18:38:56.044993 2346 topology_manager.go:138] "Creating topology manager with none policy"
Dec 12 18:38:56.045118 kubelet[2346]: I1212 18:38:56.045010 2346 container_manager_linux.go:304] "Creating device plugin manager"
Dec 12 18:38:56.045209 kubelet[2346]: I1212 18:38:56.045183 2346 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 18:38:56.049255 kubelet[2346]: I1212 18:38:56.049101 2346 kubelet.go:446] "Attempting to sync node with API server"
Dec 12 18:38:56.049255 kubelet[2346]: I1212 18:38:56.049151 2346 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 12 18:38:56.049255 kubelet[2346]: I1212 18:38:56.049185 2346 kubelet.go:352] "Adding apiserver pod source"
Dec 12 18:38:56.049255 kubelet[2346]: I1212 18:38:56.049202 2346 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 12 18:38:56.055953 kubelet[2346]: I1212 18:38:56.055451 2346 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 12 18:38:56.055953 kubelet[2346]: I1212 18:38:56.055891 2346 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 12 18:38:56.056174 kubelet[2346]: W1212 18:38:56.056153 2346 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 12 18:38:56.058853 kubelet[2346]: I1212 18:38:56.058798 2346 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 12 18:38:56.058853 kubelet[2346]: I1212 18:38:56.058836 2346 server.go:1287] "Started kubelet"
Dec 12 18:38:56.059061 kubelet[2346]: W1212 18:38:56.058997 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.237.133.204:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-133-204&limit=500&resourceVersion=0": dial tcp 172.237.133.204:6443: connect: connection refused
Dec 12 18:38:56.059092 kubelet[2346]: E1212 18:38:56.059061 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.237.133.204:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-133-204&limit=500&resourceVersion=0\": dial tcp 172.237.133.204:6443: connect: connection refused" logger="UnhandledError"
Dec 12 18:38:56.070614 kubelet[2346]: E1212 18:38:56.069003 2346 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.237.133.204:6443/api/v1/namespaces/default/events\": dial tcp 172.237.133.204:6443: connect: connection refused" event="&Event{ObjectMeta:{172-237-133-204.18808bc5e8bdf342 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-237-133-204,UID:172-237-133-204,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-237-133-204,},FirstTimestamp:2025-12-12 18:38:56.058815298 +0000 UTC m=+0.380729281,LastTimestamp:2025-12-12 18:38:56.058815298 +0000 UTC m=+0.380729281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-237-133-204,}"
Dec 12 18:38:56.070614 kubelet[2346]: I1212 18:38:56.070129 2346 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 12 18:38:56.070614 kubelet[2346]: W1212 18:38:56.070317 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service:
Get "https://172.237.133.204:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.237.133.204:6443: connect: connection refused Dec 12 18:38:56.070614 kubelet[2346]: E1212 18:38:56.070373 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.237.133.204:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.237.133.204:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:38:56.074476 kubelet[2346]: I1212 18:38:56.074418 2346 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:38:56.075947 kubelet[2346]: I1212 18:38:56.075679 2346 server.go:479] "Adding debug handlers to kubelet server" Dec 12 18:38:56.080070 kubelet[2346]: I1212 18:38:56.079976 2346 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:38:56.080412 kubelet[2346]: I1212 18:38:56.080375 2346 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:38:56.080764 kubelet[2346]: I1212 18:38:56.080720 2346 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:38:56.082878 kubelet[2346]: I1212 18:38:56.082674 2346 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:38:56.082948 kubelet[2346]: E1212 18:38:56.082904 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-133-204\" not found" Dec 12 18:38:56.083888 kubelet[2346]: I1212 18:38:56.083856 2346 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:38:56.084003 kubelet[2346]: I1212 18:38:56.083939 2346 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:38:56.086386 kubelet[2346]: W1212 18:38:56.086329 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.237.133.204:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.237.133.204:6443: connect: connection refused Dec 12 18:38:56.086442 kubelet[2346]: E1212 18:38:56.086386 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.237.133.204:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.237.133.204:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:38:56.086497 kubelet[2346]: E1212 18:38:56.086457 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.133.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-133-204?timeout=10s\": dial tcp 172.237.133.204:6443: connect: connection refused" interval="200ms" Dec 12 18:38:56.086772 kubelet[2346]: I1212 18:38:56.086600 2346 factory.go:221] Registration of the systemd container factory successfully Dec 12 18:38:56.086772 kubelet[2346]: I1212 18:38:56.086684 2346 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:38:56.088408 kubelet[2346]: E1212 18:38:56.088377 2346 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:38:56.089281 kubelet[2346]: I1212 18:38:56.088474 2346 factory.go:221] Registration of the containerd container factory successfully Dec 12 18:38:56.097358 kubelet[2346]: I1212 18:38:56.097305 2346 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 18:38:56.098630 kubelet[2346]: I1212 18:38:56.098610 2346 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 18:38:56.098718 kubelet[2346]: I1212 18:38:56.098707 2346 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 18:38:56.098810 kubelet[2346]: I1212 18:38:56.098795 2346 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 18:38:56.098852 kubelet[2346]: I1212 18:38:56.098845 2346 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 18:38:56.099088 kubelet[2346]: E1212 18:38:56.099065 2346 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:38:56.107850 kubelet[2346]: W1212 18:38:56.107809 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.237.133.204:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.237.133.204:6443: connect: connection refused Dec 12 18:38:56.108020 kubelet[2346]: E1212 18:38:56.107990 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.237.133.204:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.237.133.204:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:38:56.120583 kubelet[2346]: I1212 18:38:56.120551 2346 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:38:56.120723 kubelet[2346]: I1212 18:38:56.120704 2346 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:38:56.120821 kubelet[2346]: I1212 18:38:56.120811 2346 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:38:56.122665 kubelet[2346]: I1212 18:38:56.122650 2346 policy_none.go:49] "None policy: Start" Dec 12 18:38:56.122729 kubelet[2346]: I1212 18:38:56.122719 2346 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:38:56.122774 kubelet[2346]: I1212 18:38:56.122766 2346 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:38:56.129377 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 18:38:56.146295 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 18:38:56.150482 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
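
The three slices systemd just created (kubepods.slice, kubepods-burstable.slice, kubepods-besteffort.slice) are the QoS parents the kubelet routes every pod into. A minimal sketch of the documented Kubernetes QoS classification, assuming a simplified container shape (the function and dict layout are ours, not kubelet's actual API):

```python
# Which of the kubepods slices above a pod lands in, per the documented
# QoS rules: no requests/limits -> BestEffort; cpu+memory limits with
# requests equal to limits (or defaulted) -> Guaranteed; else Burstable.
def qos_class(containers):
    """containers: list of {'requests': {...}, 'limits': {...}} dicts."""
    reqs = [c.get("requests") or {} for c in containers]
    lims = [c.get("limits") or {} for c in containers]
    if not any(reqs) and not any(lims):
        return "BestEffort"                 # kubepods-besteffort.slice
    guaranteed = all(
        {"cpu", "memory"} <= set(l) and (not r or r == l)
        for r, l in zip(reqs, lims)
    )
    # Guaranteed pods get their slice directly under kubepods.slice
    return "Guaranteed" if guaranteed else "Burstable"

print(qos_class([{"limits": {"cpu": "100m", "memory": "128Mi"}}]))  # Guaranteed
print(qos_class([{}]))                                              # BestEffort
```
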
Dec 12 18:38:56.164882 kubelet[2346]: I1212 18:38:56.164824 2346 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 18:38:56.165410 kubelet[2346]: I1212 18:38:56.165377 2346 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:38:56.165459 kubelet[2346]: I1212 18:38:56.165408 2346 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:38:56.165848 kubelet[2346]: I1212 18:38:56.165808 2346 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:38:56.168060 kubelet[2346]: E1212 18:38:56.168023 2346 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 18:38:56.168769 kubelet[2346]: E1212 18:38:56.168738 2346 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-237-133-204\" not found" Dec 12 18:38:56.217498 systemd[1]: Created slice kubepods-burstable-pod891b655f67431229d0ba41bd0adbae49.slice - libcontainer container kubepods-burstable-pod891b655f67431229d0ba41bd0adbae49.slice. Dec 12 18:38:56.233015 kubelet[2346]: E1212 18:38:56.232625 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-133-204\" not found" node="172-237-133-204" Dec 12 18:38:56.237087 systemd[1]: Created slice kubepods-burstable-pod0ebc7da7909dea1e0e89efea94f00601.slice - libcontainer container kubepods-burstable-pod0ebc7da7909dea1e0e89efea94f00601.slice. Dec 12 18:38:56.251408 kubelet[2346]: E1212 18:38:56.251376 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-133-204\" not found" node="172-237-133-204" Dec 12 18:38:56.254644 systemd[1]: Created slice kubepods-burstable-podf4e36dd921667c4a45e85b8f9ba32429.slice - libcontainer container kubepods-burstable-podf4e36dd921667c4a45e85b8f9ba32429.slice. 
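
The per-pod slices just created ("kubepods-burstable-pod<uid>.slice") follow the systemd cgroup driver's naming scheme. A simplified reconstruction of that naming (the real logic lives in the kubelet's container-manager code; this is illustrative only):

```python
# How the kubelet's systemd cgroup driver derives the slice names above.
def pod_slice(qos: str, pod_uid: str) -> str:
    uid = pod_uid.replace("-", "_")   # '-' acts as a path separator in slice names
    parent = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"
    return f"{parent}-pod{uid}.slice"

# Static-pod UIDs are config hashes without dashes, hence no underscores:
print(pod_slice("burstable", "891b655f67431229d0ba41bd0adbae49"))
# kubepods-burstable-pod891b655f67431229d0ba41bd0adbae49.slice
# An API pod UID keeps its dashes as underscores (cf. kube-proxy later on):
print(pod_slice("besteffort", "1dc8b501-6630-435e-9929-60915e971100"))
# kubepods-besteffort-pod1dc8b501_6630_435e_9929_60915e971100.slice
```
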
Dec 12 18:38:56.257478 kubelet[2346]: E1212 18:38:56.257438 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-133-204\" not found" node="172-237-133-204" Dec 12 18:38:56.269561 kubelet[2346]: I1212 18:38:56.269526 2346 kubelet_node_status.go:75] "Attempting to register node" node="172-237-133-204" Dec 12 18:38:56.270358 kubelet[2346]: E1212 18:38:56.270330 2346 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.133.204:6443/api/v1/nodes\": dial tcp 172.237.133.204:6443: connect: connection refused" node="172-237-133-204" Dec 12 18:38:56.290113 kubelet[2346]: E1212 18:38:56.290059 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.133.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-133-204?timeout=10s\": dial tcp 172.237.133.204:6443: connect: connection refused" interval="400ms" Dec 12 18:38:56.385530 kubelet[2346]: I1212 18:38:56.385476 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/891b655f67431229d0ba41bd0adbae49-kubeconfig\") pod \"kube-scheduler-172-237-133-204\" (UID: \"891b655f67431229d0ba41bd0adbae49\") " pod="kube-system/kube-scheduler-172-237-133-204" Dec 12 18:38:56.385530 kubelet[2346]: I1212 18:38:56.385518 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ebc7da7909dea1e0e89efea94f00601-ca-certs\") pod \"kube-apiserver-172-237-133-204\" (UID: \"0ebc7da7909dea1e0e89efea94f00601\") " pod="kube-system/kube-apiserver-172-237-133-204" Dec 12 18:38:56.385530 kubelet[2346]: I1212 18:38:56.385540 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f4e36dd921667c4a45e85b8f9ba32429-kubeconfig\") pod \"kube-controller-manager-172-237-133-204\" (UID: \"f4e36dd921667c4a45e85b8f9ba32429\") " pod="kube-system/kube-controller-manager-172-237-133-204" Dec 12 18:38:56.385530 kubelet[2346]: I1212 18:38:56.385560 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4e36dd921667c4a45e85b8f9ba32429-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-133-204\" (UID: \"f4e36dd921667c4a45e85b8f9ba32429\") " pod="kube-system/kube-controller-manager-172-237-133-204" Dec 12 18:38:56.385890 kubelet[2346]: I1212 18:38:56.385585 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ebc7da7909dea1e0e89efea94f00601-k8s-certs\") pod \"kube-apiserver-172-237-133-204\" (UID: \"0ebc7da7909dea1e0e89efea94f00601\") " pod="kube-system/kube-apiserver-172-237-133-204" Dec 12 18:38:56.385890 kubelet[2346]: I1212 18:38:56.385607 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ebc7da7909dea1e0e89efea94f00601-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-133-204\" (UID: \"0ebc7da7909dea1e0e89efea94f00601\") " pod="kube-system/kube-apiserver-172-237-133-204" Dec 12 18:38:56.385890 kubelet[2346]: I1212 18:38:56.385623 2346 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f4e36dd921667c4a45e85b8f9ba32429-ca-certs\") pod \"kube-controller-manager-172-237-133-204\" (UID: \"f4e36dd921667c4a45e85b8f9ba32429\") " pod="kube-system/kube-controller-manager-172-237-133-204" Dec 12 18:38:56.385890 kubelet[2346]: I1212 18:38:56.385647 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f4e36dd921667c4a45e85b8f9ba32429-flexvolume-dir\") pod \"kube-controller-manager-172-237-133-204\" (UID: \"f4e36dd921667c4a45e85b8f9ba32429\") " pod="kube-system/kube-controller-manager-172-237-133-204" Dec 12 18:38:56.385890 kubelet[2346]: I1212 18:38:56.385661 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f4e36dd921667c4a45e85b8f9ba32429-k8s-certs\") pod \"kube-controller-manager-172-237-133-204\" (UID: \"f4e36dd921667c4a45e85b8f9ba32429\") " pod="kube-system/kube-controller-manager-172-237-133-204" Dec 12 18:38:56.473401 kubelet[2346]: I1212 18:38:56.473341 2346 kubelet_node_status.go:75] "Attempting to register node" node="172-237-133-204" Dec 12 18:38:56.473747 kubelet[2346]: E1212 18:38:56.473709 2346 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.133.204:6443/api/v1/nodes\": dial tcp 172.237.133.204:6443: connect: connection refused" node="172-237-133-204" Dec 12 18:38:56.536136 kubelet[2346]: E1212 18:38:56.535938 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:38:56.537349 containerd[1549]: time="2025-12-12T18:38:56.537275978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-133-204,Uid:891b655f67431229d0ba41bd0adbae49,Namespace:kube-system,Attempt:0,}" Dec 12 18:38:56.553205 kubelet[2346]: E1212 18:38:56.553154 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:38:56.553740 containerd[1549]: time="2025-12-12T18:38:56.553679508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-133-204,Uid:0ebc7da7909dea1e0e89efea94f00601,Namespace:kube-system,Attempt:0,}" Dec 12 18:38:56.558528 kubelet[2346]: E1212 18:38:56.558106 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:38:56.565288 containerd[1549]: time="2025-12-12T18:38:56.565252298Z" level=info msg="connecting to shim 4337def54a390e8a72feafc5f914542c54836456c81288b052c3efb8e6c50c63" address="unix:///run/containerd/s/4babf617749abab9273f33ab52eada70ceeaee092293deb748b5f762ba6fc51f" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:38:56.574523 containerd[1549]: time="2025-12-12T18:38:56.574481268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-133-204,Uid:f4e36dd921667c4a45e85b8f9ba32429,Namespace:kube-system,Attempt:0,}" Dec 12 18:38:56.587606 containerd[1549]: time="2025-12-12T18:38:56.587549428Z" level=info msg="connecting to shim 
1f4acfbdde6379ecbade7998e74ea0397c9d492b532184d15a1c033e5c8ea15b" address="unix:///run/containerd/s/6993617840e2407c575aadedb5c72eb0050363c96a91ef7383753eb22644bf3b" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:38:56.625659 containerd[1549]: time="2025-12-12T18:38:56.625586868Z" level=info msg="connecting to shim 186b5772d81ce5bfd852288aeca9481cfe3412a54cde66b4a961c4831dc5fc68" address="unix:///run/containerd/s/37937453b727305bb771f2b9d00c2d30ade47647f6d0bdeb38244d5fa3f0f7ae" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:38:56.632315 systemd[1]: Started cri-containerd-1f4acfbdde6379ecbade7998e74ea0397c9d492b532184d15a1c033e5c8ea15b.scope - libcontainer container 1f4acfbdde6379ecbade7998e74ea0397c9d492b532184d15a1c033e5c8ea15b. Dec 12 18:38:56.646550 systemd[1]: Started cri-containerd-4337def54a390e8a72feafc5f914542c54836456c81288b052c3efb8e6c50c63.scope - libcontainer container 4337def54a390e8a72feafc5f914542c54836456c81288b052c3efb8e6c50c63. Dec 12 18:38:56.674366 systemd[1]: Started cri-containerd-186b5772d81ce5bfd852288aeca9481cfe3412a54cde66b4a961c4831dc5fc68.scope - libcontainer container 186b5772d81ce5bfd852288aeca9481cfe3412a54cde66b4a961c4831dc5fc68. Dec 12 18:38:56.691246 kubelet[2346]: E1212 18:38:56.691194 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.133.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-133-204?timeout=10s\": dial tcp 172.237.133.204:6443: connect: connection refused" interval="800ms" Dec 12 18:38:56.728304 containerd[1549]: time="2025-12-12T18:38:56.728189078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-133-204,Uid:891b655f67431229d0ba41bd0adbae49,Namespace:kube-system,Attempt:0,} returns sandbox id \"4337def54a390e8a72feafc5f914542c54836456c81288b052c3efb8e6c50c63\"" Dec 12 18:38:56.732803 kubelet[2346]: E1212 18:38:56.732741 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:38:56.739717 containerd[1549]: time="2025-12-12T18:38:56.737169248Z" level=info msg="CreateContainer within sandbox \"4337def54a390e8a72feafc5f914542c54836456c81288b052c3efb8e6c50c63\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 18:38:56.745011 containerd[1549]: time="2025-12-12T18:38:56.744974518Z" level=info msg="Container 4f3cc0405eaa422abb0e1f70ce0e09c9e4fa5cc6b03ebec06c6922051b41534b: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:38:56.750463 containerd[1549]: time="2025-12-12T18:38:56.750442458Z" level=info msg="CreateContainer within sandbox \"4337def54a390e8a72feafc5f914542c54836456c81288b052c3efb8e6c50c63\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4f3cc0405eaa422abb0e1f70ce0e09c9e4fa5cc6b03ebec06c6922051b41534b\"" Dec 12 18:38:56.751799 containerd[1549]: time="2025-12-12T18:38:56.751784698Z" level=info msg="StartContainer for \"4f3cc0405eaa422abb0e1f70ce0e09c9e4fa5cc6b03ebec06c6922051b41534b\"" Dec 12 18:38:56.752766 containerd[1549]: time="2025-12-12T18:38:56.752746588Z" level=info msg="connecting to shim 4f3cc0405eaa422abb0e1f70ce0e09c9e4fa5cc6b03ebec06c6922051b41534b" address="unix:///run/containerd/s/4babf617749abab9273f33ab52eada70ceeaee092293deb748b5f762ba6fc51f" protocol=ttrpc version=3 Dec 12 18:38:56.781089 containerd[1549]: time="2025-12-12T18:38:56.781049578Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-172-237-133-204,Uid:f4e36dd921667c4a45e85b8f9ba32429,Namespace:kube-system,Attempt:0,} returns sandbox id \"186b5772d81ce5bfd852288aeca9481cfe3412a54cde66b4a961c4831dc5fc68\"" Dec 12 18:38:56.781851 kubelet[2346]: E1212 18:38:56.781816 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:38:56.783678 containerd[1549]: time="2025-12-12T18:38:56.783628138Z" level=info msg="CreateContainer within sandbox \"186b5772d81ce5bfd852288aeca9481cfe3412a54cde66b4a961c4831dc5fc68\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 18:38:56.784315 systemd[1]: Started cri-containerd-4f3cc0405eaa422abb0e1f70ce0e09c9e4fa5cc6b03ebec06c6922051b41534b.scope - libcontainer container 4f3cc0405eaa422abb0e1f70ce0e09c9e4fa5cc6b03ebec06c6922051b41534b. Dec 12 18:38:56.789830 containerd[1549]: time="2025-12-12T18:38:56.789741138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-133-204,Uid:0ebc7da7909dea1e0e89efea94f00601,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f4acfbdde6379ecbade7998e74ea0397c9d492b532184d15a1c033e5c8ea15b\"" Dec 12 18:38:56.791337 kubelet[2346]: E1212 18:38:56.791290 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:38:56.794625 containerd[1549]: time="2025-12-12T18:38:56.794561348Z" level=info msg="CreateContainer within sandbox \"1f4acfbdde6379ecbade7998e74ea0397c9d492b532184d15a1c033e5c8ea15b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 18:38:56.799399 containerd[1549]: time="2025-12-12T18:38:56.799362958Z" level=info msg="Container 64f4afd676e8dbf64b973d4fd5754853c420466ffb36e10c41b6e5d6cd4e4170: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:38:56.802679 containerd[1549]: time="2025-12-12T18:38:56.802524088Z" level=info msg="Container b1850c61c46136dc974ae36439c6c5d9d8b458ddb1edfbe57f4bc5e771dce5da: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:38:56.808263 containerd[1549]: time="2025-12-12T18:38:56.808153848Z" level=info msg="CreateContainer within sandbox \"186b5772d81ce5bfd852288aeca9481cfe3412a54cde66b4a961c4831dc5fc68\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"64f4afd676e8dbf64b973d4fd5754853c420466ffb36e10c41b6e5d6cd4e4170\"" Dec 12 18:38:56.808677 containerd[1549]: time="2025-12-12T18:38:56.808657418Z" level=info msg="StartContainer for \"64f4afd676e8dbf64b973d4fd5754853c420466ffb36e10c41b6e5d6cd4e4170\"" Dec 12 18:38:56.813695 containerd[1549]: time="2025-12-12T18:38:56.813620628Z" level=info msg="connecting to shim 64f4afd676e8dbf64b973d4fd5754853c420466ffb36e10c41b6e5d6cd4e4170" address="unix:///run/containerd/s/37937453b727305bb771f2b9d00c2d30ade47647f6d0bdeb38244d5fa3f0f7ae" protocol=ttrpc version=3 Dec 12 18:38:56.820072 containerd[1549]: time="2025-12-12T18:38:56.820035598Z" level=info msg="CreateContainer within sandbox \"1f4acfbdde6379ecbade7998e74ea0397c9d492b532184d15a1c033e5c8ea15b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b1850c61c46136dc974ae36439c6c5d9d8b458ddb1edfbe57f4bc5e771dce5da\"" Dec 12 18:38:56.820671 containerd[1549]: time="2025-12-12T18:38:56.820639648Z" level=info msg="StartContainer for 
\"b1850c61c46136dc974ae36439c6c5d9d8b458ddb1edfbe57f4bc5e771dce5da\"" Dec 12 18:38:56.822446 containerd[1549]: time="2025-12-12T18:38:56.822405438Z" level=info msg="connecting to shim b1850c61c46136dc974ae36439c6c5d9d8b458ddb1edfbe57f4bc5e771dce5da" address="unix:///run/containerd/s/6993617840e2407c575aadedb5c72eb0050363c96a91ef7383753eb22644bf3b" protocol=ttrpc version=3 Dec 12 18:38:56.853148 systemd[1]: Started cri-containerd-b1850c61c46136dc974ae36439c6c5d9d8b458ddb1edfbe57f4bc5e771dce5da.scope - libcontainer container b1850c61c46136dc974ae36439c6c5d9d8b458ddb1edfbe57f4bc5e771dce5da. Dec 12 18:38:56.869708 systemd[1]: Started cri-containerd-64f4afd676e8dbf64b973d4fd5754853c420466ffb36e10c41b6e5d6cd4e4170.scope - libcontainer container 64f4afd676e8dbf64b973d4fd5754853c420466ffb36e10c41b6e5d6cd4e4170. Dec 12 18:38:56.879556 kubelet[2346]: I1212 18:38:56.879083 2346 kubelet_node_status.go:75] "Attempting to register node" node="172-237-133-204" Dec 12 18:38:56.880096 kubelet[2346]: E1212 18:38:56.880060 2346 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.133.204:6443/api/v1/nodes\": dial tcp 172.237.133.204:6443: connect: connection refused" node="172-237-133-204" Dec 12 18:38:56.891764 containerd[1549]: time="2025-12-12T18:38:56.891603758Z" level=info msg="StartContainer for \"4f3cc0405eaa422abb0e1f70ce0e09c9e4fa5cc6b03ebec06c6922051b41534b\" returns successfully" Dec 12 18:38:56.945010 containerd[1549]: time="2025-12-12T18:38:56.944964308Z" level=info msg="StartContainer for \"b1850c61c46136dc974ae36439c6c5d9d8b458ddb1edfbe57f4bc5e771dce5da\" returns successfully" Dec 12 18:38:56.996034 containerd[1549]: time="2025-12-12T18:38:56.995974948Z" level=info msg="StartContainer for \"64f4afd676e8dbf64b973d4fd5754853c420466ffb36e10c41b6e5d6cd4e4170\" returns successfully" Dec 12 18:38:57.004428 kubelet[2346]: E1212 18:38:57.004323 2346 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.237.133.204:6443/api/v1/namespaces/default/events\": dial tcp 172.237.133.204:6443: connect: connection refused" event="&Event{ObjectMeta:{172-237-133-204.18808bc5e8bdf342 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-237-133-204,UID:172-237-133-204,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-237-133-204,},FirstTimestamp:2025-12-12 18:38:56.058815298 +0000 UTC m=+0.380729281,LastTimestamp:2025-12-12 18:38:56.058815298 +0000 UTC m=+0.380729281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-237-133-204,}" Dec 12 18:38:57.121039 kubelet[2346]: E1212 18:38:57.120804 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-133-204\" not found" node="172-237-133-204" Dec 12 18:38:57.121566 kubelet[2346]: E1212 18:38:57.121352 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:38:57.125550 kubelet[2346]: E1212 18:38:57.125422 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-133-204\" not found" node="172-237-133-204" Dec 12 18:38:57.125550 kubelet[2346]: E1212 18:38:57.125507 2346 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:38:57.126532 kubelet[2346]: E1212 18:38:57.126519 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-133-204\" not found" node="172-237-133-204" Dec 12 18:38:57.126702 kubelet[2346]: E1212 18:38:57.126690 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:38:57.683956 kubelet[2346]: I1212 18:38:57.683545 2346 kubelet_node_status.go:75] "Attempting to register node" node="172-237-133-204" Dec 12 18:38:58.131579 kubelet[2346]: E1212 18:38:58.131451 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-133-204\" not found" node="172-237-133-204" Dec 12 18:38:58.132388 kubelet[2346]: E1212 18:38:58.132185 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:38:58.132864 kubelet[2346]: E1212 18:38:58.132851 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-133-204\" not found" node="172-237-133-204" Dec 12 18:38:58.133074 kubelet[2346]: E1212 18:38:58.133038 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:38:58.235054 kubelet[2346]: E1212 18:38:58.234868 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-133-204\" not found" node="172-237-133-204" Dec 12 18:38:58.235478 kubelet[2346]: E1212 18:38:58.235394 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:38:58.661554 kubelet[2346]: E1212 18:38:58.661507 2346 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-237-133-204\" not found" node="172-237-133-204" Dec 12 18:38:58.851434 kubelet[2346]: I1212 18:38:58.851353 2346 kubelet_node_status.go:78] "Successfully registered node" node="172-237-133-204" Dec 12 18:38:58.851434 kubelet[2346]: E1212 18:38:58.851393 2346 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-237-133-204\": node \"172-237-133-204\" not found" Dec 12 18:38:58.883820 kubelet[2346]: I1212 18:38:58.883770 2346 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-133-204" Dec 12 18:38:58.892264 kubelet[2346]: E1212 18:38:58.892020 2346 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-133-204\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-237-133-204" Dec 12 18:38:58.892264 kubelet[2346]: I1212 18:38:58.892242 2346 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-133-204" Dec 12 18:38:58.893765 kubelet[2346]: E1212 18:38:58.893701 2346 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-172-237-133-204\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-237-133-204" Dec 12 18:38:58.893981 kubelet[2346]: I1212 18:38:58.893851 2346 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-133-204" Dec 12 18:38:58.897338 kubelet[2346]: E1212 18:38:58.897316 2346 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-237-133-204\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-237-133-204" Dec 12 18:38:59.075049 kubelet[2346]: I1212 18:38:59.073167 2346 apiserver.go:52] "Watching apiserver" Dec 12 18:38:59.084498 kubelet[2346]: I1212 18:38:59.084445 2346 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:38:59.128548 kubelet[2346]: I1212 18:38:59.128478 2346 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-133-204" Dec 12 18:38:59.131011 kubelet[2346]: E1212 18:38:59.130954 2346 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-133-204\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-237-133-204" Dec 12 18:38:59.131221 kubelet[2346]: E1212 18:38:59.131200 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:00.622352 systemd[1]: Reload requested from client PID 2621 ('systemctl') (unit session-7.scope)... Dec 12 18:39:00.622374 systemd[1]: Reloading... Dec 12 18:39:00.772014 zram_generator::config[2669]: No configuration found. Dec 12 18:39:00.990258 systemd[1]: Reloading finished in 367 ms. Dec 12 18:39:01.027702 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:39:01.053806 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 18:39:01.054364 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:39:01.054431 systemd[1]: kubelet.service: Consumed 859ms CPU time, 131.3M memory peak. Dec 12 18:39:01.057141 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:39:01.265221 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:39:01.277352 (kubelet)[2716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:39:01.341036 kubelet[2716]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:39:01.341036 kubelet[2716]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:39:01.341036 kubelet[2716]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 12 18:39:01.341561 kubelet[2716]: I1212 18:39:01.341122 2716 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:39:01.349549 kubelet[2716]: I1212 18:39:01.349491 2716 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 18:39:01.349549 kubelet[2716]: I1212 18:39:01.349522 2716 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:39:01.349859 kubelet[2716]: I1212 18:39:01.349839 2716 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 18:39:01.355990 kubelet[2716]: I1212 18:39:01.355060 2716 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 12 18:39:01.357703 kubelet[2716]: I1212 18:39:01.357679 2716 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:39:01.363066 kubelet[2716]: I1212 18:39:01.363030 2716 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:39:01.368802 kubelet[2716]: I1212 18:39:01.368718 2716 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 12 18:39:01.369698 kubelet[2716]: I1212 18:39:01.369385 2716 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:39:01.369698 kubelet[2716]: I1212 18:39:01.369412 2716 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-133-204","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:39:01.369698 kubelet[2716]: I1212 18:39:01.369601 2716 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 18:39:01.369698 kubelet[2716]: I1212 18:39:01.369612 2716 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 18:39:01.369866 kubelet[2716]: I1212 18:39:01.369664 2716 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:39:01.370307 kubelet[2716]: 
I1212 18:39:01.370297 2716 kubelet.go:446] "Attempting to sync node with API server" Dec 12 18:39:01.370996 kubelet[2716]: I1212 18:39:01.370959 2716 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:39:01.371056 kubelet[2716]: I1212 18:39:01.371015 2716 kubelet.go:352] "Adding apiserver pod source" Dec 12 18:39:01.371056 kubelet[2716]: I1212 18:39:01.371034 2716 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:39:01.373758 kubelet[2716]: I1212 18:39:01.372644 2716 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 18:39:01.376382 kubelet[2716]: I1212 18:39:01.374274 2716 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 18:39:01.376382 kubelet[2716]: I1212 18:39:01.374783 2716 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:39:01.376382 kubelet[2716]: I1212 18:39:01.374810 2716 server.go:1287] "Started kubelet" Dec 12 18:39:01.385404 kubelet[2716]: I1212 18:39:01.385179 2716 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:39:01.389707 kubelet[2716]: E1212 18:39:01.389684 2716 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:39:01.389959 kubelet[2716]: I1212 18:39:01.389906 2716 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:39:01.390111 kubelet[2716]: I1212 18:39:01.390087 2716 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:39:01.391333 kubelet[2716]: I1212 18:39:01.391317 2716 server.go:479] "Adding debug handlers to kubelet server" Dec 12 18:39:01.393541 kubelet[2716]: I1212 18:39:01.393524 2716 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:39:01.393842 kubelet[2716]: E1212 18:39:01.393823 2716 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-133-204\" not found" Dec 12 18:39:01.394597 kubelet[2716]: I1212 18:39:01.394584 2716 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:39:01.395789 kubelet[2716]: I1212 18:39:01.394714 2716 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:39:01.396953 kubelet[2716]: I1212 18:39:01.396113 2716 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:39:01.400955 kubelet[2716]: I1212 18:39:01.400648 2716 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:39:01.407663 kubelet[2716]: I1212 18:39:01.407638 2716 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 18:39:01.408144 kubelet[2716]: I1212 18:39:01.408104 2716 factory.go:221] Registration of the systemd container factory successfully Dec 12 18:39:01.408250 kubelet[2716]: I1212 18:39:01.408212 2716 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:39:01.409321 kubelet[2716]: I1212 18:39:01.409304 2716 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 12 18:39:01.409407 kubelet[2716]: I1212 18:39:01.409396 2716 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 18:39:01.409476 kubelet[2716]: I1212 18:39:01.409466 2716 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 18:39:01.409517 kubelet[2716]: I1212 18:39:01.409510 2716 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 18:39:01.410163 kubelet[2716]: E1212 18:39:01.410002 2716 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:39:01.417462 kubelet[2716]: I1212 18:39:01.417202 2716 factory.go:221] Registration of the containerd container factory successfully Dec 12 18:39:01.480677 kubelet[2716]: I1212 18:39:01.480618 2716 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:39:01.480897 kubelet[2716]: I1212 18:39:01.480878 2716 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:39:01.480993 kubelet[2716]: I1212 18:39:01.480983 2716 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:39:01.481447 kubelet[2716]: I1212 18:39:01.481431 2716 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 18:39:01.481517 kubelet[2716]: I1212 18:39:01.481495 2716 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 18:39:01.481583 kubelet[2716]: I1212 18:39:01.481574 2716 policy_none.go:49] "None policy: Start" Dec 12 18:39:01.481637 kubelet[2716]: I1212 18:39:01.481629 2716 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:39:01.481688 kubelet[2716]: I1212 18:39:01.481680 2716 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:39:01.481873 kubelet[2716]: I1212 18:39:01.481862 2716 state_mem.go:75] "Updated machine memory state" Dec 12 18:39:01.488636 kubelet[2716]: I1212 18:39:01.488591 2716 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 18:39:01.488829 kubelet[2716]: I1212 18:39:01.488797 2716 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:39:01.488867 kubelet[2716]: I1212 18:39:01.488818 2716 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:39:01.489474 kubelet[2716]: I1212 18:39:01.489390 2716 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:39:01.493107 kubelet[2716]: E1212 18:39:01.491807 2716 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 18:39:01.510508 kubelet[2716]: I1212 18:39:01.510480 2716 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-133-204" Dec 12 18:39:01.512405 kubelet[2716]: I1212 18:39:01.511004 2716 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-133-204" Dec 12 18:39:01.512887 kubelet[2716]: I1212 18:39:01.511414 2716 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-133-204" Dec 12 18:39:01.594192 kubelet[2716]: I1212 18:39:01.593804 2716 kubelet_node_status.go:75] "Attempting to register node" node="172-237-133-204" Dec 12 18:39:01.604994 kubelet[2716]: I1212 18:39:01.604486 2716 kubelet_node_status.go:124] "Node was previously registered" node="172-237-133-204" Dec 12 18:39:01.604994 kubelet[2716]: I1212 18:39:01.604684 2716 kubelet_node_status.go:78] "Successfully registered node" node="172-237-133-204" Dec 12 18:39:01.697195 kubelet[2716]: I1212 18:39:01.696771 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f4e36dd921667c4a45e85b8f9ba32429-ca-certs\") pod \"kube-controller-manager-172-237-133-204\" (UID: \"f4e36dd921667c4a45e85b8f9ba32429\") " pod="kube-system/kube-controller-manager-172-237-133-204" Dec 12 18:39:01.697195 kubelet[2716]: I1212 18:39:01.696827 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f4e36dd921667c4a45e85b8f9ba32429-flexvolume-dir\") pod \"kube-controller-manager-172-237-133-204\" (UID: \"f4e36dd921667c4a45e85b8f9ba32429\") " pod="kube-system/kube-controller-manager-172-237-133-204" Dec 12 18:39:01.697195 kubelet[2716]: I1212 18:39:01.696847 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f4e36dd921667c4a45e85b8f9ba32429-k8s-certs\") pod \"kube-controller-manager-172-237-133-204\" (UID: \"f4e36dd921667c4a45e85b8f9ba32429\") " pod="kube-system/kube-controller-manager-172-237-133-204" Dec 12 18:39:01.697195 kubelet[2716]: I1212 18:39:01.696865 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/891b655f67431229d0ba41bd0adbae49-kubeconfig\") pod \"kube-scheduler-172-237-133-204\" (UID: \"891b655f67431229d0ba41bd0adbae49\") " pod="kube-system/kube-scheduler-172-237-133-204" Dec 12 18:39:01.697195 kubelet[2716]: I1212 18:39:01.696882 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ebc7da7909dea1e0e89efea94f00601-ca-certs\") pod \"kube-apiserver-172-237-133-204\" (UID: \"0ebc7da7909dea1e0e89efea94f00601\") " pod="kube-system/kube-apiserver-172-237-133-204" Dec 12 18:39:01.697509 kubelet[2716]: I1212 18:39:01.696895 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ebc7da7909dea1e0e89efea94f00601-k8s-certs\") pod \"kube-apiserver-172-237-133-204\" (UID: \"0ebc7da7909dea1e0e89efea94f00601\") " pod="kube-system/kube-apiserver-172-237-133-204" Dec 12 18:39:01.697509 kubelet[2716]: I1212 18:39:01.696940 2716 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4e36dd921667c4a45e85b8f9ba32429-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-133-204\" (UID: \"f4e36dd921667c4a45e85b8f9ba32429\") " pod="kube-system/kube-controller-manager-172-237-133-204" Dec 12 18:39:01.697509 kubelet[2716]: I1212 18:39:01.696959 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ebc7da7909dea1e0e89efea94f00601-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-133-204\" (UID: \"0ebc7da7909dea1e0e89efea94f00601\") " pod="kube-system/kube-apiserver-172-237-133-204" Dec 12 18:39:01.697509 kubelet[2716]: I1212 18:39:01.696975 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f4e36dd921667c4a45e85b8f9ba32429-kubeconfig\") pod \"kube-controller-manager-172-237-133-204\" (UID: \"f4e36dd921667c4a45e85b8f9ba32429\") " pod="kube-system/kube-controller-manager-172-237-133-204" Dec 12 18:39:01.822797 kubelet[2716]: E1212 18:39:01.822709 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:01.823327 kubelet[2716]: E1212 18:39:01.823163 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:01.824048 kubelet[2716]: E1212 18:39:01.824017 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:02.372946 kubelet[2716]: I1212 18:39:02.372666 2716 apiserver.go:52] "Watching apiserver" Dec 12 18:39:02.401713 kubelet[2716]: I1212 18:39:02.401667 2716 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:39:02.454125 kubelet[2716]: I1212 18:39:02.454067 2716 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-133-204" Dec 12 18:39:02.454605 kubelet[2716]: E1212 18:39:02.454572 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:02.456010 kubelet[2716]: E1212 18:39:02.455205 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:02.462202 kubelet[2716]: E1212 18:39:02.462150 2716 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-133-204\" already exists" pod="kube-system/kube-scheduler-172-237-133-204" Dec 12 18:39:02.462269 kubelet[2716]: E1212 18:39:02.462259 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:02.490062 kubelet[2716]: I1212 18:39:02.490006 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-237-133-204" 
podStartSLOduration=1.489977388 podStartE2EDuration="1.489977388s" podCreationTimestamp="2025-12-12 18:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:39:02.489816778 +0000 UTC m=+1.205920361" watchObservedRunningTime="2025-12-12 18:39:02.489977388 +0000 UTC m=+1.206080971" Dec 12 18:39:02.510535 kubelet[2716]: I1212 18:39:02.510466 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-237-133-204" podStartSLOduration=1.5104443079999998 podStartE2EDuration="1.510444308s" podCreationTimestamp="2025-12-12 18:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:39:02.502824778 +0000 UTC m=+1.218928361" watchObservedRunningTime="2025-12-12 18:39:02.510444308 +0000 UTC m=+1.226547901" Dec 12 18:39:02.519872 kubelet[2716]: I1212 18:39:02.519788 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-237-133-204" podStartSLOduration=1.519779948 podStartE2EDuration="1.519779948s" podCreationTimestamp="2025-12-12 18:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:39:02.510630508 +0000 UTC m=+1.226734091" watchObservedRunningTime="2025-12-12 18:39:02.519779948 +0000 UTC m=+1.235883531" Dec 12 18:39:03.454253 kubelet[2716]: E1212 18:39:03.454139 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:03.454253 kubelet[2716]: E1212 18:39:03.454158 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:03.638437 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 12 18:39:05.549615 kubelet[2716]: E1212 18:39:05.549545 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:06.418991 kubelet[2716]: I1212 18:39:06.418946 2716 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 18:39:06.419801 containerd[1549]: time="2025-12-12T18:39:06.419764218Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 18:39:06.420285 kubelet[2716]: I1212 18:39:06.420050 2716 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 18:39:07.419140 systemd[1]: Created slice kubepods-besteffort-pod1dc8b501_6630_435e_9929_60915e971100.slice - libcontainer container kubepods-besteffort-pod1dc8b501_6630_435e_9929_60915e971100.slice. 
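Note: the recurring dns.go:153 error above comes from the node's resolv.conf listing more nameservers than the classic glibc resolver limit of three (MAXNS); kubelet keeps only the first three when building a pod's resolv.conf, which is why the applied line shows exactly 172.232.0.22 172.232.0.9 172.232.0.19. A minimal Go sketch of that truncation rule (an illustration, not kubelet's actual implementation):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the classic glibc MAXNS limit of 3 that the
// kubelet enforces when assembling a pod's resolv.conf.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded, applying first %d of %d: %s\n",
			maxNameservers, len(servers), strings.Join(servers[:maxNameservers], " "))
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameservers:", servers)
}
```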
Dec 12 18:39:07.437381 kubelet[2716]: I1212 18:39:07.437227 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1dc8b501-6630-435e-9929-60915e971100-xtables-lock\") pod \"kube-proxy-6j5kg\" (UID: \"1dc8b501-6630-435e-9929-60915e971100\") " pod="kube-system/kube-proxy-6j5kg"
Dec 12 18:39:07.437381 kubelet[2716]: I1212 18:39:07.437278 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1dc8b501-6630-435e-9929-60915e971100-kube-proxy\") pod \"kube-proxy-6j5kg\" (UID: \"1dc8b501-6630-435e-9929-60915e971100\") " pod="kube-system/kube-proxy-6j5kg"
Dec 12 18:39:07.437381 kubelet[2716]: I1212 18:39:07.437307 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1dc8b501-6630-435e-9929-60915e971100-lib-modules\") pod \"kube-proxy-6j5kg\" (UID: \"1dc8b501-6630-435e-9929-60915e971100\") " pod="kube-system/kube-proxy-6j5kg"
Dec 12 18:39:07.437381 kubelet[2716]: I1212 18:39:07.437325 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpg2d\" (UniqueName: \"kubernetes.io/projected/1dc8b501-6630-435e-9929-60915e971100-kube-api-access-zpg2d\") pod \"kube-proxy-6j5kg\" (UID: \"1dc8b501-6630-435e-9929-60915e971100\") " pod="kube-system/kube-proxy-6j5kg"
Dec 12 18:39:07.537580 kubelet[2716]: I1212 18:39:07.537542 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk42l\" (UniqueName: \"kubernetes.io/projected/f009091e-6f99-4a59-b493-33d976091845-kube-api-access-sk42l\") pod \"tigera-operator-7dcd859c48-xdd9m\" (UID: \"f009091e-6f99-4a59-b493-33d976091845\") " pod="tigera-operator/tigera-operator-7dcd859c48-xdd9m"
Dec 12 18:39:07.537730 kubelet[2716]: I1212 18:39:07.537611 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f009091e-6f99-4a59-b493-33d976091845-var-lib-calico\") pod \"tigera-operator-7dcd859c48-xdd9m\" (UID: \"f009091e-6f99-4a59-b493-33d976091845\") " pod="tigera-operator/tigera-operator-7dcd859c48-xdd9m"
Dec 12 18:39:07.541272 systemd[1]: Created slice kubepods-besteffort-podf009091e_6f99_4a59_b493_33d976091845.slice - libcontainer container kubepods-besteffort-podf009091e_6f99_4a59_b493_33d976091845.slice.
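Note: the "Created slice" entries show the kubelet's systemd cgroup driver at work: each pod gets a slice named after its QoS class and UID, with the UID's dashes escaped to underscores (systemd reserves "-" as a hierarchy separator in unit names). A small Go sketch of the naming convention as it is observable in this log:

```go
package main

import (
	"fmt"
	"strings"
)

// sliceNameFor reproduces the naming visible above: pod UID
// f009091e-6f99-4a59-b493-33d976091845 in the besteffort QoS class
// becomes kubepods-besteffort-podf009091e_6f99_4a59_b493_33d976091845.slice.
// This is a sketch of the observable convention, not kubelet's code.
func sliceNameFor(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	fmt.Println(sliceNameFor("besteffort", "f009091e-6f99-4a59-b493-33d976091845"))
}
```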
Dec 12 18:39:07.727687 kubelet[2716]: E1212 18:39:07.727397 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Dec 12 18:39:07.730012 containerd[1549]: time="2025-12-12T18:39:07.729978990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6j5kg,Uid:1dc8b501-6630-435e-9929-60915e971100,Namespace:kube-system,Attempt:0,}"
Dec 12 18:39:07.749265 containerd[1549]: time="2025-12-12T18:39:07.749216186Z" level=info msg="connecting to shim 5b0c0e12ff0fe07502961c933b8108598a3c529dfb77a2bf7ca143b0d2ee5fa9" address="unix:///run/containerd/s/29e1ea13fcefd5da66a368b3afb811e87e8b02fc649eb940315c69cb6903686a" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:39:07.787203 systemd[1]: Started cri-containerd-5b0c0e12ff0fe07502961c933b8108598a3c529dfb77a2bf7ca143b0d2ee5fa9.scope - libcontainer container 5b0c0e12ff0fe07502961c933b8108598a3c529dfb77a2bf7ca143b0d2ee5fa9.
Dec 12 18:39:07.834428 containerd[1549]: time="2025-12-12T18:39:07.834365606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6j5kg,Uid:1dc8b501-6630-435e-9929-60915e971100,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b0c0e12ff0fe07502961c933b8108598a3c529dfb77a2bf7ca143b0d2ee5fa9\""
Dec 12 18:39:07.835664 kubelet[2716]: E1212 18:39:07.835607 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Dec 12 18:39:07.838656 containerd[1549]: time="2025-12-12T18:39:07.838596885Z" level=info msg="CreateContainer within sandbox \"5b0c0e12ff0fe07502961c933b8108598a3c529dfb77a2bf7ca143b0d2ee5fa9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 12 18:39:07.854612 containerd[1549]: time="2025-12-12T18:39:07.854314647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-xdd9m,Uid:f009091e-6f99-4a59-b493-33d976091845,Namespace:tigera-operator,Attempt:0,}"
Dec 12 18:39:07.855649 containerd[1549]: time="2025-12-12T18:39:07.855628113Z" level=info msg="Container 3f8b538215472d4e0b7f78c8a8a2bbb8550d2ed63eee39f72f1e5db647a22523: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:39:07.862232 containerd[1549]: time="2025-12-12T18:39:07.862186583Z" level=info msg="CreateContainer within sandbox \"5b0c0e12ff0fe07502961c933b8108598a3c529dfb77a2bf7ca143b0d2ee5fa9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3f8b538215472d4e0b7f78c8a8a2bbb8550d2ed63eee39f72f1e5db647a22523\""
Dec 12 18:39:07.863206 containerd[1549]: time="2025-12-12T18:39:07.863083505Z" level=info msg="StartContainer for \"3f8b538215472d4e0b7f78c8a8a2bbb8550d2ed63eee39f72f1e5db647a22523\""
Dec 12 18:39:07.865207 containerd[1549]: time="2025-12-12T18:39:07.864724913Z" level=info msg="connecting to shim 3f8b538215472d4e0b7f78c8a8a2bbb8550d2ed63eee39f72f1e5db647a22523" address="unix:///run/containerd/s/29e1ea13fcefd5da66a368b3afb811e87e8b02fc649eb940315c69cb6903686a" protocol=ttrpc version=3
Dec 12 18:39:07.877291 containerd[1549]: time="2025-12-12T18:39:07.877184320Z" level=info msg="connecting to shim a4864703e6f0040c48c3aca048323a65940b815f0483a2fdc5edfc17db76105a" address="unix:///run/containerd/s/70d385170a6a6b5b8ba67b620d28ac08d79efdbc12e158f32ea3fc56c470db98" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:39:07.899192 systemd[1]: Started cri-containerd-3f8b538215472d4e0b7f78c8a8a2bbb8550d2ed63eee39f72f1e5db647a22523.scope - libcontainer container 3f8b538215472d4e0b7f78c8a8a2bbb8550d2ed63eee39f72f1e5db647a22523.
Dec 12 18:39:07.917358 systemd[1]: Started cri-containerd-a4864703e6f0040c48c3aca048323a65940b815f0483a2fdc5edfc17db76105a.scope - libcontainer container a4864703e6f0040c48c3aca048323a65940b815f0483a2fdc5edfc17db76105a.
Dec 12 18:39:08.000974 containerd[1549]: time="2025-12-12T18:39:07.999998173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-xdd9m,Uid:f009091e-6f99-4a59-b493-33d976091845,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a4864703e6f0040c48c3aca048323a65940b815f0483a2fdc5edfc17db76105a\""
Dec 12 18:39:08.005042 containerd[1549]: time="2025-12-12T18:39:08.004945918Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Dec 12 18:39:08.009033 containerd[1549]: time="2025-12-12T18:39:08.008789395Z" level=info msg="StartContainer for \"3f8b538215472d4e0b7f78c8a8a2bbb8550d2ed63eee39f72f1e5db647a22523\" returns successfully"
Dec 12 18:39:08.470720 kubelet[2716]: E1212 18:39:08.470678 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Dec 12 18:39:08.479615 kubelet[2716]: I1212 18:39:08.479281 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6j5kg" podStartSLOduration=1.479269275 podStartE2EDuration="1.479269275s" podCreationTimestamp="2025-12-12 18:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:39:08.478751328 +0000 UTC m=+7.194854911" watchObservedRunningTime="2025-12-12 18:39:08.479269275 +0000 UTC m=+7.195372858"
Dec 12 18:39:09.060127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3459702266.mount: Deactivated successfully.
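Note: the RunPodSandbox → CreateContainer → StartContainer sequence above is the CRI contract between kubelet and containerd, carried over gRPC on the containerd socket (each sandbox then gets its own shim, reached over the ttrpc addresses logged). A minimal client sketch of the first step, assuming the default socket path and eliding most of the sandbox config:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial containerd's CRI endpoint; kubelet does the same.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Metadata mirrors the kube-proxy sandbox logged above; a real call
	// also needs DNS, port, and linux security settings.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-proxy-6j5kg",
				Namespace: "kube-system",
				Uid:       "1dc8b501-6630-435e-9929-60915e971100",
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox id:", sb.PodSandboxId)
	// CreateContainer and StartContainer requests against this sandbox
	// id would follow, matching the log lines above.
}
```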
Dec 12 18:39:09.795864 kubelet[2716]: E1212 18:39:09.794082 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Dec 12 18:39:09.881478 containerd[1549]: time="2025-12-12T18:39:09.881423372Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:09.882462 containerd[1549]: time="2025-12-12T18:39:09.882224666Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Dec 12 18:39:09.883163 containerd[1549]: time="2025-12-12T18:39:09.883126954Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:09.884744 containerd[1549]: time="2025-12-12T18:39:09.884721304Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:09.885403 containerd[1549]: time="2025-12-12T18:39:09.885360783Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.880358073s"
Dec 12 18:39:09.885403 containerd[1549]: time="2025-12-12T18:39:09.885401005Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Dec 12 18:39:09.889430 containerd[1549]: time="2025-12-12T18:39:09.888468059Z" level=info msg="CreateContainer within sandbox \"a4864703e6f0040c48c3aca048323a65940b815f0483a2fdc5edfc17db76105a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 12 18:39:09.897335 containerd[1549]: time="2025-12-12T18:39:09.896790956Z" level=info msg="Container 7d71740294fb19d7254a9d46361e8949f8018165433cf7060bc75e54f0ebb804: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:39:09.900494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2309850474.mount: Deactivated successfully.
Dec 12 18:39:09.912517 containerd[1549]: time="2025-12-12T18:39:09.912473920Z" level=info msg="CreateContainer within sandbox \"a4864703e6f0040c48c3aca048323a65940b815f0483a2fdc5edfc17db76105a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7d71740294fb19d7254a9d46361e8949f8018165433cf7060bc75e54f0ebb804\""
Dec 12 18:39:09.913243 containerd[1549]: time="2025-12-12T18:39:09.913192512Z" level=info msg="StartContainer for \"7d71740294fb19d7254a9d46361e8949f8018165433cf7060bc75e54f0ebb804\""
Dec 12 18:39:09.914749 containerd[1549]: time="2025-12-12T18:39:09.914662218Z" level=info msg="connecting to shim 7d71740294fb19d7254a9d46361e8949f8018165433cf7060bc75e54f0ebb804" address="unix:///run/containerd/s/70d385170a6a6b5b8ba67b620d28ac08d79efdbc12e158f32ea3fc56c470db98" protocol=ttrpc version=3
Dec 12 18:39:09.944217 systemd[1]: Started cri-containerd-7d71740294fb19d7254a9d46361e8949f8018165433cf7060bc75e54f0ebb804.scope - libcontainer container 7d71740294fb19d7254a9d46361e8949f8018165433cf7060bc75e54f0ebb804.
Dec 12 18:39:09.992592 containerd[1549]: time="2025-12-12T18:39:09.991578172Z" level=info msg="StartContainer for \"7d71740294fb19d7254a9d46361e8949f8018165433cf7060bc75e54f0ebb804\" returns successfully"
Dec 12 18:39:10.478961 kubelet[2716]: E1212 18:39:10.478334 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Dec 12 18:39:10.488702 kubelet[2716]: I1212 18:39:10.488599 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-xdd9m" podStartSLOduration=1.605531823 podStartE2EDuration="3.488570601s" podCreationTimestamp="2025-12-12 18:39:07 +0000 UTC" firstStartedPulling="2025-12-12 18:39:08.003841442 +0000 UTC m=+6.719945025" lastFinishedPulling="2025-12-12 18:39:09.88688022 +0000 UTC m=+8.602983803" observedRunningTime="2025-12-12 18:39:10.488136369 +0000 UTC m=+9.204239962" watchObservedRunningTime="2025-12-12 18:39:10.488570601 +0000 UTC m=+9.204674184"
Dec 12 18:39:10.906431 kubelet[2716]: E1212 18:39:10.905205 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Dec 12 18:39:11.489118 kubelet[2716]: E1212 18:39:11.488287 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Dec 12 18:39:11.495349 kubelet[2716]: E1212 18:39:11.495078 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Dec 12 18:39:15.553775 kubelet[2716]: E1212 18:39:15.553607 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Dec 12 18:39:15.901457 sudo[1794]: pam_unix(sudo:session): session closed for user root
Dec 12 18:39:15.953615 sshd[1793]: Connection closed by 139.178.68.195 port 39432
Dec 12 18:39:15.955124 sshd-session[1790]: pam_unix(sshd:session): session closed for user core
Dec 12 18:39:15.965816 systemd[1]: sshd@6-172.237.133.204:22-139.178.68.195:39432.service: Deactivated successfully.
Dec 12 18:39:15.968314 systemd-logind[1526]: Session 7 logged out. Waiting for processes to exit.
Dec 12 18:39:15.972239 systemd[1]: session-7.scope: Deactivated successfully.
Dec 12 18:39:15.972813 systemd[1]: session-7.scope: Consumed 4.008s CPU time, 225.8M memory peak.
Dec 12 18:39:15.980365 systemd-logind[1526]: Removed session 7.
Dec 12 18:39:16.504751 kubelet[2716]: E1212 18:39:16.504708 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Dec 12 18:39:18.386132 update_engine[1528]: I20251212 18:39:18.384954 1528 update_attempter.cc:509] Updating boot flags...
Dec 12 18:39:21.140393 systemd[1]: Created slice kubepods-besteffort-pod65df9178_d542_4a8a_99e3_a47b277fc16b.slice - libcontainer container kubepods-besteffort-pod65df9178_d542_4a8a_99e3_a47b277fc16b.slice.
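Note: the tigera-operator latency entry above decomposes cleanly: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (18:39:10.488570601 − 18:39:07 = 3.488570601s), and podStartSLOduration subtracts the image-pull window (lastFinishedPulling − firstStartedPulling = 1.883038778s), giving 1.605531823s exactly as logged. A small Go check of that arithmetic, using the timestamps from the log:

```go
package main

import (
	"fmt"
	"time"
)

// Verifies the pod_startup_latency_tracker numbers for tigera-operator
// as they appear above; this reproduces the observable arithmetic, not
// the tracker's source code.
func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-12-12 18:39:07 +0000 UTC")
	firstPull := parse("2025-12-12 18:39:08.003841442 +0000 UTC")
	lastPull := parse("2025-12-12 18:39:09.886880220 +0000 UTC")
	running := parse("2025-12-12 18:39:10.488570601 +0000 UTC")

	e2e := running.Sub(created)          // 3.488570601s
	slo := e2e - lastPull.Sub(firstPull) // e2e minus image pull time: 1.605531823s
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}
```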
Dec 12 18:39:21.227784 kubelet[2716]: I1212 18:39:21.227747 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm6br\" (UniqueName: \"kubernetes.io/projected/65df9178-d542-4a8a-99e3-a47b277fc16b-kube-api-access-bm6br\") pod \"calico-typha-675cfdb95b-wx2df\" (UID: \"65df9178-d542-4a8a-99e3-a47b277fc16b\") " pod="calico-system/calico-typha-675cfdb95b-wx2df" Dec 12 18:39:21.228424 kubelet[2716]: I1212 18:39:21.227988 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65df9178-d542-4a8a-99e3-a47b277fc16b-tigera-ca-bundle\") pod \"calico-typha-675cfdb95b-wx2df\" (UID: \"65df9178-d542-4a8a-99e3-a47b277fc16b\") " pod="calico-system/calico-typha-675cfdb95b-wx2df" Dec 12 18:39:21.228424 kubelet[2716]: I1212 18:39:21.228047 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/65df9178-d542-4a8a-99e3-a47b277fc16b-typha-certs\") pod \"calico-typha-675cfdb95b-wx2df\" (UID: \"65df9178-d542-4a8a-99e3-a47b277fc16b\") " pod="calico-system/calico-typha-675cfdb95b-wx2df" Dec 12 18:39:21.253267 systemd[1]: Created slice kubepods-besteffort-pod59548a68_21c2_4f31_845b_37aaa63b4e18.slice - libcontainer container kubepods-besteffort-pod59548a68_21c2_4f31_845b_37aaa63b4e18.slice. Dec 12 18:39:21.429624 kubelet[2716]: I1212 18:39:21.429491 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/59548a68-21c2-4f31-845b-37aaa63b4e18-var-lib-calico\") pod \"calico-node-cqrh4\" (UID: \"59548a68-21c2-4f31-845b-37aaa63b4e18\") " pod="calico-system/calico-node-cqrh4" Dec 12 18:39:21.429624 kubelet[2716]: I1212 18:39:21.429542 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59548a68-21c2-4f31-845b-37aaa63b4e18-tigera-ca-bundle\") pod \"calico-node-cqrh4\" (UID: \"59548a68-21c2-4f31-845b-37aaa63b4e18\") " pod="calico-system/calico-node-cqrh4" Dec 12 18:39:21.429624 kubelet[2716]: I1212 18:39:21.429560 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/59548a68-21c2-4f31-845b-37aaa63b4e18-cni-net-dir\") pod \"calico-node-cqrh4\" (UID: \"59548a68-21c2-4f31-845b-37aaa63b4e18\") " pod="calico-system/calico-node-cqrh4" Dec 12 18:39:21.429624 kubelet[2716]: I1212 18:39:21.429576 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59548a68-21c2-4f31-845b-37aaa63b4e18-xtables-lock\") pod \"calico-node-cqrh4\" (UID: \"59548a68-21c2-4f31-845b-37aaa63b4e18\") " pod="calico-system/calico-node-cqrh4" Dec 12 18:39:21.429624 kubelet[2716]: I1212 18:39:21.429594 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jl7t\" (UniqueName: \"kubernetes.io/projected/59548a68-21c2-4f31-845b-37aaa63b4e18-kube-api-access-8jl7t\") pod \"calico-node-cqrh4\" (UID: \"59548a68-21c2-4f31-845b-37aaa63b4e18\") " pod="calico-system/calico-node-cqrh4" Dec 12 18:39:21.430321 kubelet[2716]: I1212 18:39:21.429612 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/59548a68-21c2-4f31-845b-37aaa63b4e18-cni-bin-dir\") pod \"calico-node-cqrh4\" (UID: \"59548a68-21c2-4f31-845b-37aaa63b4e18\") " pod="calico-system/calico-node-cqrh4" Dec 12 18:39:21.430321 kubelet[2716]: I1212 18:39:21.429627 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/59548a68-21c2-4f31-845b-37aaa63b4e18-cni-log-dir\") pod \"calico-node-cqrh4\" (UID: \"59548a68-21c2-4f31-845b-37aaa63b4e18\") " pod="calico-system/calico-node-cqrh4" Dec 12 18:39:21.430321 kubelet[2716]: I1212 18:39:21.429833 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/59548a68-21c2-4f31-845b-37aaa63b4e18-node-certs\") pod \"calico-node-cqrh4\" (UID: \"59548a68-21c2-4f31-845b-37aaa63b4e18\") " pod="calico-system/calico-node-cqrh4" Dec 12 18:39:21.430321 kubelet[2716]: I1212 18:39:21.429849 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/59548a68-21c2-4f31-845b-37aaa63b4e18-var-run-calico\") pod \"calico-node-cqrh4\" (UID: \"59548a68-21c2-4f31-845b-37aaa63b4e18\") " pod="calico-system/calico-node-cqrh4" Dec 12 18:39:21.430321 kubelet[2716]: I1212 18:39:21.429889 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/59548a68-21c2-4f31-845b-37aaa63b4e18-policysync\") pod \"calico-node-cqrh4\" (UID: \"59548a68-21c2-4f31-845b-37aaa63b4e18\") " pod="calico-system/calico-node-cqrh4" Dec 12 18:39:21.430505 kubelet[2716]: I1212 18:39:21.429952 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/59548a68-21c2-4f31-845b-37aaa63b4e18-flexvol-driver-host\") pod \"calico-node-cqrh4\" (UID: \"59548a68-21c2-4f31-845b-37aaa63b4e18\") " pod="calico-system/calico-node-cqrh4" Dec 12 18:39:21.430505 kubelet[2716]: I1212 18:39:21.429979 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59548a68-21c2-4f31-845b-37aaa63b4e18-lib-modules\") pod \"calico-node-cqrh4\" (UID: \"59548a68-21c2-4f31-845b-37aaa63b4e18\") " pod="calico-system/calico-node-cqrh4" Dec 12 18:39:21.450657 kubelet[2716]: E1212 18:39:21.449986 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:21.452033 containerd[1549]: time="2025-12-12T18:39:21.451130936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-675cfdb95b-wx2df,Uid:65df9178-d542-4a8a-99e3-a47b277fc16b,Namespace:calico-system,Attempt:0,}" Dec 12 18:39:21.454435 kubelet[2716]: E1212 18:39:21.454383 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht9b9" podUID="3e77788b-998e-4510-9ab0-47ab12a2af9d" Dec 12 18:39:21.482830 containerd[1549]: time="2025-12-12T18:39:21.482787797Z" level=info msg="connecting to shim 
19d48262cfc9c776dd0c1a74319a5f33fa96f5fcd0f3c4a0052c22aa8226cddd" address="unix:///run/containerd/s/c118f337f7681dc8a6031b246be24b79673639f34322040dcacab02a275a96be" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:39:21.524132 systemd[1]: Started cri-containerd-19d48262cfc9c776dd0c1a74319a5f33fa96f5fcd0f3c4a0052c22aa8226cddd.scope - libcontainer container 19d48262cfc9c776dd0c1a74319a5f33fa96f5fcd0f3c4a0052c22aa8226cddd. Dec 12 18:39:21.544701 kubelet[2716]: E1212 18:39:21.544670 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.545020 kubelet[2716]: W1212 18:39:21.544739 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.546297 kubelet[2716]: E1212 18:39:21.546087 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:21.549585 kubelet[2716]: E1212 18:39:21.549562 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.549585 kubelet[2716]: W1212 18:39:21.549581 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.549856 kubelet[2716]: E1212 18:39:21.549598 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:21.556939 kubelet[2716]: E1212 18:39:21.556889 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:21.560253 containerd[1549]: time="2025-12-12T18:39:21.559251735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cqrh4,Uid:59548a68-21c2-4f31-845b-37aaa63b4e18,Namespace:calico-system,Attempt:0,}" Dec 12 18:39:21.608800 containerd[1549]: time="2025-12-12T18:39:21.608362343Z" level=info msg="connecting to shim e4b7c866eec02e041a9bdf9a754c731ff273fefad3ef3535a9425e93a233c080" address="unix:///run/containerd/s/3cf40bd46f09a22458f4a28625f8554aec1cbc1aba3c06381804c04527e6148d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:39:21.620089 containerd[1549]: time="2025-12-12T18:39:21.620049130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-675cfdb95b-wx2df,Uid:65df9178-d542-4a8a-99e3-a47b277fc16b,Namespace:calico-system,Attempt:0,} returns sandbox id \"19d48262cfc9c776dd0c1a74319a5f33fa96f5fcd0f3c4a0052c22aa8226cddd\"" Dec 12 18:39:21.622725 kubelet[2716]: E1212 18:39:21.622003 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:21.623951 containerd[1549]: time="2025-12-12T18:39:21.623899974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 18:39:21.633510 kubelet[2716]: E1212 18:39:21.633454 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.634267 kubelet[2716]: W1212 
18:39:21.633959 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.634267 kubelet[2716]: E1212 18:39:21.633985 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:21.634267 kubelet[2716]: I1212 18:39:21.634118 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e77788b-998e-4510-9ab0-47ab12a2af9d-kubelet-dir\") pod \"csi-node-driver-ht9b9\" (UID: \"3e77788b-998e-4510-9ab0-47ab12a2af9d\") " pod="calico-system/csi-node-driver-ht9b9" Dec 12 18:39:21.636451 kubelet[2716]: E1212 18:39:21.636113 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.636451 kubelet[2716]: W1212 18:39:21.636155 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.636451 kubelet[2716]: E1212 18:39:21.636170 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:21.636451 kubelet[2716]: I1212 18:39:21.636187 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3e77788b-998e-4510-9ab0-47ab12a2af9d-socket-dir\") pod \"csi-node-driver-ht9b9\" (UID: \"3e77788b-998e-4510-9ab0-47ab12a2af9d\") " pod="calico-system/csi-node-driver-ht9b9" Dec 12 18:39:21.636659 kubelet[2716]: E1212 18:39:21.636626 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.636659 kubelet[2716]: W1212 18:39:21.636638 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.636836 kubelet[2716]: E1212 18:39:21.636824 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:21.639505 kubelet[2716]: I1212 18:39:21.639286 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj2nb\" (UniqueName: \"kubernetes.io/projected/3e77788b-998e-4510-9ab0-47ab12a2af9d-kube-api-access-gj2nb\") pod \"csi-node-driver-ht9b9\" (UID: \"3e77788b-998e-4510-9ab0-47ab12a2af9d\") " pod="calico-system/csi-node-driver-ht9b9" Dec 12 18:39:21.639505 kubelet[2716]: E1212 18:39:21.639369 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.639505 kubelet[2716]: W1212 18:39:21.639377 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.639505 kubelet[2716]: E1212 18:39:21.639388 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:21.640673 kubelet[2716]: E1212 18:39:21.640616 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.640673 kubelet[2716]: W1212 18:39:21.640653 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.641749 kubelet[2716]: E1212 18:39:21.641714 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:21.642634 kubelet[2716]: E1212 18:39:21.642469 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.642634 kubelet[2716]: W1212 18:39:21.642490 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.643403 kubelet[2716]: E1212 18:39:21.643385 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:21.644509 kubelet[2716]: E1212 18:39:21.643886 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.644509 kubelet[2716]: W1212 18:39:21.644360 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.646469 kubelet[2716]: E1212 18:39:21.646427 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.646469 kubelet[2716]: W1212 18:39:21.646450 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.648093 kubelet[2716]: E1212 18:39:21.648013 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.648093 kubelet[2716]: W1212 18:39:21.648033 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.648093 kubelet[2716]: E1212 18:39:21.648052 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:21.648093 kubelet[2716]: E1212 18:39:21.648087 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:21.648198 kubelet[2716]: I1212 18:39:21.648118 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3e77788b-998e-4510-9ab0-47ab12a2af9d-varrun\") pod \"csi-node-driver-ht9b9\" (UID: \"3e77788b-998e-4510-9ab0-47ab12a2af9d\") " pod="calico-system/csi-node-driver-ht9b9" Dec 12 18:39:21.648646 kubelet[2716]: E1212 18:39:21.648564 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.648646 kubelet[2716]: W1212 18:39:21.648585 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.648646 kubelet[2716]: E1212 18:39:21.648596 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:21.648646 kubelet[2716]: I1212 18:39:21.648612 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3e77788b-998e-4510-9ab0-47ab12a2af9d-registration-dir\") pod \"csi-node-driver-ht9b9\" (UID: \"3e77788b-998e-4510-9ab0-47ab12a2af9d\") " pod="calico-system/csi-node-driver-ht9b9" Dec 12 18:39:21.649288 kubelet[2716]: E1212 18:39:21.649191 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.649288 kubelet[2716]: W1212 18:39:21.649219 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.649288 kubelet[2716]: E1212 18:39:21.649232 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:21.649700 kubelet[2716]: E1212 18:39:21.649673 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:21.650291 kubelet[2716]: E1212 18:39:21.650226 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.650291 kubelet[2716]: W1212 18:39:21.650241 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.650291 kubelet[2716]: E1212 18:39:21.650250 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:21.655163 kubelet[2716]: E1212 18:39:21.655031 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.655163 kubelet[2716]: W1212 18:39:21.655048 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.655163 kubelet[2716]: E1212 18:39:21.655061 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:21.655375 kubelet[2716]: E1212 18:39:21.655363 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.655425 kubelet[2716]: W1212 18:39:21.655414 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.655547 kubelet[2716]: E1212 18:39:21.655458 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:21.655697 kubelet[2716]: E1212 18:39:21.655685 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.655758 kubelet[2716]: W1212 18:39:21.655746 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.655819 kubelet[2716]: E1212 18:39:21.655796 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:21.675092 systemd[1]: Started cri-containerd-e4b7c866eec02e041a9bdf9a754c731ff273fefad3ef3535a9425e93a233c080.scope - libcontainer container e4b7c866eec02e041a9bdf9a754c731ff273fefad3ef3535a9425e93a233c080. Dec 12 18:39:21.705829 containerd[1549]: time="2025-12-12T18:39:21.705744539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cqrh4,Uid:59548a68-21c2-4f31-845b-37aaa63b4e18,Namespace:calico-system,Attempt:0,} returns sandbox id \"e4b7c866eec02e041a9bdf9a754c731ff273fefad3ef3535a9425e93a233c080\"" Dec 12 18:39:21.707975 kubelet[2716]: E1212 18:39:21.707956 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:21.749974 kubelet[2716]: E1212 18:39:21.749910 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.750143 kubelet[2716]: W1212 18:39:21.750102 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.750143 kubelet[2716]: E1212 18:39:21.750132 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:21.750557 kubelet[2716]: E1212 18:39:21.750529 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.750557 kubelet[2716]: W1212 18:39:21.750540 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.750557 kubelet[2716]: E1212 18:39:21.750553 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:21.750850 kubelet[2716]: E1212 18:39:21.750821 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.750850 kubelet[2716]: W1212 18:39:21.750835 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.750945 kubelet[2716]: E1212 18:39:21.750864 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:21.751203 kubelet[2716]: E1212 18:39:21.751185 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.751203 kubelet[2716]: W1212 18:39:21.751201 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.751292 kubelet[2716]: E1212 18:39:21.751217 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:21.751503 kubelet[2716]: E1212 18:39:21.751470 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.751503 kubelet[2716]: W1212 18:39:21.751482 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.751503 kubelet[2716]: E1212 18:39:21.751500 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:21.751765 kubelet[2716]: E1212 18:39:21.751749 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.751765 kubelet[2716]: W1212 18:39:21.751762 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.751817 kubelet[2716]: E1212 18:39:21.751779 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:21.752032 kubelet[2716]: E1212 18:39:21.752015 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.752032 kubelet[2716]: W1212 18:39:21.752029 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.752212 kubelet[2716]: E1212 18:39:21.752127 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:21.752350 kubelet[2716]: E1212 18:39:21.752262 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.752350 kubelet[2716]: W1212 18:39:21.752276 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.752404 kubelet[2716]: E1212 18:39:21.752381 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:39:21.752575 kubelet[2716]: E1212 18:39:21.752558 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.752575 kubelet[2716]: W1212 18:39:21.752570 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.752664 kubelet[2716]: E1212 18:39:21.752652 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:21.752854 kubelet[2716]: E1212 18:39:21.752837 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.752854 kubelet[2716]: W1212 18:39:21.752849 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.752965 kubelet[2716]: E1212 18:39:21.752943 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:21.753143 kubelet[2716]: E1212 18:39:21.753126 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.753143 kubelet[2716]: W1212 18:39:21.753138 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.753346 kubelet[2716]: E1212 18:39:21.753265 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:21.753424 kubelet[2716]: E1212 18:39:21.753406 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.753424 kubelet[2716]: W1212 18:39:21.753418 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.753492 kubelet[2716]: E1212 18:39:21.753474 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:39:21.753811 kubelet[2716]: E1212 18:39:21.753794 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:39:21.753811 kubelet[2716]: W1212 18:39:21.753807 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:39:21.753963 kubelet[2716]: E1212 18:39:21.753908 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Dec 12 18:39:21.754126 kubelet[2716]: E1212 18:39:21.754110 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:21.754126 kubelet[2716]: W1212 18:39:21.754122 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:21.754318 kubelet[2716]: E1212 18:39:21.754243 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:21.765897 kubelet[2716]: E1212 18:39:21.765856 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:21.765897 kubelet[2716]: W1212 18:39:21.765873 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:21.765897 kubelet[2716]: E1212 18:39:21.765885 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:22.522907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1623572952.mount: Deactivated successfully.
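The repeating three-record pattern above is kubelet's FlexVolume prober re-scanning its plugin directory: the nodeagent~uds driver binary has not been installed yet, so the exec fails with no output, and decoding that empty output is what produces the "unexpected end of JSON input" errors. A minimal Go sketch of the decode step (ours, for illustration; the DriverStatus shape is a stand-in, not kubelet's actual type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus is a stand-in for the JSON a FlexVolume driver prints
// in response to "init"; the real schema lives in kubelet.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message"`
}

func main() {
	// The driver binary was never installed, so the call produced no output.
	output := []byte("")

	var st DriverStatus
	if err := json.Unmarshal(output, &st); err != nil {
		fmt.Println("unmarshal failed:", err) // unmarshal failed: unexpected end of JSON input
	}
}
```

encoding/json returns exactly this message for zero-length input, which is why every probe emits the same driver-call.go pair before plugins.go gives up on the directory.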
Dec 12 18:39:23.198564 containerd[1549]: time="2025-12-12T18:39:23.198509042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:23.199551 containerd[1549]: time="2025-12-12T18:39:23.199343673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Dec 12 18:39:23.200112 containerd[1549]: time="2025-12-12T18:39:23.200079292Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:23.201846 containerd[1549]: time="2025-12-12T18:39:23.201818034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:23.202274 containerd[1549]: time="2025-12-12T18:39:23.202237469Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.578205713s"
Dec 12 18:39:23.202274 containerd[1549]: time="2025-12-12T18:39:23.202271899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Dec 12 18:39:23.203759 containerd[1549]: time="2025-12-12T18:39:23.203723377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Dec 12 18:39:23.217910 containerd[1549]: time="2025-12-12T18:39:23.217876774Z" level=info msg="CreateContainer within sandbox \"19d48262cfc9c776dd0c1a74319a5f33fa96f5fcd0f3c4a0052c22aa8226cddd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 12 18:39:23.223932 containerd[1549]: time="2025-12-12T18:39:23.223656567Z" level=info msg="Container ddb9186c289c13e97148014acb69e9ddce9d1e9bb0e0c662078962174be73d75: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:39:23.228365 containerd[1549]: time="2025-12-12T18:39:23.228337365Z" level=info msg="CreateContainer within sandbox \"19d48262cfc9c776dd0c1a74319a5f33fa96f5fcd0f3c4a0052c22aa8226cddd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ddb9186c289c13e97148014acb69e9ddce9d1e9bb0e0c662078962174be73d75\""
Dec 12 18:39:23.228833 containerd[1549]: time="2025-12-12T18:39:23.228802811Z" level=info msg="StartContainer for \"ddb9186c289c13e97148014acb69e9ddce9d1e9bb0e0c662078962174be73d75\""
Dec 12 18:39:23.233056 containerd[1549]: time="2025-12-12T18:39:23.233016204Z" level=info msg="connecting to shim ddb9186c289c13e97148014acb69e9ddce9d1e9bb0e0c662078962174be73d75" address="unix:///run/containerd/s/c118f337f7681dc8a6031b246be24b79673639f34322040dcacab02a275a96be" protocol=ttrpc version=3
Dec 12 18:39:23.261067 systemd[1]: Started cri-containerd-ddb9186c289c13e97148014acb69e9ddce9d1e9bb0e0c662078962174be73d75.scope - libcontainer container ddb9186c289c13e97148014acb69e9ddce9d1e9bb0e0c662078962174be73d75.
Dec 12 18:39:23.324021 containerd[1549]: time="2025-12-12T18:39:23.323968701Z" level=info msg="StartContainer for \"ddb9186c289c13e97148014acb69e9ddce9d1e9bb0e0c662078962174be73d75\" returns successfully"
Dec 12 18:39:23.412434 kubelet[2716]: E1212 18:39:23.412355 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht9b9" podUID="3e77788b-998e-4510-9ab0-47ab12a2af9d"
Dec 12 18:39:23.522856 kubelet[2716]: E1212 18:39:23.521634 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Dec 12 18:39:23.554485 kubelet[2716]: E1212 18:39:23.554358 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:23.554485 kubelet[2716]: W1212 18:39:23.554378 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:23.554485 kubelet[2716]: E1212 18:39:23.554399 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:23.579075 kubelet[2716]: E1212 18:39:23.579064 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:39:23.579157 kubelet[2716]: W1212 18:39:23.579125 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:39:23.579157 kubelet[2716]: E1212 18:39:23.579136 2716 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:39:23.819069 containerd[1549]: time="2025-12-12T18:39:23.818474185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:23.821108 containerd[1549]: time="2025-12-12T18:39:23.821043847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Dec 12 18:39:23.822880 containerd[1549]: time="2025-12-12T18:39:23.822806719Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:23.828520 containerd[1549]: time="2025-12-12T18:39:23.828431830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:23.830879 containerd[1549]: time="2025-12-12T18:39:23.830816869Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 627.037901ms"
Dec 12 18:39:23.830879 containerd[1549]: time="2025-12-12T18:39:23.830850110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Dec 12 18:39:23.835371 containerd[1549]: time="2025-12-12T18:39:23.835314256Z" level=info msg="CreateContainer within sandbox \"e4b7c866eec02e041a9bdf9a754c731ff273fefad3ef3535a9425e93a233c080\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 12 18:39:23.862130 containerd[1549]: time="2025-12-12T18:39:23.861802706Z" level=info msg="Container 43e517b2b1bcf6835b6d4b274820dabb33c2b2145e78ef1b780d7c8563b6f063: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:39:23.875513 containerd[1549]: time="2025-12-12T18:39:23.875481288Z" level=info msg="CreateContainer within sandbox \"e4b7c866eec02e041a9bdf9a754c731ff273fefad3ef3535a9425e93a233c080\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"43e517b2b1bcf6835b6d4b274820dabb33c2b2145e78ef1b780d7c8563b6f063\""
Dec 12 18:39:23.877184 containerd[1549]: time="2025-12-12T18:39:23.877143528Z" level=info msg="StartContainer for \"43e517b2b1bcf6835b6d4b274820dabb33c2b2145e78ef1b780d7c8563b6f063\""
Dec 12 18:39:23.881575 containerd[1549]: time="2025-12-12T18:39:23.881544964Z" level=info msg="connecting to shim 43e517b2b1bcf6835b6d4b274820dabb33c2b2145e78ef1b780d7c8563b6f063" address="unix:///run/containerd/s/3cf40bd46f09a22458f4a28625f8554aec1cbc1aba3c06381804c04527e6148d" protocol=ttrpc version=3
Dec 12 18:39:23.921346 systemd[1]: Started cri-containerd-43e517b2b1bcf6835b6d4b274820dabb33c2b2145e78ef1b780d7c8563b6f063.scope - libcontainer container 43e517b2b1bcf6835b6d4b274820dabb33c2b2145e78ef1b780d7c8563b6f063.
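The ImageCreate/stop pulling/Pulled records above are the containerd-side trace of a single CRI image pull. A rough equivalent of that pull against the same daemon, sketched with the containerd 1.x Go client (the socket path is the daemon default and "k8s.io" is the namespace CRI uses; both are assumptions here, not taken from these records):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd socket; CRI-managed images live in the "k8s.io" namespace.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Fetch and unpack, roughly the work logged between "PullImage" and "Pulled image".
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", img.Name())
}
```

WithPullUnpack also unpacks the fetched layers into a snapshot, which is why a single pull produces the whole sequence of ImageCreate events before the "Pulled image ... in 627.037901ms" summary.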
Dec 12 18:39:24.034960 containerd[1549]: time="2025-12-12T18:39:24.034754033Z" level=info msg="StartContainer for \"43e517b2b1bcf6835b6d4b274820dabb33c2b2145e78ef1b780d7c8563b6f063\" returns successfully"
Dec 12 18:39:24.067421 systemd[1]: cri-containerd-43e517b2b1bcf6835b6d4b274820dabb33c2b2145e78ef1b780d7c8563b6f063.scope: Deactivated successfully.
Dec 12 18:39:24.073017 containerd[1549]: time="2025-12-12T18:39:24.072898840Z" level=info msg="received container exit event container_id:\"43e517b2b1bcf6835b6d4b274820dabb33c2b2145e78ef1b780d7c8563b6f063\" id:\"43e517b2b1bcf6835b6d4b274820dabb33c2b2145e78ef1b780d7c8563b6f063\" pid:3378 exited_at:{seconds:1765564764 nanos:72337624}"
Dec 12 18:39:24.107195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43e517b2b1bcf6835b6d4b274820dabb33c2b2145e78ef1b780d7c8563b6f063-rootfs.mount: Deactivated successfully.
Dec 12 18:39:24.524616 kubelet[2716]: I1212 18:39:24.524585 2716 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 12 18:39:24.525178 kubelet[2716]: E1212 18:39:24.524939 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Dec 12 18:39:24.525219 kubelet[2716]: E1212 18:39:24.525183 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Dec 12 18:39:24.527022 containerd[1549]: time="2025-12-12T18:39:24.526970964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Dec 12 18:39:24.544096 kubelet[2716]: I1212 18:39:24.544012 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-675cfdb95b-wx2df" podStartSLOduration=1.9641477090000001 podStartE2EDuration="3.543996533s" podCreationTimestamp="2025-12-12 18:39:21 +0000 UTC" firstStartedPulling="2025-12-12 18:39:21.623226015 +0000 UTC m=+20.339329598" lastFinishedPulling="2025-12-12 18:39:23.203074839 +0000 UTC m=+21.919178422" observedRunningTime="2025-12-12 18:39:23.535977812 +0000 UTC m=+22.252081395" watchObservedRunningTime="2025-12-12 18:39:24.543996533 +0000 UTC m=+23.260100126"
Dec 12 18:39:25.411417 kubelet[2716]: E1212 18:39:25.411097 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ht9b9" podUID="3e77788b-998e-4510-9ab0-47ab12a2af9d"
Dec 12 18:39:26.208390 containerd[1549]: time="2025-12-12T18:39:26.208342230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:26.209288 containerd[1549]: time="2025-12-12T18:39:26.209069008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Dec 12 18:39:26.209773 containerd[1549]: time="2025-12-12T18:39:26.209744935Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:26.211224 containerd[1549]: time="2025-12-12T18:39:26.211198370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:39:26.211992 containerd[1549]: time="2025-12-12T18:39:26.211968467Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 1.684962393s"
Dec 12 18:39:26.212069 containerd[1549]: time="2025-12-12T18:39:26.212056078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Dec 12 18:39:26.214995 containerd[1549]: time="2025-12-12T18:39:26.214443133Z" level=info msg="CreateContainer within sandbox \"e4b7c866eec02e041a9bdf9a754c731ff273fefad3ef3535a9425e93a233c080\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 12 18:39:26.224118 containerd[1549]: time="2025-12-12T18:39:26.224000961Z" level=info msg="Container 804558be8eff686c449d26714f376525b5ba7c09b1170f29d2d566c47fb86fbd: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:39:26.240460 containerd[1549]: time="2025-12-12T18:39:26.240414161Z" level=info msg="CreateContainer within sandbox \"e4b7c866eec02e041a9bdf9a754c731ff273fefad3ef3535a9425e93a233c080\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"804558be8eff686c449d26714f376525b5ba7c09b1170f29d2d566c47fb86fbd\""
Dec 12 18:39:26.240996 containerd[1549]: time="2025-12-12T18:39:26.240973306Z" level=info msg="StartContainer for \"804558be8eff686c449d26714f376525b5ba7c09b1170f29d2d566c47fb86fbd\""
Dec 12 18:39:26.242628 containerd[1549]: time="2025-12-12T18:39:26.242593683Z" level=info msg="connecting to shim 804558be8eff686c449d26714f376525b5ba7c09b1170f29d2d566c47fb86fbd" address="unix:///run/containerd/s/3cf40bd46f09a22458f4a28625f8554aec1cbc1aba3c06381804c04527e6148d" protocol=ttrpc version=3
Dec 12 18:39:26.269297 systemd[1]: Started cri-containerd-804558be8eff686c449d26714f376525b5ba7c09b1170f29d2d566c47fb86fbd.scope - libcontainer container 804558be8eff686c449d26714f376525b5ba7c09b1170f29d2d566c47fb86fbd.
Dec 12 18:39:26.343349 containerd[1549]: time="2025-12-12T18:39:26.343313051Z" level=info msg="StartContainer for \"804558be8eff686c449d26714f376525b5ba7c09b1170f29d2d566c47fb86fbd\" returns successfully"
Dec 12 18:39:26.535962 kubelet[2716]: E1212 18:39:26.535705 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Dec 12 18:39:26.915208 systemd[1]: cri-containerd-804558be8eff686c449d26714f376525b5ba7c09b1170f29d2d566c47fb86fbd.scope: Deactivated successfully.
Dec 12 18:39:26.916060 systemd[1]: cri-containerd-804558be8eff686c449d26714f376525b5ba7c09b1170f29d2d566c47fb86fbd.scope: Consumed 560ms CPU time, 199.7M memory peak, 171.3M written to disk.
Dec 12 18:39:26.920301 containerd[1549]: time="2025-12-12T18:39:26.920246336Z" level=info msg="received container exit event container_id:\"804558be8eff686c449d26714f376525b5ba7c09b1170f29d2d566c47fb86fbd\" id:\"804558be8eff686c449d26714f376525b5ba7c09b1170f29d2d566c47fb86fbd\" pid:3437 exited_at:{seconds:1765564766 nanos:919641949}"
Dec 12 18:39:26.956057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-804558be8eff686c449d26714f376525b5ba7c09b1170f29d2d566c47fb86fbd-rootfs.mount: Deactivated successfully.
Dec 12 18:39:26.976659 kubelet[2716]: I1212 18:39:26.976578 2716 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Dec 12 18:39:27.047277 systemd[1]: Created slice kubepods-besteffort-podae6c3cf7_869a_4f7d_9246_68e40e088fa1.slice - libcontainer container kubepods-besteffort-podae6c3cf7_869a_4f7d_9246_68e40e088fa1.slice.
Dec 12 18:39:27.069970 systemd[1]: Created slice kubepods-besteffort-podb00bfe6d_78a3_4353_8888_bffa785c4bed.slice - libcontainer container kubepods-besteffort-podb00bfe6d_78a3_4353_8888_bffa785c4bed.slice.
Dec 12 18:39:27.083124 systemd[1]: Created slice kubepods-burstable-pod0b392157_ffe0_4a43_aecf_43a2cfd0c8c5.slice - libcontainer container kubepods-burstable-pod0b392157_ffe0_4a43_aecf_43a2cfd0c8c5.slice.
Dec 12 18:39:27.096549 systemd[1]: Created slice kubepods-besteffort-podd8ce7370_ee37_4b30_a101_cbc03d0825dd.slice - libcontainer container kubepods-besteffort-podd8ce7370_ee37_4b30_a101_cbc03d0825dd.slice.
Dec 12 18:39:27.108749 systemd[1]: Created slice kubepods-besteffort-pod80609a1f_fd0a_4b4b_a327_8d66d4e6cb54.slice - libcontainer container kubepods-besteffort-pod80609a1f_fd0a_4b4b_a327_8d66d4e6cb54.slice.
Dec 12 18:39:27.118370 systemd[1]: Created slice kubepods-besteffort-podca927433_737e_4f41_bcbf_8431c7f3c6dc.slice - libcontainer container kubepods-besteffort-podca927433_737e_4f41_bcbf_8431c7f3c6dc.slice.
Dec 12 18:39:27.128798 systemd[1]: Created slice kubepods-burstable-podecab0b73_4d36_4385_baa9_f461d5459b0d.slice - libcontainer container kubepods-burstable-podecab0b73_4d36_4385_baa9_f461d5459b0d.slice.
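The Created slice names above are mechanical: they combine the pod's QoS class with its UID, rewriting the UID's dashes to underscores because '-' already encodes nesting in systemd slice names. A small sketch of that mapping (the helper function is ours, a simplification of kubelet's systemd cgroup driver):

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the naming visible in the systemd records above:
// kubepods-<qos>-pod<uid>.slice, with '-' in the UID mapped to '_'.
func sliceName(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(sliceName("besteffort", "ae6c3cf7-869a-4f7d-9246-68e40e088fa1"))
	// Output: kubepods-besteffort-podae6c3cf7_869a_4f7d_9246_68e40e088fa1.slice
}
```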
Dec 12 18:39:27.196728 kubelet[2716]: I1212 18:39:27.196632 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm6rg\" (UniqueName: \"kubernetes.io/projected/ca927433-737e-4f41-bcbf-8431c7f3c6dc-kube-api-access-xm6rg\") pod \"calico-apiserver-6d5b588865-bth42\" (UID: \"ca927433-737e-4f41-bcbf-8431c7f3c6dc\") " pod="calico-apiserver/calico-apiserver-6d5b588865-bth42"
Dec 12 18:39:27.197040 kubelet[2716]: I1212 18:39:27.197018 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzlvt\" (UniqueName: \"kubernetes.io/projected/ecab0b73-4d36-4385-baa9-f461d5459b0d-kube-api-access-xzlvt\") pod \"coredns-668d6bf9bc-z92n5\" (UID: \"ecab0b73-4d36-4385-baa9-f461d5459b0d\") " pod="kube-system/coredns-668d6bf9bc-z92n5"
Dec 12 18:39:27.197383 kubelet[2716]: I1212 18:39:27.197361 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59jwp\" (UniqueName: \"kubernetes.io/projected/d8ce7370-ee37-4b30-a101-cbc03d0825dd-kube-api-access-59jwp\") pod \"calico-apiserver-6d5b588865-cwzws\" (UID: \"d8ce7370-ee37-4b30-a101-cbc03d0825dd\") " pod="calico-apiserver/calico-apiserver-6d5b588865-cwzws"
Dec 12 18:39:27.197503 kubelet[2716]: I1212 18:39:27.197483 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae6c3cf7-869a-4f7d-9246-68e40e088fa1-whisker-ca-bundle\") pod \"whisker-56fc7ffbc7-6mttn\" (UID: \"ae6c3cf7-869a-4f7d-9246-68e40e088fa1\") " pod="calico-system/whisker-56fc7ffbc7-6mttn"
Dec 12 18:39:27.197623 kubelet[2716]: I1212 18:39:27.197606 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ca927433-737e-4f41-bcbf-8431c7f3c6dc-calico-apiserver-certs\") pod \"calico-apiserver-6d5b588865-bth42\" (UID: \"ca927433-737e-4f41-bcbf-8431c7f3c6dc\") " pod="calico-apiserver/calico-apiserver-6d5b588865-bth42"
Dec 12 18:39:27.197732 kubelet[2716]: I1212 18:39:27.197714 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d8ce7370-ee37-4b30-a101-cbc03d0825dd-calico-apiserver-certs\") pod \"calico-apiserver-6d5b588865-cwzws\" (UID: \"d8ce7370-ee37-4b30-a101-cbc03d0825dd\") " pod="calico-apiserver/calico-apiserver-6d5b588865-cwzws"
Dec 12 18:39:27.197837 kubelet[2716]: I1212 18:39:27.197818 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b00bfe6d-78a3-4353-8888-bffa785c4bed-tigera-ca-bundle\") pod \"calico-kube-controllers-7854cf6c79-2dk2d\" (UID: \"b00bfe6d-78a3-4353-8888-bffa785c4bed\") " pod="calico-system/calico-kube-controllers-7854cf6c79-2dk2d"
Dec 12 18:39:27.197947 kubelet[2716]: I1212 18:39:27.197930 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8lrr\" (UniqueName: \"kubernetes.io/projected/0b392157-ffe0-4a43-aecf-43a2cfd0c8c5-kube-api-access-l8lrr\") pod \"coredns-668d6bf9bc-69hbg\" (UID: \"0b392157-ffe0-4a43-aecf-43a2cfd0c8c5\") " pod="kube-system/coredns-668d6bf9bc-69hbg"
Dec 12 18:39:27.198085 kubelet[2716]: I1212 18:39:27.198034 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecab0b73-4d36-4385-baa9-f461d5459b0d-config-volume\") pod \"coredns-668d6bf9bc-z92n5\" (UID: \"ecab0b73-4d36-4385-baa9-f461d5459b0d\") " pod="kube-system/coredns-668d6bf9bc-z92n5"
Dec 12 18:39:27.198239 kubelet[2716]: I1212 18:39:27.198185 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80609a1f-fd0a-4b4b-a327-8d66d4e6cb54-config\") pod \"goldmane-666569f655-n6lhs\" (UID: \"80609a1f-fd0a-4b4b-a327-8d66d4e6cb54\") " pod="calico-system/goldmane-666569f655-n6lhs"
Dec 12 18:39:27.198370 kubelet[2716]: I1212 18:39:27.198321 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ae6c3cf7-869a-4f7d-9246-68e40e088fa1-whisker-backend-key-pair\") pod \"whisker-56fc7ffbc7-6mttn\" (UID: \"ae6c3cf7-869a-4f7d-9246-68e40e088fa1\") " pod="calico-system/whisker-56fc7ffbc7-6mttn"
Dec 12 18:39:27.198484 kubelet[2716]: I1212 18:39:27.198463 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m7fx\" (UniqueName: \"kubernetes.io/projected/b00bfe6d-78a3-4353-8888-bffa785c4bed-kube-api-access-7m7fx\") pod \"calico-kube-controllers-7854cf6c79-2dk2d\" (UID: \"b00bfe6d-78a3-4353-8888-bffa785c4bed\") " pod="calico-system/calico-kube-controllers-7854cf6c79-2dk2d"
Dec 12 18:39:27.198660 kubelet[2716]: I1212 18:39:27.198613 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80609a1f-fd0a-4b4b-a327-8d66d4e6cb54-goldmane-ca-bundle\") pod \"goldmane-666569f655-n6lhs\" (UID: \"80609a1f-fd0a-4b4b-a327-8d66d4e6cb54\") " pod="calico-system/goldmane-666569f655-n6lhs"
Dec 12 18:39:27.198727 kubelet[2716]: I1212 18:39:27.198705 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b392157-ffe0-4a43-aecf-43a2cfd0c8c5-config-volume\") pod \"coredns-668d6bf9bc-69hbg\" (UID: \"0b392157-ffe0-4a43-aecf-43a2cfd0c8c5\") " pod="kube-system/coredns-668d6bf9bc-69hbg"
Dec 12 18:39:27.198807 kubelet[2716]: I1212 18:39:27.198737 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/80609a1f-fd0a-4b4b-a327-8d66d4e6cb54-goldmane-key-pair\") pod \"goldmane-666569f655-n6lhs\" (UID: \"80609a1f-fd0a-4b4b-a327-8d66d4e6cb54\") " pod="calico-system/goldmane-666569f655-n6lhs"
Dec 12 18:39:27.198867 kubelet[2716]: I1212 18:39:27.198811 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24bpw\" (UniqueName: \"kubernetes.io/projected/ae6c3cf7-869a-4f7d-9246-68e40e088fa1-kube-api-access-24bpw\") pod \"whisker-56fc7ffbc7-6mttn\" (UID: \"ae6c3cf7-869a-4f7d-9246-68e40e088fa1\") " pod="calico-system/whisker-56fc7ffbc7-6mttn"
Dec 12 18:39:27.198939 kubelet[2716]: I1212 18:39:27.198877 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dns7j\" (UniqueName: \"kubernetes.io/projected/80609a1f-fd0a-4b4b-a327-8d66d4e6cb54-kube-api-access-dns7j\") pod \"goldmane-666569f655-n6lhs\" (UID: \"80609a1f-fd0a-4b4b-a327-8d66d4e6cb54\") " pod="calico-system/goldmane-666569f655-n6lhs"
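Every sandbox failure in the records that follow fails on the same stat: /var/lib/calico/nodename is written by the calico/node container (whose image pull starts at 18:39:27.564), and until it exists the Calico CNI plugin cannot determine which Calico node object it belongs to. The check effectively reduces to this (a sketch, ours, not Calico's source):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Written by calico/node once it is running and has mounted /var/lib/calico/.
	const nodenameFile = "/var/lib/calico/nodename"

	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Matches the failure mode below: "stat /var/lib/calico/nodename:
		// no such file or directory: check that the calico/node container
		// is running and has mounted /var/lib/calico/"
		fmt.Println("calico/node not ready:", err)
		return
	}
	fmt.Println("calico nodename:", strings.TrimSpace(string(data)))
}
```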
Dec 12 18:39:27.378886 containerd[1549]: time="2025-12-12T18:39:27.378814887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7854cf6c79-2dk2d,Uid:b00bfe6d-78a3-4353-8888-bffa785c4bed,Namespace:calico-system,Attempt:0,}"
Dec 12 18:39:27.391184 kubelet[2716]: E1212 18:39:27.391143 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Dec 12 18:39:27.394046 containerd[1549]: time="2025-12-12T18:39:27.393997754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-69hbg,Uid:0b392157-ffe0-4a43-aecf-43a2cfd0c8c5,Namespace:kube-system,Attempt:0,}"
Dec 12 18:39:27.403846 containerd[1549]: time="2025-12-12T18:39:27.403058991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5b588865-cwzws,Uid:d8ce7370-ee37-4b30-a101-cbc03d0825dd,Namespace:calico-apiserver,Attempt:0,}"
Dec 12 18:39:27.417444 containerd[1549]: time="2025-12-12T18:39:27.417309639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-n6lhs,Uid:80609a1f-fd0a-4b4b-a327-8d66d4e6cb54,Namespace:calico-system,Attempt:0,}"
Dec 12 18:39:27.424252 containerd[1549]: time="2025-12-12T18:39:27.424146125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5b588865-bth42,Uid:ca927433-737e-4f41-bcbf-8431c7f3c6dc,Namespace:calico-apiserver,Attempt:0,}"
Dec 12 18:39:27.434524 systemd[1]: Created slice kubepods-besteffort-pod3e77788b_998e_4510_9ab0_47ab12a2af9d.slice - libcontainer container kubepods-besteffort-pod3e77788b_998e_4510_9ab0_47ab12a2af9d.slice.
Dec 12 18:39:27.437123 kubelet[2716]: E1212 18:39:27.435394 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Dec 12 18:39:27.437233 containerd[1549]: time="2025-12-12T18:39:27.436433344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z92n5,Uid:ecab0b73-4d36-4385-baa9-f461d5459b0d,Namespace:kube-system,Attempt:0,}"
Dec 12 18:39:27.442713 containerd[1549]: time="2025-12-12T18:39:27.442604453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht9b9,Uid:3e77788b-998e-4510-9ab0-47ab12a2af9d,Namespace:calico-system,Attempt:0,}"
Dec 12 18:39:27.562673 kubelet[2716]: E1212 18:39:27.562253 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Dec 12 18:39:27.564025 containerd[1549]: time="2025-12-12T18:39:27.563986296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Dec 12 18:39:27.603178 containerd[1549]: time="2025-12-12T18:39:27.603124574Z" level=error msg="Failed to destroy network for sandbox \"43841dcf619eece72014e2d9cc4cb42769486f61a10844b7faec1f44538a20b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:39:27.618057 containerd[1549]: time="2025-12-12T18:39:27.618008368Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7854cf6c79-2dk2d,Uid:b00bfe6d-78a3-4353-8888-bffa785c4bed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"43841dcf619eece72014e2d9cc4cb42769486f61a10844b7faec1f44538a20b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:39:27.622450 containerd[1549]: time="2025-12-12T18:39:27.622321160Z" level=error msg="Failed to destroy network for sandbox \"0b54ff0efc2d56cb073e24607ecc2712e5d294eab601d1fa94cbcaacbf9c6f87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:39:27.622769 kubelet[2716]: E1212 18:39:27.622586 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43841dcf619eece72014e2d9cc4cb42769486f61a10844b7faec1f44538a20b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:39:27.622769 kubelet[2716]: E1212 18:39:27.622696 2716 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43841dcf619eece72014e2d9cc4cb42769486f61a10844b7faec1f44538a20b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7854cf6c79-2dk2d"
Dec 12 18:39:27.622769 kubelet[2716]: E1212 18:39:27.622730 2716 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43841dcf619eece72014e2d9cc4cb42769486f61a10844b7faec1f44538a20b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7854cf6c79-2dk2d"
Dec 12 18:39:27.623541 kubelet[2716]: E1212 18:39:27.622822 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7854cf6c79-2dk2d_calico-system(b00bfe6d-78a3-4353-8888-bffa785c4bed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7854cf6c79-2dk2d_calico-system(b00bfe6d-78a3-4353-8888-bffa785c4bed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43841dcf619eece72014e2d9cc4cb42769486f61a10844b7faec1f44538a20b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7854cf6c79-2dk2d" podUID="b00bfe6d-78a3-4353-8888-bffa785c4bed"
Dec 12 18:39:27.626129 containerd[1549]: time="2025-12-12T18:39:27.625810183Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-n6lhs,Uid:80609a1f-fd0a-4b4b-a327-8d66d4e6cb54,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b54ff0efc2d56cb073e24607ecc2712e5d294eab601d1fa94cbcaacbf9c6f87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:39:27.626250 kubelet[2716]: E1212 18:39:27.626182 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b54ff0efc2d56cb073e24607ecc2712e5d294eab601d1fa94cbcaacbf9c6f87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:39:27.626250 kubelet[2716]: E1212 18:39:27.626211 2716 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b54ff0efc2d56cb073e24607ecc2712e5d294eab601d1fa94cbcaacbf9c6f87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-n6lhs"
Dec 12 18:39:27.626250 kubelet[2716]: E1212 18:39:27.626226 2716 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b54ff0efc2d56cb073e24607ecc2712e5d294eab601d1fa94cbcaacbf9c6f87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-n6lhs"
Dec 12 18:39:27.626326 kubelet[2716]: E1212 18:39:27.626251 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-n6lhs_calico-system(80609a1f-fd0a-4b4b-a327-8d66d4e6cb54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-n6lhs_calico-system(80609a1f-fd0a-4b4b-a327-8d66d4e6cb54)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b54ff0efc2d56cb073e24607ecc2712e5d294eab601d1fa94cbcaacbf9c6f87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-n6lhs" podUID="80609a1f-fd0a-4b4b-a327-8d66d4e6cb54"
Dec 12 18:39:27.653420 containerd[1549]: time="2025-12-12T18:39:27.653372860Z" level=error msg="Failed to destroy network for sandbox \"8f26322679f115b79ff2aab8b7712440757b0074edd568f14f50029b2e8365e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:39:27.655272 containerd[1549]: time="2025-12-12T18:39:27.655156547Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht9b9,Uid:3e77788b-998e-4510-9ab0-47ab12a2af9d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f26322679f115b79ff2aab8b7712440757b0074edd568f14f50029b2e8365e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:39:27.655598 kubelet[2716]: E1212 18:39:27.655558 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f26322679f115b79ff2aab8b7712440757b0074edd568f14f50029b2e8365e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:39:27.655659 kubelet[2716]: E1212 18:39:27.655606 2716 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f26322679f115b79ff2aab8b7712440757b0074edd568f14f50029b2e8365e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ht9b9"
Dec 12 18:39:27.655659 kubelet[2716]: E1212 18:39:27.655624 2716 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f26322679f115b79ff2aab8b7712440757b0074edd568f14f50029b2e8365e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ht9b9"
Dec 12 18:39:27.655717 kubelet[2716]: E1212 18:39:27.655651 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ht9b9_calico-system(3e77788b-998e-4510-9ab0-47ab12a2af9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ht9b9_calico-system(3e77788b-998e-4510-9ab0-47ab12a2af9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f26322679f115b79ff2aab8b7712440757b0074edd568f14f50029b2e8365e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ht9b9" podUID="3e77788b-998e-4510-9ab0-47ab12a2af9d"
Dec 12 18:39:27.656013 containerd[1549]: time="2025-12-12T18:39:27.655881264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56fc7ffbc7-6mttn,Uid:ae6c3cf7-869a-4f7d-9246-68e40e088fa1,Namespace:calico-system,Attempt:0,}"
Dec 12 18:39:27.660445 containerd[1549]: time="2025-12-12T18:39:27.660419568Z" level=error msg="Failed to destroy network for sandbox \"0d4a76a916839acdc9d73963c04fffb322af81e78fa52ae17d1723b8aed7d158\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:39:27.662422 containerd[1549]: time="2025-12-12T18:39:27.662340566Z" level=error msg="Failed to destroy network for sandbox \"8b2824c12e43d4d88e31b80079d2d7131b8aacbbb48c48ff4d287a79c28c0184\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:39:27.664320 containerd[1549]: time="2025-12-12T18:39:27.664292255Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5b588865-cwzws,Uid:d8ce7370-ee37-4b30-a101-cbc03d0825dd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d4a76a916839acdc9d73963c04fffb322af81e78fa52ae17d1723b8aed7d158\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:39:27.665139 kubelet[2716]: E1212 18:39:27.665079 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d4a76a916839acdc9d73963c04fffb322af81e78fa52ae17d1723b8aed7d158\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:39:27.665251 kubelet[2716]: E1212 18:39:27.665223 2716 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d4a76a916839acdc9d73963c04fffb322af81e78fa52ae17d1723b8aed7d158\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d5b588865-cwzws"
Dec 12 18:39:27.665385 kubelet[2716]: E1212 18:39:27.665248 2716 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d4a76a916839acdc9d73963c04fffb322af81e78fa52ae17d1723b8aed7d158\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d5b588865-cwzws"
Dec 12 18:39:27.665511 kubelet[2716]: E1212 18:39:27.665439 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d5b588865-cwzws_calico-apiserver(d8ce7370-ee37-4b30-a101-cbc03d0825dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d5b588865-cwzws_calico-apiserver(d8ce7370-ee37-4b30-a101-cbc03d0825dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d4a76a916839acdc9d73963c04fffb322af81e78fa52ae17d1723b8aed7d158\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d5b588865-cwzws" podUID="d8ce7370-ee37-4b30-a101-cbc03d0825dd"
Dec 12 18:39:27.666114 containerd[1549]: time="2025-12-12T18:39:27.666046072Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z92n5,Uid:ecab0b73-4d36-4385-baa9-f461d5459b0d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b2824c12e43d4d88e31b80079d2d7131b8aacbbb48c48ff4d287a79c28c0184\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:39:27.666521 kubelet[2716]: E1212 18:39:27.666459 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b2824c12e43d4d88e31b80079d2d7131b8aacbbb48c48ff4d287a79c28c0184\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 12 18:39:27.666633 kubelet[2716]: E1212 18:39:27.666605 2716 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b2824c12e43d4d88e31b80079d2d7131b8aacbbb48c48ff4d287a79c28c0184\": plugin type=\"calico\" failed (add): stat
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z92n5" Dec 12 18:39:27.666633 kubelet[2716]: E1212 18:39:27.666632 2716 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b2824c12e43d4d88e31b80079d2d7131b8aacbbb48c48ff4d287a79c28c0184\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z92n5" Dec 12 18:39:27.666838 kubelet[2716]: E1212 18:39:27.666785 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-z92n5_kube-system(ecab0b73-4d36-4385-baa9-f461d5459b0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-z92n5_kube-system(ecab0b73-4d36-4385-baa9-f461d5459b0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b2824c12e43d4d88e31b80079d2d7131b8aacbbb48c48ff4d287a79c28c0184\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-z92n5" podUID="ecab0b73-4d36-4385-baa9-f461d5459b0d" Dec 12 18:39:27.675126 containerd[1549]: time="2025-12-12T18:39:27.674878237Z" level=error msg="Failed to destroy network for sandbox \"dd34a04f4e23f5b8024004dd4e793bcff46b5a5ce7d1086dda6c7a1b6b830f34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:27.676128 containerd[1549]: time="2025-12-12T18:39:27.676067999Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-69hbg,Uid:0b392157-ffe0-4a43-aecf-43a2cfd0c8c5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd34a04f4e23f5b8024004dd4e793bcff46b5a5ce7d1086dda6c7a1b6b830f34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:27.676319 kubelet[2716]: E1212 18:39:27.676290 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd34a04f4e23f5b8024004dd4e793bcff46b5a5ce7d1086dda6c7a1b6b830f34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:27.676377 kubelet[2716]: E1212 18:39:27.676325 2716 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd34a04f4e23f5b8024004dd4e793bcff46b5a5ce7d1086dda6c7a1b6b830f34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-69hbg" Dec 12 18:39:27.676410 kubelet[2716]: E1212 18:39:27.676379 2716 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"dd34a04f4e23f5b8024004dd4e793bcff46b5a5ce7d1086dda6c7a1b6b830f34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-69hbg" Dec 12 18:39:27.676572 kubelet[2716]: E1212 18:39:27.676491 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-69hbg_kube-system(0b392157-ffe0-4a43-aecf-43a2cfd0c8c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-69hbg_kube-system(0b392157-ffe0-4a43-aecf-43a2cfd0c8c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd34a04f4e23f5b8024004dd4e793bcff46b5a5ce7d1086dda6c7a1b6b830f34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-69hbg" podUID="0b392157-ffe0-4a43-aecf-43a2cfd0c8c5" Dec 12 18:39:27.698652 containerd[1549]: time="2025-12-12T18:39:27.698470565Z" level=error msg="Failed to destroy network for sandbox \"d6166a48b3e8d53624c984f747034d98c6ca251fb96a22db8a6d60fa628a2102\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:27.699527 containerd[1549]: time="2025-12-12T18:39:27.699490455Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5b588865-bth42,Uid:ca927433-737e-4f41-bcbf-8431c7f3c6dc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6166a48b3e8d53624c984f747034d98c6ca251fb96a22db8a6d60fa628a2102\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:27.699680 kubelet[2716]: E1212 18:39:27.699643 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6166a48b3e8d53624c984f747034d98c6ca251fb96a22db8a6d60fa628a2102\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:27.699727 kubelet[2716]: E1212 18:39:27.699694 2716 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6166a48b3e8d53624c984f747034d98c6ca251fb96a22db8a6d60fa628a2102\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d5b588865-bth42" Dec 12 18:39:27.699727 kubelet[2716]: E1212 18:39:27.699712 2716 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6166a48b3e8d53624c984f747034d98c6ca251fb96a22db8a6d60fa628a2102\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d5b588865-bth42" Dec 12 
18:39:27.699771 kubelet[2716]: E1212 18:39:27.699741 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d5b588865-bth42_calico-apiserver(ca927433-737e-4f41-bcbf-8431c7f3c6dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d5b588865-bth42_calico-apiserver(ca927433-737e-4f41-bcbf-8431c7f3c6dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6166a48b3e8d53624c984f747034d98c6ca251fb96a22db8a6d60fa628a2102\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d5b588865-bth42" podUID="ca927433-737e-4f41-bcbf-8431c7f3c6dc" Dec 12 18:39:27.740900 containerd[1549]: time="2025-12-12T18:39:27.740824714Z" level=error msg="Failed to destroy network for sandbox \"f1b5c3a58b390b659308c3aa24082894bc8601d002bc53686f55079e95fb17b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:27.742082 containerd[1549]: time="2025-12-12T18:39:27.742033266Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56fc7ffbc7-6mttn,Uid:ae6c3cf7-869a-4f7d-9246-68e40e088fa1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b5c3a58b390b659308c3aa24082894bc8601d002bc53686f55079e95fb17b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:27.742477 kubelet[2716]: E1212 18:39:27.742409 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b5c3a58b390b659308c3aa24082894bc8601d002bc53686f55079e95fb17b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:39:27.742555 kubelet[2716]: E1212 18:39:27.742522 2716 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b5c3a58b390b659308c3aa24082894bc8601d002bc53686f55079e95fb17b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56fc7ffbc7-6mttn" Dec 12 18:39:27.742611 kubelet[2716]: E1212 18:39:27.742551 2716 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1b5c3a58b390b659308c3aa24082894bc8601d002bc53686f55079e95fb17b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56fc7ffbc7-6mttn" Dec 12 18:39:27.743341 kubelet[2716]: E1212 18:39:27.742634 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-56fc7ffbc7-6mttn_calico-system(ae6c3cf7-869a-4f7d-9246-68e40e088fa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-56fc7ffbc7-6mttn_calico-system(ae6c3cf7-869a-4f7d-9246-68e40e088fa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1b5c3a58b390b659308c3aa24082894bc8601d002bc53686f55079e95fb17b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56fc7ffbc7-6mttn" podUID="ae6c3cf7-869a-4f7d-9246-68e40e088fa1" Dec 12 18:39:31.096139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3281251881.mount: Deactivated successfully. Dec 12 18:39:31.127149 containerd[1549]: time="2025-12-12T18:39:31.127050724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:31.127982 containerd[1549]: time="2025-12-12T18:39:31.127824720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Dec 12 18:39:31.128521 containerd[1549]: time="2025-12-12T18:39:31.128480815Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:31.130124 containerd[1549]: time="2025-12-12T18:39:31.130066287Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:31.130869 containerd[1549]: time="2025-12-12T18:39:31.130567891Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 3.566404983s" Dec 12 18:39:31.130869 containerd[1549]: time="2025-12-12T18:39:31.130622881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 12 18:39:31.148980 containerd[1549]: time="2025-12-12T18:39:31.148888197Z" level=info msg="CreateContainer within sandbox \"e4b7c866eec02e041a9bdf9a754c731ff273fefad3ef3535a9425e93a233c080\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 18:39:31.168033 containerd[1549]: time="2025-12-12T18:39:31.167401116Z" level=info msg="Container a5a1c2f8623fe865af588c2dbb417453fc9a9e14c666e760502fee59bc9d9c4f: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:39:31.173966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2922486117.mount: Deactivated successfully. 
Dec 12 18:39:31.179400 containerd[1549]: time="2025-12-12T18:39:31.179368515Z" level=info msg="CreateContainer within sandbox \"e4b7c866eec02e041a9bdf9a754c731ff273fefad3ef3535a9425e93a233c080\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a5a1c2f8623fe865af588c2dbb417453fc9a9e14c666e760502fee59bc9d9c4f\"" Dec 12 18:39:31.180384 containerd[1549]: time="2025-12-12T18:39:31.180364852Z" level=info msg="StartContainer for \"a5a1c2f8623fe865af588c2dbb417453fc9a9e14c666e760502fee59bc9d9c4f\"" Dec 12 18:39:31.181834 containerd[1549]: time="2025-12-12T18:39:31.181813253Z" level=info msg="connecting to shim a5a1c2f8623fe865af588c2dbb417453fc9a9e14c666e760502fee59bc9d9c4f" address="unix:///run/containerd/s/3cf40bd46f09a22458f4a28625f8554aec1cbc1aba3c06381804c04527e6148d" protocol=ttrpc version=3 Dec 12 18:39:31.227061 systemd[1]: Started cri-containerd-a5a1c2f8623fe865af588c2dbb417453fc9a9e14c666e760502fee59bc9d9c4f.scope - libcontainer container a5a1c2f8623fe865af588c2dbb417453fc9a9e14c666e760502fee59bc9d9c4f. Dec 12 18:39:31.323275 containerd[1549]: time="2025-12-12T18:39:31.323229978Z" level=info msg="StartContainer for \"a5a1c2f8623fe865af588c2dbb417453fc9a9e14c666e760502fee59bc9d9c4f\" returns successfully" Dec 12 18:39:31.409764 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 18:39:31.411730 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 12 18:39:31.577071 kubelet[2716]: E1212 18:39:31.576513 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:31.597774 kubelet[2716]: I1212 18:39:31.596812 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cqrh4" podStartSLOduration=1.174561302 podStartE2EDuration="10.59679066s" podCreationTimestamp="2025-12-12 18:39:21 +0000 UTC" firstStartedPulling="2025-12-12 18:39:21.70934973 +0000 UTC m=+20.425453323" lastFinishedPulling="2025-12-12 18:39:31.131579098 +0000 UTC m=+29.847682681" observedRunningTime="2025-12-12 18:39:31.594372672 +0000 UTC m=+30.310476255" watchObservedRunningTime="2025-12-12 18:39:31.59679066 +0000 UTC m=+30.312894243" Dec 12 18:39:31.635162 kubelet[2716]: I1212 18:39:31.635110 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae6c3cf7-869a-4f7d-9246-68e40e088fa1-whisker-ca-bundle\") pod \"ae6c3cf7-869a-4f7d-9246-68e40e088fa1\" (UID: \"ae6c3cf7-869a-4f7d-9246-68e40e088fa1\") " Dec 12 18:39:31.635162 kubelet[2716]: I1212 18:39:31.635160 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ae6c3cf7-869a-4f7d-9246-68e40e088fa1-whisker-backend-key-pair\") pod \"ae6c3cf7-869a-4f7d-9246-68e40e088fa1\" (UID: \"ae6c3cf7-869a-4f7d-9246-68e40e088fa1\") " Dec 12 18:39:31.635386 kubelet[2716]: I1212 18:39:31.635181 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24bpw\" (UniqueName: \"kubernetes.io/projected/ae6c3cf7-869a-4f7d-9246-68e40e088fa1-kube-api-access-24bpw\") pod \"ae6c3cf7-869a-4f7d-9246-68e40e088fa1\" (UID: \"ae6c3cf7-869a-4f7d-9246-68e40e088fa1\") " Dec 12 18:39:31.638663 kubelet[2716]: I1212 18:39:31.638534 2716 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ae6c3cf7-869a-4f7d-9246-68e40e088fa1-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ae6c3cf7-869a-4f7d-9246-68e40e088fa1" (UID: "ae6c3cf7-869a-4f7d-9246-68e40e088fa1"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 18:39:31.643904 kubelet[2716]: I1212 18:39:31.643868 2716 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae6c3cf7-869a-4f7d-9246-68e40e088fa1-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ae6c3cf7-869a-4f7d-9246-68e40e088fa1" (UID: "ae6c3cf7-869a-4f7d-9246-68e40e088fa1"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 18:39:31.644029 kubelet[2716]: I1212 18:39:31.644003 2716 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae6c3cf7-869a-4f7d-9246-68e40e088fa1-kube-api-access-24bpw" (OuterVolumeSpecName: "kube-api-access-24bpw") pod "ae6c3cf7-869a-4f7d-9246-68e40e088fa1" (UID: "ae6c3cf7-869a-4f7d-9246-68e40e088fa1"). InnerVolumeSpecName "kube-api-access-24bpw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 18:39:31.736125 kubelet[2716]: I1212 18:39:31.736074 2716 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ae6c3cf7-869a-4f7d-9246-68e40e088fa1-whisker-backend-key-pair\") on node \"172-237-133-204\" DevicePath \"\"" Dec 12 18:39:31.736125 kubelet[2716]: I1212 18:39:31.736109 2716 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-24bpw\" (UniqueName: \"kubernetes.io/projected/ae6c3cf7-869a-4f7d-9246-68e40e088fa1-kube-api-access-24bpw\") on node \"172-237-133-204\" DevicePath \"\"" Dec 12 18:39:31.736125 kubelet[2716]: I1212 18:39:31.736122 2716 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae6c3cf7-869a-4f7d-9246-68e40e088fa1-whisker-ca-bundle\") on node \"172-237-133-204\" DevicePath \"\"" Dec 12 18:39:31.884301 systemd[1]: Removed slice kubepods-besteffort-podae6c3cf7_869a_4f7d_9246_68e40e088fa1.slice - libcontainer container kubepods-besteffort-podae6c3cf7_869a_4f7d_9246_68e40e088fa1.slice. Dec 12 18:39:31.947732 systemd[1]: Created slice kubepods-besteffort-pod6d633c24_655b_48ca_8fdb_e7be6b544554.slice - libcontainer container kubepods-besteffort-pod6d633c24_655b_48ca_8fdb_e7be6b544554.slice. 
Dec 12 18:39:32.038465 kubelet[2716]: I1212 18:39:32.038335 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6d633c24-655b-48ca-8fdb-e7be6b544554-whisker-backend-key-pair\") pod \"whisker-7ff5dd56b5-bnjr8\" (UID: \"6d633c24-655b-48ca-8fdb-e7be6b544554\") " pod="calico-system/whisker-7ff5dd56b5-bnjr8" Dec 12 18:39:32.038754 kubelet[2716]: I1212 18:39:32.038561 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d633c24-655b-48ca-8fdb-e7be6b544554-whisker-ca-bundle\") pod \"whisker-7ff5dd56b5-bnjr8\" (UID: \"6d633c24-655b-48ca-8fdb-e7be6b544554\") " pod="calico-system/whisker-7ff5dd56b5-bnjr8" Dec 12 18:39:32.038844 kubelet[2716]: I1212 18:39:32.038592 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf82w\" (UniqueName: \"kubernetes.io/projected/6d633c24-655b-48ca-8fdb-e7be6b544554-kube-api-access-lf82w\") pod \"whisker-7ff5dd56b5-bnjr8\" (UID: \"6d633c24-655b-48ca-8fdb-e7be6b544554\") " pod="calico-system/whisker-7ff5dd56b5-bnjr8" Dec 12 18:39:32.100470 systemd[1]: var-lib-kubelet-pods-ae6c3cf7\x2d869a\x2d4f7d\x2d9246\x2d68e40e088fa1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d24bpw.mount: Deactivated successfully. Dec 12 18:39:32.100589 systemd[1]: var-lib-kubelet-pods-ae6c3cf7\x2d869a\x2d4f7d\x2d9246\x2d68e40e088fa1-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 18:39:32.254802 containerd[1549]: time="2025-12-12T18:39:32.254584469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7ff5dd56b5-bnjr8,Uid:6d633c24-655b-48ca-8fdb-e7be6b544554,Namespace:calico-system,Attempt:0,}" Dec 12 18:39:32.436178 systemd-networkd[1451]: cali7fc58f17db3: Link UP Dec 12 18:39:32.437002 systemd-networkd[1451]: cali7fc58f17db3: Gained carrier Dec 12 18:39:32.452887 containerd[1549]: 2025-12-12 18:39:32.312 [INFO][3806] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:39:32.452887 containerd[1549]: 2025-12-12 18:39:32.361 [INFO][3806] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--133--204-k8s-whisker--7ff5dd56b5--bnjr8-eth0 whisker-7ff5dd56b5- calico-system 6d633c24-655b-48ca-8fdb-e7be6b544554 876 0 2025-12-12 18:39:31 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7ff5dd56b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-237-133-204 whisker-7ff5dd56b5-bnjr8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7fc58f17db3 [] [] }} ContainerID="4a5a54848045ef54bc2452c45b3ae887af803036c4345cfc0d2581a1d197b16e" Namespace="calico-system" Pod="whisker-7ff5dd56b5-bnjr8" WorkloadEndpoint="172--237--133--204-k8s-whisker--7ff5dd56b5--bnjr8-" Dec 12 18:39:32.452887 containerd[1549]: 2025-12-12 18:39:32.361 [INFO][3806] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a5a54848045ef54bc2452c45b3ae887af803036c4345cfc0d2581a1d197b16e" Namespace="calico-system" Pod="whisker-7ff5dd56b5-bnjr8" WorkloadEndpoint="172--237--133--204-k8s-whisker--7ff5dd56b5--bnjr8-eth0" Dec 12 18:39:32.452887 containerd[1549]: 2025-12-12 18:39:32.388 [INFO][3817] ipam/ipam_plugin.go 227: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="4a5a54848045ef54bc2452c45b3ae887af803036c4345cfc0d2581a1d197b16e" HandleID="k8s-pod-network.4a5a54848045ef54bc2452c45b3ae887af803036c4345cfc0d2581a1d197b16e" Workload="172--237--133--204-k8s-whisker--7ff5dd56b5--bnjr8-eth0" Dec 12 18:39:32.453090 containerd[1549]: 2025-12-12 18:39:32.389 [INFO][3817] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4a5a54848045ef54bc2452c45b3ae887af803036c4345cfc0d2581a1d197b16e" HandleID="k8s-pod-network.4a5a54848045ef54bc2452c45b3ae887af803036c4345cfc0d2581a1d197b16e" Workload="172--237--133--204-k8s-whisker--7ff5dd56b5--bnjr8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ef70), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-133-204", "pod":"whisker-7ff5dd56b5-bnjr8", "timestamp":"2025-12-12 18:39:32.388775068 +0000 UTC"}, Hostname:"172-237-133-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:39:32.453090 containerd[1549]: 2025-12-12 18:39:32.389 [INFO][3817] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:39:32.453090 containerd[1549]: 2025-12-12 18:39:32.389 [INFO][3817] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:39:32.453090 containerd[1549]: 2025-12-12 18:39:32.389 [INFO][3817] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-133-204' Dec 12 18:39:32.453090 containerd[1549]: 2025-12-12 18:39:32.396 [INFO][3817] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a5a54848045ef54bc2452c45b3ae887af803036c4345cfc0d2581a1d197b16e" host="172-237-133-204" Dec 12 18:39:32.453090 containerd[1549]: 2025-12-12 18:39:32.401 [INFO][3817] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-133-204" Dec 12 18:39:32.453090 containerd[1549]: 2025-12-12 18:39:32.404 [INFO][3817] ipam/ipam.go 511: Trying affinity for 192.168.22.64/26 host="172-237-133-204" Dec 12 18:39:32.453090 containerd[1549]: 2025-12-12 18:39:32.407 [INFO][3817] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.64/26 host="172-237-133-204" Dec 12 18:39:32.453090 containerd[1549]: 2025-12-12 18:39:32.408 [INFO][3817] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.64/26 host="172-237-133-204" Dec 12 18:39:32.453090 containerd[1549]: 2025-12-12 18:39:32.408 [INFO][3817] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.22.64/26 handle="k8s-pod-network.4a5a54848045ef54bc2452c45b3ae887af803036c4345cfc0d2581a1d197b16e" host="172-237-133-204" Dec 12 18:39:32.453304 containerd[1549]: 2025-12-12 18:39:32.411 [INFO][3817] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4a5a54848045ef54bc2452c45b3ae887af803036c4345cfc0d2581a1d197b16e Dec 12 18:39:32.453304 containerd[1549]: 2025-12-12 18:39:32.414 [INFO][3817] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.22.64/26 handle="k8s-pod-network.4a5a54848045ef54bc2452c45b3ae887af803036c4345cfc0d2581a1d197b16e" host="172-237-133-204" Dec 12 18:39:32.453304 containerd[1549]: 2025-12-12 18:39:32.420 [INFO][3817] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.22.65/26] block=192.168.22.64/26 handle="k8s-pod-network.4a5a54848045ef54bc2452c45b3ae887af803036c4345cfc0d2581a1d197b16e" host="172-237-133-204" Dec 12 18:39:32.453304 containerd[1549]: 2025-12-12 18:39:32.420 [INFO][3817] 
ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.65/26] handle="k8s-pod-network.4a5a54848045ef54bc2452c45b3ae887af803036c4345cfc0d2581a1d197b16e" host="172-237-133-204" Dec 12 18:39:32.453304 containerd[1549]: 2025-12-12 18:39:32.420 [INFO][3817] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:39:32.453304 containerd[1549]: 2025-12-12 18:39:32.420 [INFO][3817] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.22.65/26] IPv6=[] ContainerID="4a5a54848045ef54bc2452c45b3ae887af803036c4345cfc0d2581a1d197b16e" HandleID="k8s-pod-network.4a5a54848045ef54bc2452c45b3ae887af803036c4345cfc0d2581a1d197b16e" Workload="172--237--133--204-k8s-whisker--7ff5dd56b5--bnjr8-eth0" Dec 12 18:39:32.453423 containerd[1549]: 2025-12-12 18:39:32.424 [INFO][3806] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a5a54848045ef54bc2452c45b3ae887af803036c4345cfc0d2581a1d197b16e" Namespace="calico-system" Pod="whisker-7ff5dd56b5-bnjr8" WorkloadEndpoint="172--237--133--204-k8s-whisker--7ff5dd56b5--bnjr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--204-k8s-whisker--7ff5dd56b5--bnjr8-eth0", GenerateName:"whisker-7ff5dd56b5-", Namespace:"calico-system", SelfLink:"", UID:"6d633c24-655b-48ca-8fdb-e7be6b544554", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7ff5dd56b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-204", ContainerID:"", Pod:"whisker-7ff5dd56b5-bnjr8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.22.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7fc58f17db3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:39:32.453423 containerd[1549]: 2025-12-12 18:39:32.424 [INFO][3806] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.65/32] ContainerID="4a5a54848045ef54bc2452c45b3ae887af803036c4345cfc0d2581a1d197b16e" Namespace="calico-system" Pod="whisker-7ff5dd56b5-bnjr8" WorkloadEndpoint="172--237--133--204-k8s-whisker--7ff5dd56b5--bnjr8-eth0" Dec 12 18:39:32.453493 containerd[1549]: 2025-12-12 18:39:32.424 [INFO][3806] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7fc58f17db3 ContainerID="4a5a54848045ef54bc2452c45b3ae887af803036c4345cfc0d2581a1d197b16e" Namespace="calico-system" Pod="whisker-7ff5dd56b5-bnjr8" WorkloadEndpoint="172--237--133--204-k8s-whisker--7ff5dd56b5--bnjr8-eth0" Dec 12 18:39:32.453493 containerd[1549]: 2025-12-12 18:39:32.437 [INFO][3806] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a5a54848045ef54bc2452c45b3ae887af803036c4345cfc0d2581a1d197b16e" Namespace="calico-system" Pod="whisker-7ff5dd56b5-bnjr8" WorkloadEndpoint="172--237--133--204-k8s-whisker--7ff5dd56b5--bnjr8-eth0" Dec 12 18:39:32.453535 
containerd[1549]: 2025-12-12 18:39:32.437 [INFO][3806] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a5a54848045ef54bc2452c45b3ae887af803036c4345cfc0d2581a1d197b16e" Namespace="calico-system" Pod="whisker-7ff5dd56b5-bnjr8" WorkloadEndpoint="172--237--133--204-k8s-whisker--7ff5dd56b5--bnjr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--204-k8s-whisker--7ff5dd56b5--bnjr8-eth0", GenerateName:"whisker-7ff5dd56b5-", Namespace:"calico-system", SelfLink:"", UID:"6d633c24-655b-48ca-8fdb-e7be6b544554", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7ff5dd56b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-204", ContainerID:"4a5a54848045ef54bc2452c45b3ae887af803036c4345cfc0d2581a1d197b16e", Pod:"whisker-7ff5dd56b5-bnjr8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.22.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7fc58f17db3", MAC:"3e:0b:d1:3a:48:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:39:32.453591 containerd[1549]: 2025-12-12 18:39:32.450 [INFO][3806] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a5a54848045ef54bc2452c45b3ae887af803036c4345cfc0d2581a1d197b16e" Namespace="calico-system" Pod="whisker-7ff5dd56b5-bnjr8" WorkloadEndpoint="172--237--133--204-k8s-whisker--7ff5dd56b5--bnjr8-eth0" Dec 12 18:39:32.492293 containerd[1549]: time="2025-12-12T18:39:32.492206342Z" level=info msg="connecting to shim 4a5a54848045ef54bc2452c45b3ae887af803036c4345cfc0d2581a1d197b16e" address="unix:///run/containerd/s/5e9ff1b5e13686e0cab5258399771202e683bbda2fae28565d05597584362b3a" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:39:32.522070 systemd[1]: Started cri-containerd-4a5a54848045ef54bc2452c45b3ae887af803036c4345cfc0d2581a1d197b16e.scope - libcontainer container 4a5a54848045ef54bc2452c45b3ae887af803036c4345cfc0d2581a1d197b16e. 
Dec 12 18:39:32.582617 kubelet[2716]: E1212 18:39:32.582568 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:32.594696 containerd[1549]: time="2025-12-12T18:39:32.594600898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7ff5dd56b5-bnjr8,Uid:6d633c24-655b-48ca-8fdb-e7be6b544554,Namespace:calico-system,Attempt:0,} returns sandbox id \"4a5a54848045ef54bc2452c45b3ae887af803036c4345cfc0d2581a1d197b16e\"" Dec 12 18:39:32.597961 containerd[1549]: time="2025-12-12T18:39:32.597536589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:39:32.729481 containerd[1549]: time="2025-12-12T18:39:32.729438921Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:39:32.730983 containerd[1549]: time="2025-12-12T18:39:32.730224617Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:39:32.732609 containerd[1549]: time="2025-12-12T18:39:32.730282057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:39:32.732669 kubelet[2716]: E1212 18:39:32.731636 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:39:32.732669 kubelet[2716]: E1212 18:39:32.731700 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:39:32.732743 kubelet[2716]: E1212 18:39:32.731861 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a642342dbf7c4c169bfaf0e4fa62e16b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lf82w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7ff5dd56b5-bnjr8_calico-system(6d633c24-655b-48ca-8fdb-e7be6b544554): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:39:32.733929 containerd[1549]: time="2025-12-12T18:39:32.733822132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:39:32.861842 containerd[1549]: time="2025-12-12T18:39:32.861778667Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:39:32.863606 containerd[1549]: time="2025-12-12T18:39:32.863461559Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:39:32.863606 containerd[1549]: time="2025-12-12T18:39:32.863577020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:39:32.863888 kubelet[2716]: E1212 18:39:32.863823 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:39:32.864236 kubelet[2716]: E1212 18:39:32.864018 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:39:32.864449 kubelet[2716]: E1212 18:39:32.864408 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lf82w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7ff5dd56b5-bnjr8_calico-system(6d633c24-655b-48ca-8fdb-e7be6b544554): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:39:32.865757 kubelet[2716]: E1212 18:39:32.865719 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7ff5dd56b5-bnjr8" podUID="6d633c24-655b-48ca-8fdb-e7be6b544554" Dec 12 18:39:33.413003 kubelet[2716]: I1212 18:39:33.412955 2716 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae6c3cf7-869a-4f7d-9246-68e40e088fa1" path="/var/lib/kubelet/pods/ae6c3cf7-869a-4f7d-9246-68e40e088fa1/volumes" Dec 12 18:39:33.585556 kubelet[2716]: E1212 18:39:33.585400 
2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:33.590684 kubelet[2716]: E1212 18:39:33.590191 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7ff5dd56b5-bnjr8" podUID="6d633c24-655b-48ca-8fdb-e7be6b544554" Dec 12 18:39:34.430341 systemd-networkd[1451]: cali7fc58f17db3: Gained IPv6LL Dec 12 18:39:34.589781 kubelet[2716]: E1212 18:39:34.589720 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7ff5dd56b5-bnjr8" podUID="6d633c24-655b-48ca-8fdb-e7be6b544554" Dec 12 18:39:38.411154 kubelet[2716]: E1212 18:39:38.411061 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:38.412669 containerd[1549]: time="2025-12-12T18:39:38.411371196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht9b9,Uid:3e77788b-998e-4510-9ab0-47ab12a2af9d,Namespace:calico-system,Attempt:0,}" Dec 12 18:39:38.412669 containerd[1549]: time="2025-12-12T18:39:38.411976159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z92n5,Uid:ecab0b73-4d36-4385-baa9-f461d5459b0d,Namespace:kube-system,Attempt:0,}" Dec 12 18:39:38.573231 systemd-networkd[1451]: calie02c967c2e2: Link UP Dec 12 18:39:38.576166 systemd-networkd[1451]: calie02c967c2e2: Gained carrier Dec 12 18:39:38.593630 containerd[1549]: 2025-12-12 18:39:38.451 [INFO][4108] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:39:38.593630 containerd[1549]: 2025-12-12 18:39:38.470 [INFO][4108] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {172--237--133--204-k8s-coredns--668d6bf9bc--z92n5-eth0 coredns-668d6bf9bc- kube-system ecab0b73-4d36-4385-baa9-f461d5459b0d 811 0 2025-12-12 18:39:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-237-133-204 coredns-668d6bf9bc-z92n5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie02c967c2e2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51" Namespace="kube-system" Pod="coredns-668d6bf9bc-z92n5" WorkloadEndpoint="172--237--133--204-k8s-coredns--668d6bf9bc--z92n5-" Dec 12 18:39:38.593630 containerd[1549]: 2025-12-12 18:39:38.470 [INFO][4108] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51" Namespace="kube-system" Pod="coredns-668d6bf9bc-z92n5" WorkloadEndpoint="172--237--133--204-k8s-coredns--668d6bf9bc--z92n5-eth0" Dec 12 18:39:38.593630 containerd[1549]: 2025-12-12 18:39:38.513 [INFO][4135] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51" HandleID="k8s-pod-network.020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51" Workload="172--237--133--204-k8s-coredns--668d6bf9bc--z92n5-eth0" Dec 12 18:39:38.593818 containerd[1549]: 2025-12-12 18:39:38.514 [INFO][4135] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51" HandleID="k8s-pod-network.020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51" Workload="172--237--133--204-k8s-coredns--668d6bf9bc--z92n5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5dc0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-237-133-204", "pod":"coredns-668d6bf9bc-z92n5", "timestamp":"2025-12-12 18:39:38.51325878 +0000 UTC"}, Hostname:"172-237-133-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:39:38.593818 containerd[1549]: 2025-12-12 18:39:38.514 [INFO][4135] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:39:38.593818 containerd[1549]: 2025-12-12 18:39:38.514 [INFO][4135] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:39:38.593818 containerd[1549]: 2025-12-12 18:39:38.514 [INFO][4135] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-133-204' Dec 12 18:39:38.593818 containerd[1549]: 2025-12-12 18:39:38.522 [INFO][4135] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51" host="172-237-133-204" Dec 12 18:39:38.593818 containerd[1549]: 2025-12-12 18:39:38.532 [INFO][4135] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-133-204" Dec 12 18:39:38.593818 containerd[1549]: 2025-12-12 18:39:38.539 [INFO][4135] ipam/ipam.go 511: Trying affinity for 192.168.22.64/26 host="172-237-133-204" Dec 12 18:39:38.593818 containerd[1549]: 2025-12-12 18:39:38.541 [INFO][4135] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.64/26 host="172-237-133-204" Dec 12 18:39:38.593818 containerd[1549]: 2025-12-12 18:39:38.544 [INFO][4135] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.64/26 host="172-237-133-204" Dec 12 18:39:38.593818 containerd[1549]: 2025-12-12 18:39:38.544 [INFO][4135] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.22.64/26 handle="k8s-pod-network.020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51" host="172-237-133-204" Dec 12 18:39:38.594518 containerd[1549]: 2025-12-12 18:39:38.545 [INFO][4135] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51 Dec 12 18:39:38.594518 containerd[1549]: 2025-12-12 18:39:38.550 [INFO][4135] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.22.64/26 handle="k8s-pod-network.020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51" host="172-237-133-204" Dec 12 18:39:38.594518 containerd[1549]: 2025-12-12 18:39:38.555 [INFO][4135] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.22.66/26] block=192.168.22.64/26 handle="k8s-pod-network.020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51" host="172-237-133-204" Dec 12 18:39:38.594518 containerd[1549]: 2025-12-12 18:39:38.555 [INFO][4135] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.66/26] handle="k8s-pod-network.020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51" host="172-237-133-204" Dec 12 18:39:38.594518 containerd[1549]: 2025-12-12 18:39:38.555 [INFO][4135] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:39:38.594518 containerd[1549]: 2025-12-12 18:39:38.555 [INFO][4135] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.22.66/26] IPv6=[] ContainerID="020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51" HandleID="k8s-pod-network.020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51" Workload="172--237--133--204-k8s-coredns--668d6bf9bc--z92n5-eth0" Dec 12 18:39:38.594636 containerd[1549]: 2025-12-12 18:39:38.564 [INFO][4108] cni-plugin/k8s.go 418: Populated endpoint ContainerID="020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51" Namespace="kube-system" Pod="coredns-668d6bf9bc-z92n5" WorkloadEndpoint="172--237--133--204-k8s-coredns--668d6bf9bc--z92n5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--204-k8s-coredns--668d6bf9bc--z92n5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ecab0b73-4d36-4385-baa9-f461d5459b0d", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-204", ContainerID:"", Pod:"coredns-668d6bf9bc-z92n5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie02c967c2e2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:39:38.594636 containerd[1549]: 2025-12-12 18:39:38.564 [INFO][4108] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.66/32] ContainerID="020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51" Namespace="kube-system" Pod="coredns-668d6bf9bc-z92n5" WorkloadEndpoint="172--237--133--204-k8s-coredns--668d6bf9bc--z92n5-eth0" Dec 12 18:39:38.594636 containerd[1549]: 2025-12-12 18:39:38.564 [INFO][4108] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie02c967c2e2 ContainerID="020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51" Namespace="kube-system" Pod="coredns-668d6bf9bc-z92n5" WorkloadEndpoint="172--237--133--204-k8s-coredns--668d6bf9bc--z92n5-eth0" Dec 12 18:39:38.594636 containerd[1549]: 2025-12-12 18:39:38.575 [INFO][4108] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51" Namespace="kube-system" Pod="coredns-668d6bf9bc-z92n5" 
WorkloadEndpoint="172--237--133--204-k8s-coredns--668d6bf9bc--z92n5-eth0" Dec 12 18:39:38.594636 containerd[1549]: 2025-12-12 18:39:38.578 [INFO][4108] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51" Namespace="kube-system" Pod="coredns-668d6bf9bc-z92n5" WorkloadEndpoint="172--237--133--204-k8s-coredns--668d6bf9bc--z92n5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--204-k8s-coredns--668d6bf9bc--z92n5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ecab0b73-4d36-4385-baa9-f461d5459b0d", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-204", ContainerID:"020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51", Pod:"coredns-668d6bf9bc-z92n5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie02c967c2e2", MAC:"3a:bb:74:f9:69:60", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:39:38.594636 containerd[1549]: 2025-12-12 18:39:38.590 [INFO][4108] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51" Namespace="kube-system" Pod="coredns-668d6bf9bc-z92n5" WorkloadEndpoint="172--237--133--204-k8s-coredns--668d6bf9bc--z92n5-eth0" Dec 12 18:39:38.664944 containerd[1549]: time="2025-12-12T18:39:38.664477998Z" level=info msg="connecting to shim 020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51" address="unix:///run/containerd/s/313350020583f19c46ad329b9ca605a622d3f48f399f79388f634f281e9fff1e" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:39:38.720084 systemd-networkd[1451]: cali149ea08b5e3: Link UP Dec 12 18:39:38.720291 systemd-networkd[1451]: cali149ea08b5e3: Gained carrier Dec 12 18:39:38.734116 systemd[1]: Started cri-containerd-020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51.scope - libcontainer container 020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51. 
Dec 12 18:39:38.745677 containerd[1549]: 2025-12-12 18:39:38.451 [INFO][4107] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:39:38.745677 containerd[1549]: 2025-12-12 18:39:38.464 [INFO][4107] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--133--204-k8s-csi--node--driver--ht9b9-eth0 csi-node-driver- calico-system 3e77788b-998e-4510-9ab0-47ab12a2af9d 706 0 2025-12-12 18:39:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-237-133-204 csi-node-driver-ht9b9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali149ea08b5e3 [] [] }} ContainerID="680da8647348f20570c93b077942259fb1f89791a670763e27b309e4fb2df2aa" Namespace="calico-system" Pod="csi-node-driver-ht9b9" WorkloadEndpoint="172--237--133--204-k8s-csi--node--driver--ht9b9-" Dec 12 18:39:38.745677 containerd[1549]: 2025-12-12 18:39:38.464 [INFO][4107] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="680da8647348f20570c93b077942259fb1f89791a670763e27b309e4fb2df2aa" Namespace="calico-system" Pod="csi-node-driver-ht9b9" WorkloadEndpoint="172--237--133--204-k8s-csi--node--driver--ht9b9-eth0" Dec 12 18:39:38.745677 containerd[1549]: 2025-12-12 18:39:38.529 [INFO][4132] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="680da8647348f20570c93b077942259fb1f89791a670763e27b309e4fb2df2aa" HandleID="k8s-pod-network.680da8647348f20570c93b077942259fb1f89791a670763e27b309e4fb2df2aa" Workload="172--237--133--204-k8s-csi--node--driver--ht9b9-eth0" Dec 12 18:39:38.745677 containerd[1549]: 2025-12-12 18:39:38.529 [INFO][4132] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="680da8647348f20570c93b077942259fb1f89791a670763e27b309e4fb2df2aa" HandleID="k8s-pod-network.680da8647348f20570c93b077942259fb1f89791a670763e27b309e4fb2df2aa" Workload="172--237--133--204-k8s-csi--node--driver--ht9b9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf010), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-133-204", "pod":"csi-node-driver-ht9b9", "timestamp":"2025-12-12 18:39:38.529453397 +0000 UTC"}, Hostname:"172-237-133-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:39:38.745677 containerd[1549]: 2025-12-12 18:39:38.529 [INFO][4132] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:39:38.745677 containerd[1549]: 2025-12-12 18:39:38.555 [INFO][4132] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:39:38.745677 containerd[1549]: 2025-12-12 18:39:38.555 [INFO][4132] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-133-204' Dec 12 18:39:38.745677 containerd[1549]: 2025-12-12 18:39:38.630 [INFO][4132] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.680da8647348f20570c93b077942259fb1f89791a670763e27b309e4fb2df2aa" host="172-237-133-204" Dec 12 18:39:38.745677 containerd[1549]: 2025-12-12 18:39:38.648 [INFO][4132] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-133-204" Dec 12 18:39:38.745677 containerd[1549]: 2025-12-12 18:39:38.663 [INFO][4132] ipam/ipam.go 511: Trying affinity for 192.168.22.64/26 host="172-237-133-204" Dec 12 18:39:38.745677 containerd[1549]: 2025-12-12 18:39:38.668 [INFO][4132] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.64/26 host="172-237-133-204" Dec 12 18:39:38.745677 containerd[1549]: 2025-12-12 18:39:38.672 [INFO][4132] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.64/26 host="172-237-133-204" Dec 12 18:39:38.745677 containerd[1549]: 2025-12-12 18:39:38.672 [INFO][4132] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.22.64/26 handle="k8s-pod-network.680da8647348f20570c93b077942259fb1f89791a670763e27b309e4fb2df2aa" host="172-237-133-204" Dec 12 18:39:38.745677 containerd[1549]: 2025-12-12 18:39:38.678 [INFO][4132] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.680da8647348f20570c93b077942259fb1f89791a670763e27b309e4fb2df2aa Dec 12 18:39:38.745677 containerd[1549]: 2025-12-12 18:39:38.684 [INFO][4132] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.22.64/26 handle="k8s-pod-network.680da8647348f20570c93b077942259fb1f89791a670763e27b309e4fb2df2aa" host="172-237-133-204" Dec 12 18:39:38.745677 containerd[1549]: 2025-12-12 18:39:38.700 [INFO][4132] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.22.67/26] block=192.168.22.64/26 handle="k8s-pod-network.680da8647348f20570c93b077942259fb1f89791a670763e27b309e4fb2df2aa" host="172-237-133-204" Dec 12 18:39:38.745677 containerd[1549]: 2025-12-12 18:39:38.700 [INFO][4132] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.67/26] handle="k8s-pod-network.680da8647348f20570c93b077942259fb1f89791a670763e27b309e4fb2df2aa" host="172-237-133-204" Dec 12 18:39:38.745677 containerd[1549]: 2025-12-12 18:39:38.701 [INFO][4132] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:39:38.745677 containerd[1549]: 2025-12-12 18:39:38.701 [INFO][4132] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.22.67/26] IPv6=[] ContainerID="680da8647348f20570c93b077942259fb1f89791a670763e27b309e4fb2df2aa" HandleID="k8s-pod-network.680da8647348f20570c93b077942259fb1f89791a670763e27b309e4fb2df2aa" Workload="172--237--133--204-k8s-csi--node--driver--ht9b9-eth0" Dec 12 18:39:38.747283 containerd[1549]: 2025-12-12 18:39:38.709 [INFO][4107] cni-plugin/k8s.go 418: Populated endpoint ContainerID="680da8647348f20570c93b077942259fb1f89791a670763e27b309e4fb2df2aa" Namespace="calico-system" Pod="csi-node-driver-ht9b9" WorkloadEndpoint="172--237--133--204-k8s-csi--node--driver--ht9b9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--204-k8s-csi--node--driver--ht9b9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3e77788b-998e-4510-9ab0-47ab12a2af9d", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-204", ContainerID:"", Pod:"csi-node-driver-ht9b9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.22.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali149ea08b5e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:39:38.747283 containerd[1549]: 2025-12-12 18:39:38.711 [INFO][4107] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.67/32] ContainerID="680da8647348f20570c93b077942259fb1f89791a670763e27b309e4fb2df2aa" Namespace="calico-system" Pod="csi-node-driver-ht9b9" WorkloadEndpoint="172--237--133--204-k8s-csi--node--driver--ht9b9-eth0" Dec 12 18:39:38.747283 containerd[1549]: 2025-12-12 18:39:38.711 [INFO][4107] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali149ea08b5e3 ContainerID="680da8647348f20570c93b077942259fb1f89791a670763e27b309e4fb2df2aa" Namespace="calico-system" Pod="csi-node-driver-ht9b9" WorkloadEndpoint="172--237--133--204-k8s-csi--node--driver--ht9b9-eth0" Dec 12 18:39:38.747283 containerd[1549]: 2025-12-12 18:39:38.718 [INFO][4107] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="680da8647348f20570c93b077942259fb1f89791a670763e27b309e4fb2df2aa" Namespace="calico-system" Pod="csi-node-driver-ht9b9" WorkloadEndpoint="172--237--133--204-k8s-csi--node--driver--ht9b9-eth0" Dec 12 18:39:38.747283 containerd[1549]: 2025-12-12 18:39:38.718 [INFO][4107] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="680da8647348f20570c93b077942259fb1f89791a670763e27b309e4fb2df2aa" 
Namespace="calico-system" Pod="csi-node-driver-ht9b9" WorkloadEndpoint="172--237--133--204-k8s-csi--node--driver--ht9b9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--204-k8s-csi--node--driver--ht9b9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3e77788b-998e-4510-9ab0-47ab12a2af9d", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-204", ContainerID:"680da8647348f20570c93b077942259fb1f89791a670763e27b309e4fb2df2aa", Pod:"csi-node-driver-ht9b9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.22.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali149ea08b5e3", MAC:"b2:fe:7a:83:3d:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:39:38.747283 containerd[1549]: 2025-12-12 18:39:38.741 [INFO][4107] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="680da8647348f20570c93b077942259fb1f89791a670763e27b309e4fb2df2aa" Namespace="calico-system" Pod="csi-node-driver-ht9b9" WorkloadEndpoint="172--237--133--204-k8s-csi--node--driver--ht9b9-eth0" Dec 12 18:39:38.774744 containerd[1549]: time="2025-12-12T18:39:38.774706652Z" level=info msg="connecting to shim 680da8647348f20570c93b077942259fb1f89791a670763e27b309e4fb2df2aa" address="unix:///run/containerd/s/95d254afcb5e35c9bfc16e4c663c56ee397e1de0b485069a40938ae16acce864" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:39:38.808208 systemd[1]: Started cri-containerd-680da8647348f20570c93b077942259fb1f89791a670763e27b309e4fb2df2aa.scope - libcontainer container 680da8647348f20570c93b077942259fb1f89791a670763e27b309e4fb2df2aa. 
Dec 12 18:39:38.830095 containerd[1549]: time="2025-12-12T18:39:38.830040245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z92n5,Uid:ecab0b73-4d36-4385-baa9-f461d5459b0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51\"" Dec 12 18:39:38.831133 kubelet[2716]: E1212 18:39:38.831104 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:38.833528 containerd[1549]: time="2025-12-12T18:39:38.833467711Z" level=info msg="CreateContainer within sandbox \"020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:39:38.843056 containerd[1549]: time="2025-12-12T18:39:38.843027216Z" level=info msg="Container 477000ed39aa81ba542d8e2c75fcf60d654bc6e42341546ca33fa3b40803006f: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:39:38.848894 containerd[1549]: time="2025-12-12T18:39:38.848866934Z" level=info msg="CreateContainer within sandbox \"020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"477000ed39aa81ba542d8e2c75fcf60d654bc6e42341546ca33fa3b40803006f\"" Dec 12 18:39:38.849677 containerd[1549]: time="2025-12-12T18:39:38.849646248Z" level=info msg="StartContainer for \"477000ed39aa81ba542d8e2c75fcf60d654bc6e42341546ca33fa3b40803006f\"" Dec 12 18:39:38.851756 containerd[1549]: time="2025-12-12T18:39:38.851720168Z" level=info msg="connecting to shim 477000ed39aa81ba542d8e2c75fcf60d654bc6e42341546ca33fa3b40803006f" address="unix:///run/containerd/s/313350020583f19c46ad329b9ca605a622d3f48f399f79388f634f281e9fff1e" protocol=ttrpc version=3 Dec 12 18:39:38.860740 containerd[1549]: time="2025-12-12T18:39:38.860661000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ht9b9,Uid:3e77788b-998e-4510-9ab0-47ab12a2af9d,Namespace:calico-system,Attempt:0,} returns sandbox id \"680da8647348f20570c93b077942259fb1f89791a670763e27b309e4fb2df2aa\"" Dec 12 18:39:38.865612 containerd[1549]: time="2025-12-12T18:39:38.865567983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:39:38.881069 systemd[1]: Started cri-containerd-477000ed39aa81ba542d8e2c75fcf60d654bc6e42341546ca33fa3b40803006f.scope - libcontainer container 477000ed39aa81ba542d8e2c75fcf60d654bc6e42341546ca33fa3b40803006f. 
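The "CreateContainer within sandbox ... returns container id" and "StartContainer" entries are containerd servicing kubelet's CRI calls over the unix socket. The same pair of calls can be issued directly with the k8s.io/cri-api client; a trimmed sketch follows (the sandbox id is the one returned above, but the container config, including the coredns image tag, is a bare-minimum placeholder that a real runtime would want filled out):

```go
// crisketch.go: drive CreateContainer/StartContainer against containerd's
// CRI endpoint, mirroring the kubelet calls visible in the log. Sketch only.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtime.NewRuntimeServiceClient(conn)

	ctx := context.Background()
	// Sandbox id returned by RunPodSandbox in the log above.
	sandboxID := "020b30619a2060e65ca37196dd4db44df17f2790f678faba87f3e65dddf48a51"

	created, err := rt.CreateContainer(ctx, &runtime.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtime.ContainerConfig{
			Metadata: &runtime.ContainerMetadata{Name: "coredns", Attempt: 0},
			// Hypothetical tag; the log does not show the coredns image ref.
			Image: &runtime.ImageSpec{Image: "registry.k8s.io/coredns/coredns:v1.11.3"},
		},
		SandboxConfig: &runtime.PodSandboxConfig{
			Metadata: &runtime.PodSandboxMetadata{
				Name: "coredns-668d6bf9bc-z92n5", Namespace: "kube-system",
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtime.StartContainerRequest{
		ContainerId: created.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
}
```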
Dec 12 18:39:38.919219 containerd[1549]: time="2025-12-12T18:39:38.918881887Z" level=info msg="StartContainer for \"477000ed39aa81ba542d8e2c75fcf60d654bc6e42341546ca33fa3b40803006f\" returns successfully" Dec 12 18:39:39.007460 containerd[1549]: time="2025-12-12T18:39:39.007391335Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:39:39.008559 containerd[1549]: time="2025-12-12T18:39:39.008502890Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:39:39.008728 containerd[1549]: time="2025-12-12T18:39:39.008650031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:39:39.008806 kubelet[2716]: E1212 18:39:39.008698 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:39:39.008806 kubelet[2716]: E1212 18:39:39.008743 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:39:39.008949 kubelet[2716]: E1212 18:39:39.008854 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gj2nb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ht9b9_calico-system(3e77788b-998e-4510-9ab0-47ab12a2af9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:39:39.011605 containerd[1549]: time="2025-12-12T18:39:39.011535264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:39:39.165544 containerd[1549]: time="2025-12-12T18:39:39.165501509Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:39:39.166450 containerd[1549]: time="2025-12-12T18:39:39.166417993Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:39:39.166633 containerd[1549]: time="2025-12-12T18:39:39.166493444Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:39:39.166732 kubelet[2716]: E1212 18:39:39.166690 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:39:39.166779 kubelet[2716]: E1212 18:39:39.166765 2716 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:39:39.170527 kubelet[2716]: E1212 18:39:39.166899 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gj2nb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ht9b9_calico-system(3e77788b-998e-4510-9ab0-47ab12a2af9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:39:39.170527 kubelet[2716]: E1212 18:39:39.170312 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-ht9b9" podUID="3e77788b-998e-4510-9ab0-47ab12a2af9d" Dec 12 18:39:39.411110 containerd[1549]: time="2025-12-12T18:39:39.411035153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-n6lhs,Uid:80609a1f-fd0a-4b4b-a327-8d66d4e6cb54,Namespace:calico-system,Attempt:0,}" Dec 12 18:39:39.510005 systemd-networkd[1451]: cali5f3f945868f: Link UP Dec 12 18:39:39.510232 systemd-networkd[1451]: cali5f3f945868f: Gained carrier Dec 12 18:39:39.524881 containerd[1549]: 2025-12-12 18:39:39.440 [INFO][4306] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:39:39.524881 containerd[1549]: 2025-12-12 18:39:39.450 [INFO][4306] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--133--204-k8s-goldmane--666569f655--n6lhs-eth0 goldmane-666569f655- calico-system 80609a1f-fd0a-4b4b-a327-8d66d4e6cb54 813 0 2025-12-12 18:39:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-237-133-204 goldmane-666569f655-n6lhs eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5f3f945868f [] [] }} ContainerID="4048cd9fda5b365e3020a6aa56249e8b7af4cc8e0094d370959492f1e278106c" Namespace="calico-system" Pod="goldmane-666569f655-n6lhs" WorkloadEndpoint="172--237--133--204-k8s-goldmane--666569f655--n6lhs-" Dec 12 18:39:39.524881 containerd[1549]: 2025-12-12 18:39:39.450 [INFO][4306] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4048cd9fda5b365e3020a6aa56249e8b7af4cc8e0094d370959492f1e278106c" Namespace="calico-system" Pod="goldmane-666569f655-n6lhs" WorkloadEndpoint="172--237--133--204-k8s-goldmane--666569f655--n6lhs-eth0" Dec 12 18:39:39.524881 containerd[1549]: 2025-12-12 18:39:39.477 [INFO][4319] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4048cd9fda5b365e3020a6aa56249e8b7af4cc8e0094d370959492f1e278106c" HandleID="k8s-pod-network.4048cd9fda5b365e3020a6aa56249e8b7af4cc8e0094d370959492f1e278106c" Workload="172--237--133--204-k8s-goldmane--666569f655--n6lhs-eth0" Dec 12 18:39:39.524881 containerd[1549]: 2025-12-12 18:39:39.477 [INFO][4319] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4048cd9fda5b365e3020a6aa56249e8b7af4cc8e0094d370959492f1e278106c" HandleID="k8s-pod-network.4048cd9fda5b365e3020a6aa56249e8b7af4cc8e0094d370959492f1e278106c" Workload="172--237--133--204-k8s-goldmane--666569f655--n6lhs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-133-204", "pod":"goldmane-666569f655-n6lhs", "timestamp":"2025-12-12 18:39:39.477287128 +0000 UTC"}, Hostname:"172-237-133-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:39:39.524881 containerd[1549]: 2025-12-12 18:39:39.477 [INFO][4319] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:39:39.524881 containerd[1549]: 2025-12-12 18:39:39.477 [INFO][4319] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:39:39.524881 containerd[1549]: 2025-12-12 18:39:39.477 [INFO][4319] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-133-204' Dec 12 18:39:39.524881 containerd[1549]: 2025-12-12 18:39:39.483 [INFO][4319] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4048cd9fda5b365e3020a6aa56249e8b7af4cc8e0094d370959492f1e278106c" host="172-237-133-204" Dec 12 18:39:39.524881 containerd[1549]: 2025-12-12 18:39:39.487 [INFO][4319] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-133-204" Dec 12 18:39:39.524881 containerd[1549]: 2025-12-12 18:39:39.490 [INFO][4319] ipam/ipam.go 511: Trying affinity for 192.168.22.64/26 host="172-237-133-204" Dec 12 18:39:39.524881 containerd[1549]: 2025-12-12 18:39:39.491 [INFO][4319] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.64/26 host="172-237-133-204" Dec 12 18:39:39.524881 containerd[1549]: 2025-12-12 18:39:39.494 [INFO][4319] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.64/26 host="172-237-133-204" Dec 12 18:39:39.524881 containerd[1549]: 2025-12-12 18:39:39.494 [INFO][4319] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.22.64/26 handle="k8s-pod-network.4048cd9fda5b365e3020a6aa56249e8b7af4cc8e0094d370959492f1e278106c" host="172-237-133-204" Dec 12 18:39:39.524881 containerd[1549]: 2025-12-12 18:39:39.495 [INFO][4319] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4048cd9fda5b365e3020a6aa56249e8b7af4cc8e0094d370959492f1e278106c Dec 12 18:39:39.524881 containerd[1549]: 2025-12-12 18:39:39.499 [INFO][4319] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.22.64/26 handle="k8s-pod-network.4048cd9fda5b365e3020a6aa56249e8b7af4cc8e0094d370959492f1e278106c" host="172-237-133-204" Dec 12 18:39:39.524881 containerd[1549]: 2025-12-12 18:39:39.503 [INFO][4319] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.22.68/26] block=192.168.22.64/26 handle="k8s-pod-network.4048cd9fda5b365e3020a6aa56249e8b7af4cc8e0094d370959492f1e278106c" host="172-237-133-204" Dec 12 18:39:39.524881 containerd[1549]: 2025-12-12 18:39:39.503 [INFO][4319] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.68/26] handle="k8s-pod-network.4048cd9fda5b365e3020a6aa56249e8b7af4cc8e0094d370959492f1e278106c" host="172-237-133-204" Dec 12 18:39:39.524881 containerd[1549]: 2025-12-12 18:39:39.503 [INFO][4319] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:39:39.524881 containerd[1549]: 2025-12-12 18:39:39.503 [INFO][4319] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.22.68/26] IPv6=[] ContainerID="4048cd9fda5b365e3020a6aa56249e8b7af4cc8e0094d370959492f1e278106c" HandleID="k8s-pod-network.4048cd9fda5b365e3020a6aa56249e8b7af4cc8e0094d370959492f1e278106c" Workload="172--237--133--204-k8s-goldmane--666569f655--n6lhs-eth0" Dec 12 18:39:39.526278 containerd[1549]: 2025-12-12 18:39:39.505 [INFO][4306] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4048cd9fda5b365e3020a6aa56249e8b7af4cc8e0094d370959492f1e278106c" Namespace="calico-system" Pod="goldmane-666569f655-n6lhs" WorkloadEndpoint="172--237--133--204-k8s-goldmane--666569f655--n6lhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--204-k8s-goldmane--666569f655--n6lhs-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"80609a1f-fd0a-4b4b-a327-8d66d4e6cb54", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-204", ContainerID:"", Pod:"goldmane-666569f655-n6lhs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.22.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5f3f945868f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:39:39.526278 containerd[1549]: 2025-12-12 18:39:39.506 [INFO][4306] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.68/32] ContainerID="4048cd9fda5b365e3020a6aa56249e8b7af4cc8e0094d370959492f1e278106c" Namespace="calico-system" Pod="goldmane-666569f655-n6lhs" WorkloadEndpoint="172--237--133--204-k8s-goldmane--666569f655--n6lhs-eth0" Dec 12 18:39:39.526278 containerd[1549]: 2025-12-12 18:39:39.506 [INFO][4306] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f3f945868f ContainerID="4048cd9fda5b365e3020a6aa56249e8b7af4cc8e0094d370959492f1e278106c" Namespace="calico-system" Pod="goldmane-666569f655-n6lhs" WorkloadEndpoint="172--237--133--204-k8s-goldmane--666569f655--n6lhs-eth0" Dec 12 18:39:39.526278 containerd[1549]: 2025-12-12 18:39:39.509 [INFO][4306] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4048cd9fda5b365e3020a6aa56249e8b7af4cc8e0094d370959492f1e278106c" Namespace="calico-system" Pod="goldmane-666569f655-n6lhs" WorkloadEndpoint="172--237--133--204-k8s-goldmane--666569f655--n6lhs-eth0" Dec 12 18:39:39.526278 containerd[1549]: 2025-12-12 18:39:39.511 [INFO][4306] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4048cd9fda5b365e3020a6aa56249e8b7af4cc8e0094d370959492f1e278106c" Namespace="calico-system" Pod="goldmane-666569f655-n6lhs" 
WorkloadEndpoint="172--237--133--204-k8s-goldmane--666569f655--n6lhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--204-k8s-goldmane--666569f655--n6lhs-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"80609a1f-fd0a-4b4b-a327-8d66d4e6cb54", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-204", ContainerID:"4048cd9fda5b365e3020a6aa56249e8b7af4cc8e0094d370959492f1e278106c", Pod:"goldmane-666569f655-n6lhs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.22.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5f3f945868f", MAC:"da:4e:f5:bb:8e:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:39:39.526278 containerd[1549]: 2025-12-12 18:39:39.521 [INFO][4306] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4048cd9fda5b365e3020a6aa56249e8b7af4cc8e0094d370959492f1e278106c" Namespace="calico-system" Pod="goldmane-666569f655-n6lhs" WorkloadEndpoint="172--237--133--204-k8s-goldmane--666569f655--n6lhs-eth0" Dec 12 18:39:39.544016 containerd[1549]: time="2025-12-12T18:39:39.543956065Z" level=info msg="connecting to shim 4048cd9fda5b365e3020a6aa56249e8b7af4cc8e0094d370959492f1e278106c" address="unix:///run/containerd/s/198d1ddbea7623b4abcae215f9b1635554844080c792856309fc6309e88f8791" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:39:39.583036 systemd[1]: Started cri-containerd-4048cd9fda5b365e3020a6aa56249e8b7af4cc8e0094d370959492f1e278106c.scope - libcontainer container 4048cd9fda5b365e3020a6aa56249e8b7af4cc8e0094d370959492f1e278106c. 
Dec 12 18:39:39.604942 kubelet[2716]: E1212 18:39:39.604593 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ht9b9" podUID="3e77788b-998e-4510-9ab0-47ab12a2af9d" Dec 12 18:39:39.606809 kubelet[2716]: E1212 18:39:39.606709 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:39.632332 kubelet[2716]: I1212 18:39:39.632269 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-z92n5" podStartSLOduration=32.632254248 podStartE2EDuration="32.632254248s" podCreationTimestamp="2025-12-12 18:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:39:39.632054637 +0000 UTC m=+38.348158240" watchObservedRunningTime="2025-12-12 18:39:39.632254248 +0000 UTC m=+38.348357831" Dec 12 18:39:39.670080 containerd[1549]: time="2025-12-12T18:39:39.669898556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-n6lhs,Uid:80609a1f-fd0a-4b4b-a327-8d66d4e6cb54,Namespace:calico-system,Attempt:0,} returns sandbox id \"4048cd9fda5b365e3020a6aa56249e8b7af4cc8e0094d370959492f1e278106c\"" Dec 12 18:39:39.672819 containerd[1549]: time="2025-12-12T18:39:39.672492377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:39:39.799738 containerd[1549]: time="2025-12-12T18:39:39.798903210Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:39:39.800502 containerd[1549]: time="2025-12-12T18:39:39.800343946Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:39:39.800843 containerd[1549]: time="2025-12-12T18:39:39.800412577Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:39:39.801118 kubelet[2716]: E1212 18:39:39.801056 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:39:39.801118 kubelet[2716]: E1212 
18:39:39.801108 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:39:39.801273 kubelet[2716]: E1212 18:39:39.801230 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dns7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-n6lhs_calico-system(80609a1f-fd0a-4b4b-a327-8d66d4e6cb54): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:39:39.802576 kubelet[2716]: E1212 18:39:39.802534 2716 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n6lhs" podUID="80609a1f-fd0a-4b4b-a327-8d66d4e6cb54" Dec 12 18:39:40.411972 containerd[1549]: time="2025-12-12T18:39:40.411785865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5b588865-cwzws,Uid:d8ce7370-ee37-4b30-a101-cbc03d0825dd,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:39:40.513127 systemd-networkd[1451]: calie02c967c2e2: Gained IPv6LL Dec 12 18:39:40.540423 systemd-networkd[1451]: cali5727c73cf15: Link UP Dec 12 18:39:40.541327 systemd-networkd[1451]: cali5727c73cf15: Gained carrier Dec 12 18:39:40.560602 containerd[1549]: 2025-12-12 18:39:40.445 [INFO][4400] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:39:40.560602 containerd[1549]: 2025-12-12 18:39:40.465 [INFO][4400] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--133--204-k8s-calico--apiserver--6d5b588865--cwzws-eth0 calico-apiserver-6d5b588865- calico-apiserver d8ce7370-ee37-4b30-a101-cbc03d0825dd 809 0 2025-12-12 18:39:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d5b588865 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-237-133-204 calico-apiserver-6d5b588865-cwzws eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5727c73cf15 [] [] }} ContainerID="8d30f00c4be97ff78b4fe5427c4444aea3663f888c9cc68597fe0d06e7ee2a4e" Namespace="calico-apiserver" Pod="calico-apiserver-6d5b588865-cwzws" WorkloadEndpoint="172--237--133--204-k8s-calico--apiserver--6d5b588865--cwzws-" Dec 12 18:39:40.560602 containerd[1549]: 2025-12-12 18:39:40.465 [INFO][4400] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8d30f00c4be97ff78b4fe5427c4444aea3663f888c9cc68597fe0d06e7ee2a4e" Namespace="calico-apiserver" Pod="calico-apiserver-6d5b588865-cwzws" WorkloadEndpoint="172--237--133--204-k8s-calico--apiserver--6d5b588865--cwzws-eth0" Dec 12 18:39:40.560602 containerd[1549]: 2025-12-12 18:39:40.492 [INFO][4412] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d30f00c4be97ff78b4fe5427c4444aea3663f888c9cc68597fe0d06e7ee2a4e" HandleID="k8s-pod-network.8d30f00c4be97ff78b4fe5427c4444aea3663f888c9cc68597fe0d06e7ee2a4e" Workload="172--237--133--204-k8s-calico--apiserver--6d5b588865--cwzws-eth0" Dec 12 18:39:40.560602 containerd[1549]: 2025-12-12 18:39:40.492 [INFO][4412] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8d30f00c4be97ff78b4fe5427c4444aea3663f888c9cc68597fe0d06e7ee2a4e" HandleID="k8s-pod-network.8d30f00c4be97ff78b4fe5427c4444aea3663f888c9cc68597fe0d06e7ee2a4e" Workload="172--237--133--204-k8s-calico--apiserver--6d5b588865--cwzws-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-237-133-204", "pod":"calico-apiserver-6d5b588865-cwzws", "timestamp":"2025-12-12 18:39:40.492430122 +0000 UTC"}, Hostname:"172-237-133-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:39:40.560602 containerd[1549]: 2025-12-12 18:39:40.492 [INFO][4412] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:39:40.560602 containerd[1549]: 2025-12-12 18:39:40.492 [INFO][4412] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:39:40.560602 containerd[1549]: 2025-12-12 18:39:40.492 [INFO][4412] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-133-204' Dec 12 18:39:40.560602 containerd[1549]: 2025-12-12 18:39:40.501 [INFO][4412] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8d30f00c4be97ff78b4fe5427c4444aea3663f888c9cc68597fe0d06e7ee2a4e" host="172-237-133-204" Dec 12 18:39:40.560602 containerd[1549]: 2025-12-12 18:39:40.507 [INFO][4412] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-133-204" Dec 12 18:39:40.560602 containerd[1549]: 2025-12-12 18:39:40.514 [INFO][4412] ipam/ipam.go 511: Trying affinity for 192.168.22.64/26 host="172-237-133-204" Dec 12 18:39:40.560602 containerd[1549]: 2025-12-12 18:39:40.516 [INFO][4412] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.64/26 host="172-237-133-204" Dec 12 18:39:40.560602 containerd[1549]: 2025-12-12 18:39:40.519 [INFO][4412] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.64/26 host="172-237-133-204" Dec 12 18:39:40.560602 containerd[1549]: 2025-12-12 18:39:40.519 [INFO][4412] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.22.64/26 handle="k8s-pod-network.8d30f00c4be97ff78b4fe5427c4444aea3663f888c9cc68597fe0d06e7ee2a4e" host="172-237-133-204" Dec 12 18:39:40.560602 containerd[1549]: 2025-12-12 18:39:40.521 [INFO][4412] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8d30f00c4be97ff78b4fe5427c4444aea3663f888c9cc68597fe0d06e7ee2a4e Dec 12 18:39:40.560602 containerd[1549]: 2025-12-12 18:39:40.526 [INFO][4412] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.22.64/26 handle="k8s-pod-network.8d30f00c4be97ff78b4fe5427c4444aea3663f888c9cc68597fe0d06e7ee2a4e" host="172-237-133-204" Dec 12 18:39:40.560602 containerd[1549]: 2025-12-12 18:39:40.530 [INFO][4412] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.22.69/26] block=192.168.22.64/26 handle="k8s-pod-network.8d30f00c4be97ff78b4fe5427c4444aea3663f888c9cc68597fe0d06e7ee2a4e" host="172-237-133-204" Dec 12 18:39:40.560602 containerd[1549]: 2025-12-12 18:39:40.531 [INFO][4412] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.69/26] handle="k8s-pod-network.8d30f00c4be97ff78b4fe5427c4444aea3663f888c9cc68597fe0d06e7ee2a4e" host="172-237-133-204" Dec 12 18:39:40.560602 containerd[1549]: 2025-12-12 18:39:40.531 [INFO][4412] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
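The recurring kubelet dns.go:153 warning reflects the glibc resolver's three-nameserver limit: the node's resolv.conf lists more servers than a resolver will consult, so kubelet trims the list to three (172.232.0.22, 172.232.0.9, 172.232.0.19 in the applied line above) and warns on every pod sync. A sketch of that trim, assuming a plain resolv.conf parser:

```go
// dnslimit.go: keep only the first three nameservers from resolv.conf,
// the behaviour behind kubelet's "Nameserver limits exceeded" warning.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc resolver limit (MAXNS)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded: omitting %v\n", servers[maxNameservers:])
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
```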
Dec 12 18:39:40.560602 containerd[1549]: 2025-12-12 18:39:40.531 [INFO][4412] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.22.69/26] IPv6=[] ContainerID="8d30f00c4be97ff78b4fe5427c4444aea3663f888c9cc68597fe0d06e7ee2a4e" HandleID="k8s-pod-network.8d30f00c4be97ff78b4fe5427c4444aea3663f888c9cc68597fe0d06e7ee2a4e" Workload="172--237--133--204-k8s-calico--apiserver--6d5b588865--cwzws-eth0" Dec 12 18:39:40.562789 containerd[1549]: 2025-12-12 18:39:40.535 [INFO][4400] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8d30f00c4be97ff78b4fe5427c4444aea3663f888c9cc68597fe0d06e7ee2a4e" Namespace="calico-apiserver" Pod="calico-apiserver-6d5b588865-cwzws" WorkloadEndpoint="172--237--133--204-k8s-calico--apiserver--6d5b588865--cwzws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--204-k8s-calico--apiserver--6d5b588865--cwzws-eth0", GenerateName:"calico-apiserver-6d5b588865-", Namespace:"calico-apiserver", SelfLink:"", UID:"d8ce7370-ee37-4b30-a101-cbc03d0825dd", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5b588865", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-204", ContainerID:"", Pod:"calico-apiserver-6d5b588865-cwzws", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5727c73cf15", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:39:40.562789 containerd[1549]: 2025-12-12 18:39:40.535 [INFO][4400] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.69/32] ContainerID="8d30f00c4be97ff78b4fe5427c4444aea3663f888c9cc68597fe0d06e7ee2a4e" Namespace="calico-apiserver" Pod="calico-apiserver-6d5b588865-cwzws" WorkloadEndpoint="172--237--133--204-k8s-calico--apiserver--6d5b588865--cwzws-eth0" Dec 12 18:39:40.562789 containerd[1549]: 2025-12-12 18:39:40.535 [INFO][4400] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5727c73cf15 ContainerID="8d30f00c4be97ff78b4fe5427c4444aea3663f888c9cc68597fe0d06e7ee2a4e" Namespace="calico-apiserver" Pod="calico-apiserver-6d5b588865-cwzws" WorkloadEndpoint="172--237--133--204-k8s-calico--apiserver--6d5b588865--cwzws-eth0" Dec 12 18:39:40.562789 containerd[1549]: 2025-12-12 18:39:40.541 [INFO][4400] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d30f00c4be97ff78b4fe5427c4444aea3663f888c9cc68597fe0d06e7ee2a4e" Namespace="calico-apiserver" Pod="calico-apiserver-6d5b588865-cwzws" WorkloadEndpoint="172--237--133--204-k8s-calico--apiserver--6d5b588865--cwzws-eth0" Dec 12 18:39:40.562789 containerd[1549]: 2025-12-12 18:39:40.542 [INFO][4400] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="8d30f00c4be97ff78b4fe5427c4444aea3663f888c9cc68597fe0d06e7ee2a4e" Namespace="calico-apiserver" Pod="calico-apiserver-6d5b588865-cwzws" WorkloadEndpoint="172--237--133--204-k8s-calico--apiserver--6d5b588865--cwzws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--204-k8s-calico--apiserver--6d5b588865--cwzws-eth0", GenerateName:"calico-apiserver-6d5b588865-", Namespace:"calico-apiserver", SelfLink:"", UID:"d8ce7370-ee37-4b30-a101-cbc03d0825dd", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5b588865", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-204", ContainerID:"8d30f00c4be97ff78b4fe5427c4444aea3663f888c9cc68597fe0d06e7ee2a4e", Pod:"calico-apiserver-6d5b588865-cwzws", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5727c73cf15", MAC:"46:7e:4e:d2:a3:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:39:40.562789 containerd[1549]: 2025-12-12 18:39:40.556 [INFO][4400] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8d30f00c4be97ff78b4fe5427c4444aea3663f888c9cc68597fe0d06e7ee2a4e" Namespace="calico-apiserver" Pod="calico-apiserver-6d5b588865-cwzws" WorkloadEndpoint="172--237--133--204-k8s-calico--apiserver--6d5b588865--cwzws-eth0" Dec 12 18:39:40.589192 containerd[1549]: time="2025-12-12T18:39:40.589141436Z" level=info msg="connecting to shim 8d30f00c4be97ff78b4fe5427c4444aea3663f888c9cc68597fe0d06e7ee2a4e" address="unix:///run/containerd/s/fddfbabab7c2aff13f2983dad96b652b2251568e245ae785715174f9b7b76618" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:39:40.609699 kubelet[2716]: E1212 18:39:40.609656 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:40.613437 kubelet[2716]: E1212 18:39:40.612373 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n6lhs" podUID="80609a1f-fd0a-4b4b-a327-8d66d4e6cb54" Dec 12 18:39:40.613437 kubelet[2716]: E1212 18:39:40.612461 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ht9b9" podUID="3e77788b-998e-4510-9ab0-47ab12a2af9d" Dec 12 18:39:40.633468 systemd[1]: Started cri-containerd-8d30f00c4be97ff78b4fe5427c4444aea3663f888c9cc68597fe0d06e7ee2a4e.scope - libcontainer container 8d30f00c4be97ff78b4fe5427c4444aea3663f888c9cc68597fe0d06e7ee2a4e. Dec 12 18:39:40.702174 systemd-networkd[1451]: cali149ea08b5e3: Gained IPv6LL Dec 12 18:39:40.715960 containerd[1549]: time="2025-12-12T18:39:40.715880835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5b588865-cwzws,Uid:d8ce7370-ee37-4b30-a101-cbc03d0825dd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8d30f00c4be97ff78b4fe5427c4444aea3663f888c9cc68597fe0d06e7ee2a4e\"" Dec 12 18:39:40.718202 containerd[1549]: time="2025-12-12T18:39:40.717818193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:39:40.846772 containerd[1549]: time="2025-12-12T18:39:40.846699031Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:39:40.848104 containerd[1549]: time="2025-12-12T18:39:40.847943746Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:39:40.848104 containerd[1549]: time="2025-12-12T18:39:40.848071287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:39:40.848678 kubelet[2716]: E1212 18:39:40.848569 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:39:40.848678 kubelet[2716]: E1212 18:39:40.848651 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:39:40.849511 kubelet[2716]: E1212 18:39:40.849461 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59jwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d5b588865-cwzws_calico-apiserver(d8ce7370-ee37-4b30-a101-cbc03d0825dd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:39:40.851300 kubelet[2716]: E1212 18:39:40.851249 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-cwzws" podUID="d8ce7370-ee37-4b30-a101-cbc03d0825dd" Dec 12 18:39:40.958082 systemd-networkd[1451]: cali5f3f945868f: Gained IPv6LL Dec 12 18:39:41.413501 containerd[1549]: time="2025-12-12T18:39:41.413414709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7854cf6c79-2dk2d,Uid:b00bfe6d-78a3-4353-8888-bffa785c4bed,Namespace:calico-system,Attempt:0,}" Dec 12 18:39:41.553765 systemd-networkd[1451]: cali29571290186: Link UP Dec 12 18:39:41.555647 systemd-networkd[1451]: cali29571290186: Gained carrier Dec 12 18:39:41.582799 containerd[1549]: 2025-12-12 18:39:41.448 [INFO][4496] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:39:41.582799 containerd[1549]: 2025-12-12 18:39:41.458 [INFO][4496] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--133--204-k8s-calico--kube--controllers--7854cf6c79--2dk2d-eth0 calico-kube-controllers-7854cf6c79- calico-system b00bfe6d-78a3-4353-8888-bffa785c4bed 812 0 2025-12-12 18:39:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7854cf6c79 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-237-133-204 calico-kube-controllers-7854cf6c79-2dk2d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali29571290186 [] [] }} ContainerID="ee0667c8717f68be1271f345a8592f27f5a2f97a9133a1250b1626fd0bd0a190" Namespace="calico-system" Pod="calico-kube-controllers-7854cf6c79-2dk2d" WorkloadEndpoint="172--237--133--204-k8s-calico--kube--controllers--7854cf6c79--2dk2d-" Dec 12 18:39:41.582799 containerd[1549]: 2025-12-12 18:39:41.459 [INFO][4496] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee0667c8717f68be1271f345a8592f27f5a2f97a9133a1250b1626fd0bd0a190" Namespace="calico-system" Pod="calico-kube-controllers-7854cf6c79-2dk2d" WorkloadEndpoint="172--237--133--204-k8s-calico--kube--controllers--7854cf6c79--2dk2d-eth0" Dec 12 18:39:41.582799 containerd[1549]: 2025-12-12 18:39:41.487 [INFO][4508] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee0667c8717f68be1271f345a8592f27f5a2f97a9133a1250b1626fd0bd0a190" HandleID="k8s-pod-network.ee0667c8717f68be1271f345a8592f27f5a2f97a9133a1250b1626fd0bd0a190" Workload="172--237--133--204-k8s-calico--kube--controllers--7854cf6c79--2dk2d-eth0" Dec 12 18:39:41.582799 containerd[1549]: 2025-12-12 18:39:41.488 [INFO][4508] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ee0667c8717f68be1271f345a8592f27f5a2f97a9133a1250b1626fd0bd0a190" HandleID="k8s-pod-network.ee0667c8717f68be1271f345a8592f27f5a2f97a9133a1250b1626fd0bd0a190" Workload="172--237--133--204-k8s-calico--kube--controllers--7854cf6c79--2dk2d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5dc0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-133-204", "pod":"calico-kube-controllers-7854cf6c79-2dk2d", "timestamp":"2025-12-12 18:39:41.487882831 +0000 UTC"}, Hostname:"172-237-133-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:39:41.582799 containerd[1549]: 2025-12-12 18:39:41.488 [INFO][4508] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:39:41.582799 containerd[1549]: 2025-12-12 18:39:41.488 [INFO][4508] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
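Every allocation in this log brackets its work between "About to acquire" and "Released host-wide IPAM lock" messages: the lock serializes the CNI ADD handlers that run concurrently here (plugins [4412], [4508], [4620], and [4622] all contend for it). A sketch of that bracketing pattern using flock(2); the lock-file path and the choice of flock are illustrative assumptions, not Calico's actual mechanism:

package main

import (
	"fmt"
	"os"
	"syscall"
)

// withHostWideLock serializes callers the way the log's acquire/release
// bracket does: one allocation critical section per host at a time.
func withHostWideLock(path string, fn func() error) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	fmt.Println("About to acquire host-wide IPAM lock.")
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		return err
	}
	fmt.Println("Acquired host-wide IPAM lock.")
	defer func() {
		_ = syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
		fmt.Println("Released host-wide IPAM lock.")
	}()
	return fn()
}

func main() {
	_ = withHostWideLock("/tmp/host-ipam.lock", func() error {
		// block load + claim + write happens here, one handler at a time
		return nil
	})
}

The serialization is visible later in this log: handler [4620] logs "About to acquire" at 18:39:42.517 but "Acquired" only at 18:39:42.569, while handler [4622] was mid-allocation.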
Dec 12 18:39:41.582799 containerd[1549]: 2025-12-12 18:39:41.488 [INFO][4508] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-133-204' Dec 12 18:39:41.582799 containerd[1549]: 2025-12-12 18:39:41.498 [INFO][4508] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee0667c8717f68be1271f345a8592f27f5a2f97a9133a1250b1626fd0bd0a190" host="172-237-133-204" Dec 12 18:39:41.582799 containerd[1549]: 2025-12-12 18:39:41.503 [INFO][4508] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-133-204" Dec 12 18:39:41.582799 containerd[1549]: 2025-12-12 18:39:41.529 [INFO][4508] ipam/ipam.go 511: Trying affinity for 192.168.22.64/26 host="172-237-133-204" Dec 12 18:39:41.582799 containerd[1549]: 2025-12-12 18:39:41.531 [INFO][4508] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.64/26 host="172-237-133-204" Dec 12 18:39:41.582799 containerd[1549]: 2025-12-12 18:39:41.534 [INFO][4508] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.64/26 host="172-237-133-204" Dec 12 18:39:41.582799 containerd[1549]: 2025-12-12 18:39:41.534 [INFO][4508] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.22.64/26 handle="k8s-pod-network.ee0667c8717f68be1271f345a8592f27f5a2f97a9133a1250b1626fd0bd0a190" host="172-237-133-204" Dec 12 18:39:41.582799 containerd[1549]: 2025-12-12 18:39:41.536 [INFO][4508] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ee0667c8717f68be1271f345a8592f27f5a2f97a9133a1250b1626fd0bd0a190 Dec 12 18:39:41.582799 containerd[1549]: 2025-12-12 18:39:41.540 [INFO][4508] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.22.64/26 handle="k8s-pod-network.ee0667c8717f68be1271f345a8592f27f5a2f97a9133a1250b1626fd0bd0a190" host="172-237-133-204" Dec 12 18:39:41.582799 containerd[1549]: 2025-12-12 18:39:41.546 [INFO][4508] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.22.70/26] block=192.168.22.64/26 handle="k8s-pod-network.ee0667c8717f68be1271f345a8592f27f5a2f97a9133a1250b1626fd0bd0a190" host="172-237-133-204" Dec 12 18:39:41.582799 containerd[1549]: 2025-12-12 18:39:41.546 [INFO][4508] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.70/26] handle="k8s-pod-network.ee0667c8717f68be1271f345a8592f27f5a2f97a9133a1250b1626fd0bd0a190" host="172-237-133-204" Dec 12 18:39:41.582799 containerd[1549]: 2025-12-12 18:39:41.546 [INFO][4508] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
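The recurring kubelet dns.go:153 "Nameserver limits exceeded" warnings interleaved with these events have a simple cause: glibc's resolver honors at most three nameserver entries in resolv.conf (MAXNS = 3), so kubelet truncates the list it applies to pods and warns, keeping 172.232.0.22, 172.232.0.9, and 172.232.0.19 here. A short Go sketch of that truncation; the parsing is simplified and the fourth server in the example is a hypothetical stand-in, since the log does not say which entry was dropped:

package main

import (
	"fmt"
	"strings"
)

// glibc's resolver reads at most three nameserver lines (MAXNS = 3),
// so anything beyond the third is dead weight and kubelet trims it.
const maxNS = 3

func applyNameserverLimit(ns []string) (applied []string, truncated bool) {
	if len(ns) <= maxNS {
		return ns, false
	}
	return ns[:maxNS], true
}

func main() {
	// First three are the servers kept in the log; 192.0.2.1 is a
	// hypothetical stand-in for whatever entry was actually omitted.
	upstream := []string{"172.232.0.22", "172.232.0.9", "172.232.0.19", "192.0.2.1"}
	if applied, truncated := applyNameserverLimit(upstream); truncated {
		fmt.Println("applied nameserver line:", strings.Join(applied, " "))
	}
}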
Dec 12 18:39:41.582799 containerd[1549]: 2025-12-12 18:39:41.547 [INFO][4508] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.22.70/26] IPv6=[] ContainerID="ee0667c8717f68be1271f345a8592f27f5a2f97a9133a1250b1626fd0bd0a190" HandleID="k8s-pod-network.ee0667c8717f68be1271f345a8592f27f5a2f97a9133a1250b1626fd0bd0a190" Workload="172--237--133--204-k8s-calico--kube--controllers--7854cf6c79--2dk2d-eth0" Dec 12 18:39:41.584265 containerd[1549]: 2025-12-12 18:39:41.550 [INFO][4496] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee0667c8717f68be1271f345a8592f27f5a2f97a9133a1250b1626fd0bd0a190" Namespace="calico-system" Pod="calico-kube-controllers-7854cf6c79-2dk2d" WorkloadEndpoint="172--237--133--204-k8s-calico--kube--controllers--7854cf6c79--2dk2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--204-k8s-calico--kube--controllers--7854cf6c79--2dk2d-eth0", GenerateName:"calico-kube-controllers-7854cf6c79-", Namespace:"calico-system", SelfLink:"", UID:"b00bfe6d-78a3-4353-8888-bffa785c4bed", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7854cf6c79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-204", ContainerID:"", Pod:"calico-kube-controllers-7854cf6c79-2dk2d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali29571290186", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:39:41.584265 containerd[1549]: 2025-12-12 18:39:41.550 [INFO][4496] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.70/32] ContainerID="ee0667c8717f68be1271f345a8592f27f5a2f97a9133a1250b1626fd0bd0a190" Namespace="calico-system" Pod="calico-kube-controllers-7854cf6c79-2dk2d" WorkloadEndpoint="172--237--133--204-k8s-calico--kube--controllers--7854cf6c79--2dk2d-eth0" Dec 12 18:39:41.584265 containerd[1549]: 2025-12-12 18:39:41.550 [INFO][4496] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29571290186 ContainerID="ee0667c8717f68be1271f345a8592f27f5a2f97a9133a1250b1626fd0bd0a190" Namespace="calico-system" Pod="calico-kube-controllers-7854cf6c79-2dk2d" WorkloadEndpoint="172--237--133--204-k8s-calico--kube--controllers--7854cf6c79--2dk2d-eth0" Dec 12 18:39:41.584265 containerd[1549]: 2025-12-12 18:39:41.558 [INFO][4496] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee0667c8717f68be1271f345a8592f27f5a2f97a9133a1250b1626fd0bd0a190" Namespace="calico-system" Pod="calico-kube-controllers-7854cf6c79-2dk2d" WorkloadEndpoint="172--237--133--204-k8s-calico--kube--controllers--7854cf6c79--2dk2d-eth0" Dec 12 18:39:41.584265 containerd[1549]: 2025-12-12 
18:39:41.564 [INFO][4496] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee0667c8717f68be1271f345a8592f27f5a2f97a9133a1250b1626fd0bd0a190" Namespace="calico-system" Pod="calico-kube-controllers-7854cf6c79-2dk2d" WorkloadEndpoint="172--237--133--204-k8s-calico--kube--controllers--7854cf6c79--2dk2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--204-k8s-calico--kube--controllers--7854cf6c79--2dk2d-eth0", GenerateName:"calico-kube-controllers-7854cf6c79-", Namespace:"calico-system", SelfLink:"", UID:"b00bfe6d-78a3-4353-8888-bffa785c4bed", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7854cf6c79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-204", ContainerID:"ee0667c8717f68be1271f345a8592f27f5a2f97a9133a1250b1626fd0bd0a190", Pod:"calico-kube-controllers-7854cf6c79-2dk2d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali29571290186", MAC:"3e:39:18:9e:a5:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:39:41.584265 containerd[1549]: 2025-12-12 18:39:41.575 [INFO][4496] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee0667c8717f68be1271f345a8592f27f5a2f97a9133a1250b1626fd0bd0a190" Namespace="calico-system" Pod="calico-kube-controllers-7854cf6c79-2dk2d" WorkloadEndpoint="172--237--133--204-k8s-calico--kube--controllers--7854cf6c79--2dk2d-eth0" Dec 12 18:39:41.611484 containerd[1549]: time="2025-12-12T18:39:41.611402974Z" level=info msg="connecting to shim ee0667c8717f68be1271f345a8592f27f5a2f97a9133a1250b1626fd0bd0a190" address="unix:///run/containerd/s/7faac33af4dabe363c9e54f98b69f65b1ed8f46ed6279530c12abebffd13755a" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:39:41.615963 kubelet[2716]: E1212 18:39:41.615331 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:41.622687 kubelet[2716]: E1212 18:39:41.622646 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n6lhs" podUID="80609a1f-fd0a-4b4b-a327-8d66d4e6cb54" Dec 12 
18:39:41.622786 kubelet[2716]: E1212 18:39:41.622589 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-cwzws" podUID="d8ce7370-ee37-4b30-a101-cbc03d0825dd" Dec 12 18:39:41.678067 systemd[1]: Started cri-containerd-ee0667c8717f68be1271f345a8592f27f5a2f97a9133a1250b1626fd0bd0a190.scope - libcontainer container ee0667c8717f68be1271f345a8592f27f5a2f97a9133a1250b1626fd0bd0a190. Dec 12 18:39:41.746549 containerd[1549]: time="2025-12-12T18:39:41.746501603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7854cf6c79-2dk2d,Uid:b00bfe6d-78a3-4353-8888-bffa785c4bed,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee0667c8717f68be1271f345a8592f27f5a2f97a9133a1250b1626fd0bd0a190\"" Dec 12 18:39:41.750049 containerd[1549]: time="2025-12-12T18:39:41.750019087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:39:41.884355 containerd[1549]: time="2025-12-12T18:39:41.884295293Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:39:41.885262 containerd[1549]: time="2025-12-12T18:39:41.885216196Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:39:41.885324 containerd[1549]: time="2025-12-12T18:39:41.885284337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:39:41.885471 kubelet[2716]: E1212 18:39:41.885430 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:39:41.885544 kubelet[2716]: E1212 18:39:41.885480 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:39:41.885655 kubelet[2716]: E1212 18:39:41.885593 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7m7fx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7854cf6c79-2dk2d_calico-system(b00bfe6d-78a3-4353-8888-bffa785c4bed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:39:41.887942 kubelet[2716]: E1212 18:39:41.887105 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7854cf6c79-2dk2d" podUID="b00bfe6d-78a3-4353-8888-bffa785c4bed" Dec 12 18:39:42.302233 systemd-networkd[1451]: cali5727c73cf15: Gained IPv6LL Dec 12 18:39:42.411403 
kubelet[2716]: E1212 18:39:42.411355 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:42.412615 containerd[1549]: time="2025-12-12T18:39:42.411908487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5b588865-bth42,Uid:ca927433-737e-4f41-bcbf-8431c7f3c6dc,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:39:42.413519 containerd[1549]: time="2025-12-12T18:39:42.413486853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-69hbg,Uid:0b392157-ffe0-4a43-aecf-43a2cfd0c8c5,Namespace:kube-system,Attempt:0,}" Dec 12 18:39:42.580415 systemd-networkd[1451]: calib4c8bbc148d: Link UP Dec 12 18:39:42.584221 systemd-networkd[1451]: calib4c8bbc148d: Gained carrier Dec 12 18:39:42.603577 containerd[1549]: 2025-12-12 18:39:42.456 [INFO][4598] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:39:42.603577 containerd[1549]: 2025-12-12 18:39:42.471 [INFO][4598] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--133--204-k8s-coredns--668d6bf9bc--69hbg-eth0 coredns-668d6bf9bc- kube-system 0b392157-ffe0-4a43-aecf-43a2cfd0c8c5 808 0 2025-12-12 18:39:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-237-133-204 coredns-668d6bf9bc-69hbg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib4c8bbc148d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92" Namespace="kube-system" Pod="coredns-668d6bf9bc-69hbg" WorkloadEndpoint="172--237--133--204-k8s-coredns--668d6bf9bc--69hbg-" Dec 12 18:39:42.603577 containerd[1549]: 2025-12-12 18:39:42.471 [INFO][4598] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92" Namespace="kube-system" Pod="coredns-668d6bf9bc-69hbg" WorkloadEndpoint="172--237--133--204-k8s-coredns--668d6bf9bc--69hbg-eth0" Dec 12 18:39:42.603577 containerd[1549]: 2025-12-12 18:39:42.512 [INFO][4622] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92" HandleID="k8s-pod-network.c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92" Workload="172--237--133--204-k8s-coredns--668d6bf9bc--69hbg-eth0" Dec 12 18:39:42.603577 containerd[1549]: 2025-12-12 18:39:42.514 [INFO][4622] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92" HandleID="k8s-pod-network.c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92" Workload="172--237--133--204-k8s-coredns--668d6bf9bc--69hbg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-237-133-204", "pod":"coredns-668d6bf9bc-69hbg", "timestamp":"2025-12-12 18:39:42.512795498 +0000 UTC"}, Hostname:"172-237-133-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:39:42.603577 containerd[1549]: 2025-12-12 
18:39:42.514 [INFO][4622] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:39:42.603577 containerd[1549]: 2025-12-12 18:39:42.514 [INFO][4622] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:39:42.603577 containerd[1549]: 2025-12-12 18:39:42.514 [INFO][4622] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-133-204' Dec 12 18:39:42.603577 containerd[1549]: 2025-12-12 18:39:42.521 [INFO][4622] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92" host="172-237-133-204" Dec 12 18:39:42.603577 containerd[1549]: 2025-12-12 18:39:42.526 [INFO][4622] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-133-204" Dec 12 18:39:42.603577 containerd[1549]: 2025-12-12 18:39:42.547 [INFO][4622] ipam/ipam.go 511: Trying affinity for 192.168.22.64/26 host="172-237-133-204" Dec 12 18:39:42.603577 containerd[1549]: 2025-12-12 18:39:42.549 [INFO][4622] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.64/26 host="172-237-133-204" Dec 12 18:39:42.603577 containerd[1549]: 2025-12-12 18:39:42.550 [INFO][4622] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.64/26 host="172-237-133-204" Dec 12 18:39:42.603577 containerd[1549]: 2025-12-12 18:39:42.551 [INFO][4622] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.22.64/26 handle="k8s-pod-network.c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92" host="172-237-133-204" Dec 12 18:39:42.603577 containerd[1549]: 2025-12-12 18:39:42.554 [INFO][4622] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92 Dec 12 18:39:42.603577 containerd[1549]: 2025-12-12 18:39:42.564 [INFO][4622] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.22.64/26 handle="k8s-pod-network.c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92" host="172-237-133-204" Dec 12 18:39:42.603577 containerd[1549]: 2025-12-12 18:39:42.569 [INFO][4622] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.22.71/26] block=192.168.22.64/26 handle="k8s-pod-network.c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92" host="172-237-133-204" Dec 12 18:39:42.603577 containerd[1549]: 2025-12-12 18:39:42.569 [INFO][4622] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.71/26] handle="k8s-pod-network.c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92" host="172-237-133-204" Dec 12 18:39:42.603577 containerd[1549]: 2025-12-12 18:39:42.570 [INFO][4622] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
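The pull failures threaded through these events all follow one chain: ghcr.io answers 404 Not Found for the flatcar/calico images at tag v3.30.4, containerd surfaces that as a NotFound RPC error, kubelet records ErrImagePull on the first attempt, and subsequent retries land in ImagePullBackOff. The retry cadence is an exponential back-off; the specific values below (starting near 10s, doubling, capped near 5 minutes) are assumed from kubelet defaults and the sketch is illustrative only, not kubelet's implementation:

package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial  = 10 * time.Second // assumed initial back-off
		maxDelay = 5 * time.Minute  // assumed cap
	)
	delay := initial
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("pull attempt %d failed (404 not found); back off %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

This is why the same pod alternates between ErrImagePull and ImagePullBackOff entries in this log: the first message is the fresh failure, the second is kubelet refusing to retry until the back-off window expires.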
Dec 12 18:39:42.603577 containerd[1549]: 2025-12-12 18:39:42.570 [INFO][4622] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.22.71/26] IPv6=[] ContainerID="c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92" HandleID="k8s-pod-network.c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92" Workload="172--237--133--204-k8s-coredns--668d6bf9bc--69hbg-eth0" Dec 12 18:39:42.605427 containerd[1549]: 2025-12-12 18:39:42.573 [INFO][4598] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92" Namespace="kube-system" Pod="coredns-668d6bf9bc-69hbg" WorkloadEndpoint="172--237--133--204-k8s-coredns--668d6bf9bc--69hbg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--204-k8s-coredns--668d6bf9bc--69hbg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0b392157-ffe0-4a43-aecf-43a2cfd0c8c5", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-204", ContainerID:"", Pod:"coredns-668d6bf9bc-69hbg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4c8bbc148d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:39:42.605427 containerd[1549]: 2025-12-12 18:39:42.574 [INFO][4598] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.71/32] ContainerID="c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92" Namespace="kube-system" Pod="coredns-668d6bf9bc-69hbg" WorkloadEndpoint="172--237--133--204-k8s-coredns--668d6bf9bc--69hbg-eth0" Dec 12 18:39:42.605427 containerd[1549]: 2025-12-12 18:39:42.574 [INFO][4598] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4c8bbc148d ContainerID="c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92" Namespace="kube-system" Pod="coredns-668d6bf9bc-69hbg" WorkloadEndpoint="172--237--133--204-k8s-coredns--668d6bf9bc--69hbg-eth0" Dec 12 18:39:42.605427 containerd[1549]: 2025-12-12 18:39:42.589 [INFO][4598] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92" Namespace="kube-system" Pod="coredns-668d6bf9bc-69hbg" 
WorkloadEndpoint="172--237--133--204-k8s-coredns--668d6bf9bc--69hbg-eth0" Dec 12 18:39:42.605427 containerd[1549]: 2025-12-12 18:39:42.590 [INFO][4598] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92" Namespace="kube-system" Pod="coredns-668d6bf9bc-69hbg" WorkloadEndpoint="172--237--133--204-k8s-coredns--668d6bf9bc--69hbg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--204-k8s-coredns--668d6bf9bc--69hbg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0b392157-ffe0-4a43-aecf-43a2cfd0c8c5", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-204", ContainerID:"c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92", Pod:"coredns-668d6bf9bc-69hbg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4c8bbc148d", MAC:"ce:de:ab:38:a8:ce", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:39:42.605427 containerd[1549]: 2025-12-12 18:39:42.599 [INFO][4598] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92" Namespace="kube-system" Pod="coredns-668d6bf9bc-69hbg" WorkloadEndpoint="172--237--133--204-k8s-coredns--668d6bf9bc--69hbg-eth0" Dec 12 18:39:42.629785 kubelet[2716]: E1212 18:39:42.629503 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7854cf6c79-2dk2d" podUID="b00bfe6d-78a3-4353-8888-bffa785c4bed" Dec 12 18:39:42.629785 kubelet[2716]: E1212 18:39:42.629506 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-cwzws" podUID="d8ce7370-ee37-4b30-a101-cbc03d0825dd" Dec 12 18:39:42.641962 containerd[1549]: time="2025-12-12T18:39:42.640632056Z" level=info msg="connecting to shim c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92" address="unix:///run/containerd/s/3edc1d02c15878d18a01f10eb3d8f2b798311be9996781eff832f0d32ab3b7bf" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:39:42.686109 systemd-networkd[1451]: cali29571290186: Gained IPv6LL Dec 12 18:39:42.688507 systemd[1]: Started cri-containerd-c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92.scope - libcontainer container c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92. Dec 12 18:39:42.719423 systemd-networkd[1451]: calie70b7c59e0a: Link UP Dec 12 18:39:42.722208 systemd-networkd[1451]: calie70b7c59e0a: Gained carrier Dec 12 18:39:42.741607 containerd[1549]: 2025-12-12 18:39:42.452 [INFO][4592] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:39:42.741607 containerd[1549]: 2025-12-12 18:39:42.465 [INFO][4592] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--133--204-k8s-calico--apiserver--6d5b588865--bth42-eth0 calico-apiserver-6d5b588865- calico-apiserver ca927433-737e-4f41-bcbf-8431c7f3c6dc 810 0 2025-12-12 18:39:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d5b588865 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-237-133-204 calico-apiserver-6d5b588865-bth42 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie70b7c59e0a [] [] }} ContainerID="6bcf0f26917cba9149987944123a9727cfabfa5fb412fbe46c6ae0d1bc77070c" Namespace="calico-apiserver" Pod="calico-apiserver-6d5b588865-bth42" WorkloadEndpoint="172--237--133--204-k8s-calico--apiserver--6d5b588865--bth42-" Dec 12 18:39:42.741607 containerd[1549]: 2025-12-12 18:39:42.465 [INFO][4592] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6bcf0f26917cba9149987944123a9727cfabfa5fb412fbe46c6ae0d1bc77070c" Namespace="calico-apiserver" Pod="calico-apiserver-6d5b588865-bth42" WorkloadEndpoint="172--237--133--204-k8s-calico--apiserver--6d5b588865--bth42-eth0" Dec 12 18:39:42.741607 containerd[1549]: 2025-12-12 18:39:42.514 [INFO][4620] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6bcf0f26917cba9149987944123a9727cfabfa5fb412fbe46c6ae0d1bc77070c" HandleID="k8s-pod-network.6bcf0f26917cba9149987944123a9727cfabfa5fb412fbe46c6ae0d1bc77070c" Workload="172--237--133--204-k8s-calico--apiserver--6d5b588865--bth42-eth0" Dec 12 18:39:42.741607 containerd[1549]: 2025-12-12 18:39:42.515 [INFO][4620] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6bcf0f26917cba9149987944123a9727cfabfa5fb412fbe46c6ae0d1bc77070c" HandleID="k8s-pod-network.6bcf0f26917cba9149987944123a9727cfabfa5fb412fbe46c6ae0d1bc77070c" Workload="172--237--133--204-k8s-calico--apiserver--6d5b588865--bth42-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-237-133-204", "pod":"calico-apiserver-6d5b588865-bth42", "timestamp":"2025-12-12 18:39:42.514743495 +0000 UTC"}, Hostname:"172-237-133-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:39:42.741607 containerd[1549]: 2025-12-12 18:39:42.517 [INFO][4620] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:39:42.741607 containerd[1549]: 2025-12-12 18:39:42.569 [INFO][4620] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:39:42.741607 containerd[1549]: 2025-12-12 18:39:42.570 [INFO][4620] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-133-204' Dec 12 18:39:42.741607 containerd[1549]: 2025-12-12 18:39:42.627 [INFO][4620] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6bcf0f26917cba9149987944123a9727cfabfa5fb412fbe46c6ae0d1bc77070c" host="172-237-133-204" Dec 12 18:39:42.741607 containerd[1549]: 2025-12-12 18:39:42.651 [INFO][4620] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-133-204" Dec 12 18:39:42.741607 containerd[1549]: 2025-12-12 18:39:42.669 [INFO][4620] ipam/ipam.go 511: Trying affinity for 192.168.22.64/26 host="172-237-133-204" Dec 12 18:39:42.741607 containerd[1549]: 2025-12-12 18:39:42.676 [INFO][4620] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.64/26 host="172-237-133-204" Dec 12 18:39:42.741607 containerd[1549]: 2025-12-12 18:39:42.681 [INFO][4620] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.64/26 host="172-237-133-204" Dec 12 18:39:42.741607 containerd[1549]: 2025-12-12 18:39:42.681 [INFO][4620] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.22.64/26 handle="k8s-pod-network.6bcf0f26917cba9149987944123a9727cfabfa5fb412fbe46c6ae0d1bc77070c" host="172-237-133-204" Dec 12 18:39:42.741607 containerd[1549]: 2025-12-12 18:39:42.684 [INFO][4620] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6bcf0f26917cba9149987944123a9727cfabfa5fb412fbe46c6ae0d1bc77070c Dec 12 18:39:42.741607 containerd[1549]: 2025-12-12 18:39:42.693 [INFO][4620] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.22.64/26 handle="k8s-pod-network.6bcf0f26917cba9149987944123a9727cfabfa5fb412fbe46c6ae0d1bc77070c" host="172-237-133-204" Dec 12 18:39:42.741607 containerd[1549]: 2025-12-12 18:39:42.701 [INFO][4620] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.22.72/26] block=192.168.22.64/26 handle="k8s-pod-network.6bcf0f26917cba9149987944123a9727cfabfa5fb412fbe46c6ae0d1bc77070c" host="172-237-133-204" Dec 12 18:39:42.741607 containerd[1549]: 2025-12-12 18:39:42.701 [INFO][4620] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.72/26] handle="k8s-pod-network.6bcf0f26917cba9149987944123a9727cfabfa5fb412fbe46c6ae0d1bc77070c" host="172-237-133-204" Dec 12 18:39:42.741607 containerd[1549]: 2025-12-12 18:39:42.701 [INFO][4620] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
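One readability trap in the coredns endpoint dumps above: the Go-syntax (%#v-style) formatting prints unsigned fields as hex literals, so Port:0x35 is DNS on 53 (the dns and dns-tcp entries) and Port:0x23c1 is the coredns metrics port 9153. A two-line check:

package main

import "fmt"

func main() {
	// Decode the hex port literals from the WorkloadEndpointPort dumps.
	fmt.Println(0x35, 0x23c1) // 53 9153: DNS (dns/dns-tcp) and coredns metrics
}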
Dec 12 18:39:42.741607 containerd[1549]: 2025-12-12 18:39:42.701 [INFO][4620] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.22.72/26] IPv6=[] ContainerID="6bcf0f26917cba9149987944123a9727cfabfa5fb412fbe46c6ae0d1bc77070c" HandleID="k8s-pod-network.6bcf0f26917cba9149987944123a9727cfabfa5fb412fbe46c6ae0d1bc77070c" Workload="172--237--133--204-k8s-calico--apiserver--6d5b588865--bth42-eth0" Dec 12 18:39:42.742403 containerd[1549]: 2025-12-12 18:39:42.707 [INFO][4592] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6bcf0f26917cba9149987944123a9727cfabfa5fb412fbe46c6ae0d1bc77070c" Namespace="calico-apiserver" Pod="calico-apiserver-6d5b588865-bth42" WorkloadEndpoint="172--237--133--204-k8s-calico--apiserver--6d5b588865--bth42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--204-k8s-calico--apiserver--6d5b588865--bth42-eth0", GenerateName:"calico-apiserver-6d5b588865-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca927433-737e-4f41-bcbf-8431c7f3c6dc", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5b588865", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-204", ContainerID:"", Pod:"calico-apiserver-6d5b588865-bth42", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie70b7c59e0a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:39:42.742403 containerd[1549]: 2025-12-12 18:39:42.709 [INFO][4592] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.72/32] ContainerID="6bcf0f26917cba9149987944123a9727cfabfa5fb412fbe46c6ae0d1bc77070c" Namespace="calico-apiserver" Pod="calico-apiserver-6d5b588865-bth42" WorkloadEndpoint="172--237--133--204-k8s-calico--apiserver--6d5b588865--bth42-eth0" Dec 12 18:39:42.742403 containerd[1549]: 2025-12-12 18:39:42.709 [INFO][4592] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie70b7c59e0a ContainerID="6bcf0f26917cba9149987944123a9727cfabfa5fb412fbe46c6ae0d1bc77070c" Namespace="calico-apiserver" Pod="calico-apiserver-6d5b588865-bth42" WorkloadEndpoint="172--237--133--204-k8s-calico--apiserver--6d5b588865--bth42-eth0" Dec 12 18:39:42.742403 containerd[1549]: 2025-12-12 18:39:42.722 [INFO][4592] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6bcf0f26917cba9149987944123a9727cfabfa5fb412fbe46c6ae0d1bc77070c" Namespace="calico-apiserver" Pod="calico-apiserver-6d5b588865-bth42" WorkloadEndpoint="172--237--133--204-k8s-calico--apiserver--6d5b588865--bth42-eth0" Dec 12 18:39:42.742403 containerd[1549]: 2025-12-12 18:39:42.723 [INFO][4592] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="6bcf0f26917cba9149987944123a9727cfabfa5fb412fbe46c6ae0d1bc77070c" Namespace="calico-apiserver" Pod="calico-apiserver-6d5b588865-bth42" WorkloadEndpoint="172--237--133--204-k8s-calico--apiserver--6d5b588865--bth42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--133--204-k8s-calico--apiserver--6d5b588865--bth42-eth0", GenerateName:"calico-apiserver-6d5b588865-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca927433-737e-4f41-bcbf-8431c7f3c6dc", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 39, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5b588865", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-133-204", ContainerID:"6bcf0f26917cba9149987944123a9727cfabfa5fb412fbe46c6ae0d1bc77070c", Pod:"calico-apiserver-6d5b588865-bth42", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie70b7c59e0a", MAC:"56:35:f1:23:d6:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:39:42.742403 containerd[1549]: 2025-12-12 18:39:42.735 [INFO][4592] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6bcf0f26917cba9149987944123a9727cfabfa5fb412fbe46c6ae0d1bc77070c" Namespace="calico-apiserver" Pod="calico-apiserver-6d5b588865-bth42" WorkloadEndpoint="172--237--133--204-k8s-calico--apiserver--6d5b588865--bth42-eth0" Dec 12 18:39:42.773956 containerd[1549]: time="2025-12-12T18:39:42.773760945Z" level=info msg="connecting to shim 6bcf0f26917cba9149987944123a9727cfabfa5fb412fbe46c6ae0d1bc77070c" address="unix:///run/containerd/s/4e8cd65d87ef405c9bf198981e08bb3c88aac071443f84d26bc6f3b1d3f2b90c" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:39:42.790955 containerd[1549]: time="2025-12-12T18:39:42.790879068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-69hbg,Uid:0b392157-ffe0-4a43-aecf-43a2cfd0c8c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92\"" Dec 12 18:39:42.794093 kubelet[2716]: E1212 18:39:42.794069 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:42.799568 containerd[1549]: time="2025-12-12T18:39:42.799547120Z" level=info msg="CreateContainer within sandbox \"c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:39:42.811241 containerd[1549]: time="2025-12-12T18:39:42.811205042Z" level=info msg="Container e943864af6b87a23e75bcaffdb35eb1f67fdc3f222ec06cabdae078845913972: CDI 
devices from CRI Config.CDIDevices: []" Dec 12 18:39:42.821531 systemd[1]: Started cri-containerd-6bcf0f26917cba9149987944123a9727cfabfa5fb412fbe46c6ae0d1bc77070c.scope - libcontainer container 6bcf0f26917cba9149987944123a9727cfabfa5fb412fbe46c6ae0d1bc77070c. Dec 12 18:39:42.827996 containerd[1549]: time="2025-12-12T18:39:42.827894304Z" level=info msg="CreateContainer within sandbox \"c6a143eb7f4696e0afc5ee67f227079a7e56f9178e02f7cfb774721c55563d92\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e943864af6b87a23e75bcaffdb35eb1f67fdc3f222ec06cabdae078845913972\"" Dec 12 18:39:42.835302 containerd[1549]: time="2025-12-12T18:39:42.832516471Z" level=info msg="StartContainer for \"e943864af6b87a23e75bcaffdb35eb1f67fdc3f222ec06cabdae078845913972\"" Dec 12 18:39:42.835748 containerd[1549]: time="2025-12-12T18:39:42.835712202Z" level=info msg="connecting to shim e943864af6b87a23e75bcaffdb35eb1f67fdc3f222ec06cabdae078845913972" address="unix:///run/containerd/s/3edc1d02c15878d18a01f10eb3d8f2b798311be9996781eff832f0d32ab3b7bf" protocol=ttrpc version=3 Dec 12 18:39:42.869082 systemd[1]: Started cri-containerd-e943864af6b87a23e75bcaffdb35eb1f67fdc3f222ec06cabdae078845913972.scope - libcontainer container e943864af6b87a23e75bcaffdb35eb1f67fdc3f222ec06cabdae078845913972. Dec 12 18:39:42.933842 containerd[1549]: time="2025-12-12T18:39:42.933785282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5b588865-bth42,Uid:ca927433-737e-4f41-bcbf-8431c7f3c6dc,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6bcf0f26917cba9149987944123a9727cfabfa5fb412fbe46c6ae0d1bc77070c\"" Dec 12 18:39:42.934659 containerd[1549]: time="2025-12-12T18:39:42.934508065Z" level=info msg="StartContainer for \"e943864af6b87a23e75bcaffdb35eb1f67fdc3f222ec06cabdae078845913972\" returns successfully" Dec 12 18:39:42.938875 containerd[1549]: time="2025-12-12T18:39:42.938824571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:39:43.073881 containerd[1549]: time="2025-12-12T18:39:43.073678429Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:39:43.076012 containerd[1549]: time="2025-12-12T18:39:43.074553192Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:39:43.076612 containerd[1549]: time="2025-12-12T18:39:43.076310338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:39:43.076797 kubelet[2716]: E1212 18:39:43.076706 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:39:43.077376 kubelet[2716]: E1212 18:39:43.077336 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
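An aside on the ipam_plugin and WorkloadEndpoint entries earlier in this sequence: the IPAM plugin reports the allocation block as 192.168.22.72/26, while the endpoint itself records IPNetworks of 192.168.22.72/32. A minimal Go sketch (illustrative only, not Calico's code) using net/netip to show how the /32 pod address relates to its /26 block:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Values taken from the ipam_plugin lines above: the block prefix is
	// reported as 192.168.22.72/26 and the endpoint IP as 192.168.22.72.
	block := netip.MustParsePrefix("192.168.22.72/26")
	addr := netip.MustParseAddr("192.168.22.72")

	// Masked() canonicalizes the block: the /26 containing .72 is .64/26,
	// covering 192.168.22.64-192.168.22.127.
	fmt.Println(block.Masked())       // 192.168.22.64/26
	fmt.Println(block.Contains(addr)) // true: the pod IP falls in the block
}
```

Recording a single /32 per endpoint is consistent with the per-pod host routes set up over the calie70b7c59e0a veth named in the same entries.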
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:39:43.077615 kubelet[2716]: E1212 18:39:43.077557 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xm6rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d5b588865-bth42_calico-apiserver(ca927433-737e-4f41-bcbf-8431c7f3c6dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:39:43.078933 kubelet[2716]: E1212 18:39:43.078876 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-bth42" podUID="ca927433-737e-4f41-bcbf-8431c7f3c6dc" Dec 12 18:39:43.631768 kubelet[2716]: E1212 18:39:43.631719 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-bth42" podUID="ca927433-737e-4f41-bcbf-8431c7f3c6dc" Dec 12 18:39:43.638626 kubelet[2716]: E1212 18:39:43.637635 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7854cf6c79-2dk2d" podUID="b00bfe6d-78a3-4353-8888-bffa785c4bed" Dec 12 18:39:43.638952 kubelet[2716]: E1212 18:39:43.638853 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:43.679698 kubelet[2716]: I1212 18:39:43.679014 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-69hbg" podStartSLOduration=36.67899675 podStartE2EDuration="36.67899675s" podCreationTimestamp="2025-12-12 18:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:39:43.657635727 +0000 UTC m=+42.373739310" watchObservedRunningTime="2025-12-12 18:39:43.67899675 +0000 UTC m=+42.395100333" Dec 12 18:39:44.058875 kubelet[2716]: I1212 18:39:44.058774 2716 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:39:44.059804 kubelet[2716]: E1212 18:39:44.059756 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:44.225743 systemd-networkd[1451]: calie70b7c59e0a: Gained IPv6LL Dec 12 18:39:44.350083 systemd-networkd[1451]: calib4c8bbc148d: Gained IPv6LL Dec 12 18:39:44.638411 kubelet[2716]: E1212 18:39:44.638296 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:44.640205 kubelet[2716]: E1212 18:39:44.640135 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:44.641441 kubelet[2716]: E1212 18:39:44.641406 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-bth42" podUID="ca927433-737e-4f41-bcbf-8431c7f3c6dc" Dec 12 18:39:44.772375 systemd-networkd[1451]: vxlan.calico: Link UP Dec 12 18:39:44.773169 systemd-networkd[1451]: vxlan.calico: Gained carrier Dec 12 18:39:45.640279 
kubelet[2716]: E1212 18:39:45.640230 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:39:45.950125 systemd-networkd[1451]: vxlan.calico: Gained IPv6LL Dec 12 18:39:49.412968 containerd[1549]: time="2025-12-12T18:39:49.412893060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:39:49.593066 containerd[1549]: time="2025-12-12T18:39:49.593010101Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:39:49.594893 containerd[1549]: time="2025-12-12T18:39:49.594675885Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:39:49.594893 containerd[1549]: time="2025-12-12T18:39:49.594718025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:39:49.595103 kubelet[2716]: E1212 18:39:49.595014 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:39:49.595103 kubelet[2716]: E1212 18:39:49.595092 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:39:49.595666 kubelet[2716]: E1212 18:39:49.595353 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a642342dbf7c4c169bfaf0e4fa62e16b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lf82w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7ff5dd56b5-bnjr8_calico-system(6d633c24-655b-48ca-8fdb-e7be6b544554): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:39:49.598528 containerd[1549]: time="2025-12-12T18:39:49.598082503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:39:49.734649 containerd[1549]: time="2025-12-12T18:39:49.734511562Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:39:49.735664 containerd[1549]: time="2025-12-12T18:39:49.735573354Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:39:49.735664 containerd[1549]: time="2025-12-12T18:39:49.735614884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:39:49.735884 kubelet[2716]: E1212 18:39:49.735837 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:39:49.735981 kubelet[2716]: E1212 18:39:49.735898 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:39:49.736506 kubelet[2716]: E1212 18:39:49.736030 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lf82w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7ff5dd56b5-bnjr8_calico-system(6d633c24-655b-48ca-8fdb-e7be6b544554): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:39:49.737789 kubelet[2716]: E1212 18:39:49.737745 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7ff5dd56b5-bnjr8" podUID="6d633c24-655b-48ca-8fdb-e7be6b544554" Dec 12 18:39:51.413077 containerd[1549]: time="2025-12-12T18:39:51.412182428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:39:51.545004 containerd[1549]: time="2025-12-12T18:39:51.544960810Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 
18:39:51.546148 containerd[1549]: time="2025-12-12T18:39:51.546116503Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:39:51.546242 containerd[1549]: time="2025-12-12T18:39:51.546165443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:39:51.546456 kubelet[2716]: E1212 18:39:51.546415 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:39:51.546822 kubelet[2716]: E1212 18:39:51.546461 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:39:51.546822 kubelet[2716]: E1212 18:39:51.546558 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gj2nb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ht9b9_calico-system(3e77788b-998e-4510-9ab0-47ab12a2af9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:39:51.548772 containerd[1549]: time="2025-12-12T18:39:51.548748148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:39:51.683707 containerd[1549]: time="2025-12-12T18:39:51.683572315Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:39:51.684749 containerd[1549]: time="2025-12-12T18:39:51.684669097Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:39:51.684910 containerd[1549]: time="2025-12-12T18:39:51.684710917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:39:51.685137 kubelet[2716]: E1212 18:39:51.685083 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:39:51.685202 kubelet[2716]: E1212 18:39:51.685157 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:39:51.685373 kubelet[2716]: E1212 18:39:51.685329 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gj2nb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ht9b9_calico-system(3e77788b-998e-4510-9ab0-47ab12a2af9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:39:51.687122 kubelet[2716]: E1212 18:39:51.687055 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ht9b9" podUID="3e77788b-998e-4510-9ab0-47ab12a2af9d" Dec 12 18:39:52.412632 containerd[1549]: time="2025-12-12T18:39:52.412575388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:39:52.551882 containerd[1549]: time="2025-12-12T18:39:52.551806536Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:39:52.553575 containerd[1549]: time="2025-12-12T18:39:52.553495949Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:39:52.554525 containerd[1549]: time="2025-12-12T18:39:52.553547180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:39:52.554575 kubelet[2716]: E1212 18:39:52.553794 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:39:52.554575 kubelet[2716]: E1212 18:39:52.553841 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:39:52.554575 kubelet[2716]: E1212 18:39:52.554039 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dns7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-n6lhs_calico-system(80609a1f-fd0a-4b4b-a327-8d66d4e6cb54): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:39:52.557755 kubelet[2716]: E1212 18:39:52.557693 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n6lhs" podUID="80609a1f-fd0a-4b4b-a327-8d66d4e6cb54" Dec 12 18:39:55.413462 containerd[1549]: time="2025-12-12T18:39:55.413156438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:39:55.557078 containerd[1549]: time="2025-12-12T18:39:55.557022536Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:39:55.558602 containerd[1549]: time="2025-12-12T18:39:55.558513358Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:39:55.558678 containerd[1549]: time="2025-12-12T18:39:55.558626229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:39:55.558789 kubelet[2716]: E1212 18:39:55.558752 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:39:55.559229 kubelet[2716]: E1212 18:39:55.558798 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:39:55.559229 kubelet[2716]: E1212 18:39:55.558906 2716 kuberuntime_manager.go:1341] "Unhandled Error" 
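Each image failure above surfaces first as ErrImagePull, and later sync attempts report ImagePullBackOff while the kubelet waits out a capped exponential backoff before pulling again. A minimal sketch of that escalation, assuming the commonly cited kubelet defaults of a 10s base doubling to a 300s ceiling (these constants are not confirmed anywhere in this log):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed defaults: image pulls back off from a 10s base, doubling per
	// consecutive failure up to a 300s ceiling.
	const base, ceiling = 10 * time.Second, 300 * time.Second

	delay := base
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("failed pull #%d -> ImagePullBackOff, retry in %v\n", attempt, delay)
		delay *= 2
		if delay > ceiling {
			delay = ceiling
		}
	}
}
```

The lengthening gaps between pull attempts in this log (roughly 18:39:43, 18:39:55-57, then 18:40:16) are consistent with that doubling.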
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59jwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d5b588865-cwzws_calico-apiserver(d8ce7370-ee37-4b30-a101-cbc03d0825dd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:39:55.560743 kubelet[2716]: E1212 18:39:55.560714 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-cwzws" podUID="d8ce7370-ee37-4b30-a101-cbc03d0825dd" Dec 12 18:39:57.413476 containerd[1549]: time="2025-12-12T18:39:57.413050950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:39:57.563454 containerd[1549]: time="2025-12-12T18:39:57.563381310Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:39:57.564671 containerd[1549]: time="2025-12-12T18:39:57.564545771Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:39:57.564671 containerd[1549]: time="2025-12-12T18:39:57.564618421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:39:57.565067 kubelet[2716]: E1212 18:39:57.564955 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:39:57.565067 kubelet[2716]: E1212 18:39:57.565033 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:39:57.566166 kubelet[2716]: E1212 18:39:57.565493 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7m7fx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7854cf6c79-2dk2d_calico-system(b00bfe6d-78a3-4353-8888-bffa785c4bed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:39:57.566261 containerd[1549]: time="2025-12-12T18:39:57.565417013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:39:57.567632 kubelet[2716]: E1212 18:39:57.567580 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7854cf6c79-2dk2d" podUID="b00bfe6d-78a3-4353-8888-bffa785c4bed" Dec 12 18:39:57.703623 containerd[1549]: time="2025-12-12T18:39:57.703514575Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:39:57.704520 containerd[1549]: time="2025-12-12T18:39:57.704481106Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:39:57.704600 containerd[1549]: time="2025-12-12T18:39:57.704547416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:39:57.704729 kubelet[2716]: E1212 18:39:57.704654 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:39:57.704729 kubelet[2716]: E1212 18:39:57.704714 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:39:57.704889 kubelet[2716]: 
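The long &Container{...} blobs inside these "Unhandled Error" entries are not corruption: kuberuntime_manager dumps the entire core/v1 Container spec of the container that failed to start. A sketch reconstructing the calico-apiserver case (heavily abridged) with the real API type; the generated String() method on that type produces the same &Container{Name:...,} shape seen verbatim above:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Abridged reconstruction of the spec embedded in the errors above;
	// only a few of the dumped fields are filled in here.
	c := &corev1.Container{
		Name:  "calico-apiserver",
		Image: "ghcr.io/flatcar/calico/apiserver:v3.30.4",
		Args: []string{
			"--secure-port=5443",
			"--tls-private-key-file=/calico-apiserver-certs/tls.key",
			"--tls-cert-file=/calico-apiserver-certs/tls.crt",
		},
		ImagePullPolicy: corev1.PullIfNotPresent,
	}
	// The generated String() yields the &Container{Name:...,} dump format
	// that the kubelet writes into these log entries.
	fmt.Println(c.String())
}
```

ImagePullPolicy IfNotPresent, visible in every dump, explains why the kubelet keeps going back to the registry: the tag has never been pulled successfully, so there is no local copy to fall back on.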
E1212 18:39:57.704825 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xm6rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d5b588865-bth42_calico-apiserver(ca927433-737e-4f41-bcbf-8431c7f3c6dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:39:57.706294 kubelet[2716]: E1212 18:39:57.706258 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-bth42" podUID="ca927433-737e-4f41-bcbf-8431c7f3c6dc" Dec 12 18:40:03.412769 kubelet[2716]: E1212 18:40:03.412601 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ht9b9" podUID="3e77788b-998e-4510-9ab0-47ab12a2af9d" Dec 12 18:40:03.672562 kubelet[2716]: E1212 18:40:03.672368 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:40:04.412566 kubelet[2716]: E1212 18:40:04.412517 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n6lhs" podUID="80609a1f-fd0a-4b4b-a327-8d66d4e6cb54" Dec 12 18:40:04.414220 kubelet[2716]: E1212 18:40:04.414054 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7ff5dd56b5-bnjr8" podUID="6d633c24-655b-48ca-8fdb-e7be6b544554" Dec 12 18:40:08.413055 kubelet[2716]: E1212 18:40:08.412996 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-cwzws" podUID="d8ce7370-ee37-4b30-a101-cbc03d0825dd" Dec 12 18:40:09.413199 kubelet[2716]: E1212 18:40:09.412525 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-bth42" podUID="ca927433-737e-4f41-bcbf-8431c7f3c6dc" Dec 12 18:40:12.414117 kubelet[2716]: E1212 18:40:12.413430 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7854cf6c79-2dk2d" podUID="b00bfe6d-78a3-4353-8888-bffa785c4bed" Dec 12 18:40:16.412869 containerd[1549]: time="2025-12-12T18:40:16.412597288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:40:16.547575 containerd[1549]: time="2025-12-12T18:40:16.547536422Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:16.548768 containerd[1549]: time="2025-12-12T18:40:16.548724421Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:40:16.548832 containerd[1549]: time="2025-12-12T18:40:16.548742502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:40:16.549671 kubelet[2716]: E1212 18:40:16.549622 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:40:16.550036 kubelet[2716]: E1212 18:40:16.549685 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:40:16.550036 kubelet[2716]: E1212 18:40:16.549979 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gj2nb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ht9b9_calico-system(3e77788b-998e-4510-9ab0-47ab12a2af9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:16.550603 containerd[1549]: time="2025-12-12T18:40:16.550530980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:40:16.708075 containerd[1549]: time="2025-12-12T18:40:16.707742215Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:16.710013 containerd[1549]: time="2025-12-12T18:40:16.709142491Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:40:16.710013 containerd[1549]: time="2025-12-12T18:40:16.709176222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:40:16.710300 kubelet[2716]: E1212 18:40:16.710232 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:40:16.710362 kubelet[2716]: E1212 18:40:16.710331 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:40:16.710991 containerd[1549]: time="2025-12-12T18:40:16.710760804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:40:16.711499 kubelet[2716]: E1212 18:40:16.711455 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a642342dbf7c4c169bfaf0e4fa62e16b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lf82w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7ff5dd56b5-bnjr8_calico-system(6d633c24-655b-48ca-8fdb-e7be6b544554): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:16.839453 containerd[1549]: time="2025-12-12T18:40:16.839404702Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:16.840383 containerd[1549]: time="2025-12-12T18:40:16.840344103Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:40:16.840383 containerd[1549]: time="2025-12-12T18:40:16.840407715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:40:16.840747 kubelet[2716]: E1212 18:40:16.840704 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:40:16.840808 kubelet[2716]: E1212 18:40:16.840792 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:40:16.841098 kubelet[2716]: E1212 18:40:16.841043 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gj2nb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ht9b9_calico-system(3e77788b-998e-4510-9ab0-47ab12a2af9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:16.841579 containerd[1549]: time="2025-12-12T18:40:16.841531152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:40:16.843470 kubelet[2716]: E1212 18:40:16.843388 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ht9b9" podUID="3e77788b-998e-4510-9ab0-47ab12a2af9d" Dec 12 18:40:16.974645 containerd[1549]: time="2025-12-12T18:40:16.974525422Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:16.976333 containerd[1549]: time="2025-12-12T18:40:16.976274400Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:40:16.976333 containerd[1549]: time="2025-12-12T18:40:16.976298660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:40:16.976499 kubelet[2716]: E1212 18:40:16.976472 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:40:16.976570 kubelet[2716]: E1212 18:40:16.976515 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:40:16.976742 kubelet[2716]: E1212 18:40:16.976653 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lf82w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7ff5dd56b5-bnjr8_calico-system(6d633c24-655b-48ca-8fdb-e7be6b544554): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:16.978200 kubelet[2716]: E1212 18:40:16.978158 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7ff5dd56b5-bnjr8" podUID="6d633c24-655b-48ca-8fdb-e7be6b544554" Dec 12 18:40:17.412309 kubelet[2716]: E1212 18:40:17.411996 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:40:19.416497 kubelet[2716]: E1212 18:40:19.414395 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 
172.232.0.19" Dec 12 18:40:19.417599 containerd[1549]: time="2025-12-12T18:40:19.417557505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:40:19.547174 containerd[1549]: time="2025-12-12T18:40:19.547064266Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:19.548161 containerd[1549]: time="2025-12-12T18:40:19.548078857Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:40:19.548161 containerd[1549]: time="2025-12-12T18:40:19.548142489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:40:19.548370 kubelet[2716]: E1212 18:40:19.548341 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:40:19.548468 kubelet[2716]: E1212 18:40:19.548453 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:40:19.548659 kubelet[2716]: E1212 18:40:19.548610 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dns7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-n6lhs_calico-system(80609a1f-fd0a-4b4b-a327-8d66d4e6cb54): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:19.550810 kubelet[2716]: E1212 18:40:19.550778 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n6lhs" podUID="80609a1f-fd0a-4b4b-a327-8d66d4e6cb54" Dec 12 18:40:21.416569 containerd[1549]: time="2025-12-12T18:40:21.416491563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:40:21.548945 containerd[1549]: time="2025-12-12T18:40:21.548147810Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:21.549120 containerd[1549]: time="2025-12-12T18:40:21.549048925Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:40:21.549180 containerd[1549]: time="2025-12-12T18:40:21.549088306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:40:21.549363 kubelet[2716]: E1212 18:40:21.549287 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:40:21.549715 kubelet[2716]: E1212 18:40:21.549392 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:40:21.549902 kubelet[2716]: E1212 18:40:21.549853 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xm6rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d5b588865-bth42_calico-apiserver(ca927433-737e-4f41-bcbf-8431c7f3c6dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:21.551026 kubelet[2716]: E1212 18:40:21.550990 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-bth42" podUID="ca927433-737e-4f41-bcbf-8431c7f3c6dc" Dec 12 18:40:23.415086 containerd[1549]: time="2025-12-12T18:40:23.414566076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:40:23.555275 containerd[1549]: time="2025-12-12T18:40:23.555145948Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:23.556296 
containerd[1549]: time="2025-12-12T18:40:23.556184796Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:40:23.556344 containerd[1549]: time="2025-12-12T18:40:23.556273589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:40:23.556526 kubelet[2716]: E1212 18:40:23.556494 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:40:23.557196 kubelet[2716]: E1212 18:40:23.556537 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:40:23.557196 kubelet[2716]: E1212 18:40:23.556643 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59jwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-6d5b588865-cwzws_calico-apiserver(d8ce7370-ee37-4b30-a101-cbc03d0825dd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:23.557999 kubelet[2716]: E1212 18:40:23.557892 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-cwzws" podUID="d8ce7370-ee37-4b30-a101-cbc03d0825dd" Dec 12 18:40:24.413555 containerd[1549]: time="2025-12-12T18:40:24.413511597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:40:24.549673 containerd[1549]: time="2025-12-12T18:40:24.549505537Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:24.551763 containerd[1549]: time="2025-12-12T18:40:24.550709569Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:40:24.551763 containerd[1549]: time="2025-12-12T18:40:24.550852962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:40:24.552077 kubelet[2716]: E1212 18:40:24.552012 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:40:24.552595 kubelet[2716]: E1212 18:40:24.552383 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:40:24.553194 kubelet[2716]: E1212 18:40:24.552691 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7m7fx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7854cf6c79-2dk2d_calico-system(b00bfe6d-78a3-4353-8888-bffa785c4bed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:24.554421 kubelet[2716]: E1212 18:40:24.554387 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7854cf6c79-2dk2d" podUID="b00bfe6d-78a3-4353-8888-bffa785c4bed" Dec 12 18:40:27.414076 kubelet[2716]: E1212 18:40:27.413655 2716 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7ff5dd56b5-bnjr8" podUID="6d633c24-655b-48ca-8fdb-e7be6b544554" Dec 12 18:40:28.413352 kubelet[2716]: E1212 18:40:28.413280 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ht9b9" podUID="3e77788b-998e-4510-9ab0-47ab12a2af9d" Dec 12 18:40:32.410951 kubelet[2716]: E1212 18:40:32.410899 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:40:33.412483 kubelet[2716]: E1212 18:40:33.411505 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n6lhs" podUID="80609a1f-fd0a-4b4b-a327-8d66d4e6cb54" Dec 12 18:40:36.412167 kubelet[2716]: E1212 18:40:36.412121 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-bth42" podUID="ca927433-737e-4f41-bcbf-8431c7f3c6dc" Dec 12 18:40:37.414973 
kubelet[2716]: E1212 18:40:37.414651 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-cwzws" podUID="d8ce7370-ee37-4b30-a101-cbc03d0825dd" Dec 12 18:40:37.414973 kubelet[2716]: E1212 18:40:37.414730 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7854cf6c79-2dk2d" podUID="b00bfe6d-78a3-4353-8888-bffa785c4bed" Dec 12 18:40:39.413743 kubelet[2716]: E1212 18:40:39.413710 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:40:42.414152 kubelet[2716]: E1212 18:40:42.413788 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7ff5dd56b5-bnjr8" podUID="6d633c24-655b-48ca-8fdb-e7be6b544554" Dec 12 18:40:42.415705 kubelet[2716]: E1212 18:40:42.415607 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ht9b9" podUID="3e77788b-998e-4510-9ab0-47ab12a2af9d" Dec 12 18:40:48.414284 kubelet[2716]: E1212 18:40:48.414224 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n6lhs" podUID="80609a1f-fd0a-4b4b-a327-8d66d4e6cb54" Dec 12 18:40:49.412630 kubelet[2716]: E1212 18:40:49.412577 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-bth42" podUID="ca927433-737e-4f41-bcbf-8431c7f3c6dc" Dec 12 18:40:49.413362 kubelet[2716]: E1212 18:40:49.413308 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7854cf6c79-2dk2d" podUID="b00bfe6d-78a3-4353-8888-bffa785c4bed" Dec 12 18:40:49.414009 kubelet[2716]: E1212 18:40:49.413984 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-cwzws" podUID="d8ce7370-ee37-4b30-a101-cbc03d0825dd" Dec 12 18:40:50.411115 kubelet[2716]: E1212 18:40:50.411053 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:40:54.411356 kubelet[2716]: E1212 18:40:54.411297 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:40:54.413058 kubelet[2716]: E1212 18:40:54.412981 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7ff5dd56b5-bnjr8" podUID="6d633c24-655b-48ca-8fdb-e7be6b544554" Dec 12 18:40:57.411993 containerd[1549]: time="2025-12-12T18:40:57.411691449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:40:57.730017 containerd[1549]: time="2025-12-12T18:40:57.729016125Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:57.730017 containerd[1549]: time="2025-12-12T18:40:57.729894505Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:40:57.730017 containerd[1549]: time="2025-12-12T18:40:57.729974946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:40:57.730516 kubelet[2716]: E1212 18:40:57.730179 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:40:57.730516 kubelet[2716]: E1212 18:40:57.730228 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:40:57.731493 kubelet[2716]: E1212 18:40:57.731220 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gj2nb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ht9b9_calico-system(3e77788b-998e-4510-9ab0-47ab12a2af9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:57.734898 containerd[1549]: time="2025-12-12T18:40:57.734870205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:40:57.878938 containerd[1549]: time="2025-12-12T18:40:57.878777920Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:40:57.880820 containerd[1549]: time="2025-12-12T18:40:57.879958335Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:40:57.880820 containerd[1549]: time="2025-12-12T18:40:57.880024975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:40:57.881281 kubelet[2716]: E1212 18:40:57.881228 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:40:57.881430 kubelet[2716]: E1212 18:40:57.881412 2716 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:40:57.881783 kubelet[2716]: E1212 18:40:57.881581 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gj2nb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ht9b9_calico-system(3e77788b-998e-4510-9ab0-47ab12a2af9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:40:57.883310 kubelet[2716]: E1212 18:40:57.883277 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-ht9b9" podUID="3e77788b-998e-4510-9ab0-47ab12a2af9d" Dec 12 18:41:00.411553 kubelet[2716]: E1212 18:41:00.411339 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:41:00.412200 kubelet[2716]: E1212 18:41:00.412045 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7854cf6c79-2dk2d" podUID="b00bfe6d-78a3-4353-8888-bffa785c4bed" Dec 12 18:41:02.411684 kubelet[2716]: E1212 18:41:02.411629 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-cwzws" podUID="d8ce7370-ee37-4b30-a101-cbc03d0825dd" Dec 12 18:41:02.412602 containerd[1549]: time="2025-12-12T18:41:02.411982102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:41:02.539715 containerd[1549]: time="2025-12-12T18:41:02.539657586Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:41:02.540879 containerd[1549]: time="2025-12-12T18:41:02.540828579Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:41:02.541996 containerd[1549]: time="2025-12-12T18:41:02.541956931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:41:02.543518 kubelet[2716]: E1212 18:41:02.543306 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:41:02.543518 kubelet[2716]: E1212 18:41:02.543357 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:41:02.543518 kubelet[2716]: E1212 18:41:02.543480 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xm6rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d5b588865-bth42_calico-apiserver(ca927433-737e-4f41-bcbf-8431c7f3c6dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:41:02.544896 kubelet[2716]: E1212 18:41:02.544862 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-bth42" podUID="ca927433-737e-4f41-bcbf-8431c7f3c6dc" Dec 12 18:41:03.411888 containerd[1549]: time="2025-12-12T18:41:03.411803134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:41:03.537316 containerd[1549]: time="2025-12-12T18:41:03.537269769Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:41:03.538864 containerd[1549]: time="2025-12-12T18:41:03.538723874Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:41:03.538864 containerd[1549]: time="2025-12-12T18:41:03.538783665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:41:03.539072 kubelet[2716]: E1212 18:41:03.538974 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:41:03.539072 kubelet[2716]: E1212 18:41:03.539018 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:41:03.539432 kubelet[2716]: E1212 18:41:03.539123 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dns7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-n6lhs_calico-system(80609a1f-fd0a-4b4b-a327-8d66d4e6cb54): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:41:03.540543 kubelet[2716]: E1212 18:41:03.540492 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n6lhs" podUID="80609a1f-fd0a-4b4b-a327-8d66d4e6cb54" Dec 12 18:41:09.414026 containerd[1549]: time="2025-12-12T18:41:09.413719690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:41:09.543712 containerd[1549]: time="2025-12-12T18:41:09.543665718Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:41:09.545044 containerd[1549]: time="2025-12-12T18:41:09.544880969Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:41:09.545102 containerd[1549]: time="2025-12-12T18:41:09.545086081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:41:09.545554 kubelet[2716]: E1212 18:41:09.545499 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:41:09.545554 kubelet[2716]: E1212 18:41:09.545551 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:41:09.546031 kubelet[2716]: E1212 18:41:09.545701 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a642342dbf7c4c169bfaf0e4fa62e16b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lf82w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7ff5dd56b5-bnjr8_calico-system(6d633c24-655b-48ca-8fdb-e7be6b544554): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:41:09.548965 containerd[1549]: time="2025-12-12T18:41:09.548931138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:41:09.681131 containerd[1549]: time="2025-12-12T18:41:09.680992336Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:41:09.681982 containerd[1549]: time="2025-12-12T18:41:09.681941045Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:41:09.682048 containerd[1549]: time="2025-12-12T18:41:09.681955196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:41:09.682233 kubelet[2716]: E1212 18:41:09.682192 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:41:09.682304 kubelet[2716]: E1212 18:41:09.682247 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:41:09.682394 kubelet[2716]: E1212 18:41:09.682352 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lf82w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7ff5dd56b5-bnjr8_calico-system(6d633c24-655b-48ca-8fdb-e7be6b544554): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:41:09.683750 kubelet[2716]: E1212 18:41:09.683697 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7ff5dd56b5-bnjr8" podUID="6d633c24-655b-48ca-8fdb-e7be6b544554" Dec 12 18:41:10.412312 kubelet[2716]: E1212 18:41:10.412216 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ht9b9" podUID="3e77788b-998e-4510-9ab0-47ab12a2af9d" Dec 12 18:41:12.414125 containerd[1549]: time="2025-12-12T18:41:12.413398972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:41:12.550628 containerd[1549]: time="2025-12-12T18:41:12.550565103Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:41:12.551760 containerd[1549]: time="2025-12-12T18:41:12.551723303Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:41:12.551805 containerd[1549]: time="2025-12-12T18:41:12.551796404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:41:12.552084 kubelet[2716]: E1212 18:41:12.552039 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:41:12.552646 kubelet[2716]: E1212 18:41:12.552111 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:41:12.552646 kubelet[2716]: E1212 18:41:12.552433 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7m7fx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7854cf6c79-2dk2d_calico-system(b00bfe6d-78a3-4353-8888-bffa785c4bed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:41:12.554715 kubelet[2716]: E1212 18:41:12.554676 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7854cf6c79-2dk2d" podUID="b00bfe6d-78a3-4353-8888-bffa785c4bed" Dec 12 18:41:15.415010 containerd[1549]: time="2025-12-12T18:41:15.414754198Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:41:15.418018 kubelet[2716]: E1212 18:41:15.416085 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-bth42" podUID="ca927433-737e-4f41-bcbf-8431c7f3c6dc" Dec 12 18:41:15.555778 containerd[1549]: time="2025-12-12T18:41:15.555720562Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:41:15.556605 containerd[1549]: time="2025-12-12T18:41:15.556568130Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:41:15.556679 containerd[1549]: time="2025-12-12T18:41:15.556655530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:41:15.558699 kubelet[2716]: E1212 18:41:15.558633 2716 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:41:15.558699 kubelet[2716]: E1212 18:41:15.558681 2716 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:41:15.558861 kubelet[2716]: E1212 18:41:15.558794 2716 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59jwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d5b588865-cwzws_calico-apiserver(d8ce7370-ee37-4b30-a101-cbc03d0825dd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:41:15.560540 kubelet[2716]: E1212 18:41:15.559890 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-cwzws" podUID="d8ce7370-ee37-4b30-a101-cbc03d0825dd" Dec 12 18:41:17.272180 systemd[1]: Started sshd@7-172.237.133.204:22-139.178.68.195:53516.service - OpenSSH per-connection server daemon (139.178.68.195:53516). Dec 12 18:41:17.415940 kubelet[2716]: E1212 18:41:17.414426 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n6lhs" podUID="80609a1f-fd0a-4b4b-a327-8d66d4e6cb54" Dec 12 18:41:17.643014 sshd[5079]: Accepted publickey for core from 139.178.68.195 port 53516 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:41:17.646807 sshd-session[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:41:17.658979 systemd-logind[1526]: New session 8 of user core. Dec 12 18:41:17.662347 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 18:41:17.971309 sshd[5082]: Connection closed by 139.178.68.195 port 53516 Dec 12 18:41:17.973466 sshd-session[5079]: pam_unix(sshd:session): session closed for user core Dec 12 18:41:17.978191 systemd-logind[1526]: Session 8 logged out. Waiting for processes to exit. Dec 12 18:41:17.981382 systemd[1]: sshd@7-172.237.133.204:22-139.178.68.195:53516.service: Deactivated successfully. Dec 12 18:41:17.985861 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 18:41:17.988884 systemd-logind[1526]: Removed session 8. 
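[Editor's note] Every pull failure above follows the same shape: containerd logs "fetch failed after status: 404 Not Found" from host=ghcr.io, meaning the tag v3.30.4 simply does not resolve under ghcr.io/flatcar/calico/*. The following is a minimal Go sketch of that same existence check against the OCI distribution API. It is not containerd's resolver, and the anonymous-token exchange shown for ghcr.io is an assumption that holds for public repositories only.

package main

// Sketch only: asks the registry whether an image tag exists — the check
// that produced the 404s logged above. Not containerd's implementation;
// the ghcr.io token endpoint below is assumed, not taken from this log.

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	repo, tag := "flatcar/calico/apiserver", "v3.30.4" // reference from the log

	// Anonymous pull token (public repositories only).
	res, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(res.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// HEAD the manifest per the OCI distribution API:
	// 200 means the tag resolves; 404 is the "not found" containerd reports.
	req, err := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println(resp.Status) // expect "404 Not Found" for this tag
}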
Dec 12 18:41:18.411694 kubelet[2716]: E1212 18:41:18.410948 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:41:19.411748 kubelet[2716]: E1212 18:41:19.411331 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:41:21.412186 kubelet[2716]: E1212 18:41:21.412126 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ht9b9" podUID="3e77788b-998e-4510-9ab0-47ab12a2af9d" Dec 12 18:41:22.413124 kubelet[2716]: E1212 18:41:22.413022 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7ff5dd56b5-bnjr8" podUID="6d633c24-655b-48ca-8fdb-e7be6b544554" Dec 12 18:41:23.036081 systemd[1]: Started sshd@8-172.237.133.204:22-139.178.68.195:33006.service - OpenSSH per-connection server daemon (139.178.68.195:33006). Dec 12 18:41:23.390716 sshd[5095]: Accepted publickey for core from 139.178.68.195 port 33006 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:41:23.392033 sshd-session[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:41:23.398899 systemd-logind[1526]: New session 9 of user core. Dec 12 18:41:23.406239 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 18:41:23.788814 sshd[5105]: Connection closed by 139.178.68.195 port 33006 Dec 12 18:41:23.788624 sshd-session[5095]: pam_unix(sshd:session): session closed for user core Dec 12 18:41:23.795519 systemd-logind[1526]: Session 9 logged out. Waiting for processes to exit. 
Dec 12 18:41:23.796544 systemd[1]: sshd@8-172.237.133.204:22-139.178.68.195:33006.service: Deactivated successfully. Dec 12 18:41:23.800572 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 18:41:23.808281 systemd-logind[1526]: Removed session 9. Dec 12 18:41:23.850751 systemd[1]: Started sshd@9-172.237.133.204:22-139.178.68.195:33012.service - OpenSSH per-connection server daemon (139.178.68.195:33012). Dec 12 18:41:24.192976 sshd[5117]: Accepted publickey for core from 139.178.68.195 port 33012 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:41:24.194143 sshd-session[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:41:24.199814 systemd-logind[1526]: New session 10 of user core. Dec 12 18:41:24.205034 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 18:41:24.413297 kubelet[2716]: E1212 18:41:24.413006 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7854cf6c79-2dk2d" podUID="b00bfe6d-78a3-4353-8888-bffa785c4bed" Dec 12 18:41:24.559023 sshd[5120]: Connection closed by 139.178.68.195 port 33012 Dec 12 18:41:24.560052 sshd-session[5117]: pam_unix(sshd:session): session closed for user core Dec 12 18:41:24.569965 systemd-logind[1526]: Session 10 logged out. Waiting for processes to exit. Dec 12 18:41:24.574060 systemd[1]: sshd@9-172.237.133.204:22-139.178.68.195:33012.service: Deactivated successfully. Dec 12 18:41:24.579733 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 18:41:24.585403 systemd-logind[1526]: Removed session 10. Dec 12 18:41:24.624383 systemd[1]: Started sshd@10-172.237.133.204:22-139.178.68.195:33022.service - OpenSSH per-connection server daemon (139.178.68.195:33022). Dec 12 18:41:24.983517 sshd[5130]: Accepted publickey for core from 139.178.68.195 port 33022 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:41:24.985796 sshd-session[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:41:24.992833 systemd-logind[1526]: New session 11 of user core. Dec 12 18:41:24.998046 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 12 18:41:25.333323 sshd[5133]: Connection closed by 139.178.68.195 port 33022 Dec 12 18:41:25.334252 sshd-session[5130]: pam_unix(sshd:session): session closed for user core Dec 12 18:41:25.339552 systemd-logind[1526]: Session 11 logged out. Waiting for processes to exit. Dec 12 18:41:25.340532 systemd[1]: sshd@10-172.237.133.204:22-139.178.68.195:33022.service: Deactivated successfully. Dec 12 18:41:25.344726 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 18:41:25.350385 systemd-logind[1526]: Removed session 11. 
Dec 12 18:41:29.412378 kubelet[2716]: E1212 18:41:29.412336 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Dec 12 18:41:30.396446 systemd[1]: Started sshd@11-172.237.133.204:22-139.178.68.195:37982.service - OpenSSH per-connection server daemon (139.178.68.195:37982). Dec 12 18:41:30.415678 kubelet[2716]: E1212 18:41:30.414760 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-cwzws" podUID="d8ce7370-ee37-4b30-a101-cbc03d0825dd" Dec 12 18:41:30.415678 kubelet[2716]: E1212 18:41:30.414978 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-bth42" podUID="ca927433-737e-4f41-bcbf-8431c7f3c6dc" Dec 12 18:41:30.415678 kubelet[2716]: E1212 18:41:30.415101 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n6lhs" podUID="80609a1f-fd0a-4b4b-a327-8d66d4e6cb54" Dec 12 18:41:30.757190 sshd[5150]: Accepted publickey for core from 139.178.68.195 port 37982 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:41:30.758766 sshd-session[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:41:30.764202 systemd-logind[1526]: New session 12 of user core. Dec 12 18:41:30.772043 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 18:41:31.103107 sshd[5153]: Connection closed by 139.178.68.195 port 37982 Dec 12 18:41:31.103873 sshd-session[5150]: pam_unix(sshd:session): session closed for user core Dec 12 18:41:31.108491 systemd[1]: sshd@11-172.237.133.204:22-139.178.68.195:37982.service: Deactivated successfully. Dec 12 18:41:31.111615 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 18:41:31.112796 systemd-logind[1526]: Session 12 logged out. Waiting for processes to exit. Dec 12 18:41:31.114594 systemd-logind[1526]: Removed session 12. 
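[Editor's note] The recurring dns.go:153 "Nameserver limits exceeded" warnings mean the node's resolv.conf lists more nameservers than the resolver honors: kubelet applies only the first three (matching glibc's MAXNS), which is why exactly three addresses appear in each "applied nameserver line". A sketch of that truncation follows; it is illustrative, not kubelet's code, and the fourth nameserver in the example input is hypothetical, not from this log.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // mirrors glibc MAXNS; kubelet warns past this

// applyNameservers keeps the first three nameserver entries and reports
// how many were dropped, analogous to the dns.go:153 warning above.
func applyNameservers(resolvConf string) []string {
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded, omitting %d entries\n",
			len(servers)-maxNameservers)
		servers = servers[:maxNameservers]
	}
	return servers
}

func main() {
	// First three entries taken from the log; 8.8.8.8 is a hypothetical
	// extra entry standing in for whatever the node actually dropped.
	conf := "nameserver 172.232.0.22\n" +
		"nameserver 172.232.0.9\n" +
		"nameserver 172.232.0.19\n" +
		"nameserver 8.8.8.8\n"
	fmt.Println("applied nameserver line:",
		strings.Join(applyNameservers(conf), " "))
}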
Dec 12 18:41:33.414610 kubelet[2716]: E1212 18:41:33.414505 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ht9b9" podUID="3e77788b-998e-4510-9ab0-47ab12a2af9d" Dec 12 18:41:36.175268 systemd[1]: Started sshd@12-172.237.133.204:22-139.178.68.195:37986.service - OpenSSH per-connection server daemon (139.178.68.195:37986). Dec 12 18:41:36.412059 kubelet[2716]: E1212 18:41:36.411724 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7854cf6c79-2dk2d" podUID="b00bfe6d-78a3-4353-8888-bffa785c4bed" Dec 12 18:41:36.533401 sshd[5188]: Accepted publickey for core from 139.178.68.195 port 37986 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:41:36.536442 sshd-session[5188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:41:36.541662 systemd-logind[1526]: New session 13 of user core. Dec 12 18:41:36.549050 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 18:41:36.874027 sshd[5191]: Connection closed by 139.178.68.195 port 37986 Dec 12 18:41:36.874830 sshd-session[5188]: pam_unix(sshd:session): session closed for user core Dec 12 18:41:36.882637 systemd-logind[1526]: Session 13 logged out. Waiting for processes to exit. Dec 12 18:41:36.885352 systemd[1]: sshd@12-172.237.133.204:22-139.178.68.195:37986.service: Deactivated successfully. Dec 12 18:41:36.891257 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 18:41:36.893905 systemd-logind[1526]: Removed session 13. 
Dec 12 18:41:37.413033 kubelet[2716]: E1212 18:41:37.412971 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7ff5dd56b5-bnjr8" podUID="6d633c24-655b-48ca-8fdb-e7be6b544554" Dec 12 18:41:41.937593 systemd[1]: Started sshd@13-172.237.133.204:22-139.178.68.195:34970.service - OpenSSH per-connection server daemon (139.178.68.195:34970). Dec 12 18:41:42.283519 sshd[5206]: Accepted publickey for core from 139.178.68.195 port 34970 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:41:42.285523 sshd-session[5206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:41:42.293659 systemd-logind[1526]: New session 14 of user core. Dec 12 18:41:42.299163 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 12 18:41:42.603091 sshd[5209]: Connection closed by 139.178.68.195 port 34970 Dec 12 18:41:42.603584 sshd-session[5206]: pam_unix(sshd:session): session closed for user core Dec 12 18:41:42.608857 systemd-logind[1526]: Session 14 logged out. Waiting for processes to exit. Dec 12 18:41:42.611405 systemd[1]: sshd@13-172.237.133.204:22-139.178.68.195:34970.service: Deactivated successfully. Dec 12 18:41:42.615507 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 18:41:42.618249 systemd-logind[1526]: Removed session 14. Dec 12 18:41:42.662111 systemd[1]: Started sshd@14-172.237.133.204:22-139.178.68.195:34984.service - OpenSSH per-connection server daemon (139.178.68.195:34984). Dec 12 18:41:43.004030 sshd[5221]: Accepted publickey for core from 139.178.68.195 port 34984 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:41:43.003672 sshd-session[5221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:41:43.011169 systemd-logind[1526]: New session 15 of user core. Dec 12 18:41:43.018185 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 12 18:41:43.416194 kubelet[2716]: E1212 18:41:43.416135 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-bth42" podUID="ca927433-737e-4f41-bcbf-8431c7f3c6dc" Dec 12 18:41:43.418585 kubelet[2716]: E1212 18:41:43.418535 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n6lhs" podUID="80609a1f-fd0a-4b4b-a327-8d66d4e6cb54" Dec 12 18:41:43.641110 sshd[5224]: Connection closed by 139.178.68.195 port 34984 Dec 12 18:41:43.643112 sshd-session[5221]: pam_unix(sshd:session): session closed for user core Dec 12 18:41:43.649876 systemd-logind[1526]: Session 15 logged out. Waiting for processes to exit. Dec 12 18:41:43.650421 systemd[1]: sshd@14-172.237.133.204:22-139.178.68.195:34984.service: Deactivated successfully. Dec 12 18:41:43.654499 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 18:41:43.658188 systemd-logind[1526]: Removed session 15. Dec 12 18:41:43.706607 systemd[1]: Started sshd@15-172.237.133.204:22-139.178.68.195:34986.service - OpenSSH per-connection server daemon (139.178.68.195:34986). Dec 12 18:41:44.060777 sshd[5234]: Accepted publickey for core from 139.178.68.195 port 34986 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:41:44.062757 sshd-session[5234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:41:44.067981 systemd-logind[1526]: New session 16 of user core. Dec 12 18:41:44.075067 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 12 18:41:44.942591 sshd[5237]: Connection closed by 139.178.68.195 port 34986 Dec 12 18:41:44.943560 sshd-session[5234]: pam_unix(sshd:session): session closed for user core Dec 12 18:41:44.949544 systemd[1]: sshd@15-172.237.133.204:22-139.178.68.195:34986.service: Deactivated successfully. Dec 12 18:41:44.952659 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 18:41:44.954198 systemd-logind[1526]: Session 16 logged out. Waiting for processes to exit. Dec 12 18:41:44.956786 systemd-logind[1526]: Removed session 16. Dec 12 18:41:45.003183 systemd[1]: Started sshd@16-172.237.133.204:22-139.178.68.195:35000.service - OpenSSH per-connection server daemon (139.178.68.195:35000). Dec 12 18:41:45.354309 sshd[5255]: Accepted publickey for core from 139.178.68.195 port 35000 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:41:45.357003 sshd-session[5255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:41:45.363018 systemd-logind[1526]: New session 17 of user core. Dec 12 18:41:45.370033 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 12 18:41:45.413443 kubelet[2716]: E1212 18:41:45.413392 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-cwzws" podUID="d8ce7370-ee37-4b30-a101-cbc03d0825dd" Dec 12 18:41:45.774592 sshd[5258]: Connection closed by 139.178.68.195 port 35000 Dec 12 18:41:45.774870 sshd-session[5255]: pam_unix(sshd:session): session closed for user core Dec 12 18:41:45.780676 systemd[1]: sshd@16-172.237.133.204:22-139.178.68.195:35000.service: Deactivated successfully. Dec 12 18:41:45.783556 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 18:41:45.784979 systemd-logind[1526]: Session 17 logged out. Waiting for processes to exit. Dec 12 18:41:45.786512 systemd-logind[1526]: Removed session 17. Dec 12 18:41:45.837267 systemd[1]: Started sshd@17-172.237.133.204:22-139.178.68.195:35002.service - OpenSSH per-connection server daemon (139.178.68.195:35002). Dec 12 18:41:46.187784 sshd[5268]: Accepted publickey for core from 139.178.68.195 port 35002 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:41:46.189735 sshd-session[5268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:41:46.195768 systemd-logind[1526]: New session 18 of user core. Dec 12 18:41:46.204218 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 12 18:41:46.506600 sshd[5271]: Connection closed by 139.178.68.195 port 35002 Dec 12 18:41:46.507309 sshd-session[5268]: pam_unix(sshd:session): session closed for user core Dec 12 18:41:46.513129 systemd[1]: sshd@17-172.237.133.204:22-139.178.68.195:35002.service: Deactivated successfully. Dec 12 18:41:46.515716 systemd-logind[1526]: Session 18 logged out. Waiting for processes to exit. Dec 12 18:41:46.516879 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 18:41:46.520401 systemd-logind[1526]: Removed session 18. 
Dec 12 18:41:47.414181 kubelet[2716]: E1212 18:41:47.413906 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7854cf6c79-2dk2d" podUID="b00bfe6d-78a3-4353-8888-bffa785c4bed"
Dec 12 18:41:47.414181 kubelet[2716]: E1212 18:41:47.414125 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ht9b9" podUID="3e77788b-998e-4510-9ab0-47ab12a2af9d"
Dec 12 18:41:50.410852 kubelet[2716]: E1212 18:41:50.410814 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Dec 12 18:41:51.424463 kubelet[2716]: E1212 18:41:51.424407 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7ff5dd56b5-bnjr8" podUID="6d633c24-655b-48ca-8fdb-e7be6b544554"
Dec 12 18:41:51.569445 systemd[1]: Started sshd@18-172.237.133.204:22-139.178.68.195:49090.service - OpenSSH per-connection server daemon (139.178.68.195:49090).
Dec 12 18:41:51.919790 sshd[5285]: Accepted publickey for core from 139.178.68.195 port 49090 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:41:51.921455 sshd-session[5285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:41:51.927082 systemd-logind[1526]: New session 19 of user core.
Dec 12 18:41:51.933207 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 12 18:41:52.230035 sshd[5288]: Connection closed by 139.178.68.195 port 49090
Dec 12 18:41:52.230877 sshd-session[5285]: pam_unix(sshd:session): session closed for user core
Dec 12 18:41:52.235466 systemd-logind[1526]: Session 19 logged out. Waiting for processes to exit.
Dec 12 18:41:52.236318 systemd[1]: sshd@18-172.237.133.204:22-139.178.68.195:49090.service: Deactivated successfully.
Dec 12 18:41:52.242205 systemd[1]: session-19.scope: Deactivated successfully.
Dec 12 18:41:52.245296 systemd-logind[1526]: Removed session 19.
Dec 12 18:41:52.411665 kubelet[2716]: E1212 18:41:52.411119 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Dec 12 18:41:54.412251 kubelet[2716]: E1212 18:41:54.412011 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n6lhs" podUID="80609a1f-fd0a-4b4b-a327-8d66d4e6cb54"
Dec 12 18:41:57.300347 systemd[1]: Started sshd@19-172.237.133.204:22-139.178.68.195:49100.service - OpenSSH per-connection server daemon (139.178.68.195:49100).
Dec 12 18:41:57.415101 kubelet[2716]: E1212 18:41:57.415039 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-bth42" podUID="ca927433-737e-4f41-bcbf-8431c7f3c6dc"
Dec 12 18:41:57.670156 sshd[5300]: Accepted publickey for core from 139.178.68.195 port 49100 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:41:57.673000 sshd-session[5300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:41:57.679111 systemd-logind[1526]: New session 20 of user core.
Dec 12 18:41:57.686190 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 12 18:41:57.980006 sshd[5303]: Connection closed by 139.178.68.195 port 49100
Dec 12 18:41:57.980558 sshd-session[5300]: pam_unix(sshd:session): session closed for user core
Dec 12 18:41:57.985848 systemd-logind[1526]: Session 20 logged out. Waiting for processes to exit.
Dec 12 18:41:57.986393 systemd[1]: sshd@19-172.237.133.204:22-139.178.68.195:49100.service: Deactivated successfully.
Dec 12 18:41:57.990708 systemd[1]: session-20.scope: Deactivated successfully.
Dec 12 18:41:57.995237 systemd-logind[1526]: Removed session 20.
Dec 12 18:41:59.412396 kubelet[2716]: E1212 18:41:59.412356 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Dec 12 18:41:59.414419 kubelet[2716]: E1212 18:41:59.414347 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5b588865-cwzws" podUID="d8ce7370-ee37-4b30-a101-cbc03d0825dd"
Dec 12 18:41:59.416296 kubelet[2716]: E1212 18:41:59.416249 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ht9b9" podUID="3e77788b-998e-4510-9ab0-47ab12a2af9d"
Dec 12 18:42:02.411460 kubelet[2716]: E1212 18:42:02.411413 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7854cf6c79-2dk2d" podUID="b00bfe6d-78a3-4353-8888-bffa785c4bed"
Dec 12 18:42:03.048121 systemd[1]: Started sshd@20-172.237.133.204:22-139.178.68.195:41236.service - OpenSSH per-connection server daemon (139.178.68.195:41236).
Dec 12 18:42:03.400941 sshd[5317]: Accepted publickey for core from 139.178.68.195 port 41236 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:42:03.401871 sshd-session[5317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:42:03.410925 systemd-logind[1526]: New session 21 of user core.
Dec 12 18:42:03.416383 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 12 18:42:03.420659 kubelet[2716]: E1212 18:42:03.420617 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7ff5dd56b5-bnjr8" podUID="6d633c24-655b-48ca-8fdb-e7be6b544554"
Dec 12 18:42:03.774069 sshd[5320]: Connection closed by 139.178.68.195 port 41236
Dec 12 18:42:03.775477 sshd-session[5317]: pam_unix(sshd:session): session closed for user core
Dec 12 18:42:03.781879 systemd[1]: sshd@20-172.237.133.204:22-139.178.68.195:41236.service: Deactivated successfully.
Dec 12 18:42:03.785199 systemd[1]: session-21.scope: Deactivated successfully.
Dec 12 18:42:03.786800 systemd-logind[1526]: Session 21 logged out. Waiting for processes to exit.
Dec 12 18:42:03.791438 systemd-logind[1526]: Removed session 21.
Dec 12 18:42:05.412961 kubelet[2716]: E1212 18:42:05.412081 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n6lhs" podUID="80609a1f-fd0a-4b4b-a327-8d66d4e6cb54"