Jan 23 01:03:12.950899 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 22:22:03 -00 2026
Jan 23 01:03:12.950925 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:03:12.950934 kernel: BIOS-provided physical RAM map:
Jan 23 01:03:12.950940 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Jan 23 01:03:12.950946 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Jan 23 01:03:12.950952 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 23 01:03:12.950962 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Jan 23 01:03:12.950968 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Jan 23 01:03:12.950974 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 23 01:03:12.950981 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 23 01:03:12.950987 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 01:03:12.950993 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 23 01:03:12.950999 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Jan 23 01:03:12.951005 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 01:03:12.951015 kernel: NX (Execute Disable) protection: active
Jan 23 01:03:12.951022 kernel: APIC: Static calls initialized
Jan 23 01:03:12.951028 kernel: SMBIOS 2.8 present.
Jan 23 01:03:12.951035 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Jan 23 01:03:12.951041 kernel: DMI: Memory slots populated: 1/1
Jan 23 01:03:12.951048 kernel: Hypervisor detected: KVM
Jan 23 01:03:12.951056 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jan 23 01:03:12.951063 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 01:03:12.951069 kernel: kvm-clock: using sched offset of 7386997715 cycles
Jan 23 01:03:12.951076 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 01:03:12.951083 kernel: tsc: Detected 2000.000 MHz processor
Jan 23 01:03:12.951090 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 01:03:12.951097 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 01:03:12.951103 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Jan 23 01:03:12.951110 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 23 01:03:12.951117 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 01:03:12.951126 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jan 23 01:03:12.951132 kernel: Using GB pages for direct mapping
Jan 23 01:03:12.951139 kernel: ACPI: Early table checksum verification disabled
Jan 23 01:03:12.951145 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Jan 23 01:03:12.951152 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:03:12.951158 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:03:12.951165 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:03:12.951172 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 23 01:03:12.951178 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:03:12.951188 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:03:12.951198 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:03:12.951205 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:03:12.951212 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Jan 23 01:03:12.951219 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Jan 23 01:03:12.951228 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 23 01:03:12.951235 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Jan 23 01:03:12.951242 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Jan 23 01:03:12.951249 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Jan 23 01:03:12.951256 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Jan 23 01:03:12.951262 kernel: No NUMA configuration found
Jan 23 01:03:12.951269 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Jan 23 01:03:12.951276 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Jan 23 01:03:12.951283 kernel: Zone ranges:
Jan 23 01:03:12.951292 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 01:03:12.951299 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 23 01:03:12.951306 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Jan 23 01:03:12.956396 kernel: Device empty
Jan 23 01:03:12.956406 kernel: Movable zone start for each node
Jan 23 01:03:12.956414 kernel: Early memory node ranges
Jan 23 01:03:12.956422 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 23 01:03:12.956429 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Jan 23 01:03:12.956436 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Jan 23 01:03:12.956443 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Jan 23 01:03:12.956456 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 01:03:12.956463 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 23 01:03:12.956471 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jan 23 01:03:12.956478 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 01:03:12.956485 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 01:03:12.956493 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 01:03:12.956500 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 01:03:12.956507 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 01:03:12.956514 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 01:03:12.956523 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 01:03:12.956531 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 01:03:12.956538 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 01:03:12.956545 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 23 01:03:12.956552 kernel: TSC deadline timer available
Jan 23 01:03:12.956559 kernel: CPU topo: Max. logical packages: 1
Jan 23 01:03:12.956567 kernel: CPU topo: Max. logical dies: 1
Jan 23 01:03:12.956574 kernel: CPU topo: Max. dies per package: 1
Jan 23 01:03:12.956580 kernel: CPU topo: Max. threads per core: 1
Jan 23 01:03:12.956590 kernel: CPU topo: Num. cores per package: 2
Jan 23 01:03:12.956598 kernel: CPU topo: Num. threads per package: 2
Jan 23 01:03:12.956605 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 23 01:03:12.956612 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 01:03:12.956619 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 23 01:03:12.956626 kernel: kvm-guest: setup PV sched yield
Jan 23 01:03:12.956633 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 23 01:03:12.956640 kernel: Booting paravirtualized kernel on KVM
Jan 23 01:03:12.956647 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 01:03:12.956657 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 23 01:03:12.956665 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 23 01:03:12.956672 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 23 01:03:12.956679 kernel: pcpu-alloc: [0] 0 1
Jan 23 01:03:12.956686 kernel: kvm-guest: PV spinlocks enabled
Jan 23 01:03:12.956694 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 01:03:12.956702 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:03:12.956710 kernel: random: crng init done
Jan 23 01:03:12.956720 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 01:03:12.956727 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 01:03:12.956734 kernel: Fallback order for Node 0: 0
Jan 23 01:03:12.956742 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Jan 23 01:03:12.956749 kernel: Policy zone: Normal
Jan 23 01:03:12.956756 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 01:03:12.956763 kernel: software IO TLB: area num 2.
Jan 23 01:03:12.956771 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 01:03:12.956778 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 01:03:12.956788 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 01:03:12.956795 kernel: Dynamic Preempt: voluntary
Jan 23 01:03:12.956801 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 01:03:12.956809 kernel: rcu: RCU event tracing is enabled.
Jan 23 01:03:12.956816 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 01:03:12.956824 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 01:03:12.956831 kernel: Rude variant of Tasks RCU enabled.
Jan 23 01:03:12.956839 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 01:03:12.956846 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 01:03:12.956853 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 01:03:12.956863 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:03:12.956878 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:03:12.956888 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:03:12.956896 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 23 01:03:12.956903 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 01:03:12.956911 kernel: Console: colour VGA+ 80x25
Jan 23 01:03:12.956918 kernel: printk: legacy console [tty0] enabled
Jan 23 01:03:12.956925 kernel: printk: legacy console [ttyS0] enabled
Jan 23 01:03:12.956933 kernel: ACPI: Core revision 20240827
Jan 23 01:03:12.956943 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 23 01:03:12.956951 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 01:03:12.956958 kernel: x2apic enabled
Jan 23 01:03:12.956966 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 01:03:12.956974 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 23 01:03:12.956981 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 23 01:03:12.956989 kernel: kvm-guest: setup PV IPIs
Jan 23 01:03:12.956999 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 23 01:03:12.957007 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Jan 23 01:03:12.957015 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Jan 23 01:03:12.957022 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 01:03:12.957029 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 23 01:03:12.957037 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 23 01:03:12.957045 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 01:03:12.957052 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 01:03:12.957059 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 01:03:12.957070 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 23 01:03:12.957077 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 23 01:03:12.957085 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 23 01:03:12.957092 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 23 01:03:12.957100 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 23 01:03:12.957108 kernel: active return thunk: srso_alias_return_thunk
Jan 23 01:03:12.957115 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 23 01:03:12.957123 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 23 01:03:12.957133 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 01:03:12.957140 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 01:03:12.957147 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 01:03:12.957155 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 01:03:12.957162 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 23 01:03:12.957169 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 01:03:12.957177 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Jan 23 01:03:12.957184 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Jan 23 01:03:12.957192 kernel: Freeing SMP alternatives memory: 32K
Jan 23 01:03:12.957201 kernel: pid_max: default: 32768 minimum: 301
Jan 23 01:03:12.957209 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 01:03:12.957216 kernel: landlock: Up and running.
Jan 23 01:03:12.957223 kernel: SELinux: Initializing.
Jan 23 01:03:12.957231 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 01:03:12.957238 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 01:03:12.957246 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 23 01:03:12.957253 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 23 01:03:12.957261 kernel: ... version: 0
Jan 23 01:03:12.957270 kernel: ... bit width: 48
Jan 23 01:03:12.957277 kernel: ... generic registers: 6
Jan 23 01:03:12.957285 kernel: ... value mask: 0000ffffffffffff
Jan 23 01:03:12.957293 kernel: ... max period: 00007fffffffffff
Jan 23 01:03:12.957300 kernel: ... fixed-purpose events: 0
Jan 23 01:03:12.957324 kernel: ... event mask: 000000000000003f
Jan 23 01:03:12.957332 kernel: signal: max sigframe size: 3376
Jan 23 01:03:12.957339 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 01:03:12.957347 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 01:03:12.957357 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 01:03:12.957364 kernel: smp: Bringing up secondary CPUs ...
Jan 23 01:03:12.957372 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 01:03:12.957379 kernel: .... node #0, CPUs: #1
Jan 23 01:03:12.957386 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 01:03:12.957394 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Jan 23 01:03:12.957403 kernel: Memory: 3953616K/4193772K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 235480K reserved, 0K cma-reserved)
Jan 23 01:03:12.957410 kernel: devtmpfs: initialized
Jan 23 01:03:12.957418 kernel: x86/mm: Memory block size: 128MB
Jan 23 01:03:12.957428 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 01:03:12.957435 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 01:03:12.957442 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 01:03:12.957450 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 01:03:12.957457 kernel: audit: initializing netlink subsys (disabled)
Jan 23 01:03:12.957464 kernel: audit: type=2000 audit(1769130190.157:1): state=initialized audit_enabled=0 res=1
Jan 23 01:03:12.957471 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 01:03:12.957483 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 01:03:12.957491 kernel: cpuidle: using governor menu
Jan 23 01:03:12.957500 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 01:03:12.957511 kernel: dca service started, version 1.12.1
Jan 23 01:03:12.957519 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 23 01:03:12.957526 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 23 01:03:12.957534 kernel: PCI: Using configuration type 1 for base access
Jan 23 01:03:12.957541 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 01:03:12.957548 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 01:03:12.957556 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 01:03:12.957563 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 01:03:12.957573 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 01:03:12.957580 kernel: ACPI: Added _OSI(Module Device)
Jan 23 01:03:12.957587 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 01:03:12.957594 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 01:03:12.957602 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 01:03:12.957609 kernel: ACPI: Interpreter enabled
Jan 23 01:03:12.957616 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 23 01:03:12.957623 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 01:03:12.957630 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 01:03:12.957640 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 01:03:12.957647 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 23 01:03:12.957654 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 01:03:12.957863 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 01:03:12.957998 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 23 01:03:12.958122 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 23 01:03:12.958133 kernel: PCI host bridge to bus 0000:00
Jan 23 01:03:12.958272 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 01:03:12.958431 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 01:03:12.958547 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 01:03:12.958658 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 23 01:03:12.958768 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 23 01:03:12.958878 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Jan 23 01:03:12.959079 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 01:03:12.959235 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 23 01:03:12.959450 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 23 01:03:12.959579 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 23 01:03:12.959702 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 23 01:03:12.959823 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 23 01:03:12.959943 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 01:03:12.960082 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Jan 23 01:03:12.960205 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Jan 23 01:03:12.960374 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 23 01:03:12.960501 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 23 01:03:12.960635 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 01:03:12.960757 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Jan 23 01:03:12.960878 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 23 01:03:12.961005 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 23 01:03:12.961126 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 23 01:03:12.961306 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 23 01:03:12.961455 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 23 01:03:12.961589 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 23 01:03:12.961712 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Jan 23 01:03:12.961832 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Jan 23 01:03:12.961968 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 23 01:03:12.962091 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 23 01:03:12.962101 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 01:03:12.962109 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 01:03:12.962116 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 01:03:12.962124 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 01:03:12.962131 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 23 01:03:12.962142 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 23 01:03:12.962150 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 23 01:03:12.962157 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 23 01:03:12.962165 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 23 01:03:12.962172 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 23 01:03:12.962180 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 23 01:03:12.962187 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 23 01:03:12.962194 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 23 01:03:12.962202 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 23 01:03:12.962211 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 23 01:03:12.962218 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 23 01:03:12.962225 kernel: iommu: Default domain type: Translated
Jan 23 01:03:12.962269 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 01:03:12.962277 kernel: PCI: Using ACPI for IRQ routing
Jan 23 01:03:12.962284 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 01:03:12.962291 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Jan 23 01:03:12.962298 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Jan 23 01:03:12.964480 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 23 01:03:12.964617 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 23 01:03:12.964740 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 01:03:12.964750 kernel: vgaarb: loaded
Jan 23 01:03:12.964758 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 23 01:03:12.964765 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 23 01:03:12.964773 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 01:03:12.964781 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 01:03:12.964788 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 01:03:12.964796 kernel: pnp: PnP ACPI init
Jan 23 01:03:12.964937 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 23 01:03:12.964949 kernel: pnp: PnP ACPI: found 5 devices
Jan 23 01:03:12.964957 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 01:03:12.964964 kernel: NET: Registered PF_INET protocol family
Jan 23 01:03:12.964972 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 01:03:12.964979 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 01:03:12.964987 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 01:03:12.964994 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 01:03:12.965005 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 01:03:12.965013 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 01:03:12.965020 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 01:03:12.965028 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 01:03:12.965035 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 01:03:12.965042 kernel: NET: Registered PF_XDP protocol family
Jan 23 01:03:12.965169 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 01:03:12.965282 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 01:03:12.965417 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 01:03:12.965536 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 23 01:03:12.965646 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 23 01:03:12.965757 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Jan 23 01:03:12.965767 kernel: PCI: CLS 0 bytes, default 64
Jan 23 01:03:12.965775 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 23 01:03:12.965782 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Jan 23 01:03:12.965789 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Jan 23 01:03:12.965797 kernel: Initialise system trusted keyrings
Jan 23 01:03:12.965808 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 01:03:12.965816 kernel: Key type asymmetric registered
Jan 23 01:03:12.965823 kernel: Asymmetric key parser 'x509' registered
Jan 23 01:03:12.965830 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 01:03:12.965838 kernel: io scheduler mq-deadline registered
Jan 23 01:03:12.965845 kernel: io scheduler kyber registered
Jan 23 01:03:12.965853 kernel: io scheduler bfq registered
Jan 23 01:03:12.965860 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 01:03:12.965868 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 23 01:03:12.965878 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 23 01:03:12.965886 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 01:03:12.965893 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 01:03:12.965901 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 01:03:12.965908 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 01:03:12.965915 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 01:03:12.965923 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 23 01:03:12.966059 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 23 01:03:12.966183 kernel: rtc_cmos 00:03: registered as rtc0
Jan 23 01:03:12.966298 kernel: rtc_cmos 00:03: setting system clock to 2026-01-23T01:03:12 UTC (1769130192)
Jan 23 01:03:12.968458 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 23 01:03:12.968472 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 23 01:03:12.968480 kernel: NET: Registered PF_INET6 protocol family
Jan 23 01:03:12.968488 kernel: Segment Routing with IPv6
Jan 23 01:03:12.968495 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 01:03:12.968503 kernel: NET: Registered PF_PACKET protocol family
Jan 23 01:03:12.968510 kernel: Key type dns_resolver registered
Jan 23 01:03:12.968522 kernel: IPI shorthand broadcast: enabled
Jan 23 01:03:12.968530 kernel: sched_clock: Marking stable (2909004009, 347670041)->(3349383985, -92709935)
Jan 23 01:03:12.968537 kernel: registered taskstats version 1
Jan 23 01:03:12.968544 kernel: Loading compiled-in X.509 certificates
Jan 23 01:03:12.968552 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed54f39d0282729985c39b8ffa9938cacff38d8a'
Jan 23 01:03:12.968560 kernel: Demotion targets for Node 0: null
Jan 23 01:03:12.968568 kernel: Key type .fscrypt registered
Jan 23 01:03:12.968575 kernel: Key type fscrypt-provisioning registered
Jan 23 01:03:12.968583 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 01:03:12.968593 kernel: ima: Allocated hash algorithm: sha1
Jan 23 01:03:12.968601 kernel: ima: No architecture policies found
Jan 23 01:03:12.968608 kernel: clk: Disabling unused clocks
Jan 23 01:03:12.968615 kernel: Warning: unable to open an initial console.
Jan 23 01:03:12.968623 kernel: Freeing unused kernel image (initmem) memory: 46196K
Jan 23 01:03:12.968631 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 01:03:12.968638 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 01:03:12.968646 kernel: Run /init as init process
Jan 23 01:03:12.968653 kernel: with arguments:
Jan 23 01:03:12.968664 kernel: /init
Jan 23 01:03:12.968671 kernel: with environment:
Jan 23 01:03:12.968698 kernel: HOME=/
Jan 23 01:03:12.968708 kernel: TERM=linux
Jan 23 01:03:12.968717 systemd[1]: Successfully made /usr/ read-only.
Jan 23 01:03:12.968728 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 01:03:12.968737 systemd[1]: Detected virtualization kvm.
Jan 23 01:03:12.968747 systemd[1]: Detected architecture x86-64.
Jan 23 01:03:12.968755 systemd[1]: Running in initrd.
Jan 23 01:03:12.968763 systemd[1]: No hostname configured, using default hostname.
Jan 23 01:03:12.968771 systemd[1]: Hostname set to <localhost>.
Jan 23 01:03:12.968779 systemd[1]: Initializing machine ID from random generator.
Jan 23 01:03:12.968787 systemd[1]: Queued start job for default target initrd.target.
Jan 23 01:03:12.968795 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 01:03:12.968803 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 01:03:12.968814 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 01:03:12.968823 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 01:03:12.968830 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 01:03:12.968839 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 01:03:12.968848 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 01:03:12.968856 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 01:03:12.968864 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 01:03:12.968875 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 01:03:12.968883 systemd[1]: Reached target paths.target - Path Units.
Jan 23 01:03:12.968891 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 01:03:12.968898 systemd[1]: Reached target swap.target - Swaps.
Jan 23 01:03:12.968906 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 01:03:12.968931 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 01:03:12.968943 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 01:03:12.968951 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 01:03:12.968959 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 01:03:12.968970 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 01:03:12.968979 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 01:03:12.968990 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 01:03:12.968998 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 01:03:12.969006 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 01:03:12.969017 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 01:03:12.969025 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 01:03:12.969034 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 01:03:12.969042 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 01:03:12.969050 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 01:03:12.969057 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 01:03:12.969065 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:03:12.969073 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 01:03:12.969084 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 01:03:12.969116 systemd-journald[187]: Collecting audit messages is disabled.
Jan 23 01:03:12.969139 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 01:03:12.969148 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 01:03:12.969157 systemd-journald[187]: Journal started
Jan 23 01:03:12.969174 systemd-journald[187]: Runtime Journal (/run/log/journal/437fed93bf9f4b2caaff284c92dd731c) is 8M, max 78.2M, 70.2M free.
Jan 23 01:03:12.942469 systemd-modules-load[188]: Inserted module 'overlay'
Jan 23 01:03:12.974708 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 01:03:12.984140 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 01:03:13.094607 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 01:03:13.094661 kernel: Bridge firewalling registered
Jan 23 01:03:13.008286 systemd-modules-load[188]: Inserted module 'br_netfilter'
Jan 23 01:03:13.096088 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 01:03:13.101454 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:03:13.103192 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 01:03:13.108467 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 01:03:13.110251 systemd-tmpfiles[201]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 01:03:13.114927 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 01:03:13.122061 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 01:03:13.126043 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 01:03:13.136861 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 01:03:13.143607 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 01:03:13.147694 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 01:03:13.149767 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 01:03:13.155433 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 01:03:13.182889 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:03:13.192997 systemd-resolved[222]: Positive Trust Anchors:
Jan 23 01:03:13.193897 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 01:03:13.193925 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 01:03:13.197619 systemd-resolved[222]: Defaulting to hostname 'linux'.
Jan 23 01:03:13.201272 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 01:03:13.202487 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 01:03:13.295375 kernel: SCSI subsystem initialized
Jan 23 01:03:13.304405 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 01:03:13.316367 kernel: iscsi: registered transport (tcp)
Jan 23 01:03:13.337934 kernel: iscsi: registered transport (qla4xxx)
Jan 23 01:03:13.337987 kernel: QLogic iSCSI HBA Driver
Jan 23 01:03:13.363129 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 01:03:13.382750 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 01:03:13.386669 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 01:03:13.440749 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 01:03:13.443267 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 01:03:13.499352 kernel: raid6: avx2x4 gen() 31770 MB/s
Jan 23 01:03:13.517334 kernel: raid6: avx2x2 gen() 31431 MB/s
Jan 23 01:03:13.535630 kernel: raid6: avx2x1 gen() 23274 MB/s
Jan 23 01:03:13.535653 kernel: raid6: using algorithm avx2x4 gen() 31770 MB/s
Jan 23 01:03:13.557186 kernel: raid6: .... xor() 4706 MB/s, rmw enabled
Jan 23 01:03:13.557211 kernel: raid6: using avx2x2 recovery algorithm
Jan 23 01:03:13.577343 kernel: xor: automatically using best checksumming function avx
Jan 23 01:03:13.718367 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 01:03:13.728137 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 01:03:13.731733 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 01:03:13.755277 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Jan 23 01:03:13.761266 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 01:03:13.763451 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 01:03:13.784917 dracut-pre-trigger[437]: rd.md=0: removing MD RAID activation
Jan 23 01:03:13.811241 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 01:03:13.813878 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 01:03:13.886963 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 01:03:13.891204 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 01:03:13.979344 kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 01:03:13.989149 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 23 01:03:13.995458 kernel: libata version 3.00 loaded.
Jan 23 01:03:13.995428 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 01:03:13.995572 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:03:14.002371 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Jan 23 01:03:13.997397 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:03:14.005511 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:03:14.243661 kernel: AES CTR mode by8 optimization enabled
Jan 23 01:03:14.243688 kernel: scsi host0: Virtio SCSI HBA
Jan 23 01:03:14.243897 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 23 01:03:14.244506 kernel: ahci 0000:00:1f.2: version 3.0
Jan 23 01:03:14.244686 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 23 01:03:14.244705 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 23 01:03:14.244853 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 23 01:03:14.244999 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 23 01:03:14.245148 kernel: scsi host1: ahci
Jan 23 01:03:14.245445 kernel: scsi host2: ahci
Jan 23 01:03:14.250146 kernel: scsi host3: ahci
Jan 23 01:03:14.244439 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 01:03:14.255335 kernel: scsi host4: ahci
Jan 23 01:03:14.263336 kernel: scsi host5: ahci
Jan 23 01:03:14.267240 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jan 23 01:03:14.271825 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Jan 23 01:03:14.272015 kernel: scsi host6: ahci
Jan 23 01:03:14.272176 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 23 01:03:14.272372 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 1
Jan 23 01:03:14.272392 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jan 23 01:03:14.275782 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 23 01:03:14.275961 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 1
Jan 23 01:03:14.284535 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 1
Jan 23 01:03:14.287830 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 1
Jan 23 01:03:14.287854 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 1
Jan 23 01:03:14.287873 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 1
Jan 23 01:03:14.292341 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 01:03:14.292363 kernel: GPT:9289727 != 167739391
Jan 23 01:03:14.292375 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 01:03:14.292385 kernel: GPT:9289727 != 167739391
Jan 23 01:03:14.292394 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 01:03:14.292404 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 01:03:14.294339 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 23 01:03:14.418894 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:03:14.601993 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 23 01:03:14.602099 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 23 01:03:14.602123 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 23 01:03:14.602738 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 23 01:03:14.605361 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 23 01:03:14.607345 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 23 01:03:14.676788 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 23 01:03:14.686540 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 23 01:03:14.695546 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 23 01:03:14.704723 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 23 01:03:14.706433 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 23 01:03:14.707676 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 01:03:14.711695 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 01:03:14.712580 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 01:03:14.714266 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 01:03:14.716905 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 01:03:14.720477 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 01:03:14.733059 disk-uuid[612]: Primary Header is updated.
Jan 23 01:03:14.733059 disk-uuid[612]: Secondary Entries is updated.
Jan 23 01:03:14.733059 disk-uuid[612]: Secondary Header is updated.
Jan 23 01:03:14.743464 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 01:03:14.745808 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 01:03:14.760347 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 01:03:14.832401 (udev-worker)[482]: sda9: Failed to create/update device symlink '/dev/disk/by-path/pci-0000:00:02.0-scsi-0:0:0:0-part/by-label/ROOT', ignoring: No such file or directory
Jan 23 01:03:15.767124 disk-uuid[613]: The operation has completed successfully.
Jan 23 01:03:15.769444 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 01:03:15.816330 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 01:03:15.816481 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 01:03:15.849075 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 01:03:15.863213 sh[634]: Success
Jan 23 01:03:15.887446 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 01:03:15.887485 kernel: device-mapper: uevent: version 1.0.3
Jan 23 01:03:15.891209 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 01:03:15.901339 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 23 01:03:15.944090 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 01:03:15.948402 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 01:03:15.964065 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 01:03:15.977345 kernel: BTRFS: device fsid f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (646)
Jan 23 01:03:15.977387 kernel: BTRFS info (device dm-0): first mount of filesystem f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4
Jan 23 01:03:15.980899 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:03:15.993442 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 23 01:03:15.993476 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 01:03:15.997519 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 01:03:15.998909 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 01:03:16.000111 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 01:03:16.001301 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 01:03:16.002106 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 01:03:16.005424 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 01:03:16.038546 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (679)
Jan 23 01:03:16.038581 kernel: BTRFS info (device sda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:03:16.041760 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:03:16.051765 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 23 01:03:16.051790 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 01:03:16.051802 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 01:03:16.061375 kernel: BTRFS info (device sda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:03:16.063875 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 01:03:16.065667 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 01:03:16.169784 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 01:03:16.174459 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 01:03:16.185911 ignition[746]: Ignition 2.22.0
Jan 23 01:03:16.185929 ignition[746]: Stage: fetch-offline
Jan 23 01:03:16.185964 ignition[746]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:03:16.185975 ignition[746]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 01:03:16.186253 ignition[746]: parsed url from cmdline: ""
Jan 23 01:03:16.192063 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 01:03:16.186257 ignition[746]: no config URL provided
Jan 23 01:03:16.186263 ignition[746]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 01:03:16.186272 ignition[746]: no config at "/usr/lib/ignition/user.ign"
Jan 23 01:03:16.186277 ignition[746]: failed to fetch config: resource requires networking
Jan 23 01:03:16.186545 ignition[746]: Ignition finished successfully
Jan 23 01:03:16.220061 systemd-networkd[820]: lo: Link UP
Jan 23 01:03:16.220076 systemd-networkd[820]: lo: Gained carrier
Jan 23 01:03:16.221719 systemd-networkd[820]: Enumeration completed
Jan 23 01:03:16.221794 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 01:03:16.222868 systemd-networkd[820]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 01:03:16.222873 systemd-networkd[820]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 01:03:16.224070 systemd[1]: Reached target network.target - Network.
Jan 23 01:03:16.226669 systemd-networkd[820]: eth0: Link UP
Jan 23 01:03:16.226833 systemd-networkd[820]: eth0: Gained carrier
Jan 23 01:03:16.226857 systemd-networkd[820]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 01:03:16.229196 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 01:03:16.263487 ignition[825]: Ignition 2.22.0
Jan 23 01:03:16.263503 ignition[825]: Stage: fetch
Jan 23 01:03:16.263628 ignition[825]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:03:16.263641 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 01:03:16.263729 ignition[825]: parsed url from cmdline: ""
Jan 23 01:03:16.263733 ignition[825]: no config URL provided
Jan 23 01:03:16.263740 ignition[825]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 01:03:16.263750 ignition[825]: no config at "/usr/lib/ignition/user.ign"
Jan 23 01:03:16.263779 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #1
Jan 23 01:03:16.264119 ignition[825]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 23 01:03:16.465014 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #2
Jan 23 01:03:16.465191 ignition[825]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 23 01:03:16.865547 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #3
Jan 23 01:03:16.865739 ignition[825]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 23 01:03:16.985404 systemd-networkd[820]: eth0: DHCPv4 address 172.239.192.168/24, gateway 172.239.192.1 acquired from 23.205.167.127
Jan 23 01:03:17.666506 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #4
Jan 23 01:03:17.756590 systemd-networkd[820]: eth0: Gained IPv6LL
Jan 23 01:03:17.763121 ignition[825]: PUT result: OK
Jan 23 01:03:17.763189 ignition[825]: GET http://169.254.169.254/v1/user-data: attempt #1
Jan 23 01:03:17.872493 ignition[825]: GET result: OK
Jan 23 01:03:17.872655 ignition[825]: parsing config with SHA512: 90451c0fa1bfed4af05e83d2e2b554f30596eccb2db8b1e66150a6ed1a5cc82210372f04cdf9edc0c89212bfda96892da21a457db0487049138c6c16aff536e5
Jan 23 01:03:17.878556 unknown[825]: fetched base config from "system"
Jan 23 01:03:17.878833 ignition[825]: fetch: fetch complete
Jan 23 01:03:17.878566 unknown[825]: fetched base config from "system"
Jan 23 01:03:17.878839 ignition[825]: fetch: fetch passed
Jan 23 01:03:17.878572 unknown[825]: fetched user config from "akamai"
Jan 23 01:03:17.878884 ignition[825]: Ignition finished successfully
Jan 23 01:03:17.881864 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 01:03:17.884472 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 01:03:17.919759 ignition[832]: Ignition 2.22.0
Jan 23 01:03:17.919776 ignition[832]: Stage: kargs
Jan 23 01:03:17.919903 ignition[832]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:03:17.919914 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 01:03:17.923669 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 01:03:17.920606 ignition[832]: kargs: kargs passed
Jan 23 01:03:17.920648 ignition[832]: Ignition finished successfully
Jan 23 01:03:17.926971 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 01:03:17.952742 ignition[839]: Ignition 2.22.0
Jan 23 01:03:17.952754 ignition[839]: Stage: disks
Jan 23 01:03:17.952890 ignition[839]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:03:17.952902 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 01:03:17.953920 ignition[839]: disks: disks passed
Jan 23 01:03:17.955612 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 01:03:17.953968 ignition[839]: Ignition finished successfully
Jan 23 01:03:17.957442 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 01:03:17.958480 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 01:03:17.959969 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 01:03:17.961303 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 01:03:17.962873 systemd[1]: Reached target basic.target - Basic System.
Jan 23 01:03:17.965542 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 01:03:17.987613 systemd-fsck[847]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jan 23 01:03:17.990522 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 01:03:17.994750 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 01:03:18.100355 kernel: EXT4-fs (sda9): mounted filesystem 2036722e-4586-420e-8dc7-a3b65e840c36 r/w with ordered data mode. Quota mode: none.
Jan 23 01:03:18.101329 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 01:03:18.103528 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 01:03:18.105894 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 01:03:18.109389 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 01:03:18.112823 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 01:03:18.112886 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 01:03:18.112918 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 01:03:18.123899 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 01:03:18.126461 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 01:03:18.129006 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (855)
Jan 23 01:03:18.138482 kernel: BTRFS info (device sda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:03:18.138516 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:03:18.144335 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 23 01:03:18.144365 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 01:03:18.147714 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 01:03:18.149495 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 01:03:18.190367 initrd-setup-root[879]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 01:03:18.195389 initrd-setup-root[886]: cut: /sysroot/etc/group: No such file or directory
Jan 23 01:03:18.200682 initrd-setup-root[893]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 01:03:18.204841 initrd-setup-root[900]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 01:03:18.294520 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 01:03:18.296901 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 01:03:18.299135 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 01:03:18.319740 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 01:03:18.324355 kernel: BTRFS info (device sda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:03:18.337347 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 01:03:18.353684 ignition[968]: INFO : Ignition 2.22.0
Jan 23 01:03:18.353684 ignition[968]: INFO : Stage: mount
Jan 23 01:03:18.356011 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 01:03:18.356011 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 01:03:18.356011 ignition[968]: INFO : mount: mount passed
Jan 23 01:03:18.356011 ignition[968]: INFO : Ignition finished successfully
Jan 23 01:03:18.356150 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 01:03:18.360395 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 01:03:19.103257 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 01:03:19.139394 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (980)
Jan 23 01:03:19.146535 kernel: BTRFS info (device sda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:03:19.146587 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:03:19.151665 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 23 01:03:19.151688 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 01:03:19.155926 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 01:03:19.158386 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 01:03:19.199536 ignition[996]: INFO : Ignition 2.22.0
Jan 23 01:03:19.199536 ignition[996]: INFO : Stage: files
Jan 23 01:03:19.202030 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 01:03:19.202030 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 01:03:19.202030 ignition[996]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 01:03:19.202030 ignition[996]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 01:03:19.206344 ignition[996]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 01:03:19.206344 ignition[996]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 01:03:19.206344 ignition[996]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 01:03:19.206344 ignition[996]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 01:03:19.206054 unknown[996]: wrote ssh authorized keys file for user: core
Jan 23 01:03:19.212122 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 23 01:03:19.212122 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 23 01:03:19.418696 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 01:03:19.501001 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 23 01:03:19.502710 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 01:03:19.502710 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 01:03:19.502710 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 01:03:19.502710 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 01:03:19.502710 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 01:03:19.502710 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 01:03:19.502710 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 01:03:19.502710 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 01:03:19.532812 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 01:03:19.532812 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 01:03:19.532812 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 01:03:19.532812 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 01:03:19.532812 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 01:03:19.532812 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 23 01:03:19.980365 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 23 01:03:20.589574 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 01:03:20.589574 ignition[996]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 23 01:03:20.592980 ignition[996]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 01:03:20.592980 ignition[996]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 01:03:20.592980 ignition[996]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 23 01:03:20.592980 ignition[996]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 23 01:03:20.592980 ignition[996]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 23 01:03:20.602048 ignition[996]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 23 01:03:20.602048 ignition[996]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 23 01:03:20.602048 ignition[996]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 01:03:20.602048 ignition[996]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 01:03:20.602048 ignition[996]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 01:03:20.602048 ignition[996]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 01:03:20.602048 ignition[996]: INFO : files: files passed
Jan 23 01:03:20.602048 ignition[996]: INFO : Ignition finished successfully
Jan 23 01:03:20.598826 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 01:03:20.603462 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 01:03:20.607519 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 01:03:20.612260 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 01:03:20.612414 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 01:03:20.628953 initrd-setup-root-after-ignition[1026]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 01:03:20.630242 initrd-setup-root-after-ignition[1026]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf
Jan 23 01:03:20.631414 initrd-setup-root-after-ignition[1030]: grep:
Jan 23 01:03:20.632391 initrd-setup-root-after-ignition[1026]: : No such file or directory
Jan 23 01:03:20.632391 initrd-setup-root-after-ignition[1030]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 01:03:20.634977 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 01:03:20.636493 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 01:03:20.638636 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 01:03:20.698391 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 01:03:20.698545 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 01:03:20.700579 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 01:03:20.701913 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 01:03:20.703613 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 01:03:20.704383 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 01:03:20.727657 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 01:03:20.730494 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 01:03:20.751167 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 01:03:20.752381 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 01:03:20.754076 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 01:03:20.755695 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 01:03:20.755841 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 01:03:20.757602 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 01:03:20.758711 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 01:03:20.760342 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 01:03:20.761823 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 01:03:20.763329 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 01:03:20.764983 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 01:03:20.766652 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 01:03:20.768288 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 01:03:20.769999 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 01:03:20.771587 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 23 01:03:20.773195 systemd[1]: Stopped target swap.target - Swaps.
Jan 23 01:03:20.774792 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 01:03:20.774932 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 01:03:20.776701 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 23 01:03:20.777781 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 01:03:20.779269 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 23 01:03:20.779685 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 01:03:20.780919 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 01:03:20.781015 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 23 01:03:20.783654 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 23 01:03:20.783836 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 01:03:20.785294 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 23 01:03:20.785413 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 23 01:03:20.788417 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 23 01:03:20.791808 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 23 01:03:20.793551 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 01:03:20.793708 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 01:03:20.795758 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 01:03:20.795896 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 01:03:20.832926 ignition[1050]: INFO : Ignition 2.22.0
Jan 23 01:03:20.832926 ignition[1050]: INFO : Stage: umount
Jan 23 01:03:20.832926 ignition[1050]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 01:03:20.832926 ignition[1050]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 01:03:20.832926 ignition[1050]: INFO : umount: umount passed
Jan 23 01:03:20.832926 ignition[1050]: INFO : Ignition finished successfully
Jan 23 01:03:20.807476 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 01:03:20.807585 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 23 01:03:20.831895 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 23 01:03:20.832004 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 23 01:03:20.838161 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 23 01:03:20.838213 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 23 01:03:20.840131 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 23 01:03:20.840183 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 23 01:03:20.840897 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 23 01:03:20.840952 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 23 01:03:20.843413 systemd[1]: Stopped target network.target - Network.
Jan 23 01:03:20.846355 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 23 01:03:20.846411 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 01:03:20.849634 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 01:03:20.851204 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 01:03:20.851257 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 01:03:20.852644 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 01:03:20.854020 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 01:03:20.855664 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 01:03:20.855708 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 01:03:20.857085 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 01:03:20.857136 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 01:03:20.858545 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 01:03:20.858598 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 01:03:20.859985 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 01:03:20.860033 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 01:03:20.861609 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 01:03:20.863067 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 01:03:20.865887 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 23 01:03:20.866545 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 23 01:03:20.866650 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 23 01:03:20.869515 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 01:03:20.869603 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 01:03:20.871496 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 01:03:20.871647 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 01:03:20.875748 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 23 01:03:20.876169 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 01:03:20.876365 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 01:03:20.879140 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 23 01:03:20.879973 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 23 01:03:20.881084 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 01:03:20.881129 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 01:03:20.884416 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 01:03:20.886475 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 23 01:03:20.886530 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 01:03:20.887286 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 01:03:20.887353 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 01:03:20.889440 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 01:03:20.889492 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 01:03:20.890351 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 01:03:20.890428 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 01:03:20.892428 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 01:03:20.895834 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 01:03:20.895899 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 23 01:03:20.907251 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 01:03:20.907458 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 01:03:20.908814 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 01:03:20.909009 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 01:03:20.910905 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 01:03:20.910973 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 01:03:20.912077 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 01:03:20.912116 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 01:03:20.913626 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 01:03:20.913675 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 01:03:20.915875 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 01:03:20.915940 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 01:03:20.917521 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 01:03:20.917573 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 01:03:20.921419 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 01:03:20.923031 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 23 01:03:20.923085 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 01:03:20.925396 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 01:03:20.925449 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 01:03:20.928533 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 23 01:03:20.928584 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 01:03:20.930925 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 01:03:20.930972 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 01:03:20.932711 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 01:03:20.932763 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:03:20.936507 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jan 23 01:03:20.936563 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jan 23 01:03:20.936607 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 23 01:03:20.936656 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 01:03:20.938624 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 01:03:20.938727 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 01:03:20.940080 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 01:03:20.942015 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 01:03:20.956795 systemd[1]: Switching root.
Jan 23 01:03:21.006278 systemd-journald[187]: Journal stopped
Jan 23 01:03:22.195732 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Jan 23 01:03:22.195763 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 01:03:22.195776 kernel: SELinux: policy capability open_perms=1
Jan 23 01:03:22.195786 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 01:03:22.195794 kernel: SELinux: policy capability always_check_network=0
Jan 23 01:03:22.195806 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 01:03:22.195816 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 01:03:22.195825 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 01:03:22.195835 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 01:03:22.195844 kernel: SELinux: policy capability userspace_initial_context=0
Jan 23 01:03:22.195854 kernel: audit: type=1403 audit(1769130201.156:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 01:03:22.195864 systemd[1]: Successfully loaded SELinux policy in 65.993ms.
Jan 23 01:03:22.195878 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.559ms.
Jan 23 01:03:22.195889 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 01:03:22.195900 systemd[1]: Detected virtualization kvm.
Jan 23 01:03:22.195910 systemd[1]: Detected architecture x86-64.
Jan 23 01:03:22.195924 systemd[1]: Detected first boot.
Jan 23 01:03:22.195935 systemd[1]: Initializing machine ID from random generator.
Jan 23 01:03:22.195945 zram_generator::config[1093]: No configuration found.
Jan 23 01:03:22.195956 kernel: Guest personality initialized and is inactive
Jan 23 01:03:22.195966 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 23 01:03:22.195975 kernel: Initialized host personality
Jan 23 01:03:22.195985 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 01:03:22.195995 systemd[1]: Populated /etc with preset unit settings.
Jan 23 01:03:22.196008 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 23 01:03:22.196018 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 01:03:22.196028 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 01:03:22.196038 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 01:03:22.196048 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 01:03:22.196059 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 01:03:22.196069 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 01:03:22.196081 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 01:03:22.196092 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 01:03:22.196102 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 01:03:22.196112 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 01:03:22.196122 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 01:03:22.196133 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 01:03:22.196144 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 01:03:22.196154 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 01:03:22.196167 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 01:03:22.196180 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 01:03:22.196191 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 01:03:22.196201 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 23 01:03:22.196212 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 01:03:22.196222 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 01:03:22.196232 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 01:03:22.196245 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 01:03:22.196255 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 01:03:22.196266 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 01:03:22.196276 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 01:03:22.196286 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 01:03:22.196297 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 01:03:22.196325 systemd[1]: Reached target swap.target - Swaps.
Jan 23 01:03:22.196337 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 01:03:22.196348 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 01:03:22.196361 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 01:03:22.196372 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 01:03:22.196383 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 01:03:22.196394 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 01:03:22.196407 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 01:03:22.196418 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 01:03:22.196428 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 01:03:22.196439 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 01:03:22.196449 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:03:22.196459 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 01:03:22.196470 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 01:03:22.196480 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 01:03:22.196493 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 01:03:22.196504 systemd[1]: Reached target machines.target - Containers.
Jan 23 01:03:22.196514 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 01:03:22.196525 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 01:03:22.196535 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 01:03:22.196546 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 01:03:22.196556 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 01:03:22.196567 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 01:03:22.196577 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 01:03:22.196590 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 01:03:22.196601 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 01:03:22.196611 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 01:03:22.196623 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 01:03:22.196633 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 01:03:22.196644 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 01:03:22.196654 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 01:03:22.196665 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 01:03:22.196678 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 01:03:22.196689 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 01:03:22.196699 kernel: fuse: init (API version 7.41)
Jan 23 01:03:22.196709 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 01:03:22.196720 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 01:03:22.196730 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 01:03:22.196740 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 01:03:22.196772 systemd-journald[1185]: Collecting audit messages is disabled.
Jan 23 01:03:22.196797 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 01:03:22.196809 systemd[1]: Stopped verity-setup.service.
Jan 23 01:03:22.196819 kernel: ACPI: bus type drm_connector registered
Jan 23 01:03:22.196830 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:03:22.196843 systemd-journald[1185]: Journal started
Jan 23 01:03:22.196863 systemd-journald[1185]: Runtime Journal (/run/log/journal/0fb855f81a8142e9b399d3c8b892ae11) is 8M, max 78.2M, 70.2M free.
Jan 23 01:03:22.201382 kernel: loop: module loaded
Jan 23 01:03:21.814347 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 01:03:21.839163 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 23 01:03:21.839673 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 01:03:22.211513 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 01:03:22.213945 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 01:03:22.217791 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 01:03:22.219670 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 01:03:22.220596 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 01:03:22.221439 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 01:03:22.222499 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 01:03:22.223675 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 01:03:22.224797 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 01:03:22.225903 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 01:03:22.226113 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 01:03:22.227279 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 01:03:22.227573 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 01:03:22.228896 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 01:03:22.229132 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 01:03:22.230260 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 01:03:22.230553 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 01:03:22.231821 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 01:03:22.232086 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 01:03:22.233132 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 01:03:22.233428 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 01:03:22.234578 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 01:03:22.235666 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 01:03:22.236889 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 01:03:22.238017 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 01:03:22.250865 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 01:03:22.254393 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 01:03:22.256252 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 01:03:22.258116 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 01:03:22.258146 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 01:03:22.259815 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 01:03:22.267432 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 01:03:22.271409 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 01:03:22.273553 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 01:03:22.282485 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 01:03:22.283483 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 01:03:22.287854 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 01:03:22.288892 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 01:03:22.290459 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 01:03:22.295080 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 01:03:22.303913 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 01:03:22.310092 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 01:03:22.313696 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 01:03:22.334404 systemd-journald[1185]: Time spent on flushing to /var/log/journal/0fb855f81a8142e9b399d3c8b892ae11 is 60.412ms for 1013 entries.
Jan 23 01:03:22.334404 systemd-journald[1185]: System Journal (/var/log/journal/0fb855f81a8142e9b399d3c8b892ae11) is 8M, max 195.6M, 187.6M free.
Jan 23 01:03:22.415187 systemd-journald[1185]: Received client request to flush runtime journal.
Jan 23 01:03:22.415686 kernel: loop0: detected capacity change from 0 to 229808
Jan 23 01:03:22.415715 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 01:03:22.342372 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 01:03:22.344001 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 01:03:22.348490 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 23 01:03:22.356481 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 01:03:22.393652 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 01:03:22.396891 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 23 01:03:22.407377 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Jan 23 01:03:22.407389 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Jan 23 01:03:22.419972 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 01:03:22.421391 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 01:03:22.426471 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 01:03:22.445334 kernel: loop1: detected capacity change from 0 to 110984
Jan 23 01:03:22.466886 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 01:03:22.471443 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 01:03:22.488333 kernel: loop2: detected capacity change from 0 to 128560
Jan 23 01:03:22.510227 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Jan 23 01:03:22.510548 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Jan 23 01:03:22.515443 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 01:03:22.533348 kernel: loop3: detected capacity change from 0 to 8
Jan 23 01:03:22.558396 kernel: loop4: detected capacity change from 0 to 229808
Jan 23 01:03:22.583344 kernel: loop5: detected capacity change from 0 to 110984
Jan 23 01:03:22.604338 kernel: loop6: detected capacity change from 0 to 128560
Jan 23 01:03:22.628433 kernel: loop7: detected capacity change from 0 to 8
Jan 23 01:03:22.630093 (sd-merge)[1248]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Jan 23 01:03:22.631179 (sd-merge)[1248]: Merged extensions into '/usr'.
Jan 23 01:03:22.637904 systemd[1]: Reload requested from client PID 1219 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 01:03:22.637922 systemd[1]: Reloading...
Jan 23 01:03:22.768337 zram_generator::config[1277]: No configuration found.
Jan 23 01:03:22.843045 ldconfig[1214]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 01:03:22.965435 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 01:03:22.966146 systemd[1]: Reloading finished in 326 ms.
Jan 23 01:03:22.999896 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 01:03:23.001521 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 01:03:23.002908 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 01:03:23.022106 systemd[1]: Starting ensure-sysext.service...
Jan 23 01:03:23.024097 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 01:03:23.028645 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 01:03:23.053774 systemd[1]: Reload requested from client PID 1318 ('systemctl') (unit ensure-sysext.service)...
Jan 23 01:03:23.053879 systemd[1]: Reloading...
Jan 23 01:03:23.057619 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 23 01:03:23.057819 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 23 01:03:23.058127 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 01:03:23.058418 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 01:03:23.059551 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 01:03:23.059801 systemd-tmpfiles[1319]: ACLs are not supported, ignoring.
Jan 23 01:03:23.059877 systemd-tmpfiles[1319]: ACLs are not supported, ignoring.
Jan 23 01:03:23.064377 systemd-tmpfiles[1319]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 01:03:23.064392 systemd-tmpfiles[1319]: Skipping /boot
Jan 23 01:03:23.076132 systemd-tmpfiles[1319]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 01:03:23.076152 systemd-tmpfiles[1319]: Skipping /boot
Jan 23 01:03:23.101635 systemd-udevd[1320]: Using default interface naming scheme 'v255'.
Jan 23 01:03:23.161366 zram_generator::config[1347]: No configuration found.
Jan 23 01:03:23.440469 kernel: mousedev: PS/2 mouse device common for all mice
Jan 23 01:03:23.446353 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 23 01:03:23.488644 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 23 01:03:23.493349 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 23 01:03:23.488966 systemd[1]: Reloading finished in 434 ms.
Jan 23 01:03:23.504546 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 01:03:23.510361 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 23 01:03:23.522372 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 01:03:23.550348 kernel: ACPI: button: Power Button [PWRF]
Jan 23 01:03:23.571853 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:03:23.574416 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 01:03:23.578559 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 01:03:23.579727 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 01:03:23.581558 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 01:03:23.586637 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 01:03:23.596357 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 01:03:23.597422 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 01:03:23.597524 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 01:03:23.601258 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 01:03:23.606640 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 01:03:23.613670 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 01:03:23.625447 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 01:03:23.627439 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:03:23.631217 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 01:03:23.631807 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 01:03:23.641852 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:03:23.642284 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 01:03:23.660930 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 01:03:23.662987 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 01:03:23.663102 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 01:03:23.663189 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:03:23.664072 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 01:03:23.666911 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 01:03:23.669686 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 01:03:23.681154 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:03:23.681421 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 01:03:23.686579 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 01:03:23.689575 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 01:03:23.691491 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 01:03:23.691602 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 01:03:23.693331 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 01:03:23.699573 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 01:03:23.701371 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:03:23.702695 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 01:03:23.704512 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 01:03:23.705946 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 01:03:23.707394 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 01:03:23.713739 systemd[1]: Finished ensure-sysext.service.
Jan 23 01:03:23.742772 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 01:03:23.757029 augenrules[1480]: No rules
Jan 23 01:03:23.757497 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 23 01:03:23.759768 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 01:03:23.760057 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 01:03:23.761755 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 01:03:23.763717 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 01:03:23.763937 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 01:03:23.767901 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 01:03:23.768142 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 01:03:23.772337 kernel: EDAC MC: Ver: 3.0.0
Jan 23 01:03:23.773646 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 01:03:23.778582 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 01:03:23.788492 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 01:03:23.795093 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 01:03:23.833197 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 23 01:03:23.839629 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 01:03:23.869879 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:03:23.891648 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 01:03:23.919267 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 01:03:24.072727 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:03:24.076143 systemd-networkd[1444]: lo: Link UP
Jan 23 01:03:24.076152 systemd-networkd[1444]: lo: Gained carrier
Jan 23 01:03:24.077892 systemd-networkd[1444]: Enumeration completed
Jan 23 01:03:24.077974 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 01:03:24.081670 systemd-networkd[1444]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 01:03:24.081752 systemd-networkd[1444]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 01:03:24.082522 systemd-networkd[1444]: eth0: Link UP
Jan 23 01:03:24.082760 systemd-networkd[1444]: eth0: Gained carrier
Jan 23 01:03:24.082820 systemd-networkd[1444]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 01:03:24.083175 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 23 01:03:24.087504 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 01:03:24.096720 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 23 01:03:24.097745 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 01:03:24.099693 systemd-resolved[1445]: Positive Trust Anchors:
Jan 23 01:03:24.099713 systemd-resolved[1445]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 01:03:24.099741 systemd-resolved[1445]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 01:03:24.103870 systemd-resolved[1445]: Defaulting to hostname 'linux'.
Jan 23 01:03:24.105625 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 01:03:24.106517 systemd[1]: Reached target network.target - Network.
Jan 23 01:03:24.107179 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 01:03:24.107945 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 01:03:24.108770 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 01:03:24.109885 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 01:03:24.110672 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 23 01:03:24.111595 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 01:03:24.112544 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 01:03:24.113497 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 01:03:24.114496 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 01:03:24.114533 systemd[1]: Reached target paths.target - Path Units.
Jan 23 01:03:24.115263 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 01:03:24.117768 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 01:03:24.141776 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 01:03:24.144376 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 23 01:03:24.145393 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 23 01:03:24.146144 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 23 01:03:24.149383 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 01:03:24.150439 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 23 01:03:24.152398 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 23 01:03:24.153341 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 23 01:03:24.155699 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 01:03:24.156850 systemd[1]: Reached target basic.target - Basic System.
Jan 23 01:03:24.157625 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 23 01:03:24.157667 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 23 01:03:24.158949 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 23 01:03:24.164518 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 23 01:03:24.168107 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 23 01:03:24.171738 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 23 01:03:24.175642 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 23 01:03:24.185868 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 23 01:03:24.187704 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 23 01:03:24.191274 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 01:03:24.198436 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 01:03:24.205958 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 01:03:24.210617 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Refreshing passwd entry cache Jan 23 01:03:24.210628 oslogin_cache_refresh[1526]: Refreshing passwd entry cache Jan 23 01:03:24.211396 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 01:03:24.215747 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Failure getting users, quitting Jan 23 01:03:24.215740 oslogin_cache_refresh[1526]: Failure getting users, quitting Jan 23 01:03:24.215829 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 01:03:24.215829 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Refreshing group entry cache Jan 23 01:03:24.215759 oslogin_cache_refresh[1526]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 01:03:24.215803 oslogin_cache_refresh[1526]: Refreshing group entry cache Jan 23 01:03:24.216779 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Failure getting groups, quitting Jan 23 01:03:24.216779 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 01:03:24.216473 oslogin_cache_refresh[1526]: Failure getting groups, quitting Jan 23 01:03:24.216484 oslogin_cache_refresh[1526]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 01:03:24.217678 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 01:03:24.230476 jq[1522]: false Jan 23 01:03:24.231199 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 01:03:24.232869 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 01:03:24.233411 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 01:03:24.236674 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 01:03:24.240489 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 01:03:24.249498 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 01:03:24.250716 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 01:03:24.251012 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 01:03:24.251719 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 01:03:24.252391 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 01:03:24.254506 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 01:03:24.255515 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 01:03:24.256850 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 01:03:24.257627 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 23 01:03:24.285509 extend-filesystems[1523]: Found /dev/sda6 Jan 23 01:03:24.289533 jq[1543]: true Jan 23 01:03:24.305516 update_engine[1541]: I20260123 01:03:24.305447 1541 main.cc:92] Flatcar Update Engine starting Jan 23 01:03:24.312259 extend-filesystems[1523]: Found /dev/sda9 Jan 23 01:03:24.317135 tar[1553]: linux-amd64/LICENSE Jan 23 01:03:24.318408 tar[1553]: linux-amd64/helm Jan 23 01:03:24.319744 (ntainerd)[1557]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 01:03:24.324236 extend-filesystems[1523]: Checking size of /dev/sda9 Jan 23 01:03:24.332771 dbus-daemon[1520]: [system] SELinux support is enabled Jan 23 01:03:24.332947 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 01:03:24.336210 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 01:03:24.336253 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 01:03:24.337064 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 01:03:24.337099 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 01:03:24.342297 update_engine[1541]: I20260123 01:03:24.342161 1541 update_check_scheduler.cc:74] Next update check in 9m42s Jan 23 01:03:24.342430 systemd[1]: Started update-engine.service - Update Engine. Jan 23 01:03:24.364747 jq[1559]: true Jan 23 01:03:24.371502 extend-filesystems[1523]: Resized partition /dev/sda9 Jan 23 01:03:24.374564 extend-filesystems[1568]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 01:03:24.376291 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 01:03:24.378116 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks Jan 23 01:03:24.403699 systemd-logind[1535]: Watching system buttons on /dev/input/event2 (Power Button) Jan 23 01:03:24.403741 systemd-logind[1535]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 01:03:24.404364 systemd-logind[1535]: New seat seat0. Jan 23 01:03:24.409580 coreos-metadata[1519]: Jan 23 01:03:24.409 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jan 23 01:03:24.413853 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 01:03:24.485695 bash[1586]: Updated "/home/core/.ssh/authorized_keys" Jan 23 01:03:24.487402 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 01:03:24.496845 systemd[1]: Starting sshkeys.service... Jan 23 01:03:24.561977 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 01:03:24.566607 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 23 01:03:24.691743 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Jan 23 01:03:24.693469 containerd[1557]: time="2026-01-23T01:03:24Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 01:03:24.702929 coreos-metadata[1599]: Jan 23 01:03:24.702 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jan 23 01:03:24.704302 containerd[1557]: time="2026-01-23T01:03:24.704075156Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 01:03:24.705012 extend-filesystems[1568]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 23 01:03:24.705012 extend-filesystems[1568]: old_desc_blocks = 1, new_desc_blocks = 10 Jan 23 01:03:24.705012 extend-filesystems[1568]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Jan 23 01:03:24.715747 extend-filesystems[1523]: Resized filesystem in /dev/sda9 Jan 23 01:03:24.707894 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 01:03:24.708388 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 01:03:24.720570 sshd_keygen[1554]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 01:03:24.724130 locksmithd[1565]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 01:03:24.728183 containerd[1557]: time="2026-01-23T01:03:24.725850005Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.53µs" Jan 23 01:03:24.728183 containerd[1557]: time="2026-01-23T01:03:24.725885065Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 01:03:24.728183 containerd[1557]: time="2026-01-23T01:03:24.725907885Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 01:03:24.728183 containerd[1557]: time="2026-01-23T01:03:24.726088245Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 01:03:24.728183 containerd[1557]: time="2026-01-23T01:03:24.726105545Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 01:03:24.728183 containerd[1557]: time="2026-01-23T01:03:24.726135775Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:03:24.728183 containerd[1557]: time="2026-01-23T01:03:24.726206535Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:03:24.728183 containerd[1557]: time="2026-01-23T01:03:24.726221785Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:03:24.728183 containerd[1557]: time="2026-01-23T01:03:24.726885555Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:03:24.728183 containerd[1557]: time="2026-01-23T01:03:24.726902355Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:03:24.728183 containerd[1557]: 
time="2026-01-23T01:03:24.726918575Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:03:24.728183 containerd[1557]: time="2026-01-23T01:03:24.726926715Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 01:03:24.728465 containerd[1557]: time="2026-01-23T01:03:24.727027645Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 01:03:24.728465 containerd[1557]: time="2026-01-23T01:03:24.727285685Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:03:24.728465 containerd[1557]: time="2026-01-23T01:03:24.727725234Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:03:24.728465 containerd[1557]: time="2026-01-23T01:03:24.727738564Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 01:03:24.728465 containerd[1557]: time="2026-01-23T01:03:24.727775814Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 01:03:24.728465 containerd[1557]: time="2026-01-23T01:03:24.728059304Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 01:03:24.728465 containerd[1557]: time="2026-01-23T01:03:24.728130024Z" level=info msg="metadata content store policy set" policy=shared Jan 23 01:03:24.733364 containerd[1557]: time="2026-01-23T01:03:24.733269622Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 01:03:24.733484 containerd[1557]: time="2026-01-23T01:03:24.733434732Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 01:03:24.733484 containerd[1557]: time="2026-01-23T01:03:24.733460832Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 01:03:24.733587 containerd[1557]: time="2026-01-23T01:03:24.733568252Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 01:03:24.733670 containerd[1557]: time="2026-01-23T01:03:24.733654422Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 01:03:24.733727 containerd[1557]: time="2026-01-23T01:03:24.733714411Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 01:03:24.733777 containerd[1557]: time="2026-01-23T01:03:24.733765501Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 01:03:24.733832 containerd[1557]: time="2026-01-23T01:03:24.733820601Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 01:03:24.733875 containerd[1557]: time="2026-01-23T01:03:24.733864471Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 01:03:24.733915 containerd[1557]: time="2026-01-23T01:03:24.733904971Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 
01:03:24.734155 containerd[1557]: time="2026-01-23T01:03:24.734143401Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 01:03:24.734206 containerd[1557]: time="2026-01-23T01:03:24.734194881Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 01:03:24.734377 containerd[1557]: time="2026-01-23T01:03:24.734361311Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 01:03:24.734770 containerd[1557]: time="2026-01-23T01:03:24.734753461Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 01:03:24.734828 containerd[1557]: time="2026-01-23T01:03:24.734816291Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 01:03:24.735068 containerd[1557]: time="2026-01-23T01:03:24.735057031Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 01:03:24.735110 containerd[1557]: time="2026-01-23T01:03:24.735100121Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 01:03:24.735167 containerd[1557]: time="2026-01-23T01:03:24.735153821Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 01:03:24.735212 containerd[1557]: time="2026-01-23T01:03:24.735202071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 01:03:24.735360 containerd[1557]: time="2026-01-23T01:03:24.735345211Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 01:03:24.735456 containerd[1557]: time="2026-01-23T01:03:24.735443341Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 01:03:24.736100 containerd[1557]: time="2026-01-23T01:03:24.735675520Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 01:03:24.736100 containerd[1557]: time="2026-01-23T01:03:24.735692020Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 01:03:24.736100 containerd[1557]: time="2026-01-23T01:03:24.735941500Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 01:03:24.736100 containerd[1557]: time="2026-01-23T01:03:24.735958010Z" level=info msg="Start snapshots syncer" Jan 23 01:03:24.736905 containerd[1557]: time="2026-01-23T01:03:24.736887760Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 01:03:24.737553 containerd[1557]: time="2026-01-23T01:03:24.737169150Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 01:03:24.737553 containerd[1557]: time="2026-01-23T01:03:24.737217480Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 01:03:24.742227 containerd[1557]: time="2026-01-23T01:03:24.742103097Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 01:03:24.742409 containerd[1557]: time="2026-01-23T01:03:24.742391707Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 01:03:24.742643 containerd[1557]: time="2026-01-23T01:03:24.742626977Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 01:03:24.742740 containerd[1557]: time="2026-01-23T01:03:24.742727027Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 01:03:24.742904 containerd[1557]: time="2026-01-23T01:03:24.742893027Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 01:03:24.742966 containerd[1557]: time="2026-01-23T01:03:24.742952537Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 01:03:24.743010 containerd[1557]: time="2026-01-23T01:03:24.742999447Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 01:03:24.743050 containerd[1557]: time="2026-01-23T01:03:24.743040447Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 01:03:24.743101 containerd[1557]: time="2026-01-23T01:03:24.743090267Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 01:03:24.743190 containerd[1557]: 
time="2026-01-23T01:03:24.743177477Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 01:03:24.743385 containerd[1557]: time="2026-01-23T01:03:24.743223997Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 01:03:24.743458 containerd[1557]: time="2026-01-23T01:03:24.743442617Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:03:24.744254 containerd[1557]: time="2026-01-23T01:03:24.743597117Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:03:24.744254 containerd[1557]: time="2026-01-23T01:03:24.743613327Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:03:24.744254 containerd[1557]: time="2026-01-23T01:03:24.743624117Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:03:24.744254 containerd[1557]: time="2026-01-23T01:03:24.743631977Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 01:03:24.744254 containerd[1557]: time="2026-01-23T01:03:24.743642107Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 01:03:24.744254 containerd[1557]: time="2026-01-23T01:03:24.743658837Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 01:03:24.744254 containerd[1557]: time="2026-01-23T01:03:24.743680276Z" level=info msg="runtime interface created" Jan 23 01:03:24.744254 containerd[1557]: time="2026-01-23T01:03:24.743686036Z" level=info msg="created NRI interface" Jan 23 01:03:24.744254 containerd[1557]: time="2026-01-23T01:03:24.743697726Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 01:03:24.744254 containerd[1557]: time="2026-01-23T01:03:24.743709646Z" level=info msg="Connect containerd service" Jan 23 01:03:24.744254 containerd[1557]: time="2026-01-23T01:03:24.743732466Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 01:03:24.746376 containerd[1557]: time="2026-01-23T01:03:24.746355865Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:03:24.752349 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 01:03:24.757387 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 01:03:24.782293 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 01:03:24.784113 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 01:03:24.787634 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 01:03:24.821175 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 01:03:24.826609 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 01:03:24.829953 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 01:03:24.831608 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 23 01:03:24.842402 systemd-networkd[1444]: eth0: DHCPv4 address 172.239.192.168/24, gateway 172.239.192.1 acquired from 23.205.167.127 Jan 23 01:03:24.843704 dbus-daemon[1520]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1444 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 01:03:24.843299 systemd-timesyncd[1484]: Network configuration changed, trying to establish connection. Jan 23 01:03:24.849832 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 23 01:03:24.867086 containerd[1557]: time="2026-01-23T01:03:24.867056545Z" level=info msg="Start subscribing containerd event" Jan 23 01:03:24.867190 containerd[1557]: time="2026-01-23T01:03:24.867165005Z" level=info msg="Start recovering state" Jan 23 01:03:24.867372 containerd[1557]: time="2026-01-23T01:03:24.867299785Z" level=info msg="Start event monitor" Jan 23 01:03:24.868895 containerd[1557]: time="2026-01-23T01:03:24.868342924Z" level=info msg="Start cni network conf syncer for default" Jan 23 01:03:24.868895 containerd[1557]: time="2026-01-23T01:03:24.868367684Z" level=info msg="Start streaming server" Jan 23 01:03:24.868895 containerd[1557]: time="2026-01-23T01:03:24.868377804Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 01:03:24.868895 containerd[1557]: time="2026-01-23T01:03:24.868385464Z" level=info msg="runtime interface starting up..." Jan 23 01:03:24.868895 containerd[1557]: time="2026-01-23T01:03:24.868391084Z" level=info msg="starting plugins..." Jan 23 01:03:24.868895 containerd[1557]: time="2026-01-23T01:03:24.868407254Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 01:03:24.869120 containerd[1557]: time="2026-01-23T01:03:24.869103144Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 01:03:24.869233 containerd[1557]: time="2026-01-23T01:03:24.869207494Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 01:03:24.869422 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 01:03:24.870305 containerd[1557]: time="2026-01-23T01:03:24.870289873Z" level=info msg="containerd successfully booted in 0.177642s" Jan 23 01:03:24.934875 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 01:03:24.937042 dbus-daemon[1520]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 01:03:24.937702 dbus-daemon[1520]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1636 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 01:03:24.942795 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 01:03:25.815627 systemd-resolved[1445]: Clock change detected. Flushing caches. Jan 23 01:03:25.818194 tar[1553]: linux-amd64/README.md Jan 23 01:03:25.819429 systemd-timesyncd[1484]: Contacted time server 45.79.13.206:123 (0.flatcar.pool.ntp.org). Jan 23 01:03:25.819997 systemd-timesyncd[1484]: Initial clock synchronization to Fri 2026-01-23 01:03:25.815346 UTC. 
Jan 23 01:03:25.830092 polkitd[1637]: Started polkitd version 126 Jan 23 01:03:25.834672 polkitd[1637]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 01:03:25.834927 polkitd[1637]: Loading rules from directory /run/polkit-1/rules.d Jan 23 01:03:25.834977 polkitd[1637]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 01:03:25.835180 polkitd[1637]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 23 01:03:25.835210 polkitd[1637]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 01:03:25.835243 polkitd[1637]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 01:03:25.836836 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 01:03:25.836973 polkitd[1637]: Finished loading, compiling and executing 2 rules Jan 23 01:03:25.838394 dbus-daemon[1520]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 01:03:25.838885 systemd[1]: Started polkit.service - Authorization Manager. Jan 23 01:03:25.839876 polkitd[1637]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 01:03:25.848119 systemd-hostnamed[1636]: Hostname set to <172-239-192-168> (transient) Jan 23 01:03:25.848409 systemd-resolved[1445]: System hostname changed to '172-239-192-168'. Jan 23 01:03:26.228202 coreos-metadata[1519]: Jan 23 01:03:26.228 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jan 23 01:03:26.307482 systemd-networkd[1444]: eth0: Gained IPv6LL Jan 23 01:03:26.310220 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 01:03:26.313014 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 01:03:26.317514 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:03:26.323601 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 01:03:26.323735 coreos-metadata[1519]: Jan 23 01:03:26.323 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Jan 23 01:03:26.349175 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 01:03:26.504317 coreos-metadata[1519]: Jan 23 01:03:26.503 INFO Fetch successful Jan 23 01:03:26.504317 coreos-metadata[1519]: Jan 23 01:03:26.503 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Jan 23 01:03:26.519953 coreos-metadata[1599]: Jan 23 01:03:26.519 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jan 23 01:03:26.609145 coreos-metadata[1599]: Jan 23 01:03:26.609 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Jan 23 01:03:26.745236 coreos-metadata[1599]: Jan 23 01:03:26.745 INFO Fetch successful Jan 23 01:03:26.767492 update-ssh-keys[1667]: Updated "/home/core/.ssh/authorized_keys" Jan 23 01:03:26.767796 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 01:03:26.770345 systemd[1]: Finished sshkeys.service. Jan 23 01:03:26.770613 coreos-metadata[1519]: Jan 23 01:03:26.770 INFO Fetch successful Jan 23 01:03:26.863672 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 01:03:26.864866 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 01:03:27.182718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 01:03:27.184553 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 01:03:27.186121 systemd[1]: Startup finished in 2.981s (kernel) + 8.496s (initrd) + 5.287s (userspace) = 16.765s. Jan 23 01:03:27.189586 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:03:27.681121 kubelet[1694]: E0123 01:03:27.681052 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:03:27.684590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:03:27.685020 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:03:27.685625 systemd[1]: kubelet.service: Consumed 836ms CPU time, 268.3M memory peak. Jan 23 01:03:28.690068 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 01:03:28.691185 systemd[1]: Started sshd@0-172.239.192.168:22-68.220.241.50:55376.service - OpenSSH per-connection server daemon (68.220.241.50:55376). Jan 23 01:03:28.872224 sshd[1706]: Accepted publickey for core from 68.220.241.50 port 55376 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:03:28.873753 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:03:28.879802 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 01:03:28.881054 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 01:03:28.888172 systemd-logind[1535]: New session 1 of user core. Jan 23 01:03:28.898864 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 01:03:28.902005 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 01:03:28.911667 (systemd)[1711]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 01:03:28.914223 systemd-logind[1535]: New session c1 of user core. Jan 23 01:03:29.045006 systemd[1711]: Queued start job for default target default.target. Jan 23 01:03:29.051472 systemd[1711]: Created slice app.slice - User Application Slice. Jan 23 01:03:29.051499 systemd[1711]: Reached target paths.target - Paths. Jan 23 01:03:29.051539 systemd[1711]: Reached target timers.target - Timers. Jan 23 01:03:29.052942 systemd[1711]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 01:03:29.068136 systemd[1711]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 01:03:29.068491 systemd[1711]: Reached target sockets.target - Sockets. Jan 23 01:03:29.068591 systemd[1711]: Reached target basic.target - Basic System. Jan 23 01:03:29.068808 systemd[1711]: Reached target default.target - Main User Target. Jan 23 01:03:29.068912 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 01:03:29.069004 systemd[1711]: Startup finished in 148ms. Jan 23 01:03:29.070254 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 01:03:29.216722 systemd[1]: Started sshd@1-172.239.192.168:22-68.220.241.50:55380.service - OpenSSH per-connection server daemon (68.220.241.50:55380). 
Jan 23 01:03:29.396004 sshd[1722]: Accepted publickey for core from 68.220.241.50 port 55380 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:03:29.397774 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:03:29.403249 systemd-logind[1535]: New session 2 of user core. Jan 23 01:03:29.408391 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 01:03:29.534145 sshd[1725]: Connection closed by 68.220.241.50 port 55380 Jan 23 01:03:29.534702 sshd-session[1722]: pam_unix(sshd:session): session closed for user core Jan 23 01:03:29.538804 systemd-logind[1535]: Session 2 logged out. Waiting for processes to exit. Jan 23 01:03:29.539682 systemd[1]: sshd@1-172.239.192.168:22-68.220.241.50:55380.service: Deactivated successfully. Jan 23 01:03:29.541870 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 01:03:29.543467 systemd-logind[1535]: Removed session 2. Jan 23 01:03:29.560532 systemd[1]: Started sshd@2-172.239.192.168:22-68.220.241.50:55382.service - OpenSSH per-connection server daemon (68.220.241.50:55382). Jan 23 01:03:29.724370 sshd[1731]: Accepted publickey for core from 68.220.241.50 port 55382 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:03:29.725302 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:03:29.733633 systemd-logind[1535]: New session 3 of user core. Jan 23 01:03:29.738403 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 01:03:29.847576 sshd[1734]: Connection closed by 68.220.241.50 port 55382 Jan 23 01:03:29.848089 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Jan 23 01:03:29.852309 systemd[1]: sshd@2-172.239.192.168:22-68.220.241.50:55382.service: Deactivated successfully. Jan 23 01:03:29.854439 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 01:03:29.855124 systemd-logind[1535]: Session 3 logged out. Waiting for processes to exit. Jan 23 01:03:29.856361 systemd-logind[1535]: Removed session 3. Jan 23 01:03:29.883051 systemd[1]: Started sshd@3-172.239.192.168:22-68.220.241.50:55384.service - OpenSSH per-connection server daemon (68.220.241.50:55384). Jan 23 01:03:30.054321 sshd[1740]: Accepted publickey for core from 68.220.241.50 port 55384 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:03:30.055445 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:03:30.062638 systemd-logind[1535]: New session 4 of user core. Jan 23 01:03:30.074454 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 01:03:30.197047 sshd[1743]: Connection closed by 68.220.241.50 port 55384 Jan 23 01:03:30.198535 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Jan 23 01:03:30.204333 systemd-logind[1535]: Session 4 logged out. Waiting for processes to exit. Jan 23 01:03:30.205164 systemd[1]: sshd@3-172.239.192.168:22-68.220.241.50:55384.service: Deactivated successfully. Jan 23 01:03:30.211129 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 01:03:30.213326 systemd-logind[1535]: Removed session 4. Jan 23 01:03:30.227620 systemd[1]: Started sshd@4-172.239.192.168:22-68.220.241.50:55394.service - OpenSSH per-connection server daemon (68.220.241.50:55394). 
Jan 23 01:03:30.402506 sshd[1749]: Accepted publickey for core from 68.220.241.50 port 55394 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:03:30.403584 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:03:30.413388 systemd-logind[1535]: New session 5 of user core. Jan 23 01:03:30.422960 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 01:03:30.524627 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 01:03:30.524950 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:03:30.540315 sudo[1753]: pam_unix(sudo:session): session closed for user root Jan 23 01:03:30.562099 sshd[1752]: Connection closed by 68.220.241.50 port 55394 Jan 23 01:03:30.563586 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Jan 23 01:03:30.568218 systemd-logind[1535]: Session 5 logged out. Waiting for processes to exit. Jan 23 01:03:30.568489 systemd[1]: sshd@4-172.239.192.168:22-68.220.241.50:55394.service: Deactivated successfully. Jan 23 01:03:30.570823 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 01:03:30.572693 systemd-logind[1535]: Removed session 5. Jan 23 01:03:30.591639 systemd[1]: Started sshd@5-172.239.192.168:22-68.220.241.50:55400.service - OpenSSH per-connection server daemon (68.220.241.50:55400). Jan 23 01:03:30.753683 sshd[1759]: Accepted publickey for core from 68.220.241.50 port 55400 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:03:30.755384 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:03:30.760347 systemd-logind[1535]: New session 6 of user core. Jan 23 01:03:30.767403 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 01:03:30.865474 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 01:03:30.865857 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:03:30.875750 sudo[1764]: pam_unix(sudo:session): session closed for user root Jan 23 01:03:30.882996 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 01:03:30.883366 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:03:30.893592 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:03:30.943187 augenrules[1786]: No rules Jan 23 01:03:30.945233 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:03:30.945599 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 01:03:30.947242 sudo[1763]: pam_unix(sudo:session): session closed for user root Jan 23 01:03:30.969665 sshd[1762]: Connection closed by 68.220.241.50 port 55400 Jan 23 01:03:30.971451 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Jan 23 01:03:30.975254 systemd[1]: sshd@5-172.239.192.168:22-68.220.241.50:55400.service: Deactivated successfully. Jan 23 01:03:30.977684 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 01:03:30.979468 systemd-logind[1535]: Session 6 logged out. Waiting for processes to exit. Jan 23 01:03:30.981235 systemd-logind[1535]: Removed session 6. Jan 23 01:03:31.001333 systemd[1]: Started sshd@6-172.239.192.168:22-68.220.241.50:55404.service - OpenSSH per-connection server daemon (68.220.241.50:55404). 
Jan 23 01:03:31.170553 sshd[1795]: Accepted publickey for core from 68.220.241.50 port 55404 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:03:31.172149 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:03:31.177854 systemd-logind[1535]: New session 7 of user core. Jan 23 01:03:31.183431 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 01:03:31.286958 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 01:03:31.287324 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:03:31.583018 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 01:03:31.590996 (dockerd)[1817]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 01:03:31.796318 dockerd[1817]: time="2026-01-23T01:03:31.796157042Z" level=info msg="Starting up" Jan 23 01:03:31.797419 dockerd[1817]: time="2026-01-23T01:03:31.797364281Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 01:03:31.812216 dockerd[1817]: time="2026-01-23T01:03:31.812134394Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 01:03:31.838119 systemd[1]: var-lib-docker-metacopy\x2dcheck2173822007-merged.mount: Deactivated successfully. Jan 23 01:03:31.858499 dockerd[1817]: time="2026-01-23T01:03:31.858069481Z" level=info msg="Loading containers: start." Jan 23 01:03:31.868305 kernel: Initializing XFRM netlink socket Jan 23 01:03:32.128136 systemd-networkd[1444]: docker0: Link UP Jan 23 01:03:32.133032 dockerd[1817]: time="2026-01-23T01:03:32.132995424Z" level=info msg="Loading containers: done." Jan 23 01:03:32.147708 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck640828320-merged.mount: Deactivated successfully. Jan 23 01:03:32.149157 dockerd[1817]: time="2026-01-23T01:03:32.149120255Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 01:03:32.149232 dockerd[1817]: time="2026-01-23T01:03:32.149190915Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 01:03:32.149527 dockerd[1817]: time="2026-01-23T01:03:32.149263285Z" level=info msg="Initializing buildkit" Jan 23 01:03:32.168146 dockerd[1817]: time="2026-01-23T01:03:32.168122636Z" level=info msg="Completed buildkit initialization" Jan 23 01:03:32.173817 dockerd[1817]: time="2026-01-23T01:03:32.173622513Z" level=info msg="Daemon has completed initialization" Jan 23 01:03:32.173817 dockerd[1817]: time="2026-01-23T01:03:32.173717963Z" level=info msg="API listen on /run/docker.sock" Jan 23 01:03:32.173770 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 01:03:33.124958 containerd[1557]: time="2026-01-23T01:03:33.124911067Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 23 01:03:33.905688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1223175925.mount: Deactivated successfully. 
Jan 23 01:03:35.093785 containerd[1557]: time="2026-01-23T01:03:35.093739913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:03:35.094566 containerd[1557]: time="2026-01-23T01:03:35.094543342Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114718" Jan 23 01:03:35.095217 containerd[1557]: time="2026-01-23T01:03:35.095186982Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:03:35.097101 containerd[1557]: time="2026-01-23T01:03:35.097075531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:03:35.097955 containerd[1557]: time="2026-01-23T01:03:35.097932401Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.972981614s" Jan 23 01:03:35.098023 containerd[1557]: time="2026-01-23T01:03:35.098009511Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 23 01:03:35.098677 containerd[1557]: time="2026-01-23T01:03:35.098632000Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 23 01:03:36.411295 containerd[1557]: time="2026-01-23T01:03:36.411043954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:03:36.412246 containerd[1557]: time="2026-01-23T01:03:36.412227573Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016787" Jan 23 01:03:36.412668 containerd[1557]: time="2026-01-23T01:03:36.412640493Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:03:36.414696 containerd[1557]: time="2026-01-23T01:03:36.414669562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:03:36.415534 containerd[1557]: time="2026-01-23T01:03:36.415512702Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.316738192s" Jan 23 01:03:36.415617 containerd[1557]: time="2026-01-23T01:03:36.415602612Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 23 01:03:36.416484 
containerd[1557]: time="2026-01-23T01:03:36.416460341Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 23 01:03:37.617765 containerd[1557]: time="2026-01-23T01:03:37.617706981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:03:37.618893 containerd[1557]: time="2026-01-23T01:03:37.618667660Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158108" Jan 23 01:03:37.619480 containerd[1557]: time="2026-01-23T01:03:37.619455420Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:03:37.621641 containerd[1557]: time="2026-01-23T01:03:37.621615189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:03:37.622765 containerd[1557]: time="2026-01-23T01:03:37.622739158Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.206189797s" Jan 23 01:03:37.622810 containerd[1557]: time="2026-01-23T01:03:37.622767558Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 23 01:03:37.623588 containerd[1557]: time="2026-01-23T01:03:37.623556488Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 23 01:03:37.750929 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 01:03:37.753338 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:03:37.937655 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:03:37.944569 (kubelet)[2101]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:03:37.978188 kubelet[2101]: E0123 01:03:37.978134 2101 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:03:37.983704 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:03:37.983900 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:03:37.984311 systemd[1]: kubelet.service: Consumed 196ms CPU time, 108.7M memory peak. Jan 23 01:03:38.649909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount526855726.mount: Deactivated successfully. 
Jan 23 01:03:39.048127 containerd[1557]: time="2026-01-23T01:03:39.048042335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:03:39.049035 containerd[1557]: time="2026-01-23T01:03:39.048920135Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930102" Jan 23 01:03:39.049577 containerd[1557]: time="2026-01-23T01:03:39.049536664Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:03:39.051072 containerd[1557]: time="2026-01-23T01:03:39.051029934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:03:39.051712 containerd[1557]: time="2026-01-23T01:03:39.051677833Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.428092205s" Jan 23 01:03:39.051797 containerd[1557]: time="2026-01-23T01:03:39.051780943Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 23 01:03:39.052617 containerd[1557]: time="2026-01-23T01:03:39.052585663Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 23 01:03:39.545997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount915942311.mount: Deactivated successfully. 
Jan 23 01:03:40.427177 containerd[1557]: time="2026-01-23T01:03:40.427121115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:03:40.428078 containerd[1557]: time="2026-01-23T01:03:40.428015895Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942244" Jan 23 01:03:40.428644 containerd[1557]: time="2026-01-23T01:03:40.428619805Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:03:40.430918 containerd[1557]: time="2026-01-23T01:03:40.430882264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:03:40.431869 containerd[1557]: time="2026-01-23T01:03:40.431703663Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.37908576s" Jan 23 01:03:40.431869 containerd[1557]: time="2026-01-23T01:03:40.431729303Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 23 01:03:40.432696 containerd[1557]: time="2026-01-23T01:03:40.432680733Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 01:03:40.923577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1289646298.mount: Deactivated successfully. 
Jan 23 01:03:40.927538 containerd[1557]: time="2026-01-23T01:03:40.927498185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:03:40.928128 containerd[1557]: time="2026-01-23T01:03:40.928096375Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144" Jan 23 01:03:40.929381 containerd[1557]: time="2026-01-23T01:03:40.928361465Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:03:40.929877 containerd[1557]: time="2026-01-23T01:03:40.929846284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:03:40.930541 containerd[1557]: time="2026-01-23T01:03:40.930521184Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 497.731461ms" Jan 23 01:03:40.930615 containerd[1557]: time="2026-01-23T01:03:40.930602174Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 23 01:03:40.931171 containerd[1557]: time="2026-01-23T01:03:40.931150873Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 23 01:03:41.469205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount263174502.mount: Deactivated successfully. 
Jan 23 01:03:43.227802 containerd[1557]: time="2026-01-23T01:03:43.227753425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:03:43.229117 containerd[1557]: time="2026-01-23T01:03:43.229085764Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926233" Jan 23 01:03:43.229573 containerd[1557]: time="2026-01-23T01:03:43.229531214Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:03:43.231526 containerd[1557]: time="2026-01-23T01:03:43.231493113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:03:43.232961 containerd[1557]: time="2026-01-23T01:03:43.232353143Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.30117772s" Jan 23 01:03:43.232961 containerd[1557]: time="2026-01-23T01:03:43.232380402Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 23 01:03:45.805878 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:03:45.806034 systemd[1]: kubelet.service: Consumed 196ms CPU time, 108.7M memory peak. Jan 23 01:03:45.808132 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:03:45.836222 systemd[1]: Reload requested from client PID 2256 ('systemctl') (unit session-7.scope)... Jan 23 01:03:45.836240 systemd[1]: Reloading... Jan 23 01:03:45.976294 zram_generator::config[2296]: No configuration found. Jan 23 01:03:46.194623 systemd[1]: Reloading finished in 357 ms. Jan 23 01:03:46.242684 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 01:03:46.242899 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 01:03:46.243303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:03:46.243348 systemd[1]: kubelet.service: Consumed 143ms CPU time, 98.3M memory peak. Jan 23 01:03:46.245381 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:03:46.430405 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:03:46.439614 (kubelet)[2355]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:03:46.477837 kubelet[2355]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:03:46.477837 kubelet[2355]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 01:03:46.477837 kubelet[2355]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:03:46.477837 kubelet[2355]: I0123 01:03:46.477623 2355 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:03:47.147319 kubelet[2355]: I0123 01:03:47.147290 2355 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 01:03:47.147451 kubelet[2355]: I0123 01:03:47.147440 2355 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:03:47.147775 kubelet[2355]: I0123 01:03:47.147762 2355 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 01:03:47.175481 kubelet[2355]: I0123 01:03:47.175455 2355 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:03:47.175687 kubelet[2355]: E0123 01:03:47.175667 2355 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.239.192.168:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.239.192.168:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 01:03:47.181081 kubelet[2355]: I0123 01:03:47.181064 2355 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:03:47.185066 kubelet[2355]: I0123 01:03:47.185044 2355 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 01:03:47.185301 kubelet[2355]: I0123 01:03:47.185257 2355 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:03:47.185432 kubelet[2355]: I0123 01:03:47.185298 2355 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-192-168","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:03:47.185534 kubelet[2355]: I0123 01:03:47.185439 2355 topology_manager.go:138] 
"Creating topology manager with none policy" Jan 23 01:03:47.185534 kubelet[2355]: I0123 01:03:47.185448 2355 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 01:03:47.185587 kubelet[2355]: I0123 01:03:47.185554 2355 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:03:47.188448 kubelet[2355]: I0123 01:03:47.188250 2355 kubelet.go:480] "Attempting to sync node with API server" Jan 23 01:03:47.188448 kubelet[2355]: I0123 01:03:47.188296 2355 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:03:47.188448 kubelet[2355]: I0123 01:03:47.188316 2355 kubelet.go:386] "Adding apiserver pod source" Jan 23 01:03:47.188448 kubelet[2355]: I0123 01:03:47.188330 2355 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:03:47.193629 kubelet[2355]: I0123 01:03:47.192958 2355 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:03:47.193629 kubelet[2355]: I0123 01:03:47.193540 2355 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 01:03:47.195493 kubelet[2355]: W0123 01:03:47.195453 2355 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 01:03:47.198989 kubelet[2355]: I0123 01:03:47.198181 2355 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 01:03:47.198989 kubelet[2355]: I0123 01:03:47.198216 2355 server.go:1289] "Started kubelet" Jan 23 01:03:47.198989 kubelet[2355]: E0123 01:03:47.198360 2355 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.239.192.168:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-192-168&limit=500&resourceVersion=0\": dial tcp 172.239.192.168:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 01:03:47.200365 kubelet[2355]: E0123 01:03:47.200346 2355 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.239.192.168:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.239.192.168:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 01:03:47.200525 kubelet[2355]: I0123 01:03:47.200507 2355 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:03:47.207441 kubelet[2355]: I0123 01:03:47.207380 2355 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:03:47.207683 kubelet[2355]: I0123 01:03:47.207662 2355 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:03:47.208218 kubelet[2355]: I0123 01:03:47.208204 2355 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:03:47.211126 kubelet[2355]: I0123 01:03:47.211100 2355 server.go:317] "Adding debug handlers to kubelet server" Jan 23 01:03:47.211531 kubelet[2355]: I0123 01:03:47.211483 2355 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:03:47.213115 kubelet[2355]: I0123 01:03:47.212942 2355 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 01:03:47.213115 kubelet[2355]: 
E0123 01:03:47.213062 2355 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-239-192-168\" not found" Jan 23 01:03:47.213494 kubelet[2355]: I0123 01:03:47.213477 2355 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 01:03:47.213541 kubelet[2355]: I0123 01:03:47.213532 2355 reconciler.go:26] "Reconciler: start to sync state" Jan 23 01:03:47.214975 kubelet[2355]: E0123 01:03:47.214451 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.192.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-192-168?timeout=10s\": dial tcp 172.239.192.168:6443: connect: connection refused" interval="200ms" Jan 23 01:03:47.214975 kubelet[2355]: I0123 01:03:47.214707 2355 factory.go:223] Registration of the systemd container factory successfully Jan 23 01:03:47.214975 kubelet[2355]: I0123 01:03:47.214760 2355 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:03:47.216094 kubelet[2355]: E0123 01:03:47.215049 2355 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.239.192.168:6443/api/v1/namespaces/default/events\": dial tcp 172.239.192.168:6443: connect: connection refused" event="&Event{ObjectMeta:{172-239-192-168.188d3691818dfd11 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-239-192-168,UID:172-239-192-168,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-239-192-168,},FirstTimestamp:2026-01-23 01:03:47.198197009 +0000 UTC m=+0.754381514,LastTimestamp:2026-01-23 01:03:47.198197009 +0000 UTC m=+0.754381514,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-239-192-168,}" Jan 23 01:03:47.217135 kubelet[2355]: E0123 01:03:47.217114 2355 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.239.192.168:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.239.192.168:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 01:03:47.217518 kubelet[2355]: I0123 01:03:47.217483 2355 factory.go:223] Registration of the containerd container factory successfully Jan 23 01:03:47.220392 kubelet[2355]: I0123 01:03:47.220368 2355 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 01:03:47.238767 kubelet[2355]: E0123 01:03:47.238742 2355 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:03:47.243038 kubelet[2355]: I0123 01:03:47.243017 2355 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:03:47.243038 kubelet[2355]: I0123 01:03:47.243032 2355 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:03:47.243819 kubelet[2355]: I0123 01:03:47.243792 2355 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jan 23 01:03:47.243819 kubelet[2355]: I0123 01:03:47.243812 2355 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 01:03:47.243887 kubelet[2355]: I0123 01:03:47.243830 2355 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 01:03:47.243887 kubelet[2355]: I0123 01:03:47.243836 2355 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 01:03:47.243887 kubelet[2355]: E0123 01:03:47.243871 2355 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 01:03:47.244104 kubelet[2355]: I0123 01:03:47.243045 2355 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:03:47.245878 kubelet[2355]: I0123 01:03:47.245859 2355 policy_none.go:49] "None policy: Start" Jan 23 01:03:47.246410 kubelet[2355]: I0123 01:03:47.246057 2355 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 01:03:47.246410 kubelet[2355]: I0123 01:03:47.246069 2355 state_mem.go:35] "Initializing new in-memory state store" Jan 23 01:03:47.247215 kubelet[2355]: E0123 01:03:47.247197 2355 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.239.192.168:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.239.192.168:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 01:03:47.252542 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 01:03:47.261073 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 01:03:47.278984 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 01:03:47.280479 kubelet[2355]: E0123 01:03:47.280439 2355 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 01:03:47.280629 kubelet[2355]: I0123 01:03:47.280604 2355 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:03:47.280670 kubelet[2355]: I0123 01:03:47.280620 2355 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:03:47.281756 kubelet[2355]: I0123 01:03:47.281398 2355 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:03:47.282711 kubelet[2355]: E0123 01:03:47.282693 2355 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 01:03:47.282808 kubelet[2355]: E0123 01:03:47.282790 2355 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-239-192-168\" not found" Jan 23 01:03:47.357096 systemd[1]: Created slice kubepods-burstable-pod01202e7a5f68f1ae349947ffcb78be21.slice - libcontainer container kubepods-burstable-pod01202e7a5f68f1ae349947ffcb78be21.slice. 
Jan 23 01:03:47.368498 kubelet[2355]: E0123 01:03:47.368454 2355 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-192-168\" not found" node="172-239-192-168" Jan 23 01:03:47.371250 systemd[1]: Created slice kubepods-burstable-pod864849edeed870396a365eaece8b4065.slice - libcontainer container kubepods-burstable-pod864849edeed870396a365eaece8b4065.slice. Jan 23 01:03:47.382387 kubelet[2355]: I0123 01:03:47.382368 2355 kubelet_node_status.go:75] "Attempting to register node" node="172-239-192-168" Jan 23 01:03:47.382707 kubelet[2355]: E0123 01:03:47.382687 2355 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.192.168:6443/api/v1/nodes\": dial tcp 172.239.192.168:6443: connect: connection refused" node="172-239-192-168" Jan 23 01:03:47.383753 kubelet[2355]: E0123 01:03:47.383735 2355 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-192-168\" not found" node="172-239-192-168" Jan 23 01:03:47.387985 systemd[1]: Created slice kubepods-burstable-podc105cedb86a2aea122a1016627c1f736.slice - libcontainer container kubepods-burstable-podc105cedb86a2aea122a1016627c1f736.slice. Jan 23 01:03:47.389823 kubelet[2355]: E0123 01:03:47.389667 2355 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-192-168\" not found" node="172-239-192-168" Jan 23 01:03:47.415807 kubelet[2355]: I0123 01:03:47.414819 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c105cedb86a2aea122a1016627c1f736-ca-certs\") pod \"kube-controller-manager-172-239-192-168\" (UID: \"c105cedb86a2aea122a1016627c1f736\") " pod="kube-system/kube-controller-manager-172-239-192-168" Jan 23 01:03:47.415907 kubelet[2355]: I0123 01:03:47.415887 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c105cedb86a2aea122a1016627c1f736-flexvolume-dir\") pod \"kube-controller-manager-172-239-192-168\" (UID: \"c105cedb86a2aea122a1016627c1f736\") " pod="kube-system/kube-controller-manager-172-239-192-168" Jan 23 01:03:47.415982 kubelet[2355]: I0123 01:03:47.415911 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c105cedb86a2aea122a1016627c1f736-kubeconfig\") pod \"kube-controller-manager-172-239-192-168\" (UID: \"c105cedb86a2aea122a1016627c1f736\") " pod="kube-system/kube-controller-manager-172-239-192-168" Jan 23 01:03:47.415982 kubelet[2355]: I0123 01:03:47.415940 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c105cedb86a2aea122a1016627c1f736-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-192-168\" (UID: \"c105cedb86a2aea122a1016627c1f736\") " pod="kube-system/kube-controller-manager-172-239-192-168" Jan 23 01:03:47.415982 kubelet[2355]: I0123 01:03:47.415954 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/01202e7a5f68f1ae349947ffcb78be21-kubeconfig\") pod \"kube-scheduler-172-239-192-168\" (UID: \"01202e7a5f68f1ae349947ffcb78be21\") " 
pod="kube-system/kube-scheduler-172-239-192-168" Jan 23 01:03:47.415982 kubelet[2355]: I0123 01:03:47.415969 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/864849edeed870396a365eaece8b4065-ca-certs\") pod \"kube-apiserver-172-239-192-168\" (UID: \"864849edeed870396a365eaece8b4065\") " pod="kube-system/kube-apiserver-172-239-192-168" Jan 23 01:03:47.415982 kubelet[2355]: I0123 01:03:47.415981 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/864849edeed870396a365eaece8b4065-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-192-168\" (UID: \"864849edeed870396a365eaece8b4065\") " pod="kube-system/kube-apiserver-172-239-192-168" Jan 23 01:03:47.416085 kubelet[2355]: I0123 01:03:47.415993 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c105cedb86a2aea122a1016627c1f736-k8s-certs\") pod \"kube-controller-manager-172-239-192-168\" (UID: \"c105cedb86a2aea122a1016627c1f736\") " pod="kube-system/kube-controller-manager-172-239-192-168" Jan 23 01:03:47.416085 kubelet[2355]: I0123 01:03:47.416004 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/864849edeed870396a365eaece8b4065-k8s-certs\") pod \"kube-apiserver-172-239-192-168\" (UID: \"864849edeed870396a365eaece8b4065\") " pod="kube-system/kube-apiserver-172-239-192-168" Jan 23 01:03:47.416085 kubelet[2355]: E0123 01:03:47.414972 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.192.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-192-168?timeout=10s\": dial tcp 172.239.192.168:6443: connect: connection refused" interval="400ms" Jan 23 01:03:47.585112 kubelet[2355]: I0123 01:03:47.585074 2355 kubelet_node_status.go:75] "Attempting to register node" node="172-239-192-168" Jan 23 01:03:47.585707 kubelet[2355]: E0123 01:03:47.585413 2355 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.192.168:6443/api/v1/nodes\": dial tcp 172.239.192.168:6443: connect: connection refused" node="172-239-192-168" Jan 23 01:03:47.669216 kubelet[2355]: E0123 01:03:47.669105 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:47.669944 containerd[1557]: time="2026-01-23T01:03:47.669908483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-192-168,Uid:01202e7a5f68f1ae349947ffcb78be21,Namespace:kube-system,Attempt:0,}" Jan 23 01:03:47.684159 kubelet[2355]: E0123 01:03:47.684123 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:47.684613 containerd[1557]: time="2026-01-23T01:03:47.684579566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-192-168,Uid:864849edeed870396a365eaece8b4065,Namespace:kube-system,Attempt:0,}" Jan 23 01:03:47.690496 kubelet[2355]: E0123 01:03:47.690433 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:47.691742 containerd[1557]: time="2026-01-23T01:03:47.691709642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-192-168,Uid:c105cedb86a2aea122a1016627c1f736,Namespace:kube-system,Attempt:0,}" Jan 23 01:03:47.691931 containerd[1557]: time="2026-01-23T01:03:47.691905922Z" level=info msg="connecting to shim f46a8d6e5378be1483c3159bf0873592e484e11e061da25e137b3fc30c8bfacd" address="unix:///run/containerd/s/f24270999e4b4ba0e52e699fe78a3742f465273b588bb4e01c27dd3cdb98fa36" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:03:47.721507 containerd[1557]: time="2026-01-23T01:03:47.721463227Z" level=info msg="connecting to shim 98b3797cad09305d179cbc183644e5fcdc5c356591f02a4c887a559ceec3e877" address="unix:///run/containerd/s/f85df28c7191c87103badd3e7d5d86393faf467086f8ab36f8e8398cd275e01f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:03:47.723138 containerd[1557]: time="2026-01-23T01:03:47.723114177Z" level=info msg="connecting to shim 37f8cb760f4be9603a935779934e1ef22e2ff33aaa8bf944a0bdd005a4c2f4d3" address="unix:///run/containerd/s/a736a8cb4da174f1e67ba451dd4b752e029de80ceb6f694ced2a5715974ef556" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:03:47.748514 systemd[1]: Started cri-containerd-f46a8d6e5378be1483c3159bf0873592e484e11e061da25e137b3fc30c8bfacd.scope - libcontainer container f46a8d6e5378be1483c3159bf0873592e484e11e061da25e137b3fc30c8bfacd. Jan 23 01:03:47.771305 systemd[1]: Started cri-containerd-98b3797cad09305d179cbc183644e5fcdc5c356591f02a4c887a559ceec3e877.scope - libcontainer container 98b3797cad09305d179cbc183644e5fcdc5c356591f02a4c887a559ceec3e877. Jan 23 01:03:47.788386 systemd[1]: Started cri-containerd-37f8cb760f4be9603a935779934e1ef22e2ff33aaa8bf944a0bdd005a4c2f4d3.scope - libcontainer container 37f8cb760f4be9603a935779934e1ef22e2ff33aaa8bf944a0bdd005a4c2f4d3. 
Jan 23 01:03:47.822955 kubelet[2355]: E0123 01:03:47.819725 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.192.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-192-168?timeout=10s\": dial tcp 172.239.192.168:6443: connect: connection refused" interval="800ms" Jan 23 01:03:47.827422 containerd[1557]: time="2026-01-23T01:03:47.827359794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-192-168,Uid:01202e7a5f68f1ae349947ffcb78be21,Namespace:kube-system,Attempt:0,} returns sandbox id \"f46a8d6e5378be1483c3159bf0873592e484e11e061da25e137b3fc30c8bfacd\"" Jan 23 01:03:47.829442 kubelet[2355]: E0123 01:03:47.829414 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:47.850476 containerd[1557]: time="2026-01-23T01:03:47.849440873Z" level=info msg="CreateContainer within sandbox \"f46a8d6e5378be1483c3159bf0873592e484e11e061da25e137b3fc30c8bfacd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 01:03:47.902756 containerd[1557]: time="2026-01-23T01:03:47.902689497Z" level=info msg="Container d249166f5f5655ba70eae9ecb3e40e9c4491d2699cd0bfaf0e3533de0b60356b: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:03:47.929092 containerd[1557]: time="2026-01-23T01:03:47.928681474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-192-168,Uid:c105cedb86a2aea122a1016627c1f736,Namespace:kube-system,Attempt:0,} returns sandbox id \"98b3797cad09305d179cbc183644e5fcdc5c356591f02a4c887a559ceec3e877\"" Jan 23 01:03:47.930009 containerd[1557]: time="2026-01-23T01:03:47.929987043Z" level=info msg="CreateContainer within sandbox \"f46a8d6e5378be1483c3159bf0873592e484e11e061da25e137b3fc30c8bfacd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d249166f5f5655ba70eae9ecb3e40e9c4491d2699cd0bfaf0e3533de0b60356b\"" Jan 23 01:03:47.930653 kubelet[2355]: E0123 01:03:47.930523 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:47.931367 containerd[1557]: time="2026-01-23T01:03:47.931244022Z" level=info msg="StartContainer for \"d249166f5f5655ba70eae9ecb3e40e9c4491d2699cd0bfaf0e3533de0b60356b\"" Jan 23 01:03:47.932456 containerd[1557]: time="2026-01-23T01:03:47.932435152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-192-168,Uid:864849edeed870396a365eaece8b4065,Namespace:kube-system,Attempt:0,} returns sandbox id \"37f8cb760f4be9603a935779934e1ef22e2ff33aaa8bf944a0bdd005a4c2f4d3\"" Jan 23 01:03:47.932886 containerd[1557]: time="2026-01-23T01:03:47.932810082Z" level=info msg="connecting to shim d249166f5f5655ba70eae9ecb3e40e9c4491d2699cd0bfaf0e3533de0b60356b" address="unix:///run/containerd/s/f24270999e4b4ba0e52e699fe78a3742f465273b588bb4e01c27dd3cdb98fa36" protocol=ttrpc version=3 Jan 23 01:03:47.933211 kubelet[2355]: E0123 01:03:47.933196 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:47.934822 containerd[1557]: time="2026-01-23T01:03:47.934803771Z" level=info msg="CreateContainer within sandbox 
\"98b3797cad09305d179cbc183644e5fcdc5c356591f02a4c887a559ceec3e877\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 01:03:47.937199 containerd[1557]: time="2026-01-23T01:03:47.937177650Z" level=info msg="CreateContainer within sandbox \"37f8cb760f4be9603a935779934e1ef22e2ff33aaa8bf944a0bdd005a4c2f4d3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 01:03:47.943545 containerd[1557]: time="2026-01-23T01:03:47.943502746Z" level=info msg="Container f2656933f6f8afad78990daaa1c72e317c3b39e313d442a24a6e463f18d4aada: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:03:47.947603 containerd[1557]: time="2026-01-23T01:03:47.947563394Z" level=info msg="Container dacfadef7b90cbfd1bee4b3377d06fbf8cf96a5bc0ee23e752679053f57f9be5: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:03:47.951202 containerd[1557]: time="2026-01-23T01:03:47.951157633Z" level=info msg="CreateContainer within sandbox \"98b3797cad09305d179cbc183644e5fcdc5c356591f02a4c887a559ceec3e877\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f2656933f6f8afad78990daaa1c72e317c3b39e313d442a24a6e463f18d4aada\"" Jan 23 01:03:47.951924 containerd[1557]: time="2026-01-23T01:03:47.951896362Z" level=info msg="StartContainer for \"f2656933f6f8afad78990daaa1c72e317c3b39e313d442a24a6e463f18d4aada\"" Jan 23 01:03:47.955451 containerd[1557]: time="2026-01-23T01:03:47.954014271Z" level=info msg="connecting to shim f2656933f6f8afad78990daaa1c72e317c3b39e313d442a24a6e463f18d4aada" address="unix:///run/containerd/s/f85df28c7191c87103badd3e7d5d86393faf467086f8ab36f8e8398cd275e01f" protocol=ttrpc version=3 Jan 23 01:03:47.955637 containerd[1557]: time="2026-01-23T01:03:47.954812131Z" level=info msg="CreateContainer within sandbox \"37f8cb760f4be9603a935779934e1ef22e2ff33aaa8bf944a0bdd005a4c2f4d3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dacfadef7b90cbfd1bee4b3377d06fbf8cf96a5bc0ee23e752679053f57f9be5\"" Jan 23 01:03:47.955993 containerd[1557]: time="2026-01-23T01:03:47.955969540Z" level=info msg="StartContainer for \"dacfadef7b90cbfd1bee4b3377d06fbf8cf96a5bc0ee23e752679053f57f9be5\"" Jan 23 01:03:47.957664 containerd[1557]: time="2026-01-23T01:03:47.957316719Z" level=info msg="connecting to shim dacfadef7b90cbfd1bee4b3377d06fbf8cf96a5bc0ee23e752679053f57f9be5" address="unix:///run/containerd/s/a736a8cb4da174f1e67ba451dd4b752e029de80ceb6f694ced2a5715974ef556" protocol=ttrpc version=3 Jan 23 01:03:47.958458 systemd[1]: Started cri-containerd-d249166f5f5655ba70eae9ecb3e40e9c4491d2699cd0bfaf0e3533de0b60356b.scope - libcontainer container d249166f5f5655ba70eae9ecb3e40e9c4491d2699cd0bfaf0e3533de0b60356b. Jan 23 01:03:47.987470 systemd[1]: Started cri-containerd-f2656933f6f8afad78990daaa1c72e317c3b39e313d442a24a6e463f18d4aada.scope - libcontainer container f2656933f6f8afad78990daaa1c72e317c3b39e313d442a24a6e463f18d4aada. 
Jan 23 01:03:47.991302 kubelet[2355]: I0123 01:03:47.990632 2355 kubelet_node_status.go:75] "Attempting to register node" node="172-239-192-168" Jan 23 01:03:47.991302 kubelet[2355]: E0123 01:03:47.991004 2355 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.192.168:6443/api/v1/nodes\": dial tcp 172.239.192.168:6443: connect: connection refused" node="172-239-192-168" Jan 23 01:03:48.006609 systemd[1]: Started cri-containerd-dacfadef7b90cbfd1bee4b3377d06fbf8cf96a5bc0ee23e752679053f57f9be5.scope - libcontainer container dacfadef7b90cbfd1bee4b3377d06fbf8cf96a5bc0ee23e752679053f57f9be5. Jan 23 01:03:48.066414 containerd[1557]: time="2026-01-23T01:03:48.066383345Z" level=info msg="StartContainer for \"d249166f5f5655ba70eae9ecb3e40e9c4491d2699cd0bfaf0e3533de0b60356b\" returns successfully" Jan 23 01:03:48.086688 kubelet[2355]: E0123 01:03:48.086615 2355 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.239.192.168:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.239.192.168:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 01:03:48.120656 containerd[1557]: time="2026-01-23T01:03:48.120586968Z" level=info msg="StartContainer for \"f2656933f6f8afad78990daaa1c72e317c3b39e313d442a24a6e463f18d4aada\" returns successfully" Jan 23 01:03:48.130725 containerd[1557]: time="2026-01-23T01:03:48.130682753Z" level=info msg="StartContainer for \"dacfadef7b90cbfd1bee4b3377d06fbf8cf96a5bc0ee23e752679053f57f9be5\" returns successfully" Jan 23 01:03:48.253896 kubelet[2355]: E0123 01:03:48.253866 2355 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-192-168\" not found" node="172-239-192-168" Jan 23 01:03:48.254036 kubelet[2355]: E0123 01:03:48.254014 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:48.260696 kubelet[2355]: E0123 01:03:48.260670 2355 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-192-168\" not found" node="172-239-192-168" Jan 23 01:03:48.260845 kubelet[2355]: E0123 01:03:48.260823 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:48.262357 kubelet[2355]: E0123 01:03:48.262334 2355 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-192-168\" not found" node="172-239-192-168" Jan 23 01:03:48.262554 kubelet[2355]: E0123 01:03:48.262534 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:48.793229 kubelet[2355]: I0123 01:03:48.793201 2355 kubelet_node_status.go:75] "Attempting to register node" node="172-239-192-168" Jan 23 01:03:49.263861 kubelet[2355]: E0123 01:03:49.263828 2355 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-192-168\" not found" node="172-239-192-168" Jan 23 01:03:49.263969 kubelet[2355]: E0123 01:03:49.263946 2355 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:49.264162 kubelet[2355]: E0123 01:03:49.264142 2355 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-192-168\" not found" node="172-239-192-168" Jan 23 01:03:49.264240 kubelet[2355]: E0123 01:03:49.264223 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:49.573817 kubelet[2355]: E0123 01:03:49.573691 2355 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-239-192-168\" not found" node="172-239-192-168" Jan 23 01:03:49.738937 kubelet[2355]: I0123 01:03:49.738786 2355 kubelet_node_status.go:78] "Successfully registered node" node="172-239-192-168" Jan 23 01:03:49.738937 kubelet[2355]: E0123 01:03:49.738817 2355 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-239-192-168\": node \"172-239-192-168\" not found" Jan 23 01:03:49.815506 kubelet[2355]: I0123 01:03:49.815318 2355 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-192-168" Jan 23 01:03:49.830025 kubelet[2355]: E0123 01:03:49.829814 2355 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-192-168\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-239-192-168" Jan 23 01:03:49.830025 kubelet[2355]: I0123 01:03:49.829836 2355 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-192-168" Jan 23 01:03:49.832474 kubelet[2355]: E0123 01:03:49.832374 2355 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-192-168\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-239-192-168" Jan 23 01:03:49.832474 kubelet[2355]: I0123 01:03:49.832410 2355 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-192-168" Jan 23 01:03:49.840347 kubelet[2355]: E0123 01:03:49.840324 2355 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-239-192-168\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-239-192-168" Jan 23 01:03:50.201313 kubelet[2355]: I0123 01:03:50.201121 2355 apiserver.go:52] "Watching apiserver" Jan 23 01:03:50.214389 kubelet[2355]: I0123 01:03:50.214355 2355 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 01:03:50.262829 kubelet[2355]: I0123 01:03:50.262796 2355 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-192-168" Jan 23 01:03:50.263115 kubelet[2355]: I0123 01:03:50.263093 2355 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-192-168" Jan 23 01:03:50.264664 kubelet[2355]: E0123 01:03:50.264638 2355 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-192-168\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-239-192-168" Jan 23 01:03:50.264949 kubelet[2355]: E0123 01:03:50.264927 2355 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:50.265777 kubelet[2355]: E0123 01:03:50.265737 2355 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-192-168\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-239-192-168" Jan 23 01:03:50.265904 kubelet[2355]: E0123 01:03:50.265892 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:51.394442 systemd[1]: Reload requested from client PID 2638 ('systemctl') (unit session-7.scope)... Jan 23 01:03:51.394460 systemd[1]: Reloading... Jan 23 01:03:51.515311 zram_generator::config[2694]: No configuration found. Jan 23 01:03:51.739664 systemd[1]: Reloading finished in 344 ms. Jan 23 01:03:51.768568 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:03:51.794535 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 01:03:51.794903 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:03:51.794967 systemd[1]: kubelet.service: Consumed 1.168s CPU time, 132.1M memory peak. Jan 23 01:03:51.796832 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:03:51.979417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:03:51.990740 (kubelet)[2733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:03:52.027295 kubelet[2733]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:03:52.028302 kubelet[2733]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 01:03:52.028302 kubelet[2733]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 01:03:52.028302 kubelet[2733]: I0123 01:03:52.027899 2733 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:03:52.033211 kubelet[2733]: I0123 01:03:52.033194 2733 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 01:03:52.033300 kubelet[2733]: I0123 01:03:52.033289 2733 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:03:52.033503 kubelet[2733]: I0123 01:03:52.033489 2733 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 01:03:52.034489 kubelet[2733]: I0123 01:03:52.034474 2733 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 01:03:52.036228 kubelet[2733]: I0123 01:03:52.036212 2733 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:03:52.040145 kubelet[2733]: I0123 01:03:52.040084 2733 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:03:52.044529 kubelet[2733]: I0123 01:03:52.044514 2733 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 01:03:52.045067 kubelet[2733]: I0123 01:03:52.045030 2733 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:03:52.045241 kubelet[2733]: I0123 01:03:52.045118 2733 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-192-168","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:03:52.045381 kubelet[2733]: I0123 01:03:52.045368 2733 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 01:03:52.045430 kubelet[2733]: I0123 01:03:52.045422 2733 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 01:03:52.045510 kubelet[2733]: I0123 01:03:52.045501 2733 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:03:52.045734 kubelet[2733]: I0123 
01:03:52.045723 2733 kubelet.go:480] "Attempting to sync node with API server" Jan 23 01:03:52.046307 kubelet[2733]: I0123 01:03:52.046176 2733 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:03:52.046307 kubelet[2733]: I0123 01:03:52.046207 2733 kubelet.go:386] "Adding apiserver pod source" Jan 23 01:03:52.046307 kubelet[2733]: I0123 01:03:52.046222 2733 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:03:52.048336 kubelet[2733]: I0123 01:03:52.047758 2733 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:03:52.048336 kubelet[2733]: I0123 01:03:52.048102 2733 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 01:03:52.052599 kubelet[2733]: I0123 01:03:52.052585 2733 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 01:03:52.052685 kubelet[2733]: I0123 01:03:52.052675 2733 server.go:1289] "Started kubelet" Jan 23 01:03:52.054379 kubelet[2733]: I0123 01:03:52.054360 2733 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:03:52.063157 kubelet[2733]: I0123 01:03:52.063130 2733 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:03:52.063945 kubelet[2733]: I0123 01:03:52.063932 2733 server.go:317] "Adding debug handlers to kubelet server" Jan 23 01:03:52.065038 kubelet[2733]: I0123 01:03:52.064985 2733 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:03:52.065284 kubelet[2733]: I0123 01:03:52.065241 2733 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:03:52.066323 kubelet[2733]: I0123 01:03:52.066091 2733 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:03:52.068461 kubelet[2733]: I0123 01:03:52.068447 2733 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 01:03:52.070119 kubelet[2733]: I0123 01:03:52.070106 2733 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 01:03:52.070602 kubelet[2733]: I0123 01:03:52.070481 2733 reconciler.go:26] "Reconciler: start to sync state" Jan 23 01:03:52.072334 kubelet[2733]: I0123 01:03:52.072258 2733 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 01:03:52.073538 kubelet[2733]: I0123 01:03:52.073524 2733 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 01:03:52.073609 kubelet[2733]: I0123 01:03:52.073600 2733 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 01:03:52.073670 kubelet[2733]: I0123 01:03:52.073661 2733 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
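[Editor's note] The podresources service advertised above is a gRPC API on unix:/var/lib/kubelet/pod-resources/kubelet.sock. A minimal sketch of a List call against it, assuming the k8s.io/kubelet v1 podresources bindings:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	podresourcesv1 "k8s.io/kubelet/pkg/apis/podresources/v1"
)

func main() {
	// Socket path straight from the "Starting to serve the podresources API" line.
	conn, err := grpc.Dial("unix:///var/lib/kubelet/pod-resources/kubelet.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := podresourcesv1.NewPodResourcesListerClient(conn)
	resp, err := client.List(context.Background(), &podresourcesv1.ListPodResourcesRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, pod := range resp.PodResources {
		fmt.Printf("%s/%s\n", pod.Namespace, pod.Name)
	}
}
```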
Jan 23 01:03:52.073713 kubelet[2733]: I0123 01:03:52.073705 2733 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 01:03:52.073801 kubelet[2733]: E0123 01:03:52.073786 2733 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 01:03:52.074833 kubelet[2733]: I0123 01:03:52.074804 2733 factory.go:223] Registration of the systemd container factory successfully Jan 23 01:03:52.075046 kubelet[2733]: I0123 01:03:52.074888 2733 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:03:52.080332 kubelet[2733]: I0123 01:03:52.080161 2733 factory.go:223] Registration of the containerd container factory successfully Jan 23 01:03:52.082508 kubelet[2733]: E0123 01:03:52.082490 2733 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:03:52.124028 kubelet[2733]: I0123 01:03:52.124001 2733 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:03:52.124028 kubelet[2733]: I0123 01:03:52.124018 2733 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:03:52.124028 kubelet[2733]: I0123 01:03:52.124036 2733 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:03:52.124182 kubelet[2733]: I0123 01:03:52.124164 2733 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 01:03:52.124210 kubelet[2733]: I0123 01:03:52.124178 2733 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 01:03:52.124210 kubelet[2733]: I0123 01:03:52.124196 2733 policy_none.go:49] "None policy: Start" Jan 23 01:03:52.124210 kubelet[2733]: I0123 01:03:52.124204 2733 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 01:03:52.124303 kubelet[2733]: I0123 01:03:52.124215 2733 state_mem.go:35] "Initializing new in-memory state store" Jan 23 01:03:52.124326 kubelet[2733]: I0123 01:03:52.124314 2733 state_mem.go:75] "Updated machine memory state" Jan 23 01:03:52.129354 kubelet[2733]: E0123 01:03:52.129336 2733 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 01:03:52.130229 kubelet[2733]: I0123 01:03:52.130217 2733 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:03:52.131220 kubelet[2733]: I0123 01:03:52.131192 2733 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:03:52.131761 kubelet[2733]: I0123 01:03:52.131733 2733 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:03:52.132306 kubelet[2733]: E0123 01:03:52.132135 2733 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 01:03:52.174611 kubelet[2733]: I0123 01:03:52.174581 2733 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-192-168" Jan 23 01:03:52.174716 kubelet[2733]: I0123 01:03:52.174702 2733 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-192-168" Jan 23 01:03:52.175389 kubelet[2733]: I0123 01:03:52.175369 2733 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-192-168" Jan 23 01:03:52.237661 kubelet[2733]: I0123 01:03:52.237477 2733 kubelet_node_status.go:75] "Attempting to register node" node="172-239-192-168" Jan 23 01:03:52.245709 kubelet[2733]: I0123 01:03:52.244871 2733 kubelet_node_status.go:124] "Node was previously registered" node="172-239-192-168" Jan 23 01:03:52.245709 kubelet[2733]: I0123 01:03:52.244931 2733 kubelet_node_status.go:78] "Successfully registered node" node="172-239-192-168" Jan 23 01:03:52.271223 kubelet[2733]: I0123 01:03:52.271189 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/864849edeed870396a365eaece8b4065-ca-certs\") pod \"kube-apiserver-172-239-192-168\" (UID: \"864849edeed870396a365eaece8b4065\") " pod="kube-system/kube-apiserver-172-239-192-168" Jan 23 01:03:52.271223 kubelet[2733]: I0123 01:03:52.271220 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/864849edeed870396a365eaece8b4065-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-192-168\" (UID: \"864849edeed870396a365eaece8b4065\") " pod="kube-system/kube-apiserver-172-239-192-168" Jan 23 01:03:52.271223 kubelet[2733]: I0123 01:03:52.271239 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c105cedb86a2aea122a1016627c1f736-ca-certs\") pod \"kube-controller-manager-172-239-192-168\" (UID: \"c105cedb86a2aea122a1016627c1f736\") " pod="kube-system/kube-controller-manager-172-239-192-168" Jan 23 01:03:52.271565 kubelet[2733]: I0123 01:03:52.271254 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c105cedb86a2aea122a1016627c1f736-flexvolume-dir\") pod \"kube-controller-manager-172-239-192-168\" (UID: \"c105cedb86a2aea122a1016627c1f736\") " pod="kube-system/kube-controller-manager-172-239-192-168" Jan 23 01:03:52.271565 kubelet[2733]: I0123 01:03:52.271269 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/01202e7a5f68f1ae349947ffcb78be21-kubeconfig\") pod \"kube-scheduler-172-239-192-168\" (UID: \"01202e7a5f68f1ae349947ffcb78be21\") " pod="kube-system/kube-scheduler-172-239-192-168" Jan 23 01:03:52.271565 kubelet[2733]: I0123 01:03:52.271304 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/864849edeed870396a365eaece8b4065-k8s-certs\") pod \"kube-apiserver-172-239-192-168\" (UID: \"864849edeed870396a365eaece8b4065\") " pod="kube-system/kube-apiserver-172-239-192-168" Jan 23 01:03:52.271565 kubelet[2733]: I0123 01:03:52.271323 2733 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c105cedb86a2aea122a1016627c1f736-k8s-certs\") pod \"kube-controller-manager-172-239-192-168\" (UID: \"c105cedb86a2aea122a1016627c1f736\") " pod="kube-system/kube-controller-manager-172-239-192-168" Jan 23 01:03:52.271565 kubelet[2733]: I0123 01:03:52.271337 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c105cedb86a2aea122a1016627c1f736-kubeconfig\") pod \"kube-controller-manager-172-239-192-168\" (UID: \"c105cedb86a2aea122a1016627c1f736\") " pod="kube-system/kube-controller-manager-172-239-192-168" Jan 23 01:03:52.271675 kubelet[2733]: I0123 01:03:52.271352 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c105cedb86a2aea122a1016627c1f736-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-192-168\" (UID: \"c105cedb86a2aea122a1016627c1f736\") " pod="kube-system/kube-controller-manager-172-239-192-168" Jan 23 01:03:52.479745 kubelet[2733]: E0123 01:03:52.479694 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:52.480382 kubelet[2733]: E0123 01:03:52.480332 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:52.482263 kubelet[2733]: E0123 01:03:52.481899 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:53.052371 kubelet[2733]: I0123 01:03:53.052309 2733 apiserver.go:52] "Watching apiserver" Jan 23 01:03:53.071602 kubelet[2733]: I0123 01:03:53.071328 2733 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 01:03:53.108166 kubelet[2733]: E0123 01:03:53.108139 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:53.112608 kubelet[2733]: I0123 01:03:53.112582 2733 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-192-168" Jan 23 01:03:53.116799 kubelet[2733]: I0123 01:03:53.116779 2733 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-192-168" Jan 23 01:03:53.125163 kubelet[2733]: E0123 01:03:53.125129 2733 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-192-168\" already exists" pod="kube-system/kube-apiserver-172-239-192-168" Jan 23 01:03:53.125762 kubelet[2733]: E0123 01:03:53.125661 2733 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-192-168\" already exists" pod="kube-system/kube-scheduler-172-239-192-168" Jan 23 01:03:53.125970 kubelet[2733]: E0123 01:03:53.125914 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:53.126460 kubelet[2733]: E0123 01:03:53.126446 2733 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:53.136933 kubelet[2733]: I0123 01:03:53.136898 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-239-192-168" podStartSLOduration=1.136888879 podStartE2EDuration="1.136888879s" podCreationTimestamp="2026-01-23 01:03:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:03:53.136741189 +0000 UTC m=+1.140085651" watchObservedRunningTime="2026-01-23 01:03:53.136888879 +0000 UTC m=+1.140233341" Jan 23 01:03:53.143890 kubelet[2733]: I0123 01:03:53.143765 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-239-192-168" podStartSLOduration=1.143757616 podStartE2EDuration="1.143757616s" podCreationTimestamp="2026-01-23 01:03:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:03:53.143204696 +0000 UTC m=+1.146549158" watchObservedRunningTime="2026-01-23 01:03:53.143757616 +0000 UTC m=+1.147102078" Jan 23 01:03:53.166258 kubelet[2733]: I0123 01:03:53.166030 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-239-192-168" podStartSLOduration=1.166021884 podStartE2EDuration="1.166021884s" podCreationTimestamp="2026-01-23 01:03:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:03:53.15428879 +0000 UTC m=+1.157633252" watchObservedRunningTime="2026-01-23 01:03:53.166021884 +0000 UTC m=+1.169366346" Jan 23 01:03:54.109795 kubelet[2733]: E0123 01:03:54.109481 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:54.109795 kubelet[2733]: E0123 01:03:54.109520 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:55.113536 kubelet[2733]: E0123 01:03:55.113472 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:55.531829 kubelet[2733]: E0123 01:03:55.531801 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:55.730059 kubelet[2733]: E0123 01:03:55.730021 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:55.874204 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
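[Editor's note] The recurring dns.go:153 warnings record the kubelet capping resolv.conf at three nameservers (the glibc resolver limit) and applying only the first three, which is why the "applied nameserver line" always shows exactly 172.232.0.15 172.232.0.18 172.232.0.17. A small sketch of that truncation, assuming standard resolv.conf syntax; the limit of 3 matches the behavior visible in these messages:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc resolver limit the kubelet enforces

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	if len(servers) > maxNameservers {
		// This is the condition behind the dns.go:153 warnings above.
		fmt.Printf("limit exceeded: applying %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	} else {
		fmt.Printf("applied nameserver line: %s\n", strings.Join(servers, " "))
	}
}
```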
Jan 23 01:03:56.238367 kubelet[2733]: E0123 01:03:56.238216 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:58.026202 kubelet[2733]: I0123 01:03:58.026158 2733 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 01:03:58.026704 kubelet[2733]: I0123 01:03:58.026691 2733 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 01:03:58.026741 containerd[1557]: time="2026-01-23T01:03:58.026524144Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 01:03:59.069648 systemd[1]: Created slice kubepods-besteffort-pod9a1d8ccc_08d8_4015_bfc7_f29b798bad37.slice - libcontainer container kubepods-besteffort-pod9a1d8ccc_08d8_4015_bfc7_f29b798bad37.slice. Jan 23 01:03:59.114805 kubelet[2733]: I0123 01:03:59.114757 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9a1d8ccc-08d8-4015-bfc7-f29b798bad37-kube-proxy\") pod \"kube-proxy-7ps5f\" (UID: \"9a1d8ccc-08d8-4015-bfc7-f29b798bad37\") " pod="kube-system/kube-proxy-7ps5f" Jan 23 01:03:59.114805 kubelet[2733]: I0123 01:03:59.114797 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a1d8ccc-08d8-4015-bfc7-f29b798bad37-xtables-lock\") pod \"kube-proxy-7ps5f\" (UID: \"9a1d8ccc-08d8-4015-bfc7-f29b798bad37\") " pod="kube-system/kube-proxy-7ps5f" Jan 23 01:03:59.115134 kubelet[2733]: I0123 01:03:59.114819 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a1d8ccc-08d8-4015-bfc7-f29b798bad37-lib-modules\") pod \"kube-proxy-7ps5f\" (UID: \"9a1d8ccc-08d8-4015-bfc7-f29b798bad37\") " pod="kube-system/kube-proxy-7ps5f" Jan 23 01:03:59.115134 kubelet[2733]: I0123 01:03:59.114840 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zhh4\" (UniqueName: \"kubernetes.io/projected/9a1d8ccc-08d8-4015-bfc7-f29b798bad37-kube-api-access-6zhh4\") pod \"kube-proxy-7ps5f\" (UID: \"9a1d8ccc-08d8-4015-bfc7-f29b798bad37\") " pod="kube-system/kube-proxy-7ps5f" Jan 23 01:03:59.242199 systemd[1]: Created slice kubepods-besteffort-poddc41c044_5d8a_4e06_b988_674bc14a736d.slice - libcontainer container kubepods-besteffort-poddc41c044_5d8a_4e06_b988_674bc14a736d.slice. 
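The kuberuntime_manager "Updating runtime config through cri with podcidr" entry above is the kubelet pushing the node's newly assigned PodCIDR down to the container runtime over CRI, so containerd (and ultimately the CNI plugin) learns the pod network range; the containerd line that follows shows the runtime still waiting for a CNI config to appear. A sketch of that same call using the published cri-api types, assuming the default containerd socket path (illustrative client, not kubelet code):

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        // Mirrors "Updating runtime config through cri with podcidr" above.
        _, err = rt.UpdateRuntimeConfig(context.Background(),
            &runtimeapi.UpdateRuntimeConfigRequest{
                RuntimeConfig: &runtimeapi.RuntimeConfig{
                    NetworkConfig: &runtimeapi.NetworkConfig{
                        PodCidr: "192.168.0.0/24",
                    },
                },
            })
        fmt.Println("UpdateRuntimeConfig:", err)
    }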
Jan 23 01:03:59.317850 kubelet[2733]: I0123 01:03:59.317819 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5vm9\" (UniqueName: \"kubernetes.io/projected/dc41c044-5d8a-4e06-b988-674bc14a736d-kube-api-access-b5vm9\") pod \"tigera-operator-7dcd859c48-2bscj\" (UID: \"dc41c044-5d8a-4e06-b988-674bc14a736d\") " pod="tigera-operator/tigera-operator-7dcd859c48-2bscj" Jan 23 01:03:59.317986 kubelet[2733]: I0123 01:03:59.317904 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dc41c044-5d8a-4e06-b988-674bc14a736d-var-lib-calico\") pod \"tigera-operator-7dcd859c48-2bscj\" (UID: \"dc41c044-5d8a-4e06-b988-674bc14a736d\") " pod="tigera-operator/tigera-operator-7dcd859c48-2bscj" Jan 23 01:03:59.378422 kubelet[2733]: E0123 01:03:59.377954 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:59.379293 containerd[1557]: time="2026-01-23T01:03:59.379102166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7ps5f,Uid:9a1d8ccc-08d8-4015-bfc7-f29b798bad37,Namespace:kube-system,Attempt:0,}" Jan 23 01:03:59.394772 containerd[1557]: time="2026-01-23T01:03:59.394713114Z" level=info msg="connecting to shim 6caf0f457a91dfb2836c510bf1e0a3dd6f3538aa8d4027447d0d71e4e8812677" address="unix:///run/containerd/s/14b3929726307af77f78eeebbaadefe27c292adaeafbe6d1852359988435d156" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:03:59.421395 systemd[1]: Started cri-containerd-6caf0f457a91dfb2836c510bf1e0a3dd6f3538aa8d4027447d0d71e4e8812677.scope - libcontainer container 6caf0f457a91dfb2836c510bf1e0a3dd6f3538aa8d4027447d0d71e4e8812677. 
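The "RunPodSandbox ... connecting to shim ... protocol=ttrpc version=3" sequence above is containerd creating one sandbox per pod: the CRI plugin spawns a dedicated shim process, talks to it over the ttrpc unix socket shown in the address field, and systemd tracks the result as a transient cri-containerd-<id>.scope unit. The same sandbox creation, driven from a bare CRI client (a hedged sketch; the socket path is the containerd default and the metadata is copied from the kube-proxy entry above):

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // Metadata matches the kube-proxy sandbox in the journal above.
        resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "kube-proxy-7ps5f",
                    Uid:       "9a1d8ccc-08d8-4015-bfc7-f29b798bad37",
                    Namespace: "kube-system",
                    Attempt:   0,
                },
            },
        })
        if err != nil {
            panic(err)
        }
        // containerd answers with the 64-hex sandbox id seen in the log.
        fmt.Println("sandbox id:", resp.PodSandboxId)
    }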
Jan 23 01:03:59.450950 containerd[1557]: time="2026-01-23T01:03:59.450920094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7ps5f,Uid:9a1d8ccc-08d8-4015-bfc7-f29b798bad37,Namespace:kube-system,Attempt:0,} returns sandbox id \"6caf0f457a91dfb2836c510bf1e0a3dd6f3538aa8d4027447d0d71e4e8812677\"" Jan 23 01:03:59.451855 kubelet[2733]: E0123 01:03:59.451821 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:03:59.456583 containerd[1557]: time="2026-01-23T01:03:59.456543093Z" level=info msg="CreateContainer within sandbox \"6caf0f457a91dfb2836c510bf1e0a3dd6f3538aa8d4027447d0d71e4e8812677\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 01:03:59.467179 containerd[1557]: time="2026-01-23T01:03:59.464753268Z" level=info msg="Container f808db506b80ad0137da13f8c2fa8a52710c6feaf308a4ce0bd51c222d3debbc: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:03:59.469160 containerd[1557]: time="2026-01-23T01:03:59.469133659Z" level=info msg="CreateContainer within sandbox \"6caf0f457a91dfb2836c510bf1e0a3dd6f3538aa8d4027447d0d71e4e8812677\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f808db506b80ad0137da13f8c2fa8a52710c6feaf308a4ce0bd51c222d3debbc\"" Jan 23 01:03:59.469650 containerd[1557]: time="2026-01-23T01:03:59.469620232Z" level=info msg="StartContainer for \"f808db506b80ad0137da13f8c2fa8a52710c6feaf308a4ce0bd51c222d3debbc\"" Jan 23 01:03:59.470748 containerd[1557]: time="2026-01-23T01:03:59.470712026Z" level=info msg="connecting to shim f808db506b80ad0137da13f8c2fa8a52710c6feaf308a4ce0bd51c222d3debbc" address="unix:///run/containerd/s/14b3929726307af77f78eeebbaadefe27c292adaeafbe6d1852359988435d156" protocol=ttrpc version=3 Jan 23 01:03:59.489407 systemd[1]: Started cri-containerd-f808db506b80ad0137da13f8c2fa8a52710c6feaf308a4ce0bd51c222d3debbc.scope - libcontainer container f808db506b80ad0137da13f8c2fa8a52710c6feaf308a4ce0bd51c222d3debbc. Jan 23 01:03:59.545807 containerd[1557]: time="2026-01-23T01:03:59.545756136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2bscj,Uid:dc41c044-5d8a-4e06-b988-674bc14a736d,Namespace:tigera-operator,Attempt:0,}" Jan 23 01:03:59.559081 containerd[1557]: time="2026-01-23T01:03:59.559043242Z" level=info msg="connecting to shim 37f49a42ffb82e4b02a2377f0ddf0cc73765fd177d8ddaf638032ba7e3b0da7b" address="unix:///run/containerd/s/38b58ea7ca17e1b5984fb406a0a8bea579ac9a5aeee65e833deef9ecb9f8b9c9" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:03:59.593097 containerd[1557]: time="2026-01-23T01:03:59.592025442Z" level=info msg="StartContainer for \"f808db506b80ad0137da13f8c2fa8a52710c6feaf308a4ce0bd51c222d3debbc\" returns successfully" Jan 23 01:03:59.594441 systemd[1]: Started cri-containerd-37f49a42ffb82e4b02a2377f0ddf0cc73765fd177d8ddaf638032ba7e3b0da7b.scope - libcontainer container 37f49a42ffb82e4b02a2377f0ddf0cc73765fd177d8ddaf638032ba7e3b0da7b. 
Jan 23 01:03:59.652845 containerd[1557]: time="2026-01-23T01:03:59.652445889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2bscj,Uid:dc41c044-5d8a-4e06-b988-674bc14a736d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"37f49a42ffb82e4b02a2377f0ddf0cc73765fd177d8ddaf638032ba7e3b0da7b\"" Jan 23 01:03:59.655797 containerd[1557]: time="2026-01-23T01:03:59.655753567Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 01:04:00.123670 kubelet[2733]: E0123 01:04:00.123627 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:00.643777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount787144409.mount: Deactivated successfully. Jan 23 01:04:01.185831 containerd[1557]: time="2026-01-23T01:04:01.185213732Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:04:01.185831 containerd[1557]: time="2026-01-23T01:04:01.185804568Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 23 01:04:01.186369 containerd[1557]: time="2026-01-23T01:04:01.186347818Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:04:01.187742 containerd[1557]: time="2026-01-23T01:04:01.187723424Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:04:01.188390 containerd[1557]: time="2026-01-23T01:04:01.188361102Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.532465584s" Jan 23 01:04:01.188438 containerd[1557]: time="2026-01-23T01:04:01.188392165Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 23 01:04:01.192296 containerd[1557]: time="2026-01-23T01:04:01.192248979Z" level=info msg="CreateContainer within sandbox \"37f49a42ffb82e4b02a2377f0ddf0cc73765fd177d8ddaf638032ba7e3b0da7b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 01:04:01.200582 containerd[1557]: time="2026-01-23T01:04:01.198678191Z" level=info msg="Container 4b5fba601a1855841c8848b681674bf6bacebb02a447a51b886731bebc3598a0: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:04:01.206179 containerd[1557]: time="2026-01-23T01:04:01.206156413Z" level=info msg="CreateContainer within sandbox \"37f49a42ffb82e4b02a2377f0ddf0cc73765fd177d8ddaf638032ba7e3b0da7b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4b5fba601a1855841c8848b681674bf6bacebb02a447a51b886731bebc3598a0\"" Jan 23 01:04:01.206753 containerd[1557]: time="2026-01-23T01:04:01.206701244Z" level=info msg="StartContainer for \"4b5fba601a1855841c8848b681674bf6bacebb02a447a51b886731bebc3598a0\"" Jan 23 01:04:01.207524 containerd[1557]: time="2026-01-23T01:04:01.207488054Z" 
level=info msg="connecting to shim 4b5fba601a1855841c8848b681674bf6bacebb02a447a51b886731bebc3598a0" address="unix:///run/containerd/s/38b58ea7ca17e1b5984fb406a0a8bea579ac9a5aeee65e833deef9ecb9f8b9c9" protocol=ttrpc version=3 Jan 23 01:04:01.229415 systemd[1]: Started cri-containerd-4b5fba601a1855841c8848b681674bf6bacebb02a447a51b886731bebc3598a0.scope - libcontainer container 4b5fba601a1855841c8848b681674bf6bacebb02a447a51b886731bebc3598a0. Jan 23 01:04:01.266568 containerd[1557]: time="2026-01-23T01:04:01.266528545Z" level=info msg="StartContainer for \"4b5fba601a1855841c8848b681674bf6bacebb02a447a51b886731bebc3598a0\" returns successfully" Jan 23 01:04:02.138643 kubelet[2733]: I0123 01:04:02.138586 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7ps5f" podStartSLOduration=3.138568399 podStartE2EDuration="3.138568399s" podCreationTimestamp="2026-01-23 01:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:04:00.136659888 +0000 UTC m=+8.140004350" watchObservedRunningTime="2026-01-23 01:04:02.138568399 +0000 UTC m=+10.141912861" Jan 23 01:04:05.537066 kubelet[2733]: E0123 01:04:05.537011 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:05.555518 kubelet[2733]: I0123 01:04:05.555428 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-2bscj" podStartSLOduration=5.02122072 podStartE2EDuration="6.555408213s" podCreationTimestamp="2026-01-23 01:03:59 +0000 UTC" firstStartedPulling="2026-01-23 01:03:59.655207709 +0000 UTC m=+7.658552171" lastFinishedPulling="2026-01-23 01:04:01.189395202 +0000 UTC m=+9.192739664" observedRunningTime="2026-01-23 01:04:02.13886313 +0000 UTC m=+10.142207592" watchObservedRunningTime="2026-01-23 01:04:05.555408213 +0000 UTC m=+13.558752675" Jan 23 01:04:05.737191 kubelet[2733]: E0123 01:04:05.737106 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:06.246135 kubelet[2733]: E0123 01:04:06.245815 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:07.065646 sudo[1799]: pam_unix(sudo:session): session closed for user root Jan 23 01:04:07.089366 sshd[1798]: Connection closed by 68.220.241.50 port 55404 Jan 23 01:04:07.091327 sshd-session[1795]: pam_unix(sshd:session): session closed for user core Jan 23 01:04:07.099105 systemd[1]: sshd@6-172.239.192.168:22-68.220.241.50:55404.service: Deactivated successfully. Jan 23 01:04:07.103709 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 01:04:07.103938 systemd[1]: session-7.scope: Consumed 4.509s CPU time, 227.3M memory peak. Jan 23 01:04:07.106424 systemd-logind[1535]: Session 7 logged out. Waiting for processes to exit. Jan 23 01:04:07.109046 systemd-logind[1535]: Removed session 7. 
Jan 23 01:04:07.140331 kubelet[2733]: E0123 01:04:07.139846 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:10.424189 update_engine[1541]: I20260123 01:04:10.423331 1541 update_attempter.cc:509] Updating boot flags... Jan 23 01:04:11.658827 systemd[1]: Created slice kubepods-besteffort-pod8fd8b1f3_b7cf_472c_980d_c767f9858928.slice - libcontainer container kubepods-besteffort-pod8fd8b1f3_b7cf_472c_980d_c767f9858928.slice. Jan 23 01:04:11.702225 kubelet[2733]: I0123 01:04:11.702194 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mcht\" (UniqueName: \"kubernetes.io/projected/8fd8b1f3-b7cf-472c-980d-c767f9858928-kube-api-access-2mcht\") pod \"calico-typha-85cddf7bbb-wn68x\" (UID: \"8fd8b1f3-b7cf-472c-980d-c767f9858928\") " pod="calico-system/calico-typha-85cddf7bbb-wn68x" Jan 23 01:04:11.704046 kubelet[2733]: I0123 01:04:11.703996 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8fd8b1f3-b7cf-472c-980d-c767f9858928-typha-certs\") pod \"calico-typha-85cddf7bbb-wn68x\" (UID: \"8fd8b1f3-b7cf-472c-980d-c767f9858928\") " pod="calico-system/calico-typha-85cddf7bbb-wn68x" Jan 23 01:04:11.704142 kubelet[2733]: I0123 01:04:11.704022 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fd8b1f3-b7cf-472c-980d-c767f9858928-tigera-ca-bundle\") pod \"calico-typha-85cddf7bbb-wn68x\" (UID: \"8fd8b1f3-b7cf-472c-980d-c767f9858928\") " pod="calico-system/calico-typha-85cddf7bbb-wn68x" Jan 23 01:04:11.826783 systemd[1]: Created slice kubepods-besteffort-pod56c02ac9_e87e_4a45_8a42_f5a43fc4e181.slice - libcontainer container kubepods-besteffort-pod56c02ac9_e87e_4a45_8a42_f5a43fc4e181.slice. 
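Each of the "Created slice kubepods-besteffort-pod....slice" entries is the kubelet's systemd cgroup driver creating a transient slice per pod under its QoS class; the unit name is simply the pod UID with dashes escaped to underscores, prefixed by kubepods-<qos>-pod. A one-liner reproducing the convention from the calico-typha line above (sliceForPod is an illustrative helper, not kubelet code; guaranteed-QoS pods omit the qos segment entirely):

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceForPod reproduces the systemd cgroup driver's naming convention as
    // observed in this journal; qos is "besteffort" or "burstable".
    func sliceForPod(qos, uid string) string {
        return "kubepods-" + qos + "-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
    }

    func main() {
        fmt.Println(sliceForPod("besteffort", "8fd8b1f3-b7cf-472c-980d-c767f9858928"))
        // kubepods-besteffort-pod8fd8b1f3_b7cf_472c_980d_c767f9858928.slice
    }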
Jan 23 01:04:11.904666 kubelet[2733]: I0123 01:04:11.904622 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/56c02ac9-e87e-4a45-8a42-f5a43fc4e181-cni-net-dir\") pod \"calico-node-k6nwj\" (UID: \"56c02ac9-e87e-4a45-8a42-f5a43fc4e181\") " pod="calico-system/calico-node-k6nwj" Jan 23 01:04:11.904666 kubelet[2733]: I0123 01:04:11.904666 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/56c02ac9-e87e-4a45-8a42-f5a43fc4e181-var-lib-calico\") pod \"calico-node-k6nwj\" (UID: \"56c02ac9-e87e-4a45-8a42-f5a43fc4e181\") " pod="calico-system/calico-node-k6nwj" Jan 23 01:04:11.904666 kubelet[2733]: I0123 01:04:11.904683 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/56c02ac9-e87e-4a45-8a42-f5a43fc4e181-var-run-calico\") pod \"calico-node-k6nwj\" (UID: \"56c02ac9-e87e-4a45-8a42-f5a43fc4e181\") " pod="calico-system/calico-node-k6nwj" Jan 23 01:04:11.904828 kubelet[2733]: I0123 01:04:11.904700 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqbk2\" (UniqueName: \"kubernetes.io/projected/56c02ac9-e87e-4a45-8a42-f5a43fc4e181-kube-api-access-dqbk2\") pod \"calico-node-k6nwj\" (UID: \"56c02ac9-e87e-4a45-8a42-f5a43fc4e181\") " pod="calico-system/calico-node-k6nwj" Jan 23 01:04:11.904828 kubelet[2733]: I0123 01:04:11.904719 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56c02ac9-e87e-4a45-8a42-f5a43fc4e181-lib-modules\") pod \"calico-node-k6nwj\" (UID: \"56c02ac9-e87e-4a45-8a42-f5a43fc4e181\") " pod="calico-system/calico-node-k6nwj" Jan 23 01:04:11.904828 kubelet[2733]: I0123 01:04:11.904735 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56c02ac9-e87e-4a45-8a42-f5a43fc4e181-tigera-ca-bundle\") pod \"calico-node-k6nwj\" (UID: \"56c02ac9-e87e-4a45-8a42-f5a43fc4e181\") " pod="calico-system/calico-node-k6nwj" Jan 23 01:04:11.904828 kubelet[2733]: I0123 01:04:11.904752 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/56c02ac9-e87e-4a45-8a42-f5a43fc4e181-node-certs\") pod \"calico-node-k6nwj\" (UID: \"56c02ac9-e87e-4a45-8a42-f5a43fc4e181\") " pod="calico-system/calico-node-k6nwj" Jan 23 01:04:11.904828 kubelet[2733]: I0123 01:04:11.904766 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56c02ac9-e87e-4a45-8a42-f5a43fc4e181-xtables-lock\") pod \"calico-node-k6nwj\" (UID: \"56c02ac9-e87e-4a45-8a42-f5a43fc4e181\") " pod="calico-system/calico-node-k6nwj" Jan 23 01:04:11.904952 kubelet[2733]: I0123 01:04:11.904780 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/56c02ac9-e87e-4a45-8a42-f5a43fc4e181-flexvol-driver-host\") pod \"calico-node-k6nwj\" (UID: \"56c02ac9-e87e-4a45-8a42-f5a43fc4e181\") " pod="calico-system/calico-node-k6nwj" Jan 23 01:04:11.904952 kubelet[2733]: I0123 01:04:11.904795 2733 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/56c02ac9-e87e-4a45-8a42-f5a43fc4e181-cni-log-dir\") pod \"calico-node-k6nwj\" (UID: \"56c02ac9-e87e-4a45-8a42-f5a43fc4e181\") " pod="calico-system/calico-node-k6nwj" Jan 23 01:04:11.904952 kubelet[2733]: I0123 01:04:11.904809 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/56c02ac9-e87e-4a45-8a42-f5a43fc4e181-policysync\") pod \"calico-node-k6nwj\" (UID: \"56c02ac9-e87e-4a45-8a42-f5a43fc4e181\") " pod="calico-system/calico-node-k6nwj" Jan 23 01:04:11.904952 kubelet[2733]: I0123 01:04:11.904824 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/56c02ac9-e87e-4a45-8a42-f5a43fc4e181-cni-bin-dir\") pod \"calico-node-k6nwj\" (UID: \"56c02ac9-e87e-4a45-8a42-f5a43fc4e181\") " pod="calico-system/calico-node-k6nwj" Jan 23 01:04:11.967972 kubelet[2733]: E0123 01:04:11.967871 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:11.969004 containerd[1557]: time="2026-01-23T01:04:11.968959128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85cddf7bbb-wn68x,Uid:8fd8b1f3-b7cf-472c-980d-c767f9858928,Namespace:calico-system,Attempt:0,}" Jan 23 01:04:11.980158 kubelet[2733]: E0123 01:04:11.979556 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vbcl" podUID="16cb6344-7ecd-43dd-aa88-18d498591102" Jan 23 01:04:12.006157 containerd[1557]: time="2026-01-23T01:04:11.996911672Z" level=info msg="connecting to shim 9d1facfd62d55ef576c876218a9a332166a37ef4d17affaa9e92329ee7752935" address="unix:///run/containerd/s/f3fbd03a53f95379c0cf55ef84639faf2712a2634ad0ee02f3b2a010703b51ca" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:04:12.011384 kubelet[2733]: I0123 01:04:12.010804 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/16cb6344-7ecd-43dd-aa88-18d498591102-socket-dir\") pod \"csi-node-driver-9vbcl\" (UID: \"16cb6344-7ecd-43dd-aa88-18d498591102\") " pod="calico-system/csi-node-driver-9vbcl" Jan 23 01:04:12.011530 kubelet[2733]: I0123 01:04:12.011493 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/16cb6344-7ecd-43dd-aa88-18d498591102-varrun\") pod \"csi-node-driver-9vbcl\" (UID: \"16cb6344-7ecd-43dd-aa88-18d498591102\") " pod="calico-system/csi-node-driver-9vbcl" Jan 23 01:04:12.011970 kubelet[2733]: I0123 01:04:12.011954 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kph75\" (UniqueName: \"kubernetes.io/projected/16cb6344-7ecd-43dd-aa88-18d498591102-kube-api-access-kph75\") pod \"csi-node-driver-9vbcl\" (UID: \"16cb6344-7ecd-43dd-aa88-18d498591102\") " pod="calico-system/csi-node-driver-9vbcl" Jan 23 01:04:12.012063 kubelet[2733]: I0123 01:04:12.012051 2733 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/16cb6344-7ecd-43dd-aa88-18d498591102-kubelet-dir\") pod \"csi-node-driver-9vbcl\" (UID: \"16cb6344-7ecd-43dd-aa88-18d498591102\") " pod="calico-system/csi-node-driver-9vbcl" Jan 23 01:04:12.012226 kubelet[2733]: I0123 01:04:12.012133 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/16cb6344-7ecd-43dd-aa88-18d498591102-registration-dir\") pod \"csi-node-driver-9vbcl\" (UID: \"16cb6344-7ecd-43dd-aa88-18d498591102\") " pod="calico-system/csi-node-driver-9vbcl" Jan 23 01:04:12.020081 kubelet[2733]: E0123 01:04:12.019944 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.020491 kubelet[2733]: W0123 01:04:12.020475 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.020882 kubelet[2733]: E0123 01:04:12.020575 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.021791 kubelet[2733]: E0123 01:04:12.021477 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.021870 kubelet[2733]: W0123 01:04:12.021858 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.022537 kubelet[2733]: E0123 01:04:12.021911 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.023478 kubelet[2733]: E0123 01:04:12.023376 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.023704 kubelet[2733]: W0123 01:04:12.023587 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.024213 kubelet[2733]: E0123 01:04:12.024199 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.027136 kubelet[2733]: E0123 01:04:12.027106 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.027209 kubelet[2733]: W0123 01:04:12.027184 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.027256 kubelet[2733]: E0123 01:04:12.027246 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:12.027515 systemd[1]: Started cri-containerd-9d1facfd62d55ef576c876218a9a332166a37ef4d17affaa9e92329ee7752935.scope - libcontainer container 9d1facfd62d55ef576c876218a9a332166a37ef4d17affaa9e92329ee7752935. Jan 23 01:04:12.031480 kubelet[2733]: E0123 01:04:12.031468 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.031551 kubelet[2733]: W0123 01:04:12.031539 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.031598 kubelet[2733]: E0123 01:04:12.031588 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.031816 kubelet[2733]: E0123 01:04:12.031805 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.031872 kubelet[2733]: W0123 01:04:12.031862 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.031919 kubelet[2733]: E0123 01:04:12.031910 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.032371 kubelet[2733]: E0123 01:04:12.032259 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.032558 kubelet[2733]: W0123 01:04:12.032511 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.032776 kubelet[2733]: E0123 01:04:12.032682 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.033620 kubelet[2733]: E0123 01:04:12.033608 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.033777 kubelet[2733]: W0123 01:04:12.033766 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.033889 kubelet[2733]: E0123 01:04:12.033817 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:12.034825 kubelet[2733]: E0123 01:04:12.034799 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.034825 kubelet[2733]: W0123 01:04:12.034823 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.034949 kubelet[2733]: E0123 01:04:12.034843 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.035598 kubelet[2733]: E0123 01:04:12.035573 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.036408 kubelet[2733]: W0123 01:04:12.035882 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.036408 kubelet[2733]: E0123 01:04:12.035898 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.036699 kubelet[2733]: E0123 01:04:12.036677 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.036909 kubelet[2733]: W0123 01:04:12.036883 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.036909 kubelet[2733]: E0123 01:04:12.036904 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.037498 kubelet[2733]: E0123 01:04:12.037476 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.037498 kubelet[2733]: W0123 01:04:12.037493 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.037577 kubelet[2733]: E0123 01:04:12.037502 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.038009 kubelet[2733]: E0123 01:04:12.037967 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.038009 kubelet[2733]: W0123 01:04:12.038009 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.038079 kubelet[2733]: E0123 01:04:12.038020 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:12.038596 kubelet[2733]: E0123 01:04:12.038267 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.038596 kubelet[2733]: W0123 01:04:12.038303 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.038596 kubelet[2733]: E0123 01:04:12.038311 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.038676 kubelet[2733]: E0123 01:04:12.038636 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.038676 kubelet[2733]: W0123 01:04:12.038644 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.038676 kubelet[2733]: E0123 01:04:12.038652 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.039366 kubelet[2733]: E0123 01:04:12.038912 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.039366 kubelet[2733]: W0123 01:04:12.038928 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.039366 kubelet[2733]: E0123 01:04:12.038936 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.039366 kubelet[2733]: E0123 01:04:12.039217 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.039366 kubelet[2733]: W0123 01:04:12.039225 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.039366 kubelet[2733]: E0123 01:04:12.039233 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.039606 kubelet[2733]: E0123 01:04:12.039573 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.039606 kubelet[2733]: W0123 01:04:12.039589 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.039663 kubelet[2733]: E0123 01:04:12.039617 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:12.039878 kubelet[2733]: E0123 01:04:12.039843 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.039878 kubelet[2733]: W0123 01:04:12.039877 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.039944 kubelet[2733]: E0123 01:04:12.039886 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.046075 kubelet[2733]: E0123 01:04:12.046062 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.049167 kubelet[2733]: W0123 01:04:12.048443 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.049287 kubelet[2733]: E0123 01:04:12.049254 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.105968 containerd[1557]: time="2026-01-23T01:04:12.105832546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85cddf7bbb-wn68x,Uid:8fd8b1f3-b7cf-472c-980d-c767f9858928,Namespace:calico-system,Attempt:0,} returns sandbox id \"9d1facfd62d55ef576c876218a9a332166a37ef4d17affaa9e92329ee7752935\"" Jan 23 01:04:12.106952 kubelet[2733]: E0123 01:04:12.106582 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:12.108295 containerd[1557]: time="2026-01-23T01:04:12.108254585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 01:04:12.113727 kubelet[2733]: E0123 01:04:12.113461 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.113727 kubelet[2733]: W0123 01:04:12.113499 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.113727 kubelet[2733]: E0123 01:04:12.113516 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.113824 kubelet[2733]: E0123 01:04:12.113802 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.113855 kubelet[2733]: W0123 01:04:12.113811 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.113855 kubelet[2733]: E0123 01:04:12.113839 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:12.114314 kubelet[2733]: E0123 01:04:12.114050 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.114314 kubelet[2733]: W0123 01:04:12.114058 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.114314 kubelet[2733]: E0123 01:04:12.114066 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.114314 kubelet[2733]: E0123 01:04:12.114241 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.114314 kubelet[2733]: W0123 01:04:12.114249 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.114314 kubelet[2733]: E0123 01:04:12.114257 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.114749 kubelet[2733]: E0123 01:04:12.114518 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.114749 kubelet[2733]: W0123 01:04:12.114531 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.114749 kubelet[2733]: E0123 01:04:12.114539 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.115202 kubelet[2733]: E0123 01:04:12.115109 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.115202 kubelet[2733]: W0123 01:04:12.115123 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.115202 kubelet[2733]: E0123 01:04:12.115143 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.115756 kubelet[2733]: E0123 01:04:12.115607 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.115756 kubelet[2733]: W0123 01:04:12.115622 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.115756 kubelet[2733]: E0123 01:04:12.115631 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:12.116013 kubelet[2733]: E0123 01:04:12.115930 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.116013 kubelet[2733]: W0123 01:04:12.115942 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.116013 kubelet[2733]: E0123 01:04:12.115952 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.116598 kubelet[2733]: E0123 01:04:12.116138 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.116598 kubelet[2733]: W0123 01:04:12.116152 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.116598 kubelet[2733]: E0123 01:04:12.116160 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.117977 kubelet[2733]: E0123 01:04:12.117550 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.117977 kubelet[2733]: W0123 01:04:12.117573 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.117977 kubelet[2733]: E0123 01:04:12.117598 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.118607 kubelet[2733]: E0123 01:04:12.118506 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.118770 kubelet[2733]: W0123 01:04:12.118755 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.118951 kubelet[2733]: E0123 01:04:12.118911 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.119757 kubelet[2733]: E0123 01:04:12.119553 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.120096 kubelet[2733]: W0123 01:04:12.119995 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.120343 kubelet[2733]: E0123 01:04:12.120245 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:12.121253 kubelet[2733]: E0123 01:04:12.120795 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.121253 kubelet[2733]: W0123 01:04:12.120807 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.121253 kubelet[2733]: E0123 01:04:12.120817 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.121848 kubelet[2733]: E0123 01:04:12.121751 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.122086 kubelet[2733]: W0123 01:04:12.122072 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.122289 kubelet[2733]: E0123 01:04:12.122227 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.123139 kubelet[2733]: E0123 01:04:12.122745 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.123139 kubelet[2733]: W0123 01:04:12.122756 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.123139 kubelet[2733]: E0123 01:04:12.122765 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.123479 kubelet[2733]: E0123 01:04:12.123450 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.123567 kubelet[2733]: W0123 01:04:12.123555 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.123633 kubelet[2733]: E0123 01:04:12.123604 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.123930 kubelet[2733]: E0123 01:04:12.123919 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.124010 kubelet[2733]: W0123 01:04:12.123999 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.124087 kubelet[2733]: E0123 01:04:12.124076 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:12.124502 kubelet[2733]: E0123 01:04:12.124417 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.124502 kubelet[2733]: W0123 01:04:12.124428 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.124502 kubelet[2733]: E0123 01:04:12.124436 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.124781 kubelet[2733]: E0123 01:04:12.124769 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.124830 kubelet[2733]: W0123 01:04:12.124821 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.124883 kubelet[2733]: E0123 01:04:12.124873 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.125197 kubelet[2733]: E0123 01:04:12.125139 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.125197 kubelet[2733]: W0123 01:04:12.125150 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.125197 kubelet[2733]: E0123 01:04:12.125158 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.126044 kubelet[2733]: E0123 01:04:12.126031 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.126105 kubelet[2733]: W0123 01:04:12.126095 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.126159 kubelet[2733]: E0123 01:04:12.126148 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.126476 kubelet[2733]: E0123 01:04:12.126464 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.126526 kubelet[2733]: W0123 01:04:12.126517 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.126566 kubelet[2733]: E0123 01:04:12.126558 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:12.126948 kubelet[2733]: E0123 01:04:12.126829 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.126948 kubelet[2733]: W0123 01:04:12.126839 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.126948 kubelet[2733]: E0123 01:04:12.126850 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.127087 kubelet[2733]: E0123 01:04:12.127076 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.127141 kubelet[2733]: W0123 01:04:12.127130 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.127186 kubelet[2733]: E0123 01:04:12.127177 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.127475 kubelet[2733]: E0123 01:04:12.127463 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.127525 kubelet[2733]: W0123 01:04:12.127516 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.127579 kubelet[2733]: E0123 01:04:12.127568 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:04:12.136229 kubelet[2733]: E0123 01:04:12.136189 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:04:12.136229 kubelet[2733]: W0123 01:04:12.136210 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:04:12.136229 kubelet[2733]: E0123 01:04:12.136223 2733 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:04:12.138879 kubelet[2733]: E0123 01:04:12.138633 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:12.139010 containerd[1557]: time="2026-01-23T01:04:12.138970622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k6nwj,Uid:56c02ac9-e87e-4a45-8a42-f5a43fc4e181,Namespace:calico-system,Attempt:0,}" Jan 23 01:04:12.161611 containerd[1557]: time="2026-01-23T01:04:12.161556325Z" level=info msg="connecting to shim f2d6b1a88740657c5181e3bbfe503da112c5ac0bbeb56e003ca36e1ffbf863d9" address="unix:///run/containerd/s/2664312caf58b72f8a99ecf72a81ea7a6d347d8a70aeaaedfdf156d39a16d6fe" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:04:12.189534 systemd[1]: Started cri-containerd-f2d6b1a88740657c5181e3bbfe503da112c5ac0bbeb56e003ca36e1ffbf863d9.scope - libcontainer container f2d6b1a88740657c5181e3bbfe503da112c5ac0bbeb56e003ca36e1ffbf863d9. Jan 23 01:04:12.216635 containerd[1557]: time="2026-01-23T01:04:12.216552677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k6nwj,Uid:56c02ac9-e87e-4a45-8a42-f5a43fc4e181,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2d6b1a88740657c5181e3bbfe503da112c5ac0bbeb56e003ca36e1ffbf863d9\"" Jan 23 01:04:12.218041 kubelet[2733]: E0123 01:04:12.217981 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:13.367422 containerd[1557]: time="2026-01-23T01:04:13.367373405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:04:13.368313 containerd[1557]: time="2026-01-23T01:04:13.368122161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Jan 23 01:04:13.368790 containerd[1557]: time="2026-01-23T01:04:13.368766103Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:04:13.370363 containerd[1557]: time="2026-01-23T01:04:13.370302527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:04:13.370872 containerd[1557]: time="2026-01-23T01:04:13.370845236Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.262538809s" Jan 23 01:04:13.370905 containerd[1557]: time="2026-01-23T01:04:13.370873737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 23 01:04:13.374406 containerd[1557]: time="2026-01-23T01:04:13.374385400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 01:04:13.387489 containerd[1557]: time="2026-01-23T01:04:13.387464277Z" level=info msg="CreateContainer 
within sandbox \"9d1facfd62d55ef576c876218a9a332166a37ef4d17affaa9e92329ee7752935\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 01:04:13.395259 containerd[1557]: time="2026-01-23T01:04:13.395211958Z" level=info msg="Container 9b359349542a19df85686ee68cf0dc6ac986ab894e33feab3b440a7c0fc13b59: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:04:13.398220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount845234426.mount: Deactivated successfully. Jan 23 01:04:13.403308 containerd[1557]: time="2026-01-23T01:04:13.403266740Z" level=info msg="CreateContainer within sandbox \"9d1facfd62d55ef576c876218a9a332166a37ef4d17affaa9e92329ee7752935\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9b359349542a19df85686ee68cf0dc6ac986ab894e33feab3b440a7c0fc13b59\"" Jan 23 01:04:13.403838 containerd[1557]: time="2026-01-23T01:04:13.403816939Z" level=info msg="StartContainer for \"9b359349542a19df85686ee68cf0dc6ac986ab894e33feab3b440a7c0fc13b59\"" Jan 23 01:04:13.404963 containerd[1557]: time="2026-01-23T01:04:13.404891626Z" level=info msg="connecting to shim 9b359349542a19df85686ee68cf0dc6ac986ab894e33feab3b440a7c0fc13b59" address="unix:///run/containerd/s/f3fbd03a53f95379c0cf55ef84639faf2712a2634ad0ee02f3b2a010703b51ca" protocol=ttrpc version=3 Jan 23 01:04:13.424430 systemd[1]: Started cri-containerd-9b359349542a19df85686ee68cf0dc6ac986ab894e33feab3b440a7c0fc13b59.scope - libcontainer container 9b359349542a19df85686ee68cf0dc6ac986ab894e33feab3b440a7c0fc13b59. Jan 23 01:04:13.482876 containerd[1557]: time="2026-01-23T01:04:13.482820630Z" level=info msg="StartContainer for \"9b359349542a19df85686ee68cf0dc6ac986ab894e33feab3b440a7c0fc13b59\" returns successfully" Jan 23 01:04:14.015416 containerd[1557]: time="2026-01-23T01:04:14.015372961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:04:14.016045 containerd[1557]: time="2026-01-23T01:04:14.016018242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 23 01:04:14.016596 containerd[1557]: time="2026-01-23T01:04:14.016554080Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:04:14.018016 containerd[1557]: time="2026-01-23T01:04:14.017967406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:04:14.018584 containerd[1557]: time="2026-01-23T01:04:14.018455692Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 644.045162ms" Jan 23 01:04:14.018584 containerd[1557]: time="2026-01-23T01:04:14.018482743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 23 01:04:14.021728 containerd[1557]: time="2026-01-23T01:04:14.021705279Z" 
level=info msg="CreateContainer within sandbox \"f2d6b1a88740657c5181e3bbfe503da112c5ac0bbeb56e003ca36e1ffbf863d9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 01:04:14.029478 containerd[1557]: time="2026-01-23T01:04:14.029453552Z" level=info msg="Container f98e1afce4b42a667c4d05b67673c05b9efa4e5852479740bba399d47345ece7: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:04:14.033633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1889050416.mount: Deactivated successfully. Jan 23 01:04:14.039701 containerd[1557]: time="2026-01-23T01:04:14.039661657Z" level=info msg="CreateContainer within sandbox \"f2d6b1a88740657c5181e3bbfe503da112c5ac0bbeb56e003ca36e1ffbf863d9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f98e1afce4b42a667c4d05b67673c05b9efa4e5852479740bba399d47345ece7\"" Jan 23 01:04:14.040081 containerd[1557]: time="2026-01-23T01:04:14.040062289Z" level=info msg="StartContainer for \"f98e1afce4b42a667c4d05b67673c05b9efa4e5852479740bba399d47345ece7\"" Jan 23 01:04:14.042445 containerd[1557]: time="2026-01-23T01:04:14.042422837Z" level=info msg="connecting to shim f98e1afce4b42a667c4d05b67673c05b9efa4e5852479740bba399d47345ece7" address="unix:///run/containerd/s/2664312caf58b72f8a99ecf72a81ea7a6d347d8a70aeaaedfdf156d39a16d6fe" protocol=ttrpc version=3 Jan 23 01:04:14.064392 systemd[1]: Started cri-containerd-f98e1afce4b42a667c4d05b67673c05b9efa4e5852479740bba399d47345ece7.scope - libcontainer container f98e1afce4b42a667c4d05b67673c05b9efa4e5852479740bba399d47345ece7. Jan 23 01:04:14.074741 kubelet[2733]: E0123 01:04:14.074455 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vbcl" podUID="16cb6344-7ecd-43dd-aa88-18d498591102" Jan 23 01:04:14.143471 containerd[1557]: time="2026-01-23T01:04:14.143392552Z" level=info msg="StartContainer for \"f98e1afce4b42a667c4d05b67673c05b9efa4e5852479740bba399d47345ece7\" returns successfully" Jan 23 01:04:14.161626 kubelet[2733]: E0123 01:04:14.161569 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:14.164754 kubelet[2733]: E0123 01:04:14.164737 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:14.171985 kubelet[2733]: I0123 01:04:14.171871 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-85cddf7bbb-wn68x" podStartSLOduration=1.9081240130000001 podStartE2EDuration="3.171857084s" podCreationTimestamp="2026-01-23 01:04:11 +0000 UTC" firstStartedPulling="2026-01-23 01:04:12.10782576 +0000 UTC m=+20.111170222" lastFinishedPulling="2026-01-23 01:04:13.371558831 +0000 UTC m=+21.374903293" observedRunningTime="2026-01-23 01:04:14.171611757 +0000 UTC m=+22.174956219" watchObservedRunningTime="2026-01-23 01:04:14.171857084 +0000 UTC m=+22.175201546" Jan 23 01:04:14.178916 systemd[1]: cri-containerd-f98e1afce4b42a667c4d05b67673c05b9efa4e5852479740bba399d47345ece7.scope: Deactivated successfully. 
Jan 23 01:04:14.182127 containerd[1557]: time="2026-01-23T01:04:14.182040267Z" level=info msg="received container exit event container_id:\"f98e1afce4b42a667c4d05b67673c05b9efa4e5852479740bba399d47345ece7\" id:\"f98e1afce4b42a667c4d05b67673c05b9efa4e5852479740bba399d47345ece7\" pid:3366 exited_at:{seconds:1769130254 nanos:181596833}" Jan 23 01:04:14.222873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f98e1afce4b42a667c4d05b67673c05b9efa4e5852479740bba399d47345ece7-rootfs.mount: Deactivated successfully. Jan 23 01:04:15.170093 kubelet[2733]: E0123 01:04:15.170029 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:15.171941 kubelet[2733]: I0123 01:04:15.171329 2733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 01:04:15.171941 kubelet[2733]: E0123 01:04:15.171802 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:15.174970 containerd[1557]: time="2026-01-23T01:04:15.174924620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 01:04:16.074634 kubelet[2733]: E0123 01:04:16.074563 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vbcl" podUID="16cb6344-7ecd-43dd-aa88-18d498591102" Jan 23 01:04:17.294494 containerd[1557]: time="2026-01-23T01:04:17.294415649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:04:17.295547 containerd[1557]: time="2026-01-23T01:04:17.295285121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 23 01:04:17.296506 containerd[1557]: time="2026-01-23T01:04:17.296441413Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:04:17.299324 containerd[1557]: time="2026-01-23T01:04:17.298977072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:04:17.300402 containerd[1557]: time="2026-01-23T01:04:17.300369719Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.125370286s" Jan 23 01:04:17.300474 containerd[1557]: time="2026-01-23T01:04:17.300406910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 23 01:04:17.304748 containerd[1557]: time="2026-01-23T01:04:17.304531061Z" level=info msg="CreateContainer within sandbox \"f2d6b1a88740657c5181e3bbfe503da112c5ac0bbeb56e003ca36e1ffbf863d9\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 01:04:17.314310 containerd[1557]: time="2026-01-23T01:04:17.313573233Z" level=info msg="Container 064771d466fcd521fc78dc62744ce477db60b541d10e3af7f5ff7f127a017044: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:04:17.319106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1458608763.mount: Deactivated successfully. Jan 23 01:04:17.325502 containerd[1557]: time="2026-01-23T01:04:17.325471293Z" level=info msg="CreateContainer within sandbox \"f2d6b1a88740657c5181e3bbfe503da112c5ac0bbeb56e003ca36e1ffbf863d9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"064771d466fcd521fc78dc62744ce477db60b541d10e3af7f5ff7f127a017044\"" Jan 23 01:04:17.327392 containerd[1557]: time="2026-01-23T01:04:17.327337773Z" level=info msg="StartContainer for \"064771d466fcd521fc78dc62744ce477db60b541d10e3af7f5ff7f127a017044\"" Jan 23 01:04:17.329144 containerd[1557]: time="2026-01-23T01:04:17.329120681Z" level=info msg="connecting to shim 064771d466fcd521fc78dc62744ce477db60b541d10e3af7f5ff7f127a017044" address="unix:///run/containerd/s/2664312caf58b72f8a99ecf72a81ea7a6d347d8a70aeaaedfdf156d39a16d6fe" protocol=ttrpc version=3 Jan 23 01:04:17.357447 systemd[1]: Started cri-containerd-064771d466fcd521fc78dc62744ce477db60b541d10e3af7f5ff7f127a017044.scope - libcontainer container 064771d466fcd521fc78dc62744ce477db60b541d10e3af7f5ff7f127a017044. Jan 23 01:04:17.435168 containerd[1557]: time="2026-01-23T01:04:17.435102171Z" level=info msg="StartContainer for \"064771d466fcd521fc78dc62744ce477db60b541d10e3af7f5ff7f127a017044\" returns successfully" Jan 23 01:04:17.935419 containerd[1557]: time="2026-01-23T01:04:17.935378232Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:04:17.937996 systemd[1]: cri-containerd-064771d466fcd521fc78dc62744ce477db60b541d10e3af7f5ff7f127a017044.scope: Deactivated successfully. Jan 23 01:04:17.938326 systemd[1]: cri-containerd-064771d466fcd521fc78dc62744ce477db60b541d10e3af7f5ff7f127a017044.scope: Consumed 548ms CPU time, 195.4M memory peak, 171.3M written to disk. Jan 23 01:04:17.939959 containerd[1557]: time="2026-01-23T01:04:17.939922615Z" level=info msg="received container exit event container_id:\"064771d466fcd521fc78dc62744ce477db60b541d10e3af7f5ff7f127a017044\" id:\"064771d466fcd521fc78dc62744ce477db60b541d10e3af7f5ff7f127a017044\" pid:3426 exited_at:{seconds:1769130257 nanos:939494403}" Jan 23 01:04:17.962000 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-064771d466fcd521fc78dc62744ce477db60b541d10e3af7f5ff7f127a017044-rootfs.mount: Deactivated successfully. Jan 23 01:04:18.009599 kubelet[2733]: I0123 01:04:18.009567 2733 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 01:04:18.039819 systemd[1]: Created slice kubepods-burstable-podb6a31a37_0b51_4596_b51a_38cbea1a13d6.slice - libcontainer container kubepods-burstable-podb6a31a37_0b51_4596_b51a_38cbea1a13d6.slice. Jan 23 01:04:18.066420 systemd[1]: Created slice kubepods-besteffort-pode8fdc5e3_83f6_414b_bda4_0c1884c70d80.slice - libcontainer container kubepods-besteffort-pode8fdc5e3_83f6_414b_bda4_0c1884c70d80.slice. 
Jan 23 01:04:18.075693 systemd[1]: Created slice kubepods-besteffort-pod864d071c_38d9_4c87_9ba2_e5d2783e5cdc.slice - libcontainer container kubepods-besteffort-pod864d071c_38d9_4c87_9ba2_e5d2783e5cdc.slice. Jan 23 01:04:18.086738 systemd[1]: Created slice kubepods-besteffort-pod39650212_2ffc_42da_8b29_3a9e9efdade1.slice - libcontainer container kubepods-besteffort-pod39650212_2ffc_42da_8b29_3a9e9efdade1.slice. Jan 23 01:04:18.094473 systemd[1]: Created slice kubepods-burstable-pod9961df16_6144_417e_bebd_56649ddba7b2.slice - libcontainer container kubepods-burstable-pod9961df16_6144_417e_bebd_56649ddba7b2.slice. Jan 23 01:04:18.103416 systemd[1]: Created slice kubepods-besteffort-pode68af0d7_5f9f_4004_bfb5_105e45ad7f04.slice - libcontainer container kubepods-besteffort-pode68af0d7_5f9f_4004_bfb5_105e45ad7f04.slice. Jan 23 01:04:18.113597 systemd[1]: Created slice kubepods-besteffort-pod4208ebfd_8f2b_4fdd_9422_1692e411c7bc.slice - libcontainer container kubepods-besteffort-pod4208ebfd_8f2b_4fdd_9422_1692e411c7bc.slice. Jan 23 01:04:18.120207 systemd[1]: Created slice kubepods-besteffort-pod16cb6344_7ecd_43dd_aa88_18d498591102.slice - libcontainer container kubepods-besteffort-pod16cb6344_7ecd_43dd_aa88_18d498591102.slice. Jan 23 01:04:18.124327 containerd[1557]: time="2026-01-23T01:04:18.123849050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9vbcl,Uid:16cb6344-7ecd-43dd-aa88-18d498591102,Namespace:calico-system,Attempt:0,}" Jan 23 01:04:18.160097 kubelet[2733]: I0123 01:04:18.159989 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctw6x\" (UniqueName: \"kubernetes.io/projected/864d071c-38d9-4c87-9ba2-e5d2783e5cdc-kube-api-access-ctw6x\") pod \"calico-kube-controllers-68bb9cdd99-9nmwm\" (UID: \"864d071c-38d9-4c87-9ba2-e5d2783e5cdc\") " pod="calico-system/calico-kube-controllers-68bb9cdd99-9nmwm" Jan 23 01:04:18.160097 kubelet[2733]: I0123 01:04:18.160030 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/864d071c-38d9-4c87-9ba2-e5d2783e5cdc-tigera-ca-bundle\") pod \"calico-kube-controllers-68bb9cdd99-9nmwm\" (UID: \"864d071c-38d9-4c87-9ba2-e5d2783e5cdc\") " pod="calico-system/calico-kube-controllers-68bb9cdd99-9nmwm" Jan 23 01:04:18.160097 kubelet[2733]: I0123 01:04:18.160052 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e8fdc5e3-83f6-414b-bda4-0c1884c70d80-calico-apiserver-certs\") pod \"calico-apiserver-549f748967-5pdhf\" (UID: \"e8fdc5e3-83f6-414b-bda4-0c1884c70d80\") " pod="calico-apiserver/calico-apiserver-549f748967-5pdhf" Jan 23 01:04:18.160097 kubelet[2733]: I0123 01:04:18.160066 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9961df16-6144-417e-bebd-56649ddba7b2-config-volume\") pod \"coredns-674b8bbfcf-dhrfc\" (UID: \"9961df16-6144-417e-bebd-56649ddba7b2\") " pod="kube-system/coredns-674b8bbfcf-dhrfc" Jan 23 01:04:18.160097 kubelet[2733]: I0123 01:04:18.160083 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8dwm\" (UniqueName: \"kubernetes.io/projected/9961df16-6144-417e-bebd-56649ddba7b2-kube-api-access-p8dwm\") pod \"coredns-674b8bbfcf-dhrfc\" (UID: 
\"9961df16-6144-417e-bebd-56649ddba7b2\") " pod="kube-system/coredns-674b8bbfcf-dhrfc" Jan 23 01:04:18.160333 kubelet[2733]: I0123 01:04:18.160101 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6a31a37-0b51-4596-b51a-38cbea1a13d6-config-volume\") pod \"coredns-674b8bbfcf-swb6c\" (UID: \"b6a31a37-0b51-4596-b51a-38cbea1a13d6\") " pod="kube-system/coredns-674b8bbfcf-swb6c" Jan 23 01:04:18.160333 kubelet[2733]: I0123 01:04:18.160114 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sbr4\" (UniqueName: \"kubernetes.io/projected/b6a31a37-0b51-4596-b51a-38cbea1a13d6-kube-api-access-5sbr4\") pod \"coredns-674b8bbfcf-swb6c\" (UID: \"b6a31a37-0b51-4596-b51a-38cbea1a13d6\") " pod="kube-system/coredns-674b8bbfcf-swb6c" Jan 23 01:04:18.160333 kubelet[2733]: I0123 01:04:18.160128 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgwnh\" (UniqueName: \"kubernetes.io/projected/e8fdc5e3-83f6-414b-bda4-0c1884c70d80-kube-api-access-kgwnh\") pod \"calico-apiserver-549f748967-5pdhf\" (UID: \"e8fdc5e3-83f6-414b-bda4-0c1884c70d80\") " pod="calico-apiserver/calico-apiserver-549f748967-5pdhf" Jan 23 01:04:18.181948 kubelet[2733]: E0123 01:04:18.181884 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:18.183351 containerd[1557]: time="2026-01-23T01:04:18.183307387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 01:04:18.194858 containerd[1557]: time="2026-01-23T01:04:18.194675634Z" level=error msg="Failed to destroy network for sandbox \"fa935e15ebc98f2aff27fccfe68c7cfd07f9e38fcb0c8dde88621bd59a28d641\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:18.196857 containerd[1557]: time="2026-01-23T01:04:18.196265284Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9vbcl,Uid:16cb6344-7ecd-43dd-aa88-18d498591102,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa935e15ebc98f2aff27fccfe68c7cfd07f9e38fcb0c8dde88621bd59a28d641\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:18.197471 kubelet[2733]: E0123 01:04:18.197294 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa935e15ebc98f2aff27fccfe68c7cfd07f9e38fcb0c8dde88621bd59a28d641\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:18.197471 kubelet[2733]: E0123 01:04:18.197340 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa935e15ebc98f2aff27fccfe68c7cfd07f9e38fcb0c8dde88621bd59a28d641\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9vbcl" Jan 23 01:04:18.197471 kubelet[2733]: E0123 01:04:18.197360 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa935e15ebc98f2aff27fccfe68c7cfd07f9e38fcb0c8dde88621bd59a28d641\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9vbcl" Jan 23 01:04:18.197578 kubelet[2733]: E0123 01:04:18.197394 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9vbcl_calico-system(16cb6344-7ecd-43dd-aa88-18d498591102)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9vbcl_calico-system(16cb6344-7ecd-43dd-aa88-18d498591102)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa935e15ebc98f2aff27fccfe68c7cfd07f9e38fcb0c8dde88621bd59a28d641\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9vbcl" podUID="16cb6344-7ecd-43dd-aa88-18d498591102" Jan 23 01:04:18.260957 kubelet[2733]: I0123 01:04:18.260926 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e68af0d7-5f9f-4004-bfb5-105e45ad7f04-goldmane-key-pair\") pod \"goldmane-666569f655-z8t58\" (UID: \"e68af0d7-5f9f-4004-bfb5-105e45ad7f04\") " pod="calico-system/goldmane-666569f655-z8t58" Jan 23 01:04:18.260957 kubelet[2733]: I0123 01:04:18.260956 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4208ebfd-8f2b-4fdd-9422-1692e411c7bc-whisker-ca-bundle\") pod \"whisker-75fdf5456d-xtn74\" (UID: \"4208ebfd-8f2b-4fdd-9422-1692e411c7bc\") " pod="calico-system/whisker-75fdf5456d-xtn74" Jan 23 01:04:18.262372 kubelet[2733]: I0123 01:04:18.261007 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e68af0d7-5f9f-4004-bfb5-105e45ad7f04-config\") pod \"goldmane-666569f655-z8t58\" (UID: \"e68af0d7-5f9f-4004-bfb5-105e45ad7f04\") " pod="calico-system/goldmane-666569f655-z8t58" Jan 23 01:04:18.262372 kubelet[2733]: I0123 01:04:18.261031 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7pz4\" (UniqueName: \"kubernetes.io/projected/e68af0d7-5f9f-4004-bfb5-105e45ad7f04-kube-api-access-v7pz4\") pod \"goldmane-666569f655-z8t58\" (UID: \"e68af0d7-5f9f-4004-bfb5-105e45ad7f04\") " pod="calico-system/goldmane-666569f655-z8t58" Jan 23 01:04:18.262372 kubelet[2733]: I0123 01:04:18.261070 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/39650212-2ffc-42da-8b29-3a9e9efdade1-calico-apiserver-certs\") pod \"calico-apiserver-549f748967-tk79b\" (UID: \"39650212-2ffc-42da-8b29-3a9e9efdade1\") " pod="calico-apiserver/calico-apiserver-549f748967-tk79b" Jan 23 01:04:18.262372 kubelet[2733]: I0123 01:04:18.261083 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-7r2xj\" (UniqueName: \"kubernetes.io/projected/39650212-2ffc-42da-8b29-3a9e9efdade1-kube-api-access-7r2xj\") pod \"calico-apiserver-549f748967-tk79b\" (UID: \"39650212-2ffc-42da-8b29-3a9e9efdade1\") " pod="calico-apiserver/calico-apiserver-549f748967-tk79b" Jan 23 01:04:18.262372 kubelet[2733]: I0123 01:04:18.261097 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e68af0d7-5f9f-4004-bfb5-105e45ad7f04-goldmane-ca-bundle\") pod \"goldmane-666569f655-z8t58\" (UID: \"e68af0d7-5f9f-4004-bfb5-105e45ad7f04\") " pod="calico-system/goldmane-666569f655-z8t58" Jan 23 01:04:18.262501 kubelet[2733]: I0123 01:04:18.261111 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4208ebfd-8f2b-4fdd-9422-1692e411c7bc-whisker-backend-key-pair\") pod \"whisker-75fdf5456d-xtn74\" (UID: \"4208ebfd-8f2b-4fdd-9422-1692e411c7bc\") " pod="calico-system/whisker-75fdf5456d-xtn74" Jan 23 01:04:18.262501 kubelet[2733]: I0123 01:04:18.261128 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75sld\" (UniqueName: \"kubernetes.io/projected/4208ebfd-8f2b-4fdd-9422-1692e411c7bc-kube-api-access-75sld\") pod \"whisker-75fdf5456d-xtn74\" (UID: \"4208ebfd-8f2b-4fdd-9422-1692e411c7bc\") " pod="calico-system/whisker-75fdf5456d-xtn74" Jan 23 01:04:18.317244 systemd[1]: run-netns-cni\x2d75ef5140\x2dfff6\x2d3e33\x2d0067\x2d44c3f17127a2.mount: Deactivated successfully. Jan 23 01:04:18.357178 kubelet[2733]: E0123 01:04:18.357153 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:18.357559 containerd[1557]: time="2026-01-23T01:04:18.357521483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-swb6c,Uid:b6a31a37-0b51-4596-b51a-38cbea1a13d6,Namespace:kube-system,Attempt:0,}" Jan 23 01:04:18.375636 containerd[1557]: time="2026-01-23T01:04:18.375503426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-549f748967-5pdhf,Uid:e8fdc5e3-83f6-414b-bda4-0c1884c70d80,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:04:18.395474 containerd[1557]: time="2026-01-23T01:04:18.395439967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68bb9cdd99-9nmwm,Uid:864d071c-38d9-4c87-9ba2-e5d2783e5cdc,Namespace:calico-system,Attempt:0,}" Jan 23 01:04:18.400977 kubelet[2733]: E0123 01:04:18.400728 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:18.403036 containerd[1557]: time="2026-01-23T01:04:18.402975447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dhrfc,Uid:9961df16-6144-417e-bebd-56649ddba7b2,Namespace:kube-system,Attempt:0,}" Jan 23 01:04:18.420056 containerd[1557]: time="2026-01-23T01:04:18.420030857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75fdf5456d-xtn74,Uid:4208ebfd-8f2b-4fdd-9422-1692e411c7bc,Namespace:calico-system,Attempt:0,}" Jan 23 01:04:18.528128 containerd[1557]: time="2026-01-23T01:04:18.528081997Z" level=error msg="Failed to destroy network for sandbox 
\"1b1172239fa4657a6d2f3ccbc49960043f6f8b545454d32e583139b6a0ebcbd3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:18.533359 containerd[1557]: time="2026-01-23T01:04:18.533323429Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-swb6c,Uid:b6a31a37-0b51-4596-b51a-38cbea1a13d6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b1172239fa4657a6d2f3ccbc49960043f6f8b545454d32e583139b6a0ebcbd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:18.533946 kubelet[2733]: E0123 01:04:18.533727 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b1172239fa4657a6d2f3ccbc49960043f6f8b545454d32e583139b6a0ebcbd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:18.533946 kubelet[2733]: E0123 01:04:18.533889 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b1172239fa4657a6d2f3ccbc49960043f6f8b545454d32e583139b6a0ebcbd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-swb6c" Jan 23 01:04:18.533946 kubelet[2733]: E0123 01:04:18.533912 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b1172239fa4657a6d2f3ccbc49960043f6f8b545454d32e583139b6a0ebcbd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-swb6c" Jan 23 01:04:18.534718 kubelet[2733]: E0123 01:04:18.534510 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-swb6c_kube-system(b6a31a37-0b51-4596-b51a-38cbea1a13d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-swb6c_kube-system(b6a31a37-0b51-4596-b51a-38cbea1a13d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b1172239fa4657a6d2f3ccbc49960043f6f8b545454d32e583139b6a0ebcbd3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-swb6c" podUID="b6a31a37-0b51-4596-b51a-38cbea1a13d6" Jan 23 01:04:18.547011 containerd[1557]: time="2026-01-23T01:04:18.546873140Z" level=error msg="Failed to destroy network for sandbox \"2d8d588d56167c9dfad7991f8604f90492e6c719455c48e1cafbd97bc8a4653c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:18.549008 containerd[1557]: time="2026-01-23T01:04:18.548978193Z" level=error msg="Failed to 
destroy network for sandbox \"12b14682492c34edd3b4de8070e153efbff1d08f3b798d577b8a8dbb241d61f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:18.549202 containerd[1557]: time="2026-01-23T01:04:18.549127357Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-549f748967-5pdhf,Uid:e8fdc5e3-83f6-414b-bda4-0c1884c70d80,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d8d588d56167c9dfad7991f8604f90492e6c719455c48e1cafbd97bc8a4653c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:18.549710 kubelet[2733]: E0123 01:04:18.549547 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d8d588d56167c9dfad7991f8604f90492e6c719455c48e1cafbd97bc8a4653c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:18.549710 kubelet[2733]: E0123 01:04:18.549606 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d8d588d56167c9dfad7991f8604f90492e6c719455c48e1cafbd97bc8a4653c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-549f748967-5pdhf" Jan 23 01:04:18.549710 kubelet[2733]: E0123 01:04:18.549635 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d8d588d56167c9dfad7991f8604f90492e6c719455c48e1cafbd97bc8a4653c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-549f748967-5pdhf" Jan 23 01:04:18.549797 kubelet[2733]: E0123 01:04:18.549752 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-549f748967-5pdhf_calico-apiserver(e8fdc5e3-83f6-414b-bda4-0c1884c70d80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-549f748967-5pdhf_calico-apiserver(e8fdc5e3-83f6-414b-bda4-0c1884c70d80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d8d588d56167c9dfad7991f8604f90492e6c719455c48e1cafbd97bc8a4653c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-549f748967-5pdhf" podUID="e8fdc5e3-83f6-414b-bda4-0c1884c70d80" Jan 23 01:04:18.551354 containerd[1557]: time="2026-01-23T01:04:18.551194429Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75fdf5456d-xtn74,Uid:4208ebfd-8f2b-4fdd-9422-1692e411c7bc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"12b14682492c34edd3b4de8070e153efbff1d08f3b798d577b8a8dbb241d61f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:18.551415 kubelet[2733]: E0123 01:04:18.551384 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12b14682492c34edd3b4de8070e153efbff1d08f3b798d577b8a8dbb241d61f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:18.551453 kubelet[2733]: E0123 01:04:18.551414 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12b14682492c34edd3b4de8070e153efbff1d08f3b798d577b8a8dbb241d61f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75fdf5456d-xtn74" Jan 23 01:04:18.551453 kubelet[2733]: E0123 01:04:18.551434 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12b14682492c34edd3b4de8070e153efbff1d08f3b798d577b8a8dbb241d61f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75fdf5456d-xtn74" Jan 23 01:04:18.551598 kubelet[2733]: E0123 01:04:18.551496 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-75fdf5456d-xtn74_calico-system(4208ebfd-8f2b-4fdd-9422-1692e411c7bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-75fdf5456d-xtn74_calico-system(4208ebfd-8f2b-4fdd-9422-1692e411c7bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12b14682492c34edd3b4de8070e153efbff1d08f3b798d577b8a8dbb241d61f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75fdf5456d-xtn74" podUID="4208ebfd-8f2b-4fdd-9422-1692e411c7bc" Jan 23 01:04:18.553491 containerd[1557]: time="2026-01-23T01:04:18.553445675Z" level=error msg="Failed to destroy network for sandbox \"26994294ed42c5495948f72ea597db95b4e436c59f5b151b7d9ba868e39095da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:18.554431 containerd[1557]: time="2026-01-23T01:04:18.554145183Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dhrfc,Uid:9961df16-6144-417e-bebd-56649ddba7b2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"26994294ed42c5495948f72ea597db95b4e436c59f5b151b7d9ba868e39095da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:18.554505 kubelet[2733]: E0123 01:04:18.554258 2733 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26994294ed42c5495948f72ea597db95b4e436c59f5b151b7d9ba868e39095da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:18.554505 kubelet[2733]: E0123 01:04:18.554348 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26994294ed42c5495948f72ea597db95b4e436c59f5b151b7d9ba868e39095da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dhrfc" Jan 23 01:04:18.554505 kubelet[2733]: E0123 01:04:18.554365 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26994294ed42c5495948f72ea597db95b4e436c59f5b151b7d9ba868e39095da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dhrfc" Jan 23 01:04:18.554580 kubelet[2733]: E0123 01:04:18.554400 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dhrfc_kube-system(9961df16-6144-417e-bebd-56649ddba7b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dhrfc_kube-system(9961df16-6144-417e-bebd-56649ddba7b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26994294ed42c5495948f72ea597db95b4e436c59f5b151b7d9ba868e39095da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dhrfc" podUID="9961df16-6144-417e-bebd-56649ddba7b2" Jan 23 01:04:18.557538 containerd[1557]: time="2026-01-23T01:04:18.557489487Z" level=error msg="Failed to destroy network for sandbox \"fd134046241f40fda247cbe10ff0b39ed0c9242d66abed285c9abc6e6a51a41b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:18.558189 containerd[1557]: time="2026-01-23T01:04:18.558156164Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68bb9cdd99-9nmwm,Uid:864d071c-38d9-4c87-9ba2-e5d2783e5cdc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd134046241f40fda247cbe10ff0b39ed0c9242d66abed285c9abc6e6a51a41b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:18.558746 kubelet[2733]: E0123 01:04:18.558312 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd134046241f40fda247cbe10ff0b39ed0c9242d66abed285c9abc6e6a51a41b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 
01:04:18.558746 kubelet[2733]: E0123 01:04:18.558338 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd134046241f40fda247cbe10ff0b39ed0c9242d66abed285c9abc6e6a51a41b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68bb9cdd99-9nmwm" Jan 23 01:04:18.558746 kubelet[2733]: E0123 01:04:18.558358 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd134046241f40fda247cbe10ff0b39ed0c9242d66abed285c9abc6e6a51a41b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68bb9cdd99-9nmwm" Jan 23 01:04:18.558828 kubelet[2733]: E0123 01:04:18.558388 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68bb9cdd99-9nmwm_calico-system(864d071c-38d9-4c87-9ba2-e5d2783e5cdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68bb9cdd99-9nmwm_calico-system(864d071c-38d9-4c87-9ba2-e5d2783e5cdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd134046241f40fda247cbe10ff0b39ed0c9242d66abed285c9abc6e6a51a41b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68bb9cdd99-9nmwm" podUID="864d071c-38d9-4c87-9ba2-e5d2783e5cdc" Jan 23 01:04:18.701808 containerd[1557]: time="2026-01-23T01:04:18.701496173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-549f748967-tk79b,Uid:39650212-2ffc-42da-8b29-3a9e9efdade1,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:04:18.710824 containerd[1557]: time="2026-01-23T01:04:18.710784407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-z8t58,Uid:e68af0d7-5f9f-4004-bfb5-105e45ad7f04,Namespace:calico-system,Attempt:0,}" Jan 23 01:04:18.782731 containerd[1557]: time="2026-01-23T01:04:18.782560954Z" level=error msg="Failed to destroy network for sandbox \"0d452345a5dbbc19320669704cbb46f09751154b91c6f27748705ec80cdcabf3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:18.784379 containerd[1557]: time="2026-01-23T01:04:18.784352179Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-549f748967-tk79b,Uid:39650212-2ffc-42da-8b29-3a9e9efdade1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d452345a5dbbc19320669704cbb46f09751154b91c6f27748705ec80cdcabf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:18.785308 kubelet[2733]: E0123 01:04:18.785142 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0d452345a5dbbc19320669704cbb46f09751154b91c6f27748705ec80cdcabf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:18.785308 kubelet[2733]: E0123 01:04:18.785218 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d452345a5dbbc19320669704cbb46f09751154b91c6f27748705ec80cdcabf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-549f748967-tk79b" Jan 23 01:04:18.785308 kubelet[2733]: E0123 01:04:18.785240 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d452345a5dbbc19320669704cbb46f09751154b91c6f27748705ec80cdcabf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-549f748967-tk79b" Jan 23 01:04:18.785934 kubelet[2733]: E0123 01:04:18.785485 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-549f748967-tk79b_calico-apiserver(39650212-2ffc-42da-8b29-3a9e9efdade1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-549f748967-tk79b_calico-apiserver(39650212-2ffc-42da-8b29-3a9e9efdade1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d452345a5dbbc19320669704cbb46f09751154b91c6f27748705ec80cdcabf3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-549f748967-tk79b" podUID="39650212-2ffc-42da-8b29-3a9e9efdade1" Jan 23 01:04:18.795856 containerd[1557]: time="2026-01-23T01:04:18.795821738Z" level=error msg="Failed to destroy network for sandbox \"6db962492b7dde5a1173d29c54327b1a72cf8b020684cbde0b03871b3959e4af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:18.800888 containerd[1557]: time="2026-01-23T01:04:18.797043828Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-z8t58,Uid:e68af0d7-5f9f-4004-bfb5-105e45ad7f04,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6db962492b7dde5a1173d29c54327b1a72cf8b020684cbde0b03871b3959e4af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:18.800996 kubelet[2733]: E0123 01:04:18.797492 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6db962492b7dde5a1173d29c54327b1a72cf8b020684cbde0b03871b3959e4af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:04:18.800996 kubelet[2733]: E0123 
01:04:18.797526 2733 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6db962492b7dde5a1173d29c54327b1a72cf8b020684cbde0b03871b3959e4af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-z8t58" Jan 23 01:04:18.800996 kubelet[2733]: E0123 01:04:18.797543 2733 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6db962492b7dde5a1173d29c54327b1a72cf8b020684cbde0b03871b3959e4af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-z8t58" Jan 23 01:04:18.801075 kubelet[2733]: E0123 01:04:18.797582 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-z8t58_calico-system(e68af0d7-5f9f-4004-bfb5-105e45ad7f04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-z8t58_calico-system(e68af0d7-5f9f-4004-bfb5-105e45ad7f04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6db962492b7dde5a1173d29c54327b1a72cf8b020684cbde0b03871b3959e4af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-z8t58" podUID="e68af0d7-5f9f-4004-bfb5-105e45ad7f04" Jan 23 01:04:19.322008 systemd[1]: run-netns-cni\x2dcea1ac05\x2d0092\x2daeb9\x2d596d\x2d4e72fd265ffa.mount: Deactivated successfully. Jan 23 01:04:19.322485 systemd[1]: run-netns-cni\x2dbb8b7469\x2d120b\x2dcf85\x2d1ce4\x2d1efdf7485e79.mount: Deactivated successfully. Jan 23 01:04:21.561063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3064458246.mount: Deactivated successfully. 
Jan 23 01:04:21.589617 containerd[1557]: time="2026-01-23T01:04:21.589003139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:04:21.589617 containerd[1557]: time="2026-01-23T01:04:21.589554710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 23 01:04:21.590119 containerd[1557]: time="2026-01-23T01:04:21.590097602Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:04:21.591672 containerd[1557]: time="2026-01-23T01:04:21.591652794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:04:21.592102 containerd[1557]: time="2026-01-23T01:04:21.592004771Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 3.4085367s" Jan 23 01:04:21.592171 containerd[1557]: time="2026-01-23T01:04:21.592158424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 23 01:04:21.610928 containerd[1557]: time="2026-01-23T01:04:21.610904961Z" level=info msg="CreateContainer within sandbox \"f2d6b1a88740657c5181e3bbfe503da112c5ac0bbeb56e003ca36e1ffbf863d9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 01:04:21.620871 containerd[1557]: time="2026-01-23T01:04:21.620850846Z" level=info msg="Container 7e4c27b1e8028797d87d3cc58356db043971356978510eb1c9e0e94b23a60066: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:04:21.626809 containerd[1557]: time="2026-01-23T01:04:21.626788479Z" level=info msg="CreateContainer within sandbox \"f2d6b1a88740657c5181e3bbfe503da112c5ac0bbeb56e003ca36e1ffbf863d9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7e4c27b1e8028797d87d3cc58356db043971356978510eb1c9e0e94b23a60066\"" Jan 23 01:04:21.628149 containerd[1557]: time="2026-01-23T01:04:21.628083046Z" level=info msg="StartContainer for \"7e4c27b1e8028797d87d3cc58356db043971356978510eb1c9e0e94b23a60066\"" Jan 23 01:04:21.629661 containerd[1557]: time="2026-01-23T01:04:21.629630558Z" level=info msg="connecting to shim 7e4c27b1e8028797d87d3cc58356db043971356978510eb1c9e0e94b23a60066" address="unix:///run/containerd/s/2664312caf58b72f8a99ecf72a81ea7a6d347d8a70aeaaedfdf156d39a16d6fe" protocol=ttrpc version=3 Jan 23 01:04:21.675433 systemd[1]: Started cri-containerd-7e4c27b1e8028797d87d3cc58356db043971356978510eb1c9e0e94b23a60066.scope - libcontainer container 7e4c27b1e8028797d87d3cc58356db043971356978510eb1c9e0e94b23a60066. Jan 23 01:04:21.761693 containerd[1557]: time="2026-01-23T01:04:21.761649945Z" level=info msg="StartContainer for \"7e4c27b1e8028797d87d3cc58356db043971356978510eb1c9e0e94b23a60066\" returns successfully" Jan 23 01:04:21.836792 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 01:04:21.836894 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>.
All Rights Reserved. Jan 23 01:04:21.981113 kubelet[2733]: I0123 01:04:21.981062 2733 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4208ebfd-8f2b-4fdd-9422-1692e411c7bc-whisker-ca-bundle\") pod \"4208ebfd-8f2b-4fdd-9422-1692e411c7bc\" (UID: \"4208ebfd-8f2b-4fdd-9422-1692e411c7bc\") " Jan 23 01:04:21.982051 kubelet[2733]: I0123 01:04:21.981321 2733 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75sld\" (UniqueName: \"kubernetes.io/projected/4208ebfd-8f2b-4fdd-9422-1692e411c7bc-kube-api-access-75sld\") pod \"4208ebfd-8f2b-4fdd-9422-1692e411c7bc\" (UID: \"4208ebfd-8f2b-4fdd-9422-1692e411c7bc\") " Jan 23 01:04:21.982051 kubelet[2733]: I0123 01:04:21.981460 2733 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4208ebfd-8f2b-4fdd-9422-1692e411c7bc-whisker-backend-key-pair\") pod \"4208ebfd-8f2b-4fdd-9422-1692e411c7bc\" (UID: \"4208ebfd-8f2b-4fdd-9422-1692e411c7bc\") " Jan 23 01:04:21.982051 kubelet[2733]: I0123 01:04:21.981514 2733 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4208ebfd-8f2b-4fdd-9422-1692e411c7bc-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4208ebfd-8f2b-4fdd-9422-1692e411c7bc" (UID: "4208ebfd-8f2b-4fdd-9422-1692e411c7bc"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 01:04:21.982051 kubelet[2733]: I0123 01:04:21.981639 2733 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4208ebfd-8f2b-4fdd-9422-1692e411c7bc-whisker-ca-bundle\") on node \"172-239-192-168\" DevicePath \"\"" Jan 23 01:04:21.985612 kubelet[2733]: I0123 01:04:21.985556 2733 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4208ebfd-8f2b-4fdd-9422-1692e411c7bc-kube-api-access-75sld" (OuterVolumeSpecName: "kube-api-access-75sld") pod "4208ebfd-8f2b-4fdd-9422-1692e411c7bc" (UID: "4208ebfd-8f2b-4fdd-9422-1692e411c7bc"). InnerVolumeSpecName "kube-api-access-75sld". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 01:04:21.986006 kubelet[2733]: I0123 01:04:21.985989 2733 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4208ebfd-8f2b-4fdd-9422-1692e411c7bc-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4208ebfd-8f2b-4fdd-9422-1692e411c7bc" (UID: "4208ebfd-8f2b-4fdd-9422-1692e411c7bc"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 01:04:22.082444 kubelet[2733]: I0123 01:04:22.082398 2733 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-75sld\" (UniqueName: \"kubernetes.io/projected/4208ebfd-8f2b-4fdd-9422-1692e411c7bc-kube-api-access-75sld\") on node \"172-239-192-168\" DevicePath \"\"" Jan 23 01:04:22.082444 kubelet[2733]: I0123 01:04:22.082421 2733 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4208ebfd-8f2b-4fdd-9422-1692e411c7bc-whisker-backend-key-pair\") on node \"172-239-192-168\" DevicePath \"\"" Jan 23 01:04:22.090345 systemd[1]: Removed slice kubepods-besteffort-pod4208ebfd_8f2b_4fdd_9422_1692e411c7bc.slice - libcontainer container kubepods-besteffort-pod4208ebfd_8f2b_4fdd_9422_1692e411c7bc.slice. Jan 23 01:04:22.194762 kubelet[2733]: E0123 01:04:22.194602 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:22.207298 kubelet[2733]: I0123 01:04:22.207240 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-k6nwj" podStartSLOduration=1.834324448 podStartE2EDuration="11.207211277s" podCreationTimestamp="2026-01-23 01:04:11 +0000 UTC" firstStartedPulling="2026-01-23 01:04:12.220174183 +0000 UTC m=+20.223518645" lastFinishedPulling="2026-01-23 01:04:21.593061012 +0000 UTC m=+29.596405474" observedRunningTime="2026-01-23 01:04:22.205792919 +0000 UTC m=+30.209137381" watchObservedRunningTime="2026-01-23 01:04:22.207211277 +0000 UTC m=+30.210555749" Jan 23 01:04:22.271172 systemd[1]: Created slice kubepods-besteffort-podc3bfb933_e96b_46ea_8ee6_2c44dc35b631.slice - libcontainer container kubepods-besteffort-podc3bfb933_e96b_46ea_8ee6_2c44dc35b631.slice. Jan 23 01:04:22.283264 kubelet[2733]: I0123 01:04:22.283170 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdxdj\" (UniqueName: \"kubernetes.io/projected/c3bfb933-e96b-46ea-8ee6-2c44dc35b631-kube-api-access-xdxdj\") pod \"whisker-9cdd954b6-lp7jl\" (UID: \"c3bfb933-e96b-46ea-8ee6-2c44dc35b631\") " pod="calico-system/whisker-9cdd954b6-lp7jl" Jan 23 01:04:22.284487 kubelet[2733]: I0123 01:04:22.284469 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3bfb933-e96b-46ea-8ee6-2c44dc35b631-whisker-ca-bundle\") pod \"whisker-9cdd954b6-lp7jl\" (UID: \"c3bfb933-e96b-46ea-8ee6-2c44dc35b631\") " pod="calico-system/whisker-9cdd954b6-lp7jl" Jan 23 01:04:22.284916 kubelet[2733]: I0123 01:04:22.284898 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c3bfb933-e96b-46ea-8ee6-2c44dc35b631-whisker-backend-key-pair\") pod \"whisker-9cdd954b6-lp7jl\" (UID: \"c3bfb933-e96b-46ea-8ee6-2c44dc35b631\") " pod="calico-system/whisker-9cdd954b6-lp7jl" Jan 23 01:04:22.562625 systemd[1]: var-lib-kubelet-pods-4208ebfd\x2d8f2b\x2d4fdd\x2d9422\x2d1692e411c7bc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d75sld.mount: Deactivated successfully. Jan 23 01:04:22.562730 systemd[1]: var-lib-kubelet-pods-4208ebfd\x2d8f2b\x2d4fdd\x2d9422\x2d1692e411c7bc-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 23 01:04:22.579501 containerd[1557]: time="2026-01-23T01:04:22.579467694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9cdd954b6-lp7jl,Uid:c3bfb933-e96b-46ea-8ee6-2c44dc35b631,Namespace:calico-system,Attempt:0,}" Jan 23 01:04:22.710863 systemd-networkd[1444]: calia4a4209bd5e: Link UP Jan 23 01:04:22.712004 systemd-networkd[1444]: calia4a4209bd5e: Gained carrier Jan 23 01:04:22.728151 containerd[1557]: 2026-01-23 01:04:22.615 [INFO][3750] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:04:22.728151 containerd[1557]: 2026-01-23 01:04:22.648 [INFO][3750] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--192--168-k8s-whisker--9cdd954b6--lp7jl-eth0 whisker-9cdd954b6- calico-system c3bfb933-e96b-46ea-8ee6-2c44dc35b631 938 0 2026-01-23 01:04:22 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:9cdd954b6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-239-192-168 whisker-9cdd954b6-lp7jl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia4a4209bd5e [] [] }} ContainerID="9c2e3d2ef0e3e74306c3d061f7d00a33633082136f1b6168e9d935565f9050ec" Namespace="calico-system" Pod="whisker-9cdd954b6-lp7jl" WorkloadEndpoint="172--239--192--168-k8s-whisker--9cdd954b6--lp7jl-" Jan 23 01:04:22.728151 containerd[1557]: 2026-01-23 01:04:22.648 [INFO][3750] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9c2e3d2ef0e3e74306c3d061f7d00a33633082136f1b6168e9d935565f9050ec" Namespace="calico-system" Pod="whisker-9cdd954b6-lp7jl" WorkloadEndpoint="172--239--192--168-k8s-whisker--9cdd954b6--lp7jl-eth0" Jan 23 01:04:22.728151 containerd[1557]: 2026-01-23 01:04:22.671 [INFO][3762] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c2e3d2ef0e3e74306c3d061f7d00a33633082136f1b6168e9d935565f9050ec" HandleID="k8s-pod-network.9c2e3d2ef0e3e74306c3d061f7d00a33633082136f1b6168e9d935565f9050ec" Workload="172--239--192--168-k8s-whisker--9cdd954b6--lp7jl-eth0" Jan 23 01:04:22.728678 containerd[1557]: 2026-01-23 01:04:22.671 [INFO][3762] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9c2e3d2ef0e3e74306c3d061f7d00a33633082136f1b6168e9d935565f9050ec" HandleID="k8s-pod-network.9c2e3d2ef0e3e74306c3d061f7d00a33633082136f1b6168e9d935565f9050ec" Workload="172--239--192--168-k8s-whisker--9cdd954b6--lp7jl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f240), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-192-168", "pod":"whisker-9cdd954b6-lp7jl", "timestamp":"2026-01-23 01:04:22.671552665 +0000 UTC"}, Hostname:"172-239-192-168", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:04:22.728678 containerd[1557]: 2026-01-23 01:04:22.671 [INFO][3762] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:04:22.728678 containerd[1557]: 2026-01-23 01:04:22.671 [INFO][3762] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:04:22.728678 containerd[1557]: 2026-01-23 01:04:22.671 [INFO][3762] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-192-168' Jan 23 01:04:22.728678 containerd[1557]: 2026-01-23 01:04:22.677 [INFO][3762] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9c2e3d2ef0e3e74306c3d061f7d00a33633082136f1b6168e9d935565f9050ec" host="172-239-192-168" Jan 23 01:04:22.728678 containerd[1557]: 2026-01-23 01:04:22.682 [INFO][3762] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-192-168" Jan 23 01:04:22.728678 containerd[1557]: 2026-01-23 01:04:22.686 [INFO][3762] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="172-239-192-168" Jan 23 01:04:22.728678 containerd[1557]: 2026-01-23 01:04:22.687 [INFO][3762] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="172-239-192-168" Jan 23 01:04:22.728678 containerd[1557]: 2026-01-23 01:04:22.689 [INFO][3762] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="172-239-192-168" Jan 23 01:04:22.728678 containerd[1557]: 2026-01-23 01:04:22.689 [INFO][3762] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.9c2e3d2ef0e3e74306c3d061f7d00a33633082136f1b6168e9d935565f9050ec" host="172-239-192-168" Jan 23 01:04:22.728872 containerd[1557]: 2026-01-23 01:04:22.690 [INFO][3762] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9c2e3d2ef0e3e74306c3d061f7d00a33633082136f1b6168e9d935565f9050ec Jan 23 01:04:22.728872 containerd[1557]: 2026-01-23 01:04:22.693 [INFO][3762] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.9c2e3d2ef0e3e74306c3d061f7d00a33633082136f1b6168e9d935565f9050ec" host="172-239-192-168" Jan 23 01:04:22.728872 containerd[1557]: 2026-01-23 01:04:22.697 [INFO][3762] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.8.1/26] block=192.168.8.0/26 handle="k8s-pod-network.9c2e3d2ef0e3e74306c3d061f7d00a33633082136f1b6168e9d935565f9050ec" host="172-239-192-168" Jan 23 01:04:22.728872 containerd[1557]: 2026-01-23 01:04:22.697 [INFO][3762] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.1/26] handle="k8s-pod-network.9c2e3d2ef0e3e74306c3d061f7d00a33633082136f1b6168e9d935565f9050ec" host="172-239-192-168" Jan 23 01:04:22.728872 containerd[1557]: 2026-01-23 01:04:22.697 [INFO][3762] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:04:22.728872 containerd[1557]: 2026-01-23 01:04:22.697 [INFO][3762] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.8.1/26] IPv6=[] ContainerID="9c2e3d2ef0e3e74306c3d061f7d00a33633082136f1b6168e9d935565f9050ec" HandleID="k8s-pod-network.9c2e3d2ef0e3e74306c3d061f7d00a33633082136f1b6168e9d935565f9050ec" Workload="172--239--192--168-k8s-whisker--9cdd954b6--lp7jl-eth0" Jan 23 01:04:22.729147 containerd[1557]: 2026-01-23 01:04:22.701 [INFO][3750] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9c2e3d2ef0e3e74306c3d061f7d00a33633082136f1b6168e9d935565f9050ec" Namespace="calico-system" Pod="whisker-9cdd954b6-lp7jl" WorkloadEndpoint="172--239--192--168-k8s-whisker--9cdd954b6--lp7jl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--192--168-k8s-whisker--9cdd954b6--lp7jl-eth0", GenerateName:"whisker-9cdd954b6-", Namespace:"calico-system", SelfLink:"", UID:"c3bfb933-e96b-46ea-8ee6-2c44dc35b631", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 4, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9cdd954b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-192-168", ContainerID:"", Pod:"whisker-9cdd954b6-lp7jl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.8.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia4a4209bd5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:04:22.729147 containerd[1557]: 2026-01-23 01:04:22.701 [INFO][3750] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.1/32] ContainerID="9c2e3d2ef0e3e74306c3d061f7d00a33633082136f1b6168e9d935565f9050ec" Namespace="calico-system" Pod="whisker-9cdd954b6-lp7jl" WorkloadEndpoint="172--239--192--168-k8s-whisker--9cdd954b6--lp7jl-eth0" Jan 23 01:04:22.729321 containerd[1557]: 2026-01-23 01:04:22.701 [INFO][3750] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4a4209bd5e ContainerID="9c2e3d2ef0e3e74306c3d061f7d00a33633082136f1b6168e9d935565f9050ec" Namespace="calico-system" Pod="whisker-9cdd954b6-lp7jl" WorkloadEndpoint="172--239--192--168-k8s-whisker--9cdd954b6--lp7jl-eth0" Jan 23 01:04:22.729321 containerd[1557]: 2026-01-23 01:04:22.712 [INFO][3750] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c2e3d2ef0e3e74306c3d061f7d00a33633082136f1b6168e9d935565f9050ec" Namespace="calico-system" Pod="whisker-9cdd954b6-lp7jl" WorkloadEndpoint="172--239--192--168-k8s-whisker--9cdd954b6--lp7jl-eth0" Jan 23 01:04:22.729391 containerd[1557]: 2026-01-23 01:04:22.713 [INFO][3750] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9c2e3d2ef0e3e74306c3d061f7d00a33633082136f1b6168e9d935565f9050ec" Namespace="calico-system" Pod="whisker-9cdd954b6-lp7jl"
WorkloadEndpoint="172--239--192--168-k8s-whisker--9cdd954b6--lp7jl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--192--168-k8s-whisker--9cdd954b6--lp7jl-eth0", GenerateName:"whisker-9cdd954b6-", Namespace:"calico-system", SelfLink:"", UID:"c3bfb933-e96b-46ea-8ee6-2c44dc35b631", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 4, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9cdd954b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-192-168", ContainerID:"9c2e3d2ef0e3e74306c3d061f7d00a33633082136f1b6168e9d935565f9050ec", Pod:"whisker-9cdd954b6-lp7jl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.8.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia4a4209bd5e", MAC:"32:63:87:5c:ca:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:04:22.729618 containerd[1557]: 2026-01-23 01:04:22.725 [INFO][3750] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9c2e3d2ef0e3e74306c3d061f7d00a33633082136f1b6168e9d935565f9050ec" Namespace="calico-system" Pod="whisker-9cdd954b6-lp7jl" WorkloadEndpoint="172--239--192--168-k8s-whisker--9cdd954b6--lp7jl-eth0" Jan 23 01:04:22.765567 containerd[1557]: time="2026-01-23T01:04:22.765496041Z" level=info msg="connecting to shim 9c2e3d2ef0e3e74306c3d061f7d00a33633082136f1b6168e9d935565f9050ec" address="unix:///run/containerd/s/7d11f86973d4016bfe85bda6e821ebf256a9c6e7de86be5487e3ae2415fb6ab5" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:04:22.794397 systemd[1]: Started cri-containerd-9c2e3d2ef0e3e74306c3d061f7d00a33633082136f1b6168e9d935565f9050ec.scope - libcontainer container 9c2e3d2ef0e3e74306c3d061f7d00a33633082136f1b6168e9d935565f9050ec. 
Jan 23 01:04:22.844980 containerd[1557]: time="2026-01-23T01:04:22.844900436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9cdd954b6-lp7jl,Uid:c3bfb933-e96b-46ea-8ee6-2c44dc35b631,Namespace:calico-system,Attempt:0,} returns sandbox id \"9c2e3d2ef0e3e74306c3d061f7d00a33633082136f1b6168e9d935565f9050ec\"" Jan 23 01:04:22.846723 containerd[1557]: time="2026-01-23T01:04:22.846673500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:04:22.977876 containerd[1557]: time="2026-01-23T01:04:22.977816996Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:04:22.978527 containerd[1557]: time="2026-01-23T01:04:22.978503519Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:04:22.978527 containerd[1557]: time="2026-01-23T01:04:22.978548120Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:04:22.978676 kubelet[2733]: E0123 01:04:22.978644 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:04:22.978735 kubelet[2733]: E0123 01:04:22.978686 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:04:22.978821 kubelet[2733]: E0123 01:04:22.978793 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3a4f6b1e42494c1faf3d8022f0ad3fee,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xdxdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9cdd954b6-lp7jl_calico-system(c3bfb933-e96b-46ea-8ee6-2c44dc35b631): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:04:22.980707 containerd[1557]: time="2026-01-23T01:04:22.980548778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:04:23.108055 containerd[1557]: time="2026-01-23T01:04:23.107951008Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:04:23.109080 containerd[1557]: time="2026-01-23T01:04:23.109051259Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:04:23.109167 containerd[1557]: time="2026-01-23T01:04:23.109069049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:04:23.109301 kubelet[2733]: E0123 01:04:23.109252 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:04:23.109675 kubelet[2733]: E0123 01:04:23.109314 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:04:23.109736 kubelet[2733]: E0123 01:04:23.109405 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdxdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9cdd954b6-lp7jl_calico-system(c3bfb933-e96b-46ea-8ee6-2c44dc35b631): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:04:23.110758 kubelet[2733]: E0123 01:04:23.110717 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9cdd954b6-lp7jl" podUID="c3bfb933-e96b-46ea-8ee6-2c44dc35b631" Jan 23 01:04:23.196304 kubelet[2733]: I0123 01:04:23.195697 2733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 01:04:23.196304 kubelet[2733]: E0123 01:04:23.195989 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:23.198062 kubelet[2733]: E0123 01:04:23.197993 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9cdd954b6-lp7jl" podUID="c3bfb933-e96b-46ea-8ee6-2c44dc35b631" Jan 23 01:04:24.076615 kubelet[2733]: I0123 01:04:24.076567 2733 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4208ebfd-8f2b-4fdd-9422-1692e411c7bc" path="/var/lib/kubelet/pods/4208ebfd-8f2b-4fdd-9422-1692e411c7bc/volumes" Jan 23 01:04:24.198073 kubelet[2733]: E0123 01:04:24.197640 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:24.201840 kubelet[2733]: E0123 01:04:24.201777 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9cdd954b6-lp7jl" podUID="c3bfb933-e96b-46ea-8ee6-2c44dc35b631" Jan 23 01:04:24.741540 systemd-networkd[1444]: calia4a4209bd5e: Gained IPv6LL Jan 23 01:04:28.838339 kubelet[2733]: I0123 01:04:28.838092 2733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 01:04:28.840995 kubelet[2733]: E0123 01:04:28.840682 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:29.075681 kubelet[2733]: E0123 01:04:29.075613 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:29.077343 containerd[1557]: time="2026-01-23T01:04:29.076341699Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-swb6c,Uid:b6a31a37-0b51-4596-b51a-38cbea1a13d6,Namespace:kube-system,Attempt:0,}" Jan 23 01:04:29.077864 containerd[1557]: time="2026-01-23T01:04:29.076528920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9vbcl,Uid:16cb6344-7ecd-43dd-aa88-18d498591102,Namespace:calico-system,Attempt:0,}" Jan 23 01:04:29.206028 kubelet[2733]: E0123 01:04:29.205478 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:29.236081 systemd-networkd[1444]: cali9d6f9fff5ef: Link UP Jan 23 01:04:29.237235 systemd-networkd[1444]: cali9d6f9fff5ef: Gained carrier Jan 23 01:04:29.262638 containerd[1557]: 2026-01-23 01:04:29.136 [INFO][4098] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:04:29.262638 containerd[1557]: 2026-01-23 01:04:29.155 [INFO][4098] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--192--168-k8s-coredns--674b8bbfcf--swb6c-eth0 coredns-674b8bbfcf- kube-system b6a31a37-0b51-4596-b51a-38cbea1a13d6 869 0 2026-01-23 01:03:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-192-168 coredns-674b8bbfcf-swb6c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9d6f9fff5ef [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e" Namespace="kube-system" Pod="coredns-674b8bbfcf-swb6c" WorkloadEndpoint="172--239--192--168-k8s-coredns--674b8bbfcf--swb6c-" Jan 23 01:04:29.262638 containerd[1557]: 2026-01-23 01:04:29.155 [INFO][4098] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e" Namespace="kube-system" Pod="coredns-674b8bbfcf-swb6c" WorkloadEndpoint="172--239--192--168-k8s-coredns--674b8bbfcf--swb6c-eth0" Jan 23 01:04:29.262638 containerd[1557]: 2026-01-23 01:04:29.185 [INFO][4123] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e" HandleID="k8s-pod-network.c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e" Workload="172--239--192--168-k8s-coredns--674b8bbfcf--swb6c-eth0" Jan 23 01:04:29.262821 containerd[1557]: 2026-01-23 01:04:29.186 [INFO][4123] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e" HandleID="k8s-pod-network.c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e" Workload="172--239--192--168-k8s-coredns--674b8bbfcf--swb6c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f260), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-192-168", "pod":"coredns-674b8bbfcf-swb6c", "timestamp":"2026-01-23 01:04:29.185986188 +0000 UTC"}, Hostname:"172-239-192-168", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:04:29.262821 containerd[1557]: 2026-01-23 01:04:29.186 [INFO][4123] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 23 01:04:29.262821 containerd[1557]: 2026-01-23 01:04:29.186 [INFO][4123] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:04:29.262821 containerd[1557]: 2026-01-23 01:04:29.186 [INFO][4123] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-192-168' Jan 23 01:04:29.262821 containerd[1557]: 2026-01-23 01:04:29.195 [INFO][4123] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e" host="172-239-192-168" Jan 23 01:04:29.262821 containerd[1557]: 2026-01-23 01:04:29.200 [INFO][4123] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-192-168" Jan 23 01:04:29.262821 containerd[1557]: 2026-01-23 01:04:29.205 [INFO][4123] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="172-239-192-168" Jan 23 01:04:29.262821 containerd[1557]: 2026-01-23 01:04:29.208 [INFO][4123] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="172-239-192-168" Jan 23 01:04:29.262821 containerd[1557]: 2026-01-23 01:04:29.213 [INFO][4123] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="172-239-192-168" Jan 23 01:04:29.262821 containerd[1557]: 2026-01-23 01:04:29.213 [INFO][4123] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e" host="172-239-192-168" Jan 23 01:04:29.264862 containerd[1557]: 2026-01-23 01:04:29.214 [INFO][4123] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e Jan 23 01:04:29.264862 containerd[1557]: 2026-01-23 01:04:29.217 [INFO][4123] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e" host="172-239-192-168" Jan 23 01:04:29.264862 containerd[1557]: 2026-01-23 01:04:29.222 [INFO][4123] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.8.2/26] block=192.168.8.0/26 handle="k8s-pod-network.c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e" host="172-239-192-168" Jan 23 01:04:29.264862 containerd[1557]: 2026-01-23 01:04:29.223 [INFO][4123] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.2/26] handle="k8s-pod-network.c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e" host="172-239-192-168" Jan 23 01:04:29.264862 containerd[1557]: 2026-01-23 01:04:29.223 [INFO][4123] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:04:29.264862 containerd[1557]: 2026-01-23 01:04:29.223 [INFO][4123] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.8.2/26] IPv6=[] ContainerID="c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e" HandleID="k8s-pod-network.c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e" Workload="172--239--192--168-k8s-coredns--674b8bbfcf--swb6c-eth0" Jan 23 01:04:29.266346 containerd[1557]: 2026-01-23 01:04:29.227 [INFO][4098] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e" Namespace="kube-system" Pod="coredns-674b8bbfcf-swb6c" WorkloadEndpoint="172--239--192--168-k8s-coredns--674b8bbfcf--swb6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--192--168-k8s-coredns--674b8bbfcf--swb6c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b6a31a37-0b51-4596-b51a-38cbea1a13d6", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 3, 59, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-192-168", ContainerID:"", Pod:"coredns-674b8bbfcf-swb6c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d6f9fff5ef", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:04:29.266346 containerd[1557]: 2026-01-23 01:04:29.227 [INFO][4098] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.2/32] ContainerID="c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e" Namespace="kube-system" Pod="coredns-674b8bbfcf-swb6c" WorkloadEndpoint="172--239--192--168-k8s-coredns--674b8bbfcf--swb6c-eth0" Jan 23 01:04:29.266346 containerd[1557]: 2026-01-23 01:04:29.229 [INFO][4098] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d6f9fff5ef ContainerID="c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e" Namespace="kube-system" Pod="coredns-674b8bbfcf-swb6c" WorkloadEndpoint="172--239--192--168-k8s-coredns--674b8bbfcf--swb6c-eth0" Jan 23 01:04:29.266346 containerd[1557]: 2026-01-23 01:04:29.235 [INFO][4098] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e" Namespace="kube-system" Pod="coredns-674b8bbfcf-swb6c"
WorkloadEndpoint="172--239--192--168-k8s-coredns--674b8bbfcf--swb6c-eth0" Jan 23 01:04:29.266346 containerd[1557]: 2026-01-23 01:04:29.236 [INFO][4098] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e" Namespace="kube-system" Pod="coredns-674b8bbfcf-swb6c" WorkloadEndpoint="172--239--192--168-k8s-coredns--674b8bbfcf--swb6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--192--168-k8s-coredns--674b8bbfcf--swb6c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b6a31a37-0b51-4596-b51a-38cbea1a13d6", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 3, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-192-168", ContainerID:"c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e", Pod:"coredns-674b8bbfcf-swb6c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d6f9fff5ef", MAC:"26:d3:65:02:65:50", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:04:29.266346 containerd[1557]: 2026-01-23 01:04:29.256 [INFO][4098] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e" Namespace="kube-system" Pod="coredns-674b8bbfcf-swb6c" WorkloadEndpoint="172--239--192--168-k8s-coredns--674b8bbfcf--swb6c-eth0" Jan 23 01:04:29.295079 containerd[1557]: time="2026-01-23T01:04:29.294986859Z" level=info msg="connecting to shim c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e" address="unix:///run/containerd/s/7a801c2960c2045408c50fd568632e64e35115e8cabe90e57a7cb6f23123bc0b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:04:29.339611 systemd-networkd[1444]: cali2f03bbdd8ef: Link UP Jan 23 01:04:29.342381 systemd-networkd[1444]: cali2f03bbdd8ef: Gained carrier Jan 23 01:04:29.343437 systemd[1]: Started cri-containerd-c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e.scope - libcontainer container c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e. 
Jan 23 01:04:29.371467 containerd[1557]: 2026-01-23 01:04:29.142 [INFO][4105] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:04:29.371467 containerd[1557]: 2026-01-23 01:04:29.161 [INFO][4105] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--192--168-k8s-csi--node--driver--9vbcl-eth0 csi-node-driver- calico-system 16cb6344-7ecd-43dd-aa88-18d498591102 771 0 2026-01-23 01:04:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-239-192-168 csi-node-driver-9vbcl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2f03bbdd8ef [] [] }} ContainerID="01aa749142877dbb1ab778f8a34c4f8bd58dde51dd4db6006550356e4460907c" Namespace="calico-system" Pod="csi-node-driver-9vbcl" WorkloadEndpoint="172--239--192--168-k8s-csi--node--driver--9vbcl-" Jan 23 01:04:29.371467 containerd[1557]: 2026-01-23 01:04:29.161 [INFO][4105] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01aa749142877dbb1ab778f8a34c4f8bd58dde51dd4db6006550356e4460907c" Namespace="calico-system" Pod="csi-node-driver-9vbcl" WorkloadEndpoint="172--239--192--168-k8s-csi--node--driver--9vbcl-eth0" Jan 23 01:04:29.371467 containerd[1557]: 2026-01-23 01:04:29.199 [INFO][4128] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01aa749142877dbb1ab778f8a34c4f8bd58dde51dd4db6006550356e4460907c" HandleID="k8s-pod-network.01aa749142877dbb1ab778f8a34c4f8bd58dde51dd4db6006550356e4460907c" Workload="172--239--192--168-k8s-csi--node--driver--9vbcl-eth0" Jan 23 01:04:29.371467 containerd[1557]: 2026-01-23 01:04:29.199 [INFO][4128] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="01aa749142877dbb1ab778f8a34c4f8bd58dde51dd4db6006550356e4460907c" HandleID="k8s-pod-network.01aa749142877dbb1ab778f8a34c4f8bd58dde51dd4db6006550356e4460907c" Workload="172--239--192--168-k8s-csi--node--driver--9vbcl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-192-168", "pod":"csi-node-driver-9vbcl", "timestamp":"2026-01-23 01:04:29.199299119 +0000 UTC"}, Hostname:"172-239-192-168", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:04:29.371467 containerd[1557]: 2026-01-23 01:04:29.199 [INFO][4128] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:04:29.371467 containerd[1557]: 2026-01-23 01:04:29.223 [INFO][4128] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:04:29.371467 containerd[1557]: 2026-01-23 01:04:29.224 [INFO][4128] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-192-168' Jan 23 01:04:29.371467 containerd[1557]: 2026-01-23 01:04:29.295 [INFO][4128] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01aa749142877dbb1ab778f8a34c4f8bd58dde51dd4db6006550356e4460907c" host="172-239-192-168" Jan 23 01:04:29.371467 containerd[1557]: 2026-01-23 01:04:29.304 [INFO][4128] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-192-168" Jan 23 01:04:29.371467 containerd[1557]: 2026-01-23 01:04:29.310 [INFO][4128] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="172-239-192-168" Jan 23 01:04:29.371467 containerd[1557]: 2026-01-23 01:04:29.316 [INFO][4128] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="172-239-192-168" Jan 23 01:04:29.371467 containerd[1557]: 2026-01-23 01:04:29.321 [INFO][4128] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="172-239-192-168" Jan 23 01:04:29.371467 containerd[1557]: 2026-01-23 01:04:29.321 [INFO][4128] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.01aa749142877dbb1ab778f8a34c4f8bd58dde51dd4db6006550356e4460907c" host="172-239-192-168" Jan 23 01:04:29.371467 containerd[1557]: 2026-01-23 01:04:29.323 [INFO][4128] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.01aa749142877dbb1ab778f8a34c4f8bd58dde51dd4db6006550356e4460907c Jan 23 01:04:29.371467 containerd[1557]: 2026-01-23 01:04:29.327 [INFO][4128] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.01aa749142877dbb1ab778f8a34c4f8bd58dde51dd4db6006550356e4460907c" host="172-239-192-168" Jan 23 01:04:29.371467 containerd[1557]: 2026-01-23 01:04:29.333 [INFO][4128] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.8.3/26] block=192.168.8.0/26 handle="k8s-pod-network.01aa749142877dbb1ab778f8a34c4f8bd58dde51dd4db6006550356e4460907c" host="172-239-192-168" Jan 23 01:04:29.371467 containerd[1557]: 2026-01-23 01:04:29.333 [INFO][4128] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.3/26] handle="k8s-pod-network.01aa749142877dbb1ab778f8a34c4f8bd58dde51dd4db6006550356e4460907c" host="172-239-192-168" Jan 23 01:04:29.371467 containerd[1557]: 2026-01-23 01:04:29.333 [INFO][4128] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:04:29.371467 containerd[1557]: 2026-01-23 01:04:29.333 [INFO][4128] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.8.3/26] IPv6=[] ContainerID="01aa749142877dbb1ab778f8a34c4f8bd58dde51dd4db6006550356e4460907c" HandleID="k8s-pod-network.01aa749142877dbb1ab778f8a34c4f8bd58dde51dd4db6006550356e4460907c" Workload="172--239--192--168-k8s-csi--node--driver--9vbcl-eth0" Jan 23 01:04:29.372255 containerd[1557]: 2026-01-23 01:04:29.336 [INFO][4105] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01aa749142877dbb1ab778f8a34c4f8bd58dde51dd4db6006550356e4460907c" Namespace="calico-system" Pod="csi-node-driver-9vbcl" WorkloadEndpoint="172--239--192--168-k8s-csi--node--driver--9vbcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--192--168-k8s-csi--node--driver--9vbcl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"16cb6344-7ecd-43dd-aa88-18d498591102", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 4, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-192-168", ContainerID:"", Pod:"csi-node-driver-9vbcl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.8.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2f03bbdd8ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:04:29.372255 containerd[1557]: 2026-01-23 01:04:29.336 [INFO][4105] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.3/32] ContainerID="01aa749142877dbb1ab778f8a34c4f8bd58dde51dd4db6006550356e4460907c" Namespace="calico-system" Pod="csi-node-driver-9vbcl" WorkloadEndpoint="172--239--192--168-k8s-csi--node--driver--9vbcl-eth0" Jan 23 01:04:29.372255 containerd[1557]: 2026-01-23 01:04:29.336 [INFO][4105] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f03bbdd8ef ContainerID="01aa749142877dbb1ab778f8a34c4f8bd58dde51dd4db6006550356e4460907c" Namespace="calico-system" Pod="csi-node-driver-9vbcl" WorkloadEndpoint="172--239--192--168-k8s-csi--node--driver--9vbcl-eth0" Jan 23 01:04:29.372255 containerd[1557]: 2026-01-23 01:04:29.341 [INFO][4105] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01aa749142877dbb1ab778f8a34c4f8bd58dde51dd4db6006550356e4460907c" Namespace="calico-system" Pod="csi-node-driver-9vbcl" WorkloadEndpoint="172--239--192--168-k8s-csi--node--driver--9vbcl-eth0" Jan 23 01:04:29.372255 containerd[1557]: 2026-01-23 01:04:29.342 [INFO][4105] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01aa749142877dbb1ab778f8a34c4f8bd58dde51dd4db6006550356e4460907c" Namespace="calico-system"
Pod="csi-node-driver-9vbcl" WorkloadEndpoint="172--239--192--168-k8s-csi--node--driver--9vbcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--192--168-k8s-csi--node--driver--9vbcl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"16cb6344-7ecd-43dd-aa88-18d498591102", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-192-168", ContainerID:"01aa749142877dbb1ab778f8a34c4f8bd58dde51dd4db6006550356e4460907c", Pod:"csi-node-driver-9vbcl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.8.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2f03bbdd8ef", MAC:"56:5c:6e:47:aa:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:04:29.372255 containerd[1557]: 2026-01-23 01:04:29.365 [INFO][4105] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01aa749142877dbb1ab778f8a34c4f8bd58dde51dd4db6006550356e4460907c" Namespace="calico-system" Pod="csi-node-driver-9vbcl" WorkloadEndpoint="172--239--192--168-k8s-csi--node--driver--9vbcl-eth0" Jan 23 01:04:29.406746 containerd[1557]: time="2026-01-23T01:04:29.406327010Z" level=info msg="connecting to shim 01aa749142877dbb1ab778f8a34c4f8bd58dde51dd4db6006550356e4460907c" address="unix:///run/containerd/s/8760fdfd9a575abd54c3d80e886b319c0100fe3667e3ae710cd2fe4394dda928" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:04:29.443412 systemd[1]: Started cri-containerd-01aa749142877dbb1ab778f8a34c4f8bd58dde51dd4db6006550356e4460907c.scope - libcontainer container 01aa749142877dbb1ab778f8a34c4f8bd58dde51dd4db6006550356e4460907c. 
Jan 23 01:04:29.449969 containerd[1557]: time="2026-01-23T01:04:29.449933308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-swb6c,Uid:b6a31a37-0b51-4596-b51a-38cbea1a13d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e\"" Jan 23 01:04:29.451187 kubelet[2733]: E0123 01:04:29.450800 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:29.455000 containerd[1557]: time="2026-01-23T01:04:29.454962129Z" level=info msg="CreateContainer within sandbox \"c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:04:29.462855 containerd[1557]: time="2026-01-23T01:04:29.462778823Z" level=info msg="Container 4a9f7bac0590c15393ff86a8a201c93e5a3000a7b7e28f2543e2777dfe0a6fdf: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:04:29.468534 containerd[1557]: time="2026-01-23T01:04:29.468491273Z" level=info msg="CreateContainer within sandbox \"c890bae6a1424adc6e48220965b206169f5f576359a6c5f97e83dd8e5144f49e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a9f7bac0590c15393ff86a8a201c93e5a3000a7b7e28f2543e2777dfe0a6fdf\"" Jan 23 01:04:29.469080 containerd[1557]: time="2026-01-23T01:04:29.469058790Z" level=info msg="StartContainer for \"4a9f7bac0590c15393ff86a8a201c93e5a3000a7b7e28f2543e2777dfe0a6fdf\"" Jan 23 01:04:29.469907 containerd[1557]: time="2026-01-23T01:04:29.469884490Z" level=info msg="connecting to shim 4a9f7bac0590c15393ff86a8a201c93e5a3000a7b7e28f2543e2777dfe0a6fdf" address="unix:///run/containerd/s/7a801c2960c2045408c50fd568632e64e35115e8cabe90e57a7cb6f23123bc0b" protocol=ttrpc version=3 Jan 23 01:04:29.498447 systemd[1]: Started cri-containerd-4a9f7bac0590c15393ff86a8a201c93e5a3000a7b7e28f2543e2777dfe0a6fdf.scope - libcontainer container 4a9f7bac0590c15393ff86a8a201c93e5a3000a7b7e28f2543e2777dfe0a6fdf. 
Jan 23 01:04:29.514997 containerd[1557]: time="2026-01-23T01:04:29.514830455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9vbcl,Uid:16cb6344-7ecd-43dd-aa88-18d498591102,Namespace:calico-system,Attempt:0,} returns sandbox id \"01aa749142877dbb1ab778f8a34c4f8bd58dde51dd4db6006550356e4460907c\"" Jan 23 01:04:29.517715 containerd[1557]: time="2026-01-23T01:04:29.517602289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:04:29.549916 containerd[1557]: time="2026-01-23T01:04:29.549886840Z" level=info msg="StartContainer for \"4a9f7bac0590c15393ff86a8a201c93e5a3000a7b7e28f2543e2777dfe0a6fdf\" returns successfully" Jan 23 01:04:29.721726 containerd[1557]: time="2026-01-23T01:04:29.721547141Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:04:29.723368 containerd[1557]: time="2026-01-23T01:04:29.723306773Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:04:29.724123 containerd[1557]: time="2026-01-23T01:04:29.723309103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:04:29.724814 kubelet[2733]: E0123 01:04:29.724325 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:04:29.724814 kubelet[2733]: E0123 01:04:29.724389 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:04:29.724814 kubelet[2733]: E0123 01:04:29.724551 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kph75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9vbcl_calico-system(16cb6344-7ecd-43dd-aa88-18d498591102): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:04:29.729507 containerd[1557]: time="2026-01-23T01:04:29.729463517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:04:29.902305 containerd[1557]: time="2026-01-23T01:04:29.901470533Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:04:29.903997 containerd[1557]: time="2026-01-23T01:04:29.903829732Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:04:29.903997 containerd[1557]: time="2026-01-23T01:04:29.903852392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:04:29.904247 kubelet[2733]: E0123 01:04:29.904174 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:04:29.904247 kubelet[2733]: E0123 01:04:29.904242 2733 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:04:29.905390 kubelet[2733]: E0123 01:04:29.904379 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kph75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9vbcl_calico-system(16cb6344-7ecd-43dd-aa88-18d498591102): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:04:29.905824 kubelet[2733]: E0123 01:04:29.905751 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-9vbcl" podUID="16cb6344-7ecd-43dd-aa88-18d498591102" Jan 23 01:04:30.154976 systemd-networkd[1444]: vxlan.calico: Link UP Jan 23 01:04:30.154991 systemd-networkd[1444]: vxlan.calico: Gained carrier Jan 23 01:04:30.214019 kubelet[2733]: E0123 01:04:30.213905 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vbcl" podUID="16cb6344-7ecd-43dd-aa88-18d498591102" Jan 23 01:04:30.216837 kubelet[2733]: E0123 01:04:30.216646 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:30.242787 kubelet[2733]: I0123 01:04:30.242217 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-swb6c" podStartSLOduration=31.242083563 podStartE2EDuration="31.242083563s" podCreationTimestamp="2026-01-23 01:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:04:30.240388553 +0000 UTC m=+38.243733025" watchObservedRunningTime="2026-01-23 01:04:30.242083563 +0000 UTC m=+38.245428025" Jan 23 01:04:30.948522 systemd-networkd[1444]: cali9d6f9fff5ef: Gained IPv6LL Jan 23 01:04:31.075629 containerd[1557]: time="2026-01-23T01:04:31.075577246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68bb9cdd99-9nmwm,Uid:864d071c-38d9-4c87-9ba2-e5d2783e5cdc,Namespace:calico-system,Attempt:0,}" Jan 23 01:04:31.076264 containerd[1557]: time="2026-01-23T01:04:31.075997711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-z8t58,Uid:e68af0d7-5f9f-4004-bfb5-105e45ad7f04,Namespace:calico-system,Attempt:0,}" Jan 23 01:04:31.205962 systemd-networkd[1444]: cali2f03bbdd8ef: Gained IPv6LL Jan 23 01:04:31.209694 systemd-networkd[1444]: cali4a579a609a0: Link UP Jan 23 01:04:31.209982 systemd-networkd[1444]: cali4a579a609a0: Gained carrier Jan 23 01:04:31.221955 kubelet[2733]: E0123 01:04:31.221934 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:31.224125 kubelet[2733]: E0123 01:04:31.224088 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vbcl" podUID="16cb6344-7ecd-43dd-aa88-18d498591102" Jan 23 01:04:31.235466 containerd[1557]: 2026-01-23 01:04:31.128 [INFO][4395] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--192--168-k8s-calico--kube--controllers--68bb9cdd99--9nmwm-eth0 calico-kube-controllers-68bb9cdd99- calico-system 864d071c-38d9-4c87-9ba2-e5d2783e5cdc 875 0 2026-01-23 01:04:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68bb9cdd99 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-239-192-168 calico-kube-controllers-68bb9cdd99-9nmwm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4a579a609a0 [] [] }} ContainerID="5fae5e58a46c6597e5090a9e136c03c05a3191a59db972f0934c366e52d3450b" Namespace="calico-system" Pod="calico-kube-controllers-68bb9cdd99-9nmwm" WorkloadEndpoint="172--239--192--168-k8s-calico--kube--controllers--68bb9cdd99--9nmwm-" Jan 23 01:04:31.235466 containerd[1557]: 2026-01-23 01:04:31.128 [INFO][4395] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5fae5e58a46c6597e5090a9e136c03c05a3191a59db972f0934c366e52d3450b" Namespace="calico-system" Pod="calico-kube-controllers-68bb9cdd99-9nmwm" WorkloadEndpoint="172--239--192--168-k8s-calico--kube--controllers--68bb9cdd99--9nmwm-eth0" Jan 23 01:04:31.235466 containerd[1557]: 2026-01-23 01:04:31.167 [INFO][4420] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5fae5e58a46c6597e5090a9e136c03c05a3191a59db972f0934c366e52d3450b" HandleID="k8s-pod-network.5fae5e58a46c6597e5090a9e136c03c05a3191a59db972f0934c366e52d3450b" Workload="172--239--192--168-k8s-calico--kube--controllers--68bb9cdd99--9nmwm-eth0" Jan 23 01:04:31.235466 containerd[1557]: 2026-01-23 01:04:31.167 [INFO][4420] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5fae5e58a46c6597e5090a9e136c03c05a3191a59db972f0934c366e52d3450b" HandleID="k8s-pod-network.5fae5e58a46c6597e5090a9e136c03c05a3191a59db972f0934c366e52d3450b" Workload="172--239--192--168-k8s-calico--kube--controllers--68bb9cdd99--9nmwm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032d3f0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-192-168", "pod":"calico-kube-controllers-68bb9cdd99-9nmwm", "timestamp":"2026-01-23 01:04:31.167232037 +0000 UTC"}, Hostname:"172-239-192-168", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:04:31.235466 containerd[1557]: 2026-01-23 01:04:31.167 [INFO][4420] ipam/ipam_plugin.go 377: About 
to acquire host-wide IPAM lock. Jan 23 01:04:31.235466 containerd[1557]: 2026-01-23 01:04:31.167 [INFO][4420] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:04:31.235466 containerd[1557]: 2026-01-23 01:04:31.167 [INFO][4420] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-192-168' Jan 23 01:04:31.235466 containerd[1557]: 2026-01-23 01:04:31.175 [INFO][4420] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5fae5e58a46c6597e5090a9e136c03c05a3191a59db972f0934c366e52d3450b" host="172-239-192-168" Jan 23 01:04:31.235466 containerd[1557]: 2026-01-23 01:04:31.179 [INFO][4420] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-192-168" Jan 23 01:04:31.235466 containerd[1557]: 2026-01-23 01:04:31.183 [INFO][4420] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="172-239-192-168" Jan 23 01:04:31.235466 containerd[1557]: 2026-01-23 01:04:31.185 [INFO][4420] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="172-239-192-168" Jan 23 01:04:31.235466 containerd[1557]: 2026-01-23 01:04:31.186 [INFO][4420] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="172-239-192-168" Jan 23 01:04:31.235466 containerd[1557]: 2026-01-23 01:04:31.187 [INFO][4420] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.5fae5e58a46c6597e5090a9e136c03c05a3191a59db972f0934c366e52d3450b" host="172-239-192-168" Jan 23 01:04:31.235466 containerd[1557]: 2026-01-23 01:04:31.188 [INFO][4420] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5fae5e58a46c6597e5090a9e136c03c05a3191a59db972f0934c366e52d3450b Jan 23 01:04:31.235466 containerd[1557]: 2026-01-23 01:04:31.192 [INFO][4420] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.5fae5e58a46c6597e5090a9e136c03c05a3191a59db972f0934c366e52d3450b" host="172-239-192-168" Jan 23 01:04:31.235466 containerd[1557]: 2026-01-23 01:04:31.197 [INFO][4420] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.8.4/26] block=192.168.8.0/26 handle="k8s-pod-network.5fae5e58a46c6597e5090a9e136c03c05a3191a59db972f0934c366e52d3450b" host="172-239-192-168" Jan 23 01:04:31.235466 containerd[1557]: 2026-01-23 01:04:31.198 [INFO][4420] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.4/26] handle="k8s-pod-network.5fae5e58a46c6597e5090a9e136c03c05a3191a59db972f0934c366e52d3450b" host="172-239-192-168" Jan 23 01:04:31.235466 containerd[1557]: 2026-01-23 01:04:31.198 [INFO][4420] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:04:31.235466 containerd[1557]: 2026-01-23 01:04:31.198 [INFO][4420] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.8.4/26] IPv6=[] ContainerID="5fae5e58a46c6597e5090a9e136c03c05a3191a59db972f0934c366e52d3450b" HandleID="k8s-pod-network.5fae5e58a46c6597e5090a9e136c03c05a3191a59db972f0934c366e52d3450b" Workload="172--239--192--168-k8s-calico--kube--controllers--68bb9cdd99--9nmwm-eth0" Jan 23 01:04:31.235973 containerd[1557]: 2026-01-23 01:04:31.201 [INFO][4395] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5fae5e58a46c6597e5090a9e136c03c05a3191a59db972f0934c366e52d3450b" Namespace="calico-system" Pod="calico-kube-controllers-68bb9cdd99-9nmwm" WorkloadEndpoint="172--239--192--168-k8s-calico--kube--controllers--68bb9cdd99--9nmwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--192--168-k8s-calico--kube--controllers--68bb9cdd99--9nmwm-eth0", GenerateName:"calico-kube-controllers-68bb9cdd99-", Namespace:"calico-system", SelfLink:"", UID:"864d071c-38d9-4c87-9ba2-e5d2783e5cdc", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 4, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68bb9cdd99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-192-168", ContainerID:"", Pod:"calico-kube-controllers-68bb9cdd99-9nmwm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.8.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a579a609a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:04:31.235973 containerd[1557]: 2026-01-23 01:04:31.201 [INFO][4395] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.4/32] ContainerID="5fae5e58a46c6597e5090a9e136c03c05a3191a59db972f0934c366e52d3450b" Namespace="calico-system" Pod="calico-kube-controllers-68bb9cdd99-9nmwm" WorkloadEndpoint="172--239--192--168-k8s-calico--kube--controllers--68bb9cdd99--9nmwm-eth0" Jan 23 01:04:31.235973 containerd[1557]: 2026-01-23 01:04:31.201 [INFO][4395] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a579a609a0 ContainerID="5fae5e58a46c6597e5090a9e136c03c05a3191a59db972f0934c366e52d3450b" Namespace="calico-system" Pod="calico-kube-controllers-68bb9cdd99-9nmwm" WorkloadEndpoint="172--239--192--168-k8s-calico--kube--controllers--68bb9cdd99--9nmwm-eth0" Jan 23 01:04:31.235973 containerd[1557]: 2026-01-23 01:04:31.208 [INFO][4395] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5fae5e58a46c6597e5090a9e136c03c05a3191a59db972f0934c366e52d3450b" Namespace="calico-system" Pod="calico-kube-controllers-68bb9cdd99-9nmwm" WorkloadEndpoint="172--239--192--168-k8s-calico--kube--controllers--68bb9cdd99--9nmwm-eth0" Jan 23 01:04:31.235973 containerd[1557]: 2026-01-23 01:04:31.210 
[INFO][4395] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5fae5e58a46c6597e5090a9e136c03c05a3191a59db972f0934c366e52d3450b" Namespace="calico-system" Pod="calico-kube-controllers-68bb9cdd99-9nmwm" WorkloadEndpoint="172--239--192--168-k8s-calico--kube--controllers--68bb9cdd99--9nmwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--192--168-k8s-calico--kube--controllers--68bb9cdd99--9nmwm-eth0", GenerateName:"calico-kube-controllers-68bb9cdd99-", Namespace:"calico-system", SelfLink:"", UID:"864d071c-38d9-4c87-9ba2-e5d2783e5cdc", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 4, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68bb9cdd99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-192-168", ContainerID:"5fae5e58a46c6597e5090a9e136c03c05a3191a59db972f0934c366e52d3450b", Pod:"calico-kube-controllers-68bb9cdd99-9nmwm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.8.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a579a609a0", MAC:"d2:fd:6d:2f:9e:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:04:31.235973 containerd[1557]: 2026-01-23 01:04:31.224 [INFO][4395] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5fae5e58a46c6597e5090a9e136c03c05a3191a59db972f0934c366e52d3450b" Namespace="calico-system" Pod="calico-kube-controllers-68bb9cdd99-9nmwm" WorkloadEndpoint="172--239--192--168-k8s-calico--kube--controllers--68bb9cdd99--9nmwm-eth0" Jan 23 01:04:31.274905 containerd[1557]: time="2026-01-23T01:04:31.274861097Z" level=info msg="connecting to shim 5fae5e58a46c6597e5090a9e136c03c05a3191a59db972f0934c366e52d3450b" address="unix:///run/containerd/s/ea70f6bcc011a7fb90712896ec93c9c4275a87955c4c80f5af267823ce3d2b53" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:04:31.309419 systemd[1]: Started cri-containerd-5fae5e58a46c6597e5090a9e136c03c05a3191a59db972f0934c366e52d3450b.scope - libcontainer container 5fae5e58a46c6597e5090a9e136c03c05a3191a59db972f0934c366e52d3450b. 
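
The 404s above mean ghcr.io has no manifest for the flatcar/calico images at tag v3.30.4, so every pull fails at reference resolution before any bytes transfer. The failure should be reproducible outside kubelet with the containerd 1.x Go client; a hedged sketch, reusing the socket path and the k8s.io namespace visible in the log:

// pullprobe.go — sketch: reproduce the NotFound pull failure directly against containerd.
package main

import (
	"context"
	"fmt"
	"os"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		fmt.Fprintln(os.Stderr, "connect:", err)
		os.Exit(1)
	}
	defer client.Close()

	// CRI-managed images live in the k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ref := "ghcr.io/flatcar/calico/csi:v3.30.4"
	// WithPullUnpack mirrors the CRI "pull and unpack" step seen in the error text.
	if _, err := client.Pull(ctx, ref, containerd.WithPullUnpack); err != nil {
		fmt.Fprintf(os.Stderr, "pull %s: %v\n", ref, err)
		os.Exit(1)
	}
	fmt.Println("pulled", ref)
}

Run on the node, this should print the same "failed to resolve reference ... not found" error, which would confirm the problem is the missing registry tag rather than node-side credentials or networking.
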
Jan 23 01:04:31.327166 systemd-networkd[1444]: cali6800ce66f04: Link UP Jan 23 01:04:31.328314 systemd-networkd[1444]: cali6800ce66f04: Gained carrier Jan 23 01:04:31.350930 containerd[1557]: 2026-01-23 01:04:31.136 [INFO][4401] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--192--168-k8s-goldmane--666569f655--z8t58-eth0 goldmane-666569f655- calico-system e68af0d7-5f9f-4004-bfb5-105e45ad7f04 877 0 2026-01-23 01:04:10 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-239-192-168 goldmane-666569f655-z8t58 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6800ce66f04 [] [] }} ContainerID="4f93e595cf34ea7d2894d45ef5df8bd5ba19570e99be682b1806817a0806390f" Namespace="calico-system" Pod="goldmane-666569f655-z8t58" WorkloadEndpoint="172--239--192--168-k8s-goldmane--666569f655--z8t58-" Jan 23 01:04:31.350930 containerd[1557]: 2026-01-23 01:04:31.136 [INFO][4401] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4f93e595cf34ea7d2894d45ef5df8bd5ba19570e99be682b1806817a0806390f" Namespace="calico-system" Pod="goldmane-666569f655-z8t58" WorkloadEndpoint="172--239--192--168-k8s-goldmane--666569f655--z8t58-eth0" Jan 23 01:04:31.350930 containerd[1557]: 2026-01-23 01:04:31.190 [INFO][4425] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f93e595cf34ea7d2894d45ef5df8bd5ba19570e99be682b1806817a0806390f" HandleID="k8s-pod-network.4f93e595cf34ea7d2894d45ef5df8bd5ba19570e99be682b1806817a0806390f" Workload="172--239--192--168-k8s-goldmane--666569f655--z8t58-eth0" Jan 23 01:04:31.350930 containerd[1557]: 2026-01-23 01:04:31.190 [INFO][4425] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4f93e595cf34ea7d2894d45ef5df8bd5ba19570e99be682b1806817a0806390f" HandleID="k8s-pod-network.4f93e595cf34ea7d2894d45ef5df8bd5ba19570e99be682b1806817a0806390f" Workload="172--239--192--168-k8s-goldmane--666569f655--z8t58-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000367a10), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-192-168", "pod":"goldmane-666569f655-z8t58", "timestamp":"2026-01-23 01:04:31.190540723 +0000 UTC"}, Hostname:"172-239-192-168", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:04:31.350930 containerd[1557]: 2026-01-23 01:04:31.190 [INFO][4425] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:04:31.350930 containerd[1557]: 2026-01-23 01:04:31.198 [INFO][4425] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:04:31.350930 containerd[1557]: 2026-01-23 01:04:31.198 [INFO][4425] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-192-168' Jan 23 01:04:31.350930 containerd[1557]: 2026-01-23 01:04:31.282 [INFO][4425] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4f93e595cf34ea7d2894d45ef5df8bd5ba19570e99be682b1806817a0806390f" host="172-239-192-168" Jan 23 01:04:31.350930 containerd[1557]: 2026-01-23 01:04:31.292 [INFO][4425] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-192-168" Jan 23 01:04:31.350930 containerd[1557]: 2026-01-23 01:04:31.297 [INFO][4425] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="172-239-192-168" Jan 23 01:04:31.350930 containerd[1557]: 2026-01-23 01:04:31.301 [INFO][4425] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="172-239-192-168" Jan 23 01:04:31.350930 containerd[1557]: 2026-01-23 01:04:31.306 [INFO][4425] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="172-239-192-168" Jan 23 01:04:31.350930 containerd[1557]: 2026-01-23 01:04:31.306 [INFO][4425] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.4f93e595cf34ea7d2894d45ef5df8bd5ba19570e99be682b1806817a0806390f" host="172-239-192-168" Jan 23 01:04:31.350930 containerd[1557]: 2026-01-23 01:04:31.307 [INFO][4425] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4f93e595cf34ea7d2894d45ef5df8bd5ba19570e99be682b1806817a0806390f Jan 23 01:04:31.350930 containerd[1557]: 2026-01-23 01:04:31.312 [INFO][4425] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.4f93e595cf34ea7d2894d45ef5df8bd5ba19570e99be682b1806817a0806390f" host="172-239-192-168" Jan 23 01:04:31.350930 containerd[1557]: 2026-01-23 01:04:31.319 [INFO][4425] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.8.5/26] block=192.168.8.0/26 handle="k8s-pod-network.4f93e595cf34ea7d2894d45ef5df8bd5ba19570e99be682b1806817a0806390f" host="172-239-192-168" Jan 23 01:04:31.350930 containerd[1557]: 2026-01-23 01:04:31.319 [INFO][4425] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.5/26] handle="k8s-pod-network.4f93e595cf34ea7d2894d45ef5df8bd5ba19570e99be682b1806817a0806390f" host="172-239-192-168" Jan 23 01:04:31.350930 containerd[1557]: 2026-01-23 01:04:31.319 [INFO][4425] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:04:31.350930 containerd[1557]: 2026-01-23 01:04:31.319 [INFO][4425] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.8.5/26] IPv6=[] ContainerID="4f93e595cf34ea7d2894d45ef5df8bd5ba19570e99be682b1806817a0806390f" HandleID="k8s-pod-network.4f93e595cf34ea7d2894d45ef5df8bd5ba19570e99be682b1806817a0806390f" Workload="172--239--192--168-k8s-goldmane--666569f655--z8t58-eth0" Jan 23 01:04:31.351839 containerd[1557]: 2026-01-23 01:04:31.323 [INFO][4401] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4f93e595cf34ea7d2894d45ef5df8bd5ba19570e99be682b1806817a0806390f" Namespace="calico-system" Pod="goldmane-666569f655-z8t58" WorkloadEndpoint="172--239--192--168-k8s-goldmane--666569f655--z8t58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--192--168-k8s-goldmane--666569f655--z8t58-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e68af0d7-5f9f-4004-bfb5-105e45ad7f04", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 4, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-192-168", ContainerID:"", Pod:"goldmane-666569f655-z8t58", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.8.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6800ce66f04", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:04:31.351839 containerd[1557]: 2026-01-23 01:04:31.323 [INFO][4401] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.5/32] ContainerID="4f93e595cf34ea7d2894d45ef5df8bd5ba19570e99be682b1806817a0806390f" Namespace="calico-system" Pod="goldmane-666569f655-z8t58" WorkloadEndpoint="172--239--192--168-k8s-goldmane--666569f655--z8t58-eth0" Jan 23 01:04:31.351839 containerd[1557]: 2026-01-23 01:04:31.323 [INFO][4401] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6800ce66f04 ContainerID="4f93e595cf34ea7d2894d45ef5df8bd5ba19570e99be682b1806817a0806390f" Namespace="calico-system" Pod="goldmane-666569f655-z8t58" WorkloadEndpoint="172--239--192--168-k8s-goldmane--666569f655--z8t58-eth0" Jan 23 01:04:31.351839 containerd[1557]: 2026-01-23 01:04:31.329 [INFO][4401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f93e595cf34ea7d2894d45ef5df8bd5ba19570e99be682b1806817a0806390f" Namespace="calico-system" Pod="goldmane-666569f655-z8t58" WorkloadEndpoint="172--239--192--168-k8s-goldmane--666569f655--z8t58-eth0" Jan 23 01:04:31.351839 containerd[1557]: 2026-01-23 01:04:31.329 [INFO][4401] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4f93e595cf34ea7d2894d45ef5df8bd5ba19570e99be682b1806817a0806390f" Namespace="calico-system" Pod="goldmane-666569f655-z8t58" 
WorkloadEndpoint="172--239--192--168-k8s-goldmane--666569f655--z8t58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--192--168-k8s-goldmane--666569f655--z8t58-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e68af0d7-5f9f-4004-bfb5-105e45ad7f04", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 4, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-192-168", ContainerID:"4f93e595cf34ea7d2894d45ef5df8bd5ba19570e99be682b1806817a0806390f", Pod:"goldmane-666569f655-z8t58", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.8.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6800ce66f04", MAC:"72:61:dc:64:8a:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:04:31.351839 containerd[1557]: 2026-01-23 01:04:31.339 [INFO][4401] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4f93e595cf34ea7d2894d45ef5df8bd5ba19570e99be682b1806817a0806390f" Namespace="calico-system" Pod="goldmane-666569f655-z8t58" WorkloadEndpoint="172--239--192--168-k8s-goldmane--666569f655--z8t58-eth0" Jan 23 01:04:31.375051 containerd[1557]: time="2026-01-23T01:04:31.375019208Z" level=info msg="connecting to shim 4f93e595cf34ea7d2894d45ef5df8bd5ba19570e99be682b1806817a0806390f" address="unix:///run/containerd/s/9b93955c99b5d697eb3a60f0711b4def0eb90ba866685283a1bf8d16fc57fa66" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:04:31.406612 systemd[1]: Started cri-containerd-4f93e595cf34ea7d2894d45ef5df8bd5ba19570e99be682b1806817a0806390f.scope - libcontainer container 4f93e595cf34ea7d2894d45ef5df8bd5ba19570e99be682b1806817a0806390f. 
Jan 23 01:04:31.431439 containerd[1557]: time="2026-01-23T01:04:31.431400826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68bb9cdd99-9nmwm,Uid:864d071c-38d9-4c87-9ba2-e5d2783e5cdc,Namespace:calico-system,Attempt:0,} returns sandbox id \"5fae5e58a46c6597e5090a9e136c03c05a3191a59db972f0934c366e52d3450b\"" Jan 23 01:04:31.432816 containerd[1557]: time="2026-01-23T01:04:31.432758620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:04:31.484851 containerd[1557]: time="2026-01-23T01:04:31.484352147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-z8t58,Uid:e68af0d7-5f9f-4004-bfb5-105e45ad7f04,Namespace:calico-system,Attempt:0,} returns sandbox id \"4f93e595cf34ea7d2894d45ef5df8bd5ba19570e99be682b1806817a0806390f\"" Jan 23 01:04:31.559774 containerd[1557]: time="2026-01-23T01:04:31.559733626Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:04:31.560730 containerd[1557]: time="2026-01-23T01:04:31.560675725Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:04:31.560844 containerd[1557]: time="2026-01-23T01:04:31.560744966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:04:31.560981 kubelet[2733]: E0123 01:04:31.560914 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:04:31.561037 kubelet[2733]: E0123 01:04:31.560976 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:04:31.561231 kubelet[2733]: E0123 01:04:31.561170 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ctw6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68bb9cdd99-9nmwm_calico-system(864d071c-38d9-4c87-9ba2-e5d2783e5cdc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:04:31.561574 containerd[1557]: time="2026-01-23T01:04:31.561550465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:04:31.562919 kubelet[2733]: E0123 01:04:31.562878 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bb9cdd99-9nmwm" 
podUID="864d071c-38d9-4c87-9ba2-e5d2783e5cdc" Jan 23 01:04:31.689475 containerd[1557]: time="2026-01-23T01:04:31.689409220Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:04:31.690202 containerd[1557]: time="2026-01-23T01:04:31.690175708Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:04:31.690268 containerd[1557]: time="2026-01-23T01:04:31.690236148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:04:31.690448 kubelet[2733]: E0123 01:04:31.690408 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:04:31.690511 kubelet[2733]: E0123 01:04:31.690456 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:04:31.690798 kubelet[2733]: E0123 01:04:31.690621 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v7pz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-z8t58_calico-system(e68af0d7-5f9f-4004-bfb5-105e45ad7f04): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:04:31.691801 kubelet[2733]: E0123 01:04:31.691768 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8t58" podUID="e68af0d7-5f9f-4004-bfb5-105e45ad7f04" Jan 23 01:04:31.779436 systemd-networkd[1444]: vxlan.calico: Gained IPv6LL Jan 23 01:04:32.076752 kubelet[2733]: E0123 01:04:32.075714 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:32.077743 containerd[1557]: time="2026-01-23T01:04:32.077696202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-549f748967-tk79b,Uid:39650212-2ffc-42da-8b29-3a9e9efdade1,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:04:32.078214 containerd[1557]: time="2026-01-23T01:04:32.078171686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dhrfc,Uid:9961df16-6144-417e-bebd-56649ddba7b2,Namespace:kube-system,Attempt:0,}" Jan 23 01:04:32.228329 kubelet[2733]: E0123 01:04:32.227390 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8t58" podUID="e68af0d7-5f9f-4004-bfb5-105e45ad7f04" Jan 23 01:04:32.232296 kubelet[2733]: E0123 01:04:32.231702 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:32.234473 kubelet[2733]: E0123 01:04:32.234451 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bb9cdd99-9nmwm" podUID="864d071c-38d9-4c87-9ba2-e5d2783e5cdc" Jan 23 01:04:32.294005 systemd-networkd[1444]: calif06bdebf47e: Link UP Jan 23 01:04:32.295970 systemd-networkd[1444]: calif06bdebf47e: Gained carrier Jan 23 01:04:32.318591 containerd[1557]: 2026-01-23 01:04:32.152 [INFO][4549] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--192--168-k8s-coredns--674b8bbfcf--dhrfc-eth0 coredns-674b8bbfcf- kube-system 9961df16-6144-417e-bebd-56649ddba7b2 873 0 2026-01-23 01:03:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-192-168 coredns-674b8bbfcf-dhrfc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif06bdebf47e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8" Namespace="kube-system" Pod="coredns-674b8bbfcf-dhrfc" WorkloadEndpoint="172--239--192--168-k8s-coredns--674b8bbfcf--dhrfc-" Jan 23 01:04:32.318591 containerd[1557]: 2026-01-23 01:04:32.152 [INFO][4549] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8" Namespace="kube-system" Pod="coredns-674b8bbfcf-dhrfc" WorkloadEndpoint="172--239--192--168-k8s-coredns--674b8bbfcf--dhrfc-eth0" Jan 23 01:04:32.318591 containerd[1557]: 2026-01-23 01:04:32.210 [INFO][4577] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8" HandleID="k8s-pod-network.ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8" Workload="172--239--192--168-k8s-coredns--674b8bbfcf--dhrfc-eth0" Jan 23 01:04:32.318591 containerd[1557]: 2026-01-23 01:04:32.210 [INFO][4577] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8" HandleID="k8s-pod-network.ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8" Workload="172--239--192--168-k8s-coredns--674b8bbfcf--dhrfc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-192-168", "pod":"coredns-674b8bbfcf-dhrfc", "timestamp":"2026-01-23 01:04:32.21078961 +0000 UTC"}, Hostname:"172-239-192-168", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:04:32.318591 containerd[1557]: 2026-01-23 01:04:32.210 [INFO][4577] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:04:32.318591 containerd[1557]: 2026-01-23 01:04:32.211 [INFO][4577] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:04:32.318591 containerd[1557]: 2026-01-23 01:04:32.211 [INFO][4577] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-192-168' Jan 23 01:04:32.318591 containerd[1557]: 2026-01-23 01:04:32.217 [INFO][4577] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8" host="172-239-192-168" Jan 23 01:04:32.318591 containerd[1557]: 2026-01-23 01:04:32.221 [INFO][4577] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-192-168" Jan 23 01:04:32.318591 containerd[1557]: 2026-01-23 01:04:32.225 [INFO][4577] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="172-239-192-168" Jan 23 01:04:32.318591 containerd[1557]: 2026-01-23 01:04:32.228 [INFO][4577] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="172-239-192-168" Jan 23 01:04:32.318591 containerd[1557]: 2026-01-23 01:04:32.237 [INFO][4577] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="172-239-192-168" Jan 23 01:04:32.318591 containerd[1557]: 2026-01-23 01:04:32.237 [INFO][4577] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8" host="172-239-192-168" Jan 23 01:04:32.318591 containerd[1557]: 2026-01-23 01:04:32.242 [INFO][4577] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8 Jan 23 01:04:32.318591 containerd[1557]: 2026-01-23 01:04:32.252 [INFO][4577] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8" host="172-239-192-168" Jan 23 01:04:32.318591 containerd[1557]: 2026-01-23 01:04:32.263 [INFO][4577] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.8.6/26] block=192.168.8.0/26 handle="k8s-pod-network.ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8" host="172-239-192-168" Jan 23 01:04:32.318591 containerd[1557]: 2026-01-23 01:04:32.263 [INFO][4577] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.6/26] handle="k8s-pod-network.ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8" host="172-239-192-168" Jan 23 01:04:32.318591 containerd[1557]: 2026-01-23 01:04:32.264 [INFO][4577] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:04:32.318591 containerd[1557]: 2026-01-23 01:04:32.264 [INFO][4577] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.8.6/26] IPv6=[] ContainerID="ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8" HandleID="k8s-pod-network.ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8" Workload="172--239--192--168-k8s-coredns--674b8bbfcf--dhrfc-eth0" Jan 23 01:04:32.319392 containerd[1557]: 2026-01-23 01:04:32.276 [INFO][4549] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8" Namespace="kube-system" Pod="coredns-674b8bbfcf-dhrfc" WorkloadEndpoint="172--239--192--168-k8s-coredns--674b8bbfcf--dhrfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--192--168-k8s-coredns--674b8bbfcf--dhrfc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9961df16-6144-417e-bebd-56649ddba7b2", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 3, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-192-168", ContainerID:"", Pod:"coredns-674b8bbfcf-dhrfc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif06bdebf47e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:04:32.319392 containerd[1557]: 2026-01-23 01:04:32.276 [INFO][4549] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.6/32] ContainerID="ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8" Namespace="kube-system" Pod="coredns-674b8bbfcf-dhrfc" WorkloadEndpoint="172--239--192--168-k8s-coredns--674b8bbfcf--dhrfc-eth0" Jan 23 01:04:32.319392 containerd[1557]: 2026-01-23 01:04:32.276 [INFO][4549] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif06bdebf47e ContainerID="ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8" Namespace="kube-system" Pod="coredns-674b8bbfcf-dhrfc" WorkloadEndpoint="172--239--192--168-k8s-coredns--674b8bbfcf--dhrfc-eth0" Jan 23 01:04:32.319392 containerd[1557]: 2026-01-23 01:04:32.296 [INFO][4549] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8" Namespace="kube-system" Pod="coredns-674b8bbfcf-dhrfc" 
WorkloadEndpoint="172--239--192--168-k8s-coredns--674b8bbfcf--dhrfc-eth0" Jan 23 01:04:32.319392 containerd[1557]: 2026-01-23 01:04:32.301 [INFO][4549] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8" Namespace="kube-system" Pod="coredns-674b8bbfcf-dhrfc" WorkloadEndpoint="172--239--192--168-k8s-coredns--674b8bbfcf--dhrfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--192--168-k8s-coredns--674b8bbfcf--dhrfc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9961df16-6144-417e-bebd-56649ddba7b2", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 3, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-192-168", ContainerID:"ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8", Pod:"coredns-674b8bbfcf-dhrfc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif06bdebf47e", MAC:"52:7a:f1:f3:27:c4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:04:32.319392 containerd[1557]: 2026-01-23 01:04:32.312 [INFO][4549] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8" Namespace="kube-system" Pod="coredns-674b8bbfcf-dhrfc" WorkloadEndpoint="172--239--192--168-k8s-coredns--674b8bbfcf--dhrfc-eth0" Jan 23 01:04:32.364770 systemd-networkd[1444]: calid19d9462bb4: Link UP Jan 23 01:04:32.368950 systemd-networkd[1444]: calid19d9462bb4: Gained carrier Jan 23 01:04:32.371690 containerd[1557]: time="2026-01-23T01:04:32.371655523Z" level=info msg="connecting to shim ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8" address="unix:///run/containerd/s/7b723e6002cf207a0cebd25c345acaca892897403ee7eafa26549207a82a0443" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:04:32.390481 containerd[1557]: 2026-01-23 01:04:32.147 [INFO][4546] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--192--168-k8s-calico--apiserver--549f748967--tk79b-eth0 calico-apiserver-549f748967- calico-apiserver 39650212-2ffc-42da-8b29-3a9e9efdade1 876 0 2026-01-23 01:04:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver 
k8s-app:calico-apiserver pod-template-hash:549f748967 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-192-168 calico-apiserver-549f748967-tk79b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid19d9462bb4 [] [] }} ContainerID="ef5c54691b1d38b509f12f67c9109732fca84a3022545161a826d824df558cdc" Namespace="calico-apiserver" Pod="calico-apiserver-549f748967-tk79b" WorkloadEndpoint="172--239--192--168-k8s-calico--apiserver--549f748967--tk79b-" Jan 23 01:04:32.390481 containerd[1557]: 2026-01-23 01:04:32.148 [INFO][4546] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ef5c54691b1d38b509f12f67c9109732fca84a3022545161a826d824df558cdc" Namespace="calico-apiserver" Pod="calico-apiserver-549f748967-tk79b" WorkloadEndpoint="172--239--192--168-k8s-calico--apiserver--549f748967--tk79b-eth0" Jan 23 01:04:32.390481 containerd[1557]: 2026-01-23 01:04:32.211 [INFO][4572] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef5c54691b1d38b509f12f67c9109732fca84a3022545161a826d824df558cdc" HandleID="k8s-pod-network.ef5c54691b1d38b509f12f67c9109732fca84a3022545161a826d824df558cdc" Workload="172--239--192--168-k8s-calico--apiserver--549f748967--tk79b-eth0" Jan 23 01:04:32.390481 containerd[1557]: 2026-01-23 01:04:32.211 [INFO][4572] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ef5c54691b1d38b509f12f67c9109732fca84a3022545161a826d824df558cdc" HandleID="k8s-pod-network.ef5c54691b1d38b509f12f67c9109732fca84a3022545161a826d824df558cdc" Workload="172--239--192--168-k8s-calico--apiserver--549f748967--tk79b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad3a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-239-192-168", "pod":"calico-apiserver-549f748967-tk79b", "timestamp":"2026-01-23 01:04:32.211510657 +0000 UTC"}, Hostname:"172-239-192-168", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:04:32.390481 containerd[1557]: 2026-01-23 01:04:32.211 [INFO][4572] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:04:32.390481 containerd[1557]: 2026-01-23 01:04:32.263 [INFO][4572] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
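Note how the second allocation serializes behind the first: [4572] logs "About to acquire host-wide IPAM lock." at 01:04:32.211 but only "Acquired" at .263, right as [4577] releases it. An illustrative Go sketch of that serialization (Calico's real lock is host-wide, shared across CNI plugin processes, not an in-process mutex):

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    var ipamLock sync.Mutex // stand-in for Calico's host-wide IPAM lock

    func autoAssign(id string, wg *sync.WaitGroup) {
        defer wg.Done()
        fmt.Printf("[%s] About to acquire host-wide IPAM lock.\n", id)
        ipamLock.Lock()
        fmt.Printf("[%s] Acquired host-wide IPAM lock.\n", id)
        time.Sleep(50 * time.Millisecond) // read block, claim IP, write block
        ipamLock.Unlock()
        fmt.Printf("[%s] Released host-wide IPAM lock.\n", id)
    }

    func main() {
        var wg sync.WaitGroup
        wg.Add(2)
        go autoAssign("4577", &wg) // coredns allocation
        go autoAssign("4572", &wg) // calico-apiserver allocation
        wg.Wait()
    }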
Jan 23 01:04:32.390481 containerd[1557]: 2026-01-23 01:04:32.263 [INFO][4572] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-192-168' Jan 23 01:04:32.390481 containerd[1557]: 2026-01-23 01:04:32.322 [INFO][4572] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ef5c54691b1d38b509f12f67c9109732fca84a3022545161a826d824df558cdc" host="172-239-192-168" Jan 23 01:04:32.390481 containerd[1557]: 2026-01-23 01:04:32.329 [INFO][4572] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-192-168" Jan 23 01:04:32.390481 containerd[1557]: 2026-01-23 01:04:32.335 [INFO][4572] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="172-239-192-168" Jan 23 01:04:32.390481 containerd[1557]: 2026-01-23 01:04:32.337 [INFO][4572] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="172-239-192-168" Jan 23 01:04:32.390481 containerd[1557]: 2026-01-23 01:04:32.340 [INFO][4572] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="172-239-192-168" Jan 23 01:04:32.390481 containerd[1557]: 2026-01-23 01:04:32.340 [INFO][4572] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.ef5c54691b1d38b509f12f67c9109732fca84a3022545161a826d824df558cdc" host="172-239-192-168" Jan 23 01:04:32.390481 containerd[1557]: 2026-01-23 01:04:32.343 [INFO][4572] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ef5c54691b1d38b509f12f67c9109732fca84a3022545161a826d824df558cdc Jan 23 01:04:32.390481 containerd[1557]: 2026-01-23 01:04:32.347 [INFO][4572] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.ef5c54691b1d38b509f12f67c9109732fca84a3022545161a826d824df558cdc" host="172-239-192-168" Jan 23 01:04:32.390481 containerd[1557]: 2026-01-23 01:04:32.356 [INFO][4572] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.8.7/26] block=192.168.8.0/26 handle="k8s-pod-network.ef5c54691b1d38b509f12f67c9109732fca84a3022545161a826d824df558cdc" host="172-239-192-168" Jan 23 01:04:32.390481 containerd[1557]: 2026-01-23 01:04:32.356 [INFO][4572] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.7/26] handle="k8s-pod-network.ef5c54691b1d38b509f12f67c9109732fca84a3022545161a826d824df558cdc" host="172-239-192-168" Jan 23 01:04:32.390481 containerd[1557]: 2026-01-23 01:04:32.356 [INFO][4572] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:04:32.390481 containerd[1557]: 2026-01-23 01:04:32.356 [INFO][4572] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.8.7/26] IPv6=[] ContainerID="ef5c54691b1d38b509f12f67c9109732fca84a3022545161a826d824df558cdc" HandleID="k8s-pod-network.ef5c54691b1d38b509f12f67c9109732fca84a3022545161a826d824df558cdc" Workload="172--239--192--168-k8s-calico--apiserver--549f748967--tk79b-eth0" Jan 23 01:04:32.392141 containerd[1557]: 2026-01-23 01:04:32.361 [INFO][4546] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ef5c54691b1d38b509f12f67c9109732fca84a3022545161a826d824df558cdc" Namespace="calico-apiserver" Pod="calico-apiserver-549f748967-tk79b" WorkloadEndpoint="172--239--192--168-k8s-calico--apiserver--549f748967--tk79b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--192--168-k8s-calico--apiserver--549f748967--tk79b-eth0", GenerateName:"calico-apiserver-549f748967-", Namespace:"calico-apiserver", SelfLink:"", UID:"39650212-2ffc-42da-8b29-3a9e9efdade1", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 4, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"549f748967", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-192-168", ContainerID:"", Pod:"calico-apiserver-549f748967-tk79b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid19d9462bb4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:04:32.392141 containerd[1557]: 2026-01-23 01:04:32.361 [INFO][4546] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.7/32] ContainerID="ef5c54691b1d38b509f12f67c9109732fca84a3022545161a826d824df558cdc" Namespace="calico-apiserver" Pod="calico-apiserver-549f748967-tk79b" WorkloadEndpoint="172--239--192--168-k8s-calico--apiserver--549f748967--tk79b-eth0" Jan 23 01:04:32.392141 containerd[1557]: 2026-01-23 01:04:32.361 [INFO][4546] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid19d9462bb4 ContainerID="ef5c54691b1d38b509f12f67c9109732fca84a3022545161a826d824df558cdc" Namespace="calico-apiserver" Pod="calico-apiserver-549f748967-tk79b" WorkloadEndpoint="172--239--192--168-k8s-calico--apiserver--549f748967--tk79b-eth0" Jan 23 01:04:32.392141 containerd[1557]: 2026-01-23 01:04:32.369 [INFO][4546] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef5c54691b1d38b509f12f67c9109732fca84a3022545161a826d824df558cdc" Namespace="calico-apiserver" Pod="calico-apiserver-549f748967-tk79b" WorkloadEndpoint="172--239--192--168-k8s-calico--apiserver--549f748967--tk79b-eth0" Jan 23 01:04:32.392141 containerd[1557]: 2026-01-23 01:04:32.371 [INFO][4546] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="ef5c54691b1d38b509f12f67c9109732fca84a3022545161a826d824df558cdc" Namespace="calico-apiserver" Pod="calico-apiserver-549f748967-tk79b" WorkloadEndpoint="172--239--192--168-k8s-calico--apiserver--549f748967--tk79b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--192--168-k8s-calico--apiserver--549f748967--tk79b-eth0", GenerateName:"calico-apiserver-549f748967-", Namespace:"calico-apiserver", SelfLink:"", UID:"39650212-2ffc-42da-8b29-3a9e9efdade1", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 4, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"549f748967", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-192-168", ContainerID:"ef5c54691b1d38b509f12f67c9109732fca84a3022545161a826d824df558cdc", Pod:"calico-apiserver-549f748967-tk79b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid19d9462bb4", MAC:"8e:58:8b:b9:48:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:04:32.392141 containerd[1557]: 2026-01-23 01:04:32.384 [INFO][4546] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ef5c54691b1d38b509f12f67c9109732fca84a3022545161a826d824df558cdc" Namespace="calico-apiserver" Pod="calico-apiserver-549f748967-tk79b" WorkloadEndpoint="172--239--192--168-k8s-calico--apiserver--549f748967--tk79b-eth0" Jan 23 01:04:32.428500 containerd[1557]: time="2026-01-23T01:04:32.428460905Z" level=info msg="connecting to shim ef5c54691b1d38b509f12f67c9109732fca84a3022545161a826d824df558cdc" address="unix:///run/containerd/s/913d338d43e1f84c5d2c8c469ba1faf4755d70dcc9af4c19ec2393eedd7e74ad" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:04:32.431497 systemd[1]: Started cri-containerd-ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8.scope - libcontainer container ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8. Jan 23 01:04:32.480393 systemd[1]: Started cri-containerd-ef5c54691b1d38b509f12f67c9109732fca84a3022545161a826d824df558cdc.scope - libcontainer container ef5c54691b1d38b509f12f67c9109732fca84a3022545161a826d824df558cdc. 
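Both "connecting to shim" entries carry a per-sandbox unix socket under /run/containerd/s/ and announce protocol=ttrpc version=3; systemd then starts a matching cri-containerd-<id>.scope unit for each sandbox. A hedged Go sketch that only checks the shim socket is accepting connections (the address is copied from the coredns entry above; the real client would speak ttrpc over this connection, which is omitted here):

    package main

    import (
        "fmt"
        "net"
        "strings"
        "time"
    )

    func main() {
        // Shim address copied from the log; strip the unix:// scheme first.
        addr := "unix:///run/containerd/s/7b723e6002cf207a0cebd25c345acaca892897403ee7eafa26549207a82a0443"
        path := strings.TrimPrefix(addr, "unix://")

        // A plain dial only proves a listener exists; containerd layers the
        // ttrpc protocol (version 3 per the log) on top of this socket.
        conn, err := net.DialTimeout("unix", path, 2*time.Second)
        if err != nil {
            fmt.Println("shim not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("shim socket is accepting connections")
    }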
Jan 23 01:04:32.509425 containerd[1557]: time="2026-01-23T01:04:32.509232015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dhrfc,Uid:9961df16-6144-417e-bebd-56649ddba7b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8\"" Jan 23 01:04:32.510264 kubelet[2733]: E0123 01:04:32.510235 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:32.514578 containerd[1557]: time="2026-01-23T01:04:32.514441126Z" level=info msg="CreateContainer within sandbox \"ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:04:32.524740 containerd[1557]: time="2026-01-23T01:04:32.524517196Z" level=info msg="Container 7e30e458052e87379bd890a3a2c3d0ed505526aac80b089ca1a64ff972f53444: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:04:32.528664 containerd[1557]: time="2026-01-23T01:04:32.528644427Z" level=info msg="CreateContainer within sandbox \"ff85abc1005b08d5ec11d9840c07eb059cd62267875c8c19d50c5f99438826f8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7e30e458052e87379bd890a3a2c3d0ed505526aac80b089ca1a64ff972f53444\"" Jan 23 01:04:32.529860 containerd[1557]: time="2026-01-23T01:04:32.529841158Z" level=info msg="StartContainer for \"7e30e458052e87379bd890a3a2c3d0ed505526aac80b089ca1a64ff972f53444\"" Jan 23 01:04:32.531608 containerd[1557]: time="2026-01-23T01:04:32.531558636Z" level=info msg="connecting to shim 7e30e458052e87379bd890a3a2c3d0ed505526aac80b089ca1a64ff972f53444" address="unix:///run/containerd/s/7b723e6002cf207a0cebd25c345acaca892897403ee7eafa26549207a82a0443" protocol=ttrpc version=3 Jan 23 01:04:32.548711 systemd-networkd[1444]: cali4a579a609a0: Gained IPv6LL Jan 23 01:04:32.557810 containerd[1557]: time="2026-01-23T01:04:32.557785315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-549f748967-tk79b,Uid:39650212-2ffc-42da-8b29-3a9e9efdade1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ef5c54691b1d38b509f12f67c9109732fca84a3022545161a826d824df558cdc\"" Jan 23 01:04:32.561385 containerd[1557]: time="2026-01-23T01:04:32.561363631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:04:32.573402 systemd[1]: Started cri-containerd-7e30e458052e87379bd890a3a2c3d0ed505526aac80b089ca1a64ff972f53444.scope - libcontainer container 7e30e458052e87379bd890a3a2c3d0ed505526aac80b089ca1a64ff972f53444. 
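The recurring "Nameserver limits exceeded" warning means the node's resolv.conf lists more nameservers than the kubelet will apply: the kubelet caps the list at three (the classic glibc resolver limit) and drops the rest, which is why the applied line above contains exactly 172.232.0.15, 172.232.0.18, and 172.232.0.17. A sketch of that truncation (the three-server cap is the kubelet's; the fourth address below is hypothetical, added only to trigger the drop):

    package main

    import "fmt"

    // maxNameservers mirrors the kubelet's cap of three nameservers.
    const maxNameservers = 3

    func applyNameservers(ns []string) []string {
        if len(ns) > maxNameservers {
            ns = ns[:maxNameservers] // extra servers are omitted, with a warning
        }
        return ns
    }

    func main() {
        // First three copied from the warning above; 10.0.0.53 is hypothetical.
        configured := []string{"172.232.0.15", "172.232.0.18", "172.232.0.17", "10.0.0.53"}
        fmt.Println(applyNameservers(configured))
        // [172.232.0.15 172.232.0.18 172.232.0.17]
    }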
Jan 23 01:04:32.608993 containerd[1557]: time="2026-01-23T01:04:32.608929692Z" level=info msg="StartContainer for \"7e30e458052e87379bd890a3a2c3d0ed505526aac80b089ca1a64ff972f53444\" returns successfully" Jan 23 01:04:32.711045 containerd[1557]: time="2026-01-23T01:04:32.710918112Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:04:32.712118 containerd[1557]: time="2026-01-23T01:04:32.712061743Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:04:32.712181 containerd[1557]: time="2026-01-23T01:04:32.712153164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:04:32.712331 kubelet[2733]: E0123 01:04:32.712304 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:04:32.712393 kubelet[2733]: E0123 01:04:32.712342 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:04:32.712510 kubelet[2733]: E0123 01:04:32.712476 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7r2xj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-549f748967-tk79b_calico-apiserver(39650212-2ffc-42da-8b29-3a9e9efdade1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:04:32.713652 kubelet[2733]: E0123 01:04:32.713631 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-tk79b" podUID="39650212-2ffc-42da-8b29-3a9e9efdade1" Jan 23 01:04:32.996069 systemd-networkd[1444]: cali6800ce66f04: Gained IPv6LL Jan 23 01:04:33.075320 containerd[1557]: time="2026-01-23T01:04:33.075178201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-549f748967-5pdhf,Uid:e8fdc5e3-83f6-414b-bda4-0c1884c70d80,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:04:33.200597 systemd-networkd[1444]: cali8c0b56fb01a: Link UP Jan 23 01:04:33.202196 systemd-networkd[1444]: cali8c0b56fb01a: Gained carrier Jan 23 01:04:33.216789 containerd[1557]: 2026-01-23 01:04:33.117 [INFO][4733] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--192--168-k8s-calico--apiserver--549f748967--5pdhf-eth0 calico-apiserver-549f748967- calico-apiserver e8fdc5e3-83f6-414b-bda4-0c1884c70d80 872 0 2026-01-23 01:04:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:549f748967 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-192-168 calico-apiserver-549f748967-5pdhf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8c0b56fb01a [] [] }} ContainerID="ef26df6308178ea015f629bade6867392af0a5c81b3fb6b790ae8ee58fa312ff" Namespace="calico-apiserver" Pod="calico-apiserver-549f748967-5pdhf" WorkloadEndpoint="172--239--192--168-k8s-calico--apiserver--549f748967--5pdhf-" Jan 23 01:04:33.216789 containerd[1557]: 2026-01-23 01:04:33.117 [INFO][4733] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ef26df6308178ea015f629bade6867392af0a5c81b3fb6b790ae8ee58fa312ff" Namespace="calico-apiserver" 
Pod="calico-apiserver-549f748967-5pdhf" WorkloadEndpoint="172--239--192--168-k8s-calico--apiserver--549f748967--5pdhf-eth0" Jan 23 01:04:33.216789 containerd[1557]: 2026-01-23 01:04:33.143 [INFO][4745] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef26df6308178ea015f629bade6867392af0a5c81b3fb6b790ae8ee58fa312ff" HandleID="k8s-pod-network.ef26df6308178ea015f629bade6867392af0a5c81b3fb6b790ae8ee58fa312ff" Workload="172--239--192--168-k8s-calico--apiserver--549f748967--5pdhf-eth0" Jan 23 01:04:33.216789 containerd[1557]: 2026-01-23 01:04:33.144 [INFO][4745] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ef26df6308178ea015f629bade6867392af0a5c81b3fb6b790ae8ee58fa312ff" HandleID="k8s-pod-network.ef26df6308178ea015f629bade6867392af0a5c81b3fb6b790ae8ee58fa312ff" Workload="172--239--192--168-k8s-calico--apiserver--549f748967--5pdhf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f040), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-239-192-168", "pod":"calico-apiserver-549f748967-5pdhf", "timestamp":"2026-01-23 01:04:33.143868916 +0000 UTC"}, Hostname:"172-239-192-168", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:04:33.216789 containerd[1557]: 2026-01-23 01:04:33.144 [INFO][4745] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:04:33.216789 containerd[1557]: 2026-01-23 01:04:33.144 [INFO][4745] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:04:33.216789 containerd[1557]: 2026-01-23 01:04:33.144 [INFO][4745] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-192-168' Jan 23 01:04:33.216789 containerd[1557]: 2026-01-23 01:04:33.150 [INFO][4745] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ef26df6308178ea015f629bade6867392af0a5c81b3fb6b790ae8ee58fa312ff" host="172-239-192-168" Jan 23 01:04:33.216789 containerd[1557]: 2026-01-23 01:04:33.154 [INFO][4745] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-192-168" Jan 23 01:04:33.216789 containerd[1557]: 2026-01-23 01:04:33.158 [INFO][4745] ipam/ipam.go 511: Trying affinity for 192.168.8.0/26 host="172-239-192-168" Jan 23 01:04:33.216789 containerd[1557]: 2026-01-23 01:04:33.160 [INFO][4745] ipam/ipam.go 158: Attempting to load block cidr=192.168.8.0/26 host="172-239-192-168" Jan 23 01:04:33.216789 containerd[1557]: 2026-01-23 01:04:33.177 [INFO][4745] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.8.0/26 host="172-239-192-168" Jan 23 01:04:33.216789 containerd[1557]: 2026-01-23 01:04:33.177 [INFO][4745] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.8.0/26 handle="k8s-pod-network.ef26df6308178ea015f629bade6867392af0a5c81b3fb6b790ae8ee58fa312ff" host="172-239-192-168" Jan 23 01:04:33.216789 containerd[1557]: 2026-01-23 01:04:33.180 [INFO][4745] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ef26df6308178ea015f629bade6867392af0a5c81b3fb6b790ae8ee58fa312ff Jan 23 01:04:33.216789 containerd[1557]: 2026-01-23 01:04:33.184 [INFO][4745] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.8.0/26 handle="k8s-pod-network.ef26df6308178ea015f629bade6867392af0a5c81b3fb6b790ae8ee58fa312ff" host="172-239-192-168" Jan 23 01:04:33.216789 containerd[1557]: 2026-01-23 01:04:33.188 [INFO][4745] ipam/ipam.go 
1262: Successfully claimed IPs: [192.168.8.8/26] block=192.168.8.0/26 handle="k8s-pod-network.ef26df6308178ea015f629bade6867392af0a5c81b3fb6b790ae8ee58fa312ff" host="172-239-192-168" Jan 23 01:04:33.216789 containerd[1557]: 2026-01-23 01:04:33.188 [INFO][4745] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.8.8/26] handle="k8s-pod-network.ef26df6308178ea015f629bade6867392af0a5c81b3fb6b790ae8ee58fa312ff" host="172-239-192-168" Jan 23 01:04:33.216789 containerd[1557]: 2026-01-23 01:04:33.188 [INFO][4745] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:04:33.216789 containerd[1557]: 2026-01-23 01:04:33.188 [INFO][4745] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.8.8/26] IPv6=[] ContainerID="ef26df6308178ea015f629bade6867392af0a5c81b3fb6b790ae8ee58fa312ff" HandleID="k8s-pod-network.ef26df6308178ea015f629bade6867392af0a5c81b3fb6b790ae8ee58fa312ff" Workload="172--239--192--168-k8s-calico--apiserver--549f748967--5pdhf-eth0" Jan 23 01:04:33.217769 containerd[1557]: 2026-01-23 01:04:33.192 [INFO][4733] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ef26df6308178ea015f629bade6867392af0a5c81b3fb6b790ae8ee58fa312ff" Namespace="calico-apiserver" Pod="calico-apiserver-549f748967-5pdhf" WorkloadEndpoint="172--239--192--168-k8s-calico--apiserver--549f748967--5pdhf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--192--168-k8s-calico--apiserver--549f748967--5pdhf-eth0", GenerateName:"calico-apiserver-549f748967-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8fdc5e3-83f6-414b-bda4-0c1884c70d80", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 4, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"549f748967", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-192-168", ContainerID:"", Pod:"calico-apiserver-549f748967-5pdhf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8c0b56fb01a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:04:33.217769 containerd[1557]: 2026-01-23 01:04:33.192 [INFO][4733] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.8.8/32] ContainerID="ef26df6308178ea015f629bade6867392af0a5c81b3fb6b790ae8ee58fa312ff" Namespace="calico-apiserver" Pod="calico-apiserver-549f748967-5pdhf" WorkloadEndpoint="172--239--192--168-k8s-calico--apiserver--549f748967--5pdhf-eth0" Jan 23 01:04:33.217769 containerd[1557]: 2026-01-23 01:04:33.192 [INFO][4733] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c0b56fb01a ContainerID="ef26df6308178ea015f629bade6867392af0a5c81b3fb6b790ae8ee58fa312ff" Namespace="calico-apiserver" Pod="calico-apiserver-549f748967-5pdhf" 
WorkloadEndpoint="172--239--192--168-k8s-calico--apiserver--549f748967--5pdhf-eth0" Jan 23 01:04:33.217769 containerd[1557]: 2026-01-23 01:04:33.203 [INFO][4733] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef26df6308178ea015f629bade6867392af0a5c81b3fb6b790ae8ee58fa312ff" Namespace="calico-apiserver" Pod="calico-apiserver-549f748967-5pdhf" WorkloadEndpoint="172--239--192--168-k8s-calico--apiserver--549f748967--5pdhf-eth0" Jan 23 01:04:33.217769 containerd[1557]: 2026-01-23 01:04:33.203 [INFO][4733] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ef26df6308178ea015f629bade6867392af0a5c81b3fb6b790ae8ee58fa312ff" Namespace="calico-apiserver" Pod="calico-apiserver-549f748967-5pdhf" WorkloadEndpoint="172--239--192--168-k8s-calico--apiserver--549f748967--5pdhf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--192--168-k8s-calico--apiserver--549f748967--5pdhf-eth0", GenerateName:"calico-apiserver-549f748967-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8fdc5e3-83f6-414b-bda4-0c1884c70d80", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 4, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"549f748967", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-192-168", ContainerID:"ef26df6308178ea015f629bade6867392af0a5c81b3fb6b790ae8ee58fa312ff", Pod:"calico-apiserver-549f748967-5pdhf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8c0b56fb01a", MAC:"e6:0b:03:b7:7c:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:04:33.217769 containerd[1557]: 2026-01-23 01:04:33.213 [INFO][4733] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ef26df6308178ea015f629bade6867392af0a5c81b3fb6b790ae8ee58fa312ff" Namespace="calico-apiserver" Pod="calico-apiserver-549f748967-5pdhf" WorkloadEndpoint="172--239--192--168-k8s-calico--apiserver--549f748967--5pdhf-eth0" Jan 23 01:04:33.242548 kubelet[2733]: E0123 01:04:33.242218 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:33.254407 kubelet[2733]: E0123 01:04:33.254107 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bb9cdd99-9nmwm" podUID="864d071c-38d9-4c87-9ba2-e5d2783e5cdc" Jan 23 01:04:33.255492 kubelet[2733]: E0123 01:04:33.255451 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8t58" podUID="e68af0d7-5f9f-4004-bfb5-105e45ad7f04" Jan 23 01:04:33.256431 kubelet[2733]: E0123 01:04:33.255629 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-tk79b" podUID="39650212-2ffc-42da-8b29-3a9e9efdade1" Jan 23 01:04:33.257557 containerd[1557]: time="2026-01-23T01:04:33.257527238Z" level=info msg="connecting to shim ef26df6308178ea015f629bade6867392af0a5c81b3fb6b790ae8ee58fa312ff" address="unix:///run/containerd/s/fa611568d832c134728a0a4c3ac6bdbe305a35ca809ff4de6a84f8f1f1091af2" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:04:33.306564 kubelet[2733]: I0123 01:04:33.306354 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dhrfc" podStartSLOduration=34.306327529 podStartE2EDuration="34.306327529s" podCreationTimestamp="2026-01-23 01:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:04:33.261094601 +0000 UTC m=+41.264439063" watchObservedRunningTime="2026-01-23 01:04:33.306327529 +0000 UTC m=+41.309672011" Jan 23 01:04:33.335422 systemd[1]: Started cri-containerd-ef26df6308178ea015f629bade6867392af0a5c81b3fb6b790ae8ee58fa312ff.scope - libcontainer container ef26df6308178ea015f629bade6867392af0a5c81b3fb6b790ae8ee58fa312ff. 
Jan 23 01:04:33.485870 containerd[1557]: time="2026-01-23T01:04:33.485820030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-549f748967-5pdhf,Uid:e8fdc5e3-83f6-414b-bda4-0c1884c70d80,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ef26df6308178ea015f629bade6867392af0a5c81b3fb6b790ae8ee58fa312ff\"" Jan 23 01:04:33.488508 containerd[1557]: time="2026-01-23T01:04:33.488483035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:04:33.571793 systemd-networkd[1444]: calif06bdebf47e: Gained IPv6LL Jan 23 01:04:33.638120 containerd[1557]: time="2026-01-23T01:04:33.638074119Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:04:33.638937 containerd[1557]: time="2026-01-23T01:04:33.638894087Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:04:33.639063 containerd[1557]: time="2026-01-23T01:04:33.638974347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:04:33.639149 kubelet[2733]: E0123 01:04:33.639105 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:04:33.639230 kubelet[2733]: E0123 01:04:33.639157 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:04:33.639416 kubelet[2733]: E0123 01:04:33.639339 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kgwnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-549f748967-5pdhf_calico-apiserver(e8fdc5e3-83f6-414b-bda4-0c1884c70d80): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:04:33.640817 kubelet[2733]: E0123 01:04:33.640771 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-5pdhf" podUID="e8fdc5e3-83f6-414b-bda4-0c1884c70d80" Jan 23 01:04:33.891468 systemd-networkd[1444]: calid19d9462bb4: Gained IPv6LL Jan 23 01:04:34.249687 kubelet[2733]: E0123 01:04:34.249576 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:34.250547 kubelet[2733]: E0123 01:04:34.250519 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-5pdhf" podUID="e8fdc5e3-83f6-414b-bda4-0c1884c70d80" Jan 23 01:04:34.250667 kubelet[2733]: E0123 01:04:34.250612 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-tk79b" podUID="39650212-2ffc-42da-8b29-3a9e9efdade1" Jan 23 01:04:34.979841 systemd-networkd[1444]: cali8c0b56fb01a: Gained IPv6LL Jan 23 01:04:35.252461 kubelet[2733]: E0123 01:04:35.252422 2733 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:35.253585 kubelet[2733]: E0123 01:04:35.253551 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-5pdhf" podUID="e8fdc5e3-83f6-414b-bda4-0c1884c70d80" Jan 23 01:04:38.075650 containerd[1557]: time="2026-01-23T01:04:38.075407338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:04:38.211571 containerd[1557]: time="2026-01-23T01:04:38.211515641Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:04:38.212382 containerd[1557]: time="2026-01-23T01:04:38.212355317Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:04:38.212471 containerd[1557]: time="2026-01-23T01:04:38.212411376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:04:38.212545 kubelet[2733]: E0123 01:04:38.212506 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:04:38.212844 kubelet[2733]: E0123 01:04:38.212544 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:04:38.212844 kubelet[2733]: E0123 01:04:38.212633 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3a4f6b1e42494c1faf3d8022f0ad3fee,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xdxdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9cdd954b6-lp7jl_calico-system(c3bfb933-e96b-46ea-8ee6-2c44dc35b631): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:04:38.216164 containerd[1557]: time="2026-01-23T01:04:38.216134901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:04:38.363301 containerd[1557]: time="2026-01-23T01:04:38.362970675Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:04:38.364231 containerd[1557]: time="2026-01-23T01:04:38.364182963Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:04:38.364302 containerd[1557]: time="2026-01-23T01:04:38.364241224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:04:38.364407 kubelet[2733]: E0123 01:04:38.364369 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:04:38.364447 kubelet[2733]: E0123 01:04:38.364406 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:04:38.364553 kubelet[2733]: E0123 01:04:38.364522 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdxdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9cdd954b6-lp7jl_calico-system(c3bfb933-e96b-46ea-8ee6-2c44dc35b631): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:04:38.365947 kubelet[2733]: E0123 01:04:38.365902 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9cdd954b6-lp7jl" podUID="c3bfb933-e96b-46ea-8ee6-2c44dc35b631" Jan 23 01:04:42.077335 containerd[1557]: time="2026-01-23T01:04:42.077183667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:04:42.214654 containerd[1557]: time="2026-01-23T01:04:42.214607877Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 
01:04:42.215801 containerd[1557]: time="2026-01-23T01:04:42.215747453Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:04:42.216207 containerd[1557]: time="2026-01-23T01:04:42.216088194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:04:42.216563 kubelet[2733]: E0123 01:04:42.216381 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:04:42.216563 kubelet[2733]: E0123 01:04:42.216457 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:04:42.220862 kubelet[2733]: E0123 01:04:42.217115 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kph75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9vbcl_calico-system(16cb6344-7ecd-43dd-aa88-18d498591102): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:04:42.221222 containerd[1557]: time="2026-01-23T01:04:42.219068460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:04:42.497374 containerd[1557]: time="2026-01-23T01:04:42.497225538Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:04:42.498721 containerd[1557]: time="2026-01-23T01:04:42.498646395Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:04:42.498929 containerd[1557]: time="2026-01-23T01:04:42.498747325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:04:42.498969 kubelet[2733]: E0123 01:04:42.498902 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:04:42.498969 kubelet[2733]: E0123 01:04:42.498958 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:04:42.499593 kubelet[2733]: E0123 01:04:42.499080 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kph75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9vbcl_calico-system(16cb6344-7ecd-43dd-aa88-18d498591102): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:04:42.500359 kubelet[2733]: E0123 01:04:42.500324 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vbcl" podUID="16cb6344-7ecd-43dd-aa88-18d498591102" Jan 23 01:04:45.075737 containerd[1557]: time="2026-01-23T01:04:45.075455980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:04:45.234437 containerd[1557]: time="2026-01-23T01:04:45.234340196Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:04:45.235811 containerd[1557]: time="2026-01-23T01:04:45.235669621Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:04:45.235811 containerd[1557]: time="2026-01-23T01:04:45.235776491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:04:45.236080 kubelet[2733]: E0123 01:04:45.236027 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:04:45.236527 kubelet[2733]: E0123 01:04:45.236480 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:04:45.236882 kubelet[2733]: E0123 01:04:45.236634 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ctw6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68bb9cdd99-9nmwm_calico-system(864d071c-38d9-4c87-9ba2-e5d2783e5cdc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:04:45.240719 kubelet[2733]: E0123 01:04:45.240641 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bb9cdd99-9nmwm" podUID="864d071c-38d9-4c87-9ba2-e5d2783e5cdc" Jan 23 01:04:48.075806 containerd[1557]: time="2026-01-23T01:04:48.075758633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:04:48.206090 containerd[1557]: time="2026-01-23T01:04:48.206023281Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:04:48.207258 containerd[1557]: time="2026-01-23T01:04:48.207201694Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:04:48.207428 containerd[1557]: time="2026-01-23T01:04:48.207351985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:04:48.207594 kubelet[2733]: E0123 01:04:48.207541 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:04:48.207907 kubelet[2733]: E0123 01:04:48.207614 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:04:48.209531 kubelet[2733]: 
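For readability, the liveness probe embedded in the calico-kube-controllers container dump above, rewritten from the flattened &Probe{...} form into k8s.io/api/core/v1 source. The probe values (exec "/usr/bin/check-status -l", 10s initial delay, 60s period, 6 failures) are copied from the dump; the surrounding program is only scaffolding:

```go
// Rebuild of the flattened liveness-probe struct from the log, using
// the real k8s.io/api/core/v1 types. Values copied from the dump above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	liveness := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{
				Command: []string{"/usr/bin/check-status", "-l"},
			},
		},
		InitialDelaySeconds: 10,
		TimeoutSeconds:      10,
		PeriodSeconds:       60,
		SuccessThreshold:    1,
		FailureThreshold:    6,
	}
	fmt.Printf("%+v\n", liveness)
}
```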
E0123 01:04:48.208295 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kgwnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-549f748967-5pdhf_calico-apiserver(e8fdc5e3-83f6-414b-bda4-0c1884c70d80): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:04:48.209640 containerd[1557]: time="2026-01-23T01:04:48.209267371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:04:48.210255 kubelet[2733]: E0123 01:04:48.210208 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-5pdhf" podUID="e8fdc5e3-83f6-414b-bda4-0c1884c70d80" Jan 23 01:04:48.353265 containerd[1557]: time="2026-01-23T01:04:48.353117882Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:04:48.354243 containerd[1557]: time="2026-01-23T01:04:48.354210905Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:04:48.354355 containerd[1557]: time="2026-01-23T01:04:48.354304295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:04:48.354497 kubelet[2733]: E0123 01:04:48.354452 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:04:48.354540 kubelet[2733]: E0123 01:04:48.354513 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:04:48.354766 kubelet[2733]: E0123 01:04:48.354714 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7r2xj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-549f748967-tk79b_calico-apiserver(39650212-2ffc-42da-8b29-3a9e9efdade1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:04:48.355413 containerd[1557]: time="2026-01-23T01:04:48.355385089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:04:48.356097 kubelet[2733]: E0123 01:04:48.356013 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-tk79b" podUID="39650212-2ffc-42da-8b29-3a9e9efdade1" Jan 23 01:04:48.485938 containerd[1557]: time="2026-01-23T01:04:48.485866898Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:04:48.487226 containerd[1557]: time="2026-01-23T01:04:48.487193711Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:04:48.487474 containerd[1557]: time="2026-01-23T01:04:48.487293182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:04:48.487544 kubelet[2733]: E0123 01:04:48.487502 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:04:48.487641 kubelet[2733]: E0123 01:04:48.487556 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:04:48.489346 kubelet[2733]: E0123 01:04:48.489294 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v7pz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-z8t58_calico-system(e68af0d7-5f9f-4004-bfb5-105e45ad7f04): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:04:48.490502 kubelet[2733]: E0123 01:04:48.490453 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8t58" podUID="e68af0d7-5f9f-4004-bfb5-105e45ad7f04" Jan 23 01:04:49.076934 kubelet[2733]: E0123 
01:04:49.076849 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9cdd954b6-lp7jl" podUID="c3bfb933-e96b-46ea-8ee6-2c44dc35b631" Jan 23 01:04:54.077118 kubelet[2733]: E0123 01:04:54.076779 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vbcl" podUID="16cb6344-7ecd-43dd-aa88-18d498591102" Jan 23 01:04:54.277903 kubelet[2733]: E0123 01:04:54.277668 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:04:59.077039 kubelet[2733]: E0123 01:04:59.076675 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-5pdhf" podUID="e8fdc5e3-83f6-414b-bda4-0c1884c70d80" Jan 23 01:05:00.076534 kubelet[2733]: E0123 01:05:00.076422 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
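Once a pull fails with ErrImagePull, the kubelet's retries surface as the ImagePullBackOff entries seen in the surrounding lines. Kubernetes documents this as a doubling backoff capped at roughly five minutes; the stand-alone model below illustrates that policy only (the 10s seed and 300s cap are the documented defaults, not values read from this log, and this is not kubelet source):

```go
// Illustrative model of the doubling backoff behind the
// "Back-off pulling image" entries; constants are the documented
// Kubernetes defaults, not taken from this log.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial  = 10 * time.Second // documented seed delay
		maxDelay = 5 * time.Minute  // documented cap (300s)
	)
	delay := initial
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("pull attempt %d failed; next retry in %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```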
pod="calico-system/calico-kube-controllers-68bb9cdd99-9nmwm" podUID="864d071c-38d9-4c87-9ba2-e5d2783e5cdc" Jan 23 01:05:01.075587 kubelet[2733]: E0123 01:05:01.075262 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-tk79b" podUID="39650212-2ffc-42da-8b29-3a9e9efdade1" Jan 23 01:05:02.076503 kubelet[2733]: E0123 01:05:02.076467 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:05:03.076194 containerd[1557]: time="2026-01-23T01:05:03.076126419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:05:03.204921 containerd[1557]: time="2026-01-23T01:05:03.204871510Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:03.205916 containerd[1557]: time="2026-01-23T01:05:03.205887752Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:05:03.206178 containerd[1557]: time="2026-01-23T01:05:03.206143157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:05:03.206347 kubelet[2733]: E0123 01:05:03.206308 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:05:03.206689 kubelet[2733]: E0123 01:05:03.206357 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:05:03.207301 kubelet[2733]: E0123 01:05:03.206468 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3a4f6b1e42494c1faf3d8022f0ad3fee,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xdxdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9cdd954b6-lp7jl_calico-system(c3bfb933-e96b-46ea-8ee6-2c44dc35b631): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:03.210004 containerd[1557]: time="2026-01-23T01:05:03.209977266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:05:03.333047 containerd[1557]: time="2026-01-23T01:05:03.332909263Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:03.334458 containerd[1557]: time="2026-01-23T01:05:03.334424415Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:05:03.334553 containerd[1557]: time="2026-01-23T01:05:03.334504980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:05:03.334750 kubelet[2733]: E0123 01:05:03.334717 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:05:03.334803 kubelet[2733]: E0123 01:05:03.334760 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:05:03.335286 kubelet[2733]: E0123 01:05:03.334905 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdxdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9cdd954b6-lp7jl_calico-system(c3bfb933-e96b-46ea-8ee6-2c44dc35b631): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:03.336871 kubelet[2733]: E0123 01:05:03.336343 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9cdd954b6-lp7jl" podUID="c3bfb933-e96b-46ea-8ee6-2c44dc35b631" Jan 23 01:05:04.076805 kubelet[2733]: E0123 01:05:04.076459 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8t58" podUID="e68af0d7-5f9f-4004-bfb5-105e45ad7f04" Jan 23 01:05:06.078135 containerd[1557]: time="2026-01-23T01:05:06.077936341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:05:06.213351 containerd[1557]: time="2026-01-23T01:05:06.213264299Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:06.214225 containerd[1557]: time="2026-01-23T01:05:06.214186590Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:05:06.214367 containerd[1557]: time="2026-01-23T01:05:06.214214928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:05:06.214539 kubelet[2733]: E0123 01:05:06.214484 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:05:06.214539 kubelet[2733]: E0123 01:05:06.214533 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:05:06.216163 kubelet[2733]: E0123 01:05:06.214647 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kph75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9vbcl_calico-system(16cb6344-7ecd-43dd-aa88-18d498591102): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:06.217716 containerd[1557]: time="2026-01-23T01:05:06.217685013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:05:06.341330 containerd[1557]: time="2026-01-23T01:05:06.340211321Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:06.341330 containerd[1557]: time="2026-01-23T01:05:06.341185090Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:05:06.341330 containerd[1557]: time="2026-01-23T01:05:06.341265496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:05:06.341762 kubelet[2733]: E0123 01:05:06.341562 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:05:06.341762 kubelet[2733]: E0123 01:05:06.341603 2733 
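The registrar's Args in the dump above advertise the Tigera CSI driver to the kubelet plugin watcher via the socket at /var/lib/kubelet/plugins/csi.tigera.io/csi.sock. Had the driver container started, its identity endpoint could be probed over that socket; a hypothetical sketch using the CSI spec's Go bindings (the socket path comes from the Args dump, everything else is illustrative):

```go
// Sketch: query a CSI driver's identity over the UNIX socket that the
// registrar would hand to kubelet. Purely illustrative; the driver in
// this log never started because its image could not be pulled.
package main

import (
	"context"
	"fmt"
	"log"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	info, err := csi.NewIdentityClient(conn).GetPluginInfo(
		context.Background(), &csi.GetPluginInfoRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("driver:", info.GetName(), "version:", info.GetVendorVersion())
}
```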
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:05:06.341762 kubelet[2733]: E0123 01:05:06.341712 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kph75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9vbcl_calico-system(16cb6344-7ecd-43dd-aa88-18d498591102): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:06.343096 kubelet[2733]: E0123 01:05:06.343041 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-9vbcl" podUID="16cb6344-7ecd-43dd-aa88-18d498591102" Jan 23 01:05:12.075475 kubelet[2733]: E0123 01:05:12.075295 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:05:12.079181 containerd[1557]: time="2026-01-23T01:05:12.079153291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:05:12.220308 containerd[1557]: time="2026-01-23T01:05:12.220213445Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:12.221708 containerd[1557]: time="2026-01-23T01:05:12.221329386Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:05:12.221919 containerd[1557]: time="2026-01-23T01:05:12.221682940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:05:12.221979 kubelet[2733]: E0123 01:05:12.221928 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:05:12.222054 kubelet[2733]: E0123 01:05:12.221981 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:05:12.222298 kubelet[2733]: E0123 01:05:12.222113 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kgwnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-549f748967-5pdhf_calico-apiserver(e8fdc5e3-83f6-414b-bda4-0c1884c70d80): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:12.223631 kubelet[2733]: E0123 01:05:12.223588 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-5pdhf" podUID="e8fdc5e3-83f6-414b-bda4-0c1884c70d80" Jan 23 01:05:13.076677 containerd[1557]: time="2026-01-23T01:05:13.076631063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:05:13.213510 containerd[1557]: time="2026-01-23T01:05:13.213470643Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:13.214788 containerd[1557]: time="2026-01-23T01:05:13.214554105Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:05:13.214788 containerd[1557]: time="2026-01-23T01:05:13.214622672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:05:13.214868 kubelet[2733]: E0123 01:05:13.214747 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:05:13.214868 kubelet[2733]: E0123 01:05:13.214791 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:05:13.215241 kubelet[2733]: E0123 
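The recurring dns.go "Nameserver limits exceeded" warnings mean the node's /etc/resolv.conf lists more nameservers than the classic resolver limit of three (glibc's MAXNS); the kubelet keeps the first three, which is exactly the applied line quoted in the log (172.232.0.15 172.232.0.18 172.232.0.17). A sketch of that trimming, as an illustration of the rule rather than kubelet code:

```go
// Illustration of the three-nameserver limit behind the dns.go warning:
// parse /etc/resolv.conf and keep only the first three nameservers.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	const maxNS = 3 // resolver limit that triggers the kubelet warning
	if len(servers) > maxNS {
		fmt.Printf("dropping %d nameserver(s); applying: %s\n",
			len(servers)-maxNS, strings.Join(servers[:maxNS], " "))
	} else {
		fmt.Println("applying:", strings.Join(servers, " "))
	}
}
```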
01:05:13.214978 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7r2xj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-549f748967-tk79b_calico-apiserver(39650212-2ffc-42da-8b29-3a9e9efdade1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:13.216514 kubelet[2733]: E0123 01:05:13.216408 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-tk79b" podUID="39650212-2ffc-42da-8b29-3a9e9efdade1" Jan 23 01:05:13.216858 containerd[1557]: time="2026-01-23T01:05:13.216757748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:05:13.360786 containerd[1557]: time="2026-01-23T01:05:13.360640698Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:13.362617 containerd[1557]: time="2026-01-23T01:05:13.362578333Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:05:13.362733 containerd[1557]: time="2026-01-23T01:05:13.362651859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:05:13.362808 kubelet[2733]: E0123 01:05:13.362763 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:05:13.362843 kubelet[2733]: E0123 01:05:13.362815 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:05:13.362964 kubelet[2733]: E0123 01:05:13.362918 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ctw6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68bb9cdd99-9nmwm_calico-system(864d071c-38d9-4c87-9ba2-e5d2783e5cdc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:13.364330 kubelet[2733]: E0123 01:05:13.364298 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bb9cdd99-9nmwm" podUID="864d071c-38d9-4c87-9ba2-e5d2783e5cdc" Jan 23 01:05:16.076847 containerd[1557]: time="2026-01-23T01:05:16.076792061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:05:16.219655 containerd[1557]: time="2026-01-23T01:05:16.219420166Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:16.220768 containerd[1557]: time="2026-01-23T01:05:16.220542091Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:05:16.220868 containerd[1557]: time="2026-01-23T01:05:16.220803200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:05:16.221453 kubelet[2733]: E0123 01:05:16.221414 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:05:16.222006 kubelet[2733]: E0123 01:05:16.221462 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:05:16.222006 kubelet[2733]: E0123 
01:05:16.221807 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v7pz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-z8t58_calico-system(e68af0d7-5f9f-4004-bfb5-105e45ad7f04): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:16.223293 kubelet[2733]: E0123 01:05:16.223249 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8t58" 
podUID="e68af0d7-5f9f-4004-bfb5-105e45ad7f04" Jan 23 01:05:18.076522 kubelet[2733]: E0123 01:05:18.076389 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vbcl" podUID="16cb6344-7ecd-43dd-aa88-18d498591102" Jan 23 01:05:18.076522 kubelet[2733]: E0123 01:05:18.076483 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9cdd954b6-lp7jl" podUID="c3bfb933-e96b-46ea-8ee6-2c44dc35b631" Jan 23 01:05:21.076585 kubelet[2733]: E0123 01:05:21.076516 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:05:24.076717 kubelet[2733]: E0123 01:05:24.076563 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-tk79b" podUID="39650212-2ffc-42da-8b29-3a9e9efdade1" Jan 23 01:05:25.076296 kubelet[2733]: E0123 01:05:25.076175 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bb9cdd99-9nmwm" podUID="864d071c-38d9-4c87-9ba2-e5d2783e5cdc" Jan 23 01:05:25.081607 kubelet[2733]: E0123 01:05:25.081547 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-5pdhf" podUID="e8fdc5e3-83f6-414b-bda4-0c1884c70d80" Jan 23 01:05:27.075471 kubelet[2733]: E0123 01:05:27.075413 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:05:27.077469 kubelet[2733]: E0123 01:05:27.077190 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8t58" podUID="e68af0d7-5f9f-4004-bfb5-105e45ad7f04" Jan 23 01:05:31.079719 kubelet[2733]: E0123 01:05:31.078759 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vbcl" podUID="16cb6344-7ecd-43dd-aa88-18d498591102" Jan 23 01:05:33.078601 kubelet[2733]: E0123 01:05:33.078496 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9cdd954b6-lp7jl" podUID="c3bfb933-e96b-46ea-8ee6-2c44dc35b631" Jan 23 01:05:34.078329 kubelet[2733]: E0123 01:05:34.076551 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:05:38.086244 kubelet[2733]: E0123 01:05:38.084999 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bb9cdd99-9nmwm" podUID="864d071c-38d9-4c87-9ba2-e5d2783e5cdc" Jan 23 01:05:39.076481 kubelet[2733]: E0123 01:05:39.076442 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-tk79b" podUID="39650212-2ffc-42da-8b29-3a9e9efdade1" Jan 23 01:05:39.076481 kubelet[2733]: E0123 01:05:39.076735 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-5pdhf" podUID="e8fdc5e3-83f6-414b-bda4-0c1884c70d80" Jan 23 01:05:39.078647 kubelet[2733]: E0123 01:05:39.077635 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8t58" podUID="e68af0d7-5f9f-4004-bfb5-105e45ad7f04" Jan 23 01:05:41.074830 kubelet[2733]: E0123 01:05:41.074761 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:05:44.075761 containerd[1557]: time="2026-01-23T01:05:44.075707764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:05:44.528165 
containerd[1557]: time="2026-01-23T01:05:44.528089692Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:44.529315 containerd[1557]: time="2026-01-23T01:05:44.529290777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:05:44.529632 containerd[1557]: time="2026-01-23T01:05:44.529577901Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:05:44.529933 kubelet[2733]: E0123 01:05:44.529891 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:05:44.530266 kubelet[2733]: E0123 01:05:44.529944 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:05:44.530266 kubelet[2733]: E0123 01:05:44.530049 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3a4f6b1e42494c1faf3d8022f0ad3fee,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xdxdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9cdd954b6-lp7jl_calico-system(c3bfb933-e96b-46ea-8ee6-2c44dc35b631): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:44.532922 containerd[1557]: time="2026-01-23T01:05:44.532890423Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:05:44.656405 containerd[1557]: time="2026-01-23T01:05:44.656356126Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:44.657417 containerd[1557]: time="2026-01-23T01:05:44.657384264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:05:44.657470 containerd[1557]: time="2026-01-23T01:05:44.657460232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:05:44.657671 kubelet[2733]: E0123 01:05:44.657604 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:05:44.657671 kubelet[2733]: E0123 01:05:44.657652 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:05:44.658243 kubelet[2733]: E0123 01:05:44.657886 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdxdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9cdd954b6-lp7jl_calico-system(c3bfb933-e96b-46ea-8ee6-2c44dc35b631): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:44.659408 kubelet[2733]: E0123 01:05:44.659359 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9cdd954b6-lp7jl" podUID="c3bfb933-e96b-46ea-8ee6-2c44dc35b631" Jan 23 01:05:46.076730 kubelet[2733]: E0123 01:05:46.076595 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not 
found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vbcl" podUID="16cb6344-7ecd-43dd-aa88-18d498591102" Jan 23 01:05:50.078764 kubelet[2733]: E0123 01:05:50.078163 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bb9cdd99-9nmwm" podUID="864d071c-38d9-4c87-9ba2-e5d2783e5cdc" Jan 23 01:05:50.078764 kubelet[2733]: E0123 01:05:50.078544 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-5pdhf" podUID="e8fdc5e3-83f6-414b-bda4-0c1884c70d80" Jan 23 01:05:51.075175 kubelet[2733]: E0123 01:05:51.075125 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8t58" podUID="e68af0d7-5f9f-4004-bfb5-105e45ad7f04" Jan 23 01:05:52.079822 kubelet[2733]: E0123 01:05:52.079535 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-tk79b" podUID="39650212-2ffc-42da-8b29-3a9e9efdade1" Jan 23 01:05:57.077582 containerd[1557]: time="2026-01-23T01:05:57.077497269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:05:57.083447 kubelet[2733]: E0123 01:05:57.083367 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9cdd954b6-lp7jl" podUID="c3bfb933-e96b-46ea-8ee6-2c44dc35b631" Jan 23 01:05:57.225765 containerd[1557]: time="2026-01-23T01:05:57.225590695Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:57.226815 containerd[1557]: time="2026-01-23T01:05:57.226744616Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:05:57.226989 containerd[1557]: time="2026-01-23T01:05:57.226804135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:05:57.227302 kubelet[2733]: E0123 01:05:57.227183 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:05:57.227547 kubelet[2733]: E0123 01:05:57.227264 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:05:57.227547 kubelet[2733]: E0123 01:05:57.227493 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kph75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9vbcl_calico-system(16cb6344-7ecd-43dd-aa88-18d498591102): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:57.231177 containerd[1557]: time="2026-01-23T01:05:57.231144906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:05:57.358955 containerd[1557]: time="2026-01-23T01:05:57.358755780Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:05:57.360124 containerd[1557]: time="2026-01-23T01:05:57.360035719Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:05:57.360286 containerd[1557]: time="2026-01-23T01:05:57.360212386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:05:57.360744 kubelet[2733]: E0123 01:05:57.360648 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:05:57.360813 kubelet[2733]: E0123 01:05:57.360752 2733 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:05:57.361393 kubelet[2733]: E0123 01:05:57.361321 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kph75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9vbcl_calico-system(16cb6344-7ecd-43dd-aa88-18d498591102): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:05:57.362501 kubelet[2733]: E0123 01:05:57.362461 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-9vbcl" podUID="16cb6344-7ecd-43dd-aa88-18d498591102" Jan 23 01:05:59.074572 kubelet[2733]: E0123 01:05:59.074534 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:06:00.661476 systemd[1]: Started sshd@7-172.239.192.168:22-68.220.241.50:42412.service - OpenSSH per-connection server daemon (68.220.241.50:42412). Jan 23 01:06:00.837666 sshd[4948]: Accepted publickey for core from 68.220.241.50 port 42412 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:06:00.840188 sshd-session[4948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:00.847347 systemd-logind[1535]: New session 8 of user core. Jan 23 01:06:00.853386 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 01:06:01.063663 sshd[4951]: Connection closed by 68.220.241.50 port 42412 Jan 23 01:06:01.066020 sshd-session[4948]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:01.070341 systemd[1]: sshd@7-172.239.192.168:22-68.220.241.50:42412.service: Deactivated successfully. Jan 23 01:06:01.070830 systemd-logind[1535]: Session 8 logged out. Waiting for processes to exit. Jan 23 01:06:01.073489 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 01:06:01.078222 systemd-logind[1535]: Removed session 8. Jan 23 01:06:03.077864 containerd[1557]: time="2026-01-23T01:06:03.077644773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:06:03.214493 containerd[1557]: time="2026-01-23T01:06:03.214445666Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:06:03.215635 containerd[1557]: time="2026-01-23T01:06:03.215591890Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:06:03.215732 containerd[1557]: time="2026-01-23T01:06:03.215653920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:06:03.215812 kubelet[2733]: E0123 01:06:03.215779 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:06:03.216375 kubelet[2733]: E0123 01:06:03.215820 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:06:03.216375 kubelet[2733]: E0123 01:06:03.215980 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ctw6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68bb9cdd99-9nmwm_calico-system(864d071c-38d9-4c87-9ba2-e5d2783e5cdc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:06:03.217328 containerd[1557]: time="2026-01-23T01:06:03.216544826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:06:03.217840 kubelet[2733]: E0123 01:06:03.217809 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bb9cdd99-9nmwm" 
podUID="864d071c-38d9-4c87-9ba2-e5d2783e5cdc" Jan 23 01:06:03.341556 containerd[1557]: time="2026-01-23T01:06:03.341406863Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:06:03.343086 containerd[1557]: time="2026-01-23T01:06:03.342794663Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:06:03.343234 containerd[1557]: time="2026-01-23T01:06:03.342847962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:06:03.343616 kubelet[2733]: E0123 01:06:03.343511 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:06:03.343759 kubelet[2733]: E0123 01:06:03.343559 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:06:03.344314 kubelet[2733]: E0123 01:06:03.344253 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kgwnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-549f748967-5pdhf_calico-apiserver(e8fdc5e3-83f6-414b-bda4-0c1884c70d80): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:06:03.346126 kubelet[2733]: E0123 01:06:03.346098 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-5pdhf" podUID="e8fdc5e3-83f6-414b-bda4-0c1884c70d80" Jan 23 01:06:05.077297 containerd[1557]: time="2026-01-23T01:06:05.076877583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:06:05.205513 containerd[1557]: time="2026-01-23T01:06:05.205433197Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:06:05.206994 containerd[1557]: time="2026-01-23T01:06:05.206919406Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:06:05.206994 containerd[1557]: time="2026-01-23T01:06:05.206966715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:06:05.207249 kubelet[2733]: E0123 01:06:05.207179 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:06:05.207585 kubelet[2733]: E0123 01:06:05.207259 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:06:05.207585 kubelet[2733]: E0123 
01:06:05.207381 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7r2xj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-549f748967-tk79b_calico-apiserver(39650212-2ffc-42da-8b29-3a9e9efdade1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:06:05.208863 kubelet[2733]: E0123 01:06:05.208809 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-tk79b" podUID="39650212-2ffc-42da-8b29-3a9e9efdade1" Jan 23 01:06:06.077940 containerd[1557]: time="2026-01-23T01:06:06.077881261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:06:06.103381 systemd[1]: Started sshd@8-172.239.192.168:22-68.220.241.50:52022.service - OpenSSH per-connection server daemon (68.220.241.50:52022). 
Jan 23 01:06:06.212902 containerd[1557]: time="2026-01-23T01:06:06.212849165Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:06:06.213743 containerd[1557]: time="2026-01-23T01:06:06.213708423Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:06:06.213799 containerd[1557]: time="2026-01-23T01:06:06.213777812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:06:06.213971 kubelet[2733]: E0123 01:06:06.213933 2733 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:06:06.214268 kubelet[2733]: E0123 01:06:06.213972 2733 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:06:06.214268 kubelet[2733]: E0123 01:06:06.214114 2733 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v7pz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-z8t58_calico-system(e68af0d7-5f9f-4004-bfb5-105e45ad7f04): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:06:06.215432 kubelet[2733]: E0123 01:06:06.215397 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8t58" podUID="e68af0d7-5f9f-4004-bfb5-105e45ad7f04" Jan 23 01:06:06.297193 sshd[4985]: Accepted publickey for core from 68.220.241.50 port 52022 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:06:06.299050 sshd-session[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:06.305490 systemd-logind[1535]: New session 9 of user core. Jan 23 01:06:06.312395 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 01:06:06.510524 sshd[4988]: Connection closed by 68.220.241.50 port 52022 Jan 23 01:06:06.511874 sshd-session[4985]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:06.517263 systemd[1]: sshd@8-172.239.192.168:22-68.220.241.50:52022.service: Deactivated successfully. Jan 23 01:06:06.521814 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 01:06:06.528415 systemd-logind[1535]: Session 9 logged out. Waiting for processes to exit. Jan 23 01:06:06.530734 systemd-logind[1535]: Removed session 9. 
Jan 23 01:06:08.075077 kubelet[2733]: E0123 01:06:08.074602 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:06:11.076852 kubelet[2733]: E0123 01:06:11.076748 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9cdd954b6-lp7jl" podUID="c3bfb933-e96b-46ea-8ee6-2c44dc35b631" Jan 23 01:06:11.543663 systemd[1]: Started sshd@9-172.239.192.168:22-68.220.241.50:52036.service - OpenSSH per-connection server daemon (68.220.241.50:52036). Jan 23 01:06:11.713371 sshd[5000]: Accepted publickey for core from 68.220.241.50 port 52036 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:06:11.714287 sshd-session[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:11.719847 systemd-logind[1535]: New session 10 of user core. Jan 23 01:06:11.725403 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 01:06:11.897539 sshd[5003]: Connection closed by 68.220.241.50 port 52036 Jan 23 01:06:11.898458 sshd-session[5000]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:11.903972 systemd[1]: sshd@9-172.239.192.168:22-68.220.241.50:52036.service: Deactivated successfully. Jan 23 01:06:11.904703 systemd-logind[1535]: Session 10 logged out. Waiting for processes to exit. Jan 23 01:06:11.908031 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 01:06:11.910964 systemd-logind[1535]: Removed session 10. Jan 23 01:06:11.938655 systemd[1]: Started sshd@10-172.239.192.168:22-68.220.241.50:52046.service - OpenSSH per-connection server daemon (68.220.241.50:52046). 
Jan 23 01:06:12.078290 kubelet[2733]: E0123 01:06:12.078074 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vbcl" podUID="16cb6344-7ecd-43dd-aa88-18d498591102" Jan 23 01:06:12.103415 sshd[5016]: Accepted publickey for core from 68.220.241.50 port 52046 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:06:12.105652 sshd-session[5016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:12.111013 systemd-logind[1535]: New session 11 of user core. Jan 23 01:06:12.117597 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 01:06:12.329342 sshd[5019]: Connection closed by 68.220.241.50 port 52046 Jan 23 01:06:12.330076 sshd-session[5016]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:12.335189 systemd[1]: sshd@10-172.239.192.168:22-68.220.241.50:52046.service: Deactivated successfully. Jan 23 01:06:12.336336 systemd-logind[1535]: Session 11 logged out. Waiting for processes to exit. Jan 23 01:06:12.339906 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 01:06:12.344380 systemd-logind[1535]: Removed session 11. Jan 23 01:06:12.363453 systemd[1]: Started sshd@11-172.239.192.168:22-68.220.241.50:52048.service - OpenSSH per-connection server daemon (68.220.241.50:52048). Jan 23 01:06:12.543329 sshd[5029]: Accepted publickey for core from 68.220.241.50 port 52048 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:06:12.544501 sshd-session[5029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:12.551033 systemd-logind[1535]: New session 12 of user core. Jan 23 01:06:12.561139 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 01:06:12.790187 sshd[5036]: Connection closed by 68.220.241.50 port 52048 Jan 23 01:06:12.792538 sshd-session[5029]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:12.797703 systemd-logind[1535]: Session 12 logged out. Waiting for processes to exit. Jan 23 01:06:12.798884 systemd[1]: sshd@11-172.239.192.168:22-68.220.241.50:52048.service: Deactivated successfully. Jan 23 01:06:12.804007 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 01:06:12.809034 systemd-logind[1535]: Removed session 12. Jan 23 01:06:17.823421 systemd[1]: Started sshd@12-172.239.192.168:22-68.220.241.50:39754.service - OpenSSH per-connection server daemon (68.220.241.50:39754). 
Jan 23 01:06:17.989068 sshd[5049]: Accepted publickey for core from 68.220.241.50 port 39754 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:06:17.991536 sshd-session[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:17.999409 systemd-logind[1535]: New session 13 of user core. Jan 23 01:06:18.004586 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 01:06:18.077845 kubelet[2733]: E0123 01:06:18.076811 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bb9cdd99-9nmwm" podUID="864d071c-38d9-4c87-9ba2-e5d2783e5cdc" Jan 23 01:06:18.078959 kubelet[2733]: E0123 01:06:18.078927 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-5pdhf" podUID="e8fdc5e3-83f6-414b-bda4-0c1884c70d80" Jan 23 01:06:18.201221 sshd[5052]: Connection closed by 68.220.241.50 port 39754 Jan 23 01:06:18.203425 sshd-session[5049]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:18.207672 systemd[1]: sshd@12-172.239.192.168:22-68.220.241.50:39754.service: Deactivated successfully. Jan 23 01:06:18.210066 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 01:06:18.211351 systemd-logind[1535]: Session 13 logged out. Waiting for processes to exit. Jan 23 01:06:18.212950 systemd-logind[1535]: Removed session 13. Jan 23 01:06:18.241458 systemd[1]: Started sshd@13-172.239.192.168:22-68.220.241.50:39758.service - OpenSSH per-connection server daemon (68.220.241.50:39758). Jan 23 01:06:18.415158 sshd[5064]: Accepted publickey for core from 68.220.241.50 port 39758 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:06:18.417698 sshd-session[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:18.425645 systemd-logind[1535]: New session 14 of user core. Jan 23 01:06:18.432547 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 01:06:18.762296 sshd[5067]: Connection closed by 68.220.241.50 port 39758 Jan 23 01:06:18.762324 sshd-session[5064]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:18.767071 systemd[1]: sshd@13-172.239.192.168:22-68.220.241.50:39758.service: Deactivated successfully. Jan 23 01:06:18.768506 systemd-logind[1535]: Session 14 logged out. Waiting for processes to exit. Jan 23 01:06:18.770687 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 01:06:18.775678 systemd-logind[1535]: Removed session 14. 
Jan 23 01:06:18.791686 systemd[1]: Started sshd@14-172.239.192.168:22-68.220.241.50:39764.service - OpenSSH per-connection server daemon (68.220.241.50:39764). Jan 23 01:06:18.961125 sshd[5077]: Accepted publickey for core from 68.220.241.50 port 39764 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:06:18.962410 sshd-session[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:18.968623 systemd-logind[1535]: New session 15 of user core. Jan 23 01:06:18.974404 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 01:06:19.076925 kubelet[2733]: E0123 01:06:19.076598 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8t58" podUID="e68af0d7-5f9f-4004-bfb5-105e45ad7f04" Jan 23 01:06:19.634055 sshd[5080]: Connection closed by 68.220.241.50 port 39764 Jan 23 01:06:19.634868 sshd-session[5077]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:19.642379 systemd[1]: sshd@14-172.239.192.168:22-68.220.241.50:39764.service: Deactivated successfully. Jan 23 01:06:19.642722 systemd-logind[1535]: Session 15 logged out. Waiting for processes to exit. Jan 23 01:06:19.647392 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 01:06:19.651608 systemd-logind[1535]: Removed session 15. Jan 23 01:06:19.667488 systemd[1]: Started sshd@15-172.239.192.168:22-68.220.241.50:39778.service - OpenSSH per-connection server daemon (68.220.241.50:39778). Jan 23 01:06:19.832740 sshd[5098]: Accepted publickey for core from 68.220.241.50 port 39778 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:06:19.834649 sshd-session[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:19.839509 systemd-logind[1535]: New session 16 of user core. Jan 23 01:06:19.852397 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 01:06:20.075463 kubelet[2733]: E0123 01:06:20.075421 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-tk79b" podUID="39650212-2ffc-42da-8b29-3a9e9efdade1" Jan 23 01:06:20.180243 sshd[5101]: Connection closed by 68.220.241.50 port 39778 Jan 23 01:06:20.181745 sshd-session[5098]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:20.186936 systemd-logind[1535]: Session 16 logged out. Waiting for processes to exit. Jan 23 01:06:20.187632 systemd[1]: sshd@15-172.239.192.168:22-68.220.241.50:39778.service: Deactivated successfully. Jan 23 01:06:20.192072 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 01:06:20.196820 systemd-logind[1535]: Removed session 16. 
Jan 23 01:06:20.217068 systemd[1]: Started sshd@16-172.239.192.168:22-68.220.241.50:39786.service - OpenSSH per-connection server daemon (68.220.241.50:39786). Jan 23 01:06:20.390214 sshd[5111]: Accepted publickey for core from 68.220.241.50 port 39786 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:06:20.392010 sshd-session[5111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:20.400471 systemd-logind[1535]: New session 17 of user core. Jan 23 01:06:20.406437 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 01:06:20.593293 sshd[5114]: Connection closed by 68.220.241.50 port 39786 Jan 23 01:06:20.593840 sshd-session[5111]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:20.598513 systemd-logind[1535]: Session 17 logged out. Waiting for processes to exit. Jan 23 01:06:20.599496 systemd[1]: sshd@16-172.239.192.168:22-68.220.241.50:39786.service: Deactivated successfully. Jan 23 01:06:20.603933 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 01:06:20.607902 systemd-logind[1535]: Removed session 17. Jan 23 01:06:23.080078 kubelet[2733]: E0123 01:06:23.079905 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9cdd954b6-lp7jl" podUID="c3bfb933-e96b-46ea-8ee6-2c44dc35b631" Jan 23 01:06:25.621527 systemd[1]: Started sshd@17-172.239.192.168:22-68.220.241.50:60118.service - OpenSSH per-connection server daemon (68.220.241.50:60118). Jan 23 01:06:25.780310 sshd[5153]: Accepted publickey for core from 68.220.241.50 port 60118 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:06:25.781254 sshd-session[5153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:25.785740 systemd-logind[1535]: New session 18 of user core. Jan 23 01:06:25.792402 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 01:06:25.965081 sshd[5156]: Connection closed by 68.220.241.50 port 60118 Jan 23 01:06:25.966775 sshd-session[5153]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:25.971477 systemd-logind[1535]: Session 18 logged out. Waiting for processes to exit. Jan 23 01:06:25.972815 systemd[1]: sshd@17-172.239.192.168:22-68.220.241.50:60118.service: Deactivated successfully. Jan 23 01:06:25.975915 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 01:06:25.977778 systemd-logind[1535]: Removed session 18. 
Jan 23 01:06:27.077383 kubelet[2733]: E0123 01:06:27.077258 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vbcl" podUID="16cb6344-7ecd-43dd-aa88-18d498591102" Jan 23 01:06:28.074926 kubelet[2733]: E0123 01:06:28.074509 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:06:30.077291 kubelet[2733]: E0123 01:06:30.076583 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:06:30.078408 kubelet[2733]: E0123 01:06:30.078379 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-5pdhf" podUID="e8fdc5e3-83f6-414b-bda4-0c1884c70d80" Jan 23 01:06:31.002032 systemd[1]: Started sshd@18-172.239.192.168:22-68.220.241.50:60126.service - OpenSSH per-connection server daemon (68.220.241.50:60126). Jan 23 01:06:31.196301 sshd[5170]: Accepted publickey for core from 68.220.241.50 port 60126 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:06:31.196574 sshd-session[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:31.205345 systemd-logind[1535]: New session 19 of user core. Jan 23 01:06:31.213471 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 01:06:31.405705 sshd[5173]: Connection closed by 68.220.241.50 port 60126 Jan 23 01:06:31.406576 sshd-session[5170]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:31.413720 systemd-logind[1535]: Session 19 logged out. Waiting for processes to exit. Jan 23 01:06:31.414699 systemd[1]: sshd@18-172.239.192.168:22-68.220.241.50:60126.service: Deactivated successfully. Jan 23 01:06:31.418638 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 01:06:31.423057 systemd-logind[1535]: Removed session 19. 
Jan 23 01:06:32.076074 kubelet[2733]: E0123 01:06:32.076020 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-tk79b" podUID="39650212-2ffc-42da-8b29-3a9e9efdade1" Jan 23 01:06:32.077434 kubelet[2733]: E0123 01:06:32.077388 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bb9cdd99-9nmwm" podUID="864d071c-38d9-4c87-9ba2-e5d2783e5cdc" Jan 23 01:06:33.075973 kubelet[2733]: E0123 01:06:33.075676 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8t58" podUID="e68af0d7-5f9f-4004-bfb5-105e45ad7f04" Jan 23 01:06:36.441435 systemd[1]: Started sshd@19-172.239.192.168:22-68.220.241.50:38976.service - OpenSSH per-connection server daemon (68.220.241.50:38976). Jan 23 01:06:36.635449 sshd[5185]: Accepted publickey for core from 68.220.241.50 port 38976 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:06:36.637180 sshd-session[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:36.643446 systemd-logind[1535]: New session 20 of user core. Jan 23 01:06:36.649037 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 01:06:36.838039 sshd[5188]: Connection closed by 68.220.241.50 port 38976 Jan 23 01:06:36.839452 sshd-session[5185]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:36.847565 systemd[1]: sshd@19-172.239.192.168:22-68.220.241.50:38976.service: Deactivated successfully. Jan 23 01:06:36.850860 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 01:06:36.852834 systemd-logind[1535]: Session 20 logged out. Waiting for processes to exit. Jan 23 01:06:36.854639 systemd-logind[1535]: Removed session 20. 
Jan 23 01:06:38.076971 kubelet[2733]: E0123 01:06:38.076876 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9cdd954b6-lp7jl" podUID="c3bfb933-e96b-46ea-8ee6-2c44dc35b631" Jan 23 01:06:40.077488 kubelet[2733]: E0123 01:06:40.076738 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vbcl" podUID="16cb6344-7ecd-43dd-aa88-18d498591102" Jan 23 01:06:41.870323 systemd[1]: Started sshd@20-172.239.192.168:22-68.220.241.50:38984.service - OpenSSH per-connection server daemon (68.220.241.50:38984). Jan 23 01:06:42.034710 sshd[5199]: Accepted publickey for core from 68.220.241.50 port 38984 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:06:42.036294 sshd-session[5199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:42.042077 systemd-logind[1535]: New session 21 of user core. Jan 23 01:06:42.049392 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 01:06:42.225541 sshd[5202]: Connection closed by 68.220.241.50 port 38984 Jan 23 01:06:42.226216 sshd-session[5199]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:42.231246 systemd-logind[1535]: Session 21 logged out. Waiting for processes to exit. Jan 23 01:06:42.232005 systemd[1]: sshd@20-172.239.192.168:22-68.220.241.50:38984.service: Deactivated successfully. Jan 23 01:06:42.234799 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 01:06:42.236793 systemd-logind[1535]: Removed session 21. 
Jan 23 01:06:43.075795 kubelet[2733]: E0123 01:06:43.075706 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68bb9cdd99-9nmwm" podUID="864d071c-38d9-4c87-9ba2-e5d2783e5cdc" Jan 23 01:06:44.075953 kubelet[2733]: E0123 01:06:44.075907 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-5pdhf" podUID="e8fdc5e3-83f6-414b-bda4-0c1884c70d80" Jan 23 01:06:45.074656 kubelet[2733]: E0123 01:06:45.074615 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:06:46.076283 kubelet[2733]: E0123 01:06:46.076231 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Jan 23 01:06:47.075371 kubelet[2733]: E0123 01:06:47.075319 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-549f748967-tk79b" podUID="39650212-2ffc-42da-8b29-3a9e9efdade1" Jan 23 01:06:47.076177 kubelet[2733]: E0123 01:06:47.076153 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z8t58" podUID="e68af0d7-5f9f-4004-bfb5-105e45ad7f04" Jan 23 01:06:47.259942 systemd[1]: Started sshd@21-172.239.192.168:22-68.220.241.50:40508.service - OpenSSH per-connection server daemon (68.220.241.50:40508). 
Jan 23 01:06:47.427466 sshd[5214]: Accepted publickey for core from 68.220.241.50 port 40508 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:06:47.429339 sshd-session[5214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:47.434761 systemd-logind[1535]: New session 22 of user core. Jan 23 01:06:47.438410 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 01:06:47.689388 sshd[5217]: Connection closed by 68.220.241.50 port 40508 Jan 23 01:06:47.690360 sshd-session[5214]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:47.698842 systemd[1]: sshd@21-172.239.192.168:22-68.220.241.50:40508.service: Deactivated successfully. Jan 23 01:06:47.703245 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 01:06:47.706452 systemd-logind[1535]: Session 22 logged out. Waiting for processes to exit. Jan 23 01:06:47.708722 systemd-logind[1535]: Removed session 22.