Dec 16 13:16:25.930122 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 16 13:16:25.930146 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:16:25.930154 kernel: BIOS-provided physical RAM map:
Dec 16 13:16:25.930161 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Dec 16 13:16:25.930166 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Dec 16 13:16:25.930172 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 16 13:16:25.930181 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Dec 16 13:16:25.930187 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Dec 16 13:16:25.930193 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 16 13:16:25.930199 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 16 13:16:25.930205 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 13:16:25.930211 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 16 13:16:25.930217 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Dec 16 13:16:25.930223 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 16 13:16:25.930232 kernel: NX (Execute Disable) protection: active
Dec 16 13:16:25.930238 kernel: APIC: Static calls initialized
Dec 16 13:16:25.930244 kernel: SMBIOS 2.8 present.
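[Editor's note: the e820 map above is the firmware's view of physical memory; summing just the "usable" ranges gives the roughly 4 GiB this instance boots with. A minimal sketch with the values copied from the log (the ranges are inclusive, so each spans end - start + 1 bytes); this is illustrative, not part of the boot output:

    usable = [
        (0x0000000000000000, 0x000000000009f7ff),
        (0x0000000000100000, 0x000000007ffdcfff),
        (0x0000000100000000, 0x000000017fffffff),
    ]

    total_bytes = sum(end - start + 1 for start, end in usable)
    print(f"usable RAM: {total_bytes} bytes = {total_bytes / 2**20:.0f} MiB")
    # -> usable RAM: 4294297600 bytes = 4096 MiB, which lines up with the
    #    "Memory: 3952856K/4193772K available" figure reported later once
    #    the kernel subtracts its own reservations.
]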
Dec 16 13:16:25.930251 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Dec 16 13:16:25.930257 kernel: DMI: Memory slots populated: 1/1
Dec 16 13:16:25.930263 kernel: Hypervisor detected: KVM
Dec 16 13:16:25.930271 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 16 13:16:25.930277 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 13:16:25.930283 kernel: kvm-clock: using sched offset of 6992401390 cycles
Dec 16 13:16:25.930290 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 13:16:25.930297 kernel: tsc: Detected 2000.000 MHz processor
Dec 16 13:16:25.930303 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 13:16:25.930310 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 13:16:25.930316 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Dec 16 13:16:25.930323 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 16 13:16:25.930330 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 13:16:25.930338 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 16 13:16:25.930344 kernel: Using GB pages for direct mapping
Dec 16 13:16:25.930351 kernel: ACPI: Early table checksum verification disabled
Dec 16 13:16:25.930357 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Dec 16 13:16:25.930363 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:16:25.930370 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:16:25.930376 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:16:25.930382 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 16 13:16:25.930389 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:16:25.930398 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:16:25.930408 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:16:25.930415 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:16:25.930435 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Dec 16 13:16:25.930442 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Dec 16 13:16:25.930452 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 16 13:16:25.930458 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Dec 16 13:16:25.930465 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Dec 16 13:16:25.930472 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Dec 16 13:16:25.930478 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Dec 16 13:16:25.930485 kernel: No NUMA configuration found
Dec 16 13:16:25.930491 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Dec 16 13:16:25.930498 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
Dec 16 13:16:25.930504 kernel: Zone ranges:
Dec 16 13:16:25.930514 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 13:16:25.930520 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 16 13:16:25.930527 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Dec 16 13:16:25.930533 kernel: Device empty
Dec 16 13:16:25.930540 kernel: Movable zone start for each node
Dec 16 13:16:25.930546 kernel: Early memory node ranges
Dec 16 13:16:25.930553 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 16 13:16:25.930559 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Dec 16 13:16:25.930566 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Dec 16 13:16:25.930573 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Dec 16 13:16:25.930582 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 13:16:25.930588 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 16 13:16:25.930595 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Dec 16 13:16:25.930601 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 16 13:16:25.930608 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 13:16:25.930615 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 13:16:25.930621 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 16 13:16:25.930628 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 13:16:25.930635 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 13:16:25.930644 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 13:16:25.930650 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 13:16:25.930657 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 13:16:25.930663 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 16 13:16:25.930670 kernel: TSC deadline timer available
Dec 16 13:16:25.930676 kernel: CPU topo: Max. logical packages: 1
Dec 16 13:16:25.930683 kernel: CPU topo: Max. logical dies: 1
Dec 16 13:16:25.930690 kernel: CPU topo: Max. dies per package: 1
Dec 16 13:16:25.930696 kernel: CPU topo: Max. threads per core: 1
Dec 16 13:16:25.930705 kernel: CPU topo: Num. cores per package: 2
Dec 16 13:16:25.930712 kernel: CPU topo: Num. threads per package: 2
Dec 16 13:16:25.930718 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 16 13:16:25.930725 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 13:16:25.930731 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 16 13:16:25.930738 kernel: kvm-guest: setup PV sched yield
Dec 16 13:16:25.930745 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 16 13:16:25.930751 kernel: Booting paravirtualized kernel on KVM
Dec 16 13:16:25.930758 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 13:16:25.930767 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 16 13:16:25.930774 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 16 13:16:25.930780 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 16 13:16:25.930787 kernel: pcpu-alloc: [0] 0 1
Dec 16 13:16:25.930793 kernel: kvm-guest: PV spinlocks enabled
Dec 16 13:16:25.930800 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 13:16:25.930807 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:16:25.930814 kernel: random: crng init done
Dec 16 13:16:25.930823 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 13:16:25.930830 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 13:16:25.930837 kernel: Fallback order for Node 0: 0
Dec 16 13:16:25.930843 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Dec 16 13:16:25.930850 kernel: Policy zone: Normal
Dec 16 13:16:25.930857 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 13:16:25.930863 kernel: software IO TLB: area num 2.
Dec 16 13:16:25.930870 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 13:16:25.930876 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 13:16:25.930885 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 13:16:25.930892 kernel: Dynamic Preempt: voluntary
Dec 16 13:16:25.930899 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 13:16:25.931752 kernel: rcu: RCU event tracing is enabled.
Dec 16 13:16:25.931761 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 13:16:25.931768 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 13:16:25.931775 kernel: Rude variant of Tasks RCU enabled.
Dec 16 13:16:25.931781 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 13:16:25.931788 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 13:16:25.931795 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 13:16:25.931806 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:16:25.931820 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:16:25.931829 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:16:25.931836 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 16 13:16:25.931843 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 13:16:25.931850 kernel: Console: colour VGA+ 80x25
Dec 16 13:16:25.931857 kernel: printk: legacy console [tty0] enabled
Dec 16 13:16:25.931864 kernel: printk: legacy console [ttyS0] enabled
Dec 16 13:16:25.931871 kernel: ACPI: Core revision 20240827
Dec 16 13:16:25.931880 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 16 13:16:25.931887 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 13:16:25.931894 kernel: x2apic enabled
Dec 16 13:16:25.931901 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 13:16:25.931908 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 16 13:16:25.931915 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 16 13:16:25.931922 kernel: kvm-guest: setup PV IPIs
Dec 16 13:16:25.931931 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 16 13:16:25.931938 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Dec 16 13:16:25.931945 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Dec 16 13:16:25.931952 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 16 13:16:25.931959 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 16 13:16:25.931966 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 16 13:16:25.931973 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 13:16:25.931979 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 13:16:25.931986 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 13:16:25.931996 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 16 13:16:25.932003 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 16 13:16:25.932010 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 16 13:16:25.932017 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 16 13:16:25.932025 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 16 13:16:25.932032 kernel: active return thunk: srso_alias_return_thunk
Dec 16 13:16:25.932039 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 16 13:16:25.932045 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Dec 16 13:16:25.932055 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 16 13:16:25.932062 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 13:16:25.932069 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 13:16:25.932075 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 13:16:25.932082 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 16 13:16:25.932089 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 13:16:25.932096 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Dec 16 13:16:25.932103 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
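[Editor's note: the "Calibrating delay loop (skipped)" line above can be reproduced from the 2000.000 MHz TSC reported earlier. A sketch of the arithmetic, assuming the usual lpj-to-BogoMIPS conversion and an assumed tick rate of HZ=1000 (neither appears verbatim in the log):

    HZ = 1000                  # assumed tick rate for this kernel config
    tsc_hz = 2_000_000_000     # "tsc: Detected 2000.000 MHz processor"

    lpj = tsc_hz // HZ         # cycles per jiffy when calibration is skipped
    bogomips = lpj / (500_000 / HZ)
    print(lpj, bogomips)       # 2000000 4000.0 -> "4000.00 BogoMIPS (lpj=2000000)"
    # Two CPUs then give "Total of 2 processors activated (8000.00 BogoMIPS)".
]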
Dec 16 13:16:25.932110 kernel: Freeing SMP alternatives memory: 32K
Dec 16 13:16:25.932119 kernel: pid_max: default: 32768 minimum: 301
Dec 16 13:16:25.932126 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 13:16:25.932133 kernel: landlock: Up and running.
Dec 16 13:16:25.932140 kernel: SELinux: Initializing.
Dec 16 13:16:25.932147 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 13:16:25.932154 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 13:16:25.932161 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Dec 16 13:16:25.932168 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 16 13:16:25.932175 kernel: ... version: 0
Dec 16 13:16:25.932184 kernel: ... bit width: 48
Dec 16 13:16:25.932190 kernel: ... generic registers: 6
Dec 16 13:16:25.932197 kernel: ... value mask: 0000ffffffffffff
Dec 16 13:16:25.932204 kernel: ... max period: 00007fffffffffff
Dec 16 13:16:25.932211 kernel: ... fixed-purpose events: 0
Dec 16 13:16:25.932218 kernel: ... event mask: 000000000000003f
Dec 16 13:16:25.932225 kernel: signal: max sigframe size: 3376
Dec 16 13:16:25.932231 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 13:16:25.932238 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 13:16:25.932248 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 13:16:25.932466 kernel: smp: Bringing up secondary CPUs ...
Dec 16 13:16:25.932473 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 13:16:25.932480 kernel: .... node #0, CPUs: #1
Dec 16 13:16:25.932487 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 13:16:25.932494 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Dec 16 13:16:25.932501 kernel: Memory: 3952856K/4193772K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 235488K reserved, 0K cma-reserved)
Dec 16 13:16:25.932508 kernel: devtmpfs: initialized
Dec 16 13:16:25.932515 kernel: x86/mm: Memory block size: 128MB
Dec 16 13:16:25.932525 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 13:16:25.932532 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 13:16:25.932539 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 13:16:25.932546 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 13:16:25.932552 kernel: audit: initializing netlink subsys (disabled)
Dec 16 13:16:25.932559 kernel: audit: type=2000 audit(1765890983.098:1): state=initialized audit_enabled=0 res=1
Dec 16 13:16:25.932566 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 13:16:25.932573 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 13:16:25.932580 kernel: cpuidle: using governor menu
Dec 16 13:16:25.932589 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 13:16:25.932596 kernel: dca service started, version 1.12.1
Dec 16 13:16:25.932603 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Dec 16 13:16:25.932610 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 16 13:16:25.932617 kernel: PCI: Using configuration type 1 for base access
Dec 16 13:16:25.932624 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 13:16:25.932631 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 13:16:25.932638 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 13:16:25.932645 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 13:16:25.932654 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 13:16:25.932660 kernel: ACPI: Added _OSI(Module Device)
Dec 16 13:16:25.932667 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 13:16:25.932674 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 13:16:25.932681 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 13:16:25.932688 kernel: ACPI: Interpreter enabled
Dec 16 13:16:25.932695 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 16 13:16:25.932701 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 13:16:25.932709 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 13:16:25.932719 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 13:16:25.932726 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 16 13:16:25.932732 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 13:16:25.932915 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 13:16:25.933045 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 16 13:16:25.933167 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 16 13:16:25.933176 kernel: PCI host bridge to bus 0000:00
Dec 16 13:16:25.933300 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 13:16:25.933418 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 13:16:25.934583 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 13:16:25.934697 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 16 13:16:25.934808 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 16 13:16:25.935634 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Dec 16 13:16:25.935752 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 13:16:25.935900 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 16 13:16:25.936036 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 16 13:16:25.936158 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Dec 16 13:16:25.936277 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Dec 16 13:16:25.936396 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Dec 16 13:16:25.936536 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 13:16:25.936667 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Dec 16 13:16:25.936793 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Dec 16 13:16:25.938508 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Dec 16 13:16:25.938638 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 16 13:16:25.938770 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 16 13:16:25.938892 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Dec 16 13:16:25.939011 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Dec 16 13:16:25.939137 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 16 13:16:25.939257 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Dec 16 13:16:25.939386 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 16 13:16:25.939528 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 16 13:16:25.939656 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 16 13:16:25.939805 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Dec 16 13:16:25.939923 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Dec 16 13:16:25.940055 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 16 13:16:25.940173 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Dec 16 13:16:25.940183 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 13:16:25.940191 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 13:16:25.940198 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 13:16:25.940205 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 13:16:25.940212 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 16 13:16:25.940219 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 16 13:16:25.940230 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 16 13:16:25.940236 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 16 13:16:25.940243 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 16 13:16:25.940250 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 16 13:16:25.940257 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 16 13:16:25.940264 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 16 13:16:25.940271 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 16 13:16:25.940278 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 16 13:16:25.940285 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 16 13:16:25.940294 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 16 13:16:25.940301 kernel: iommu: Default domain type: Translated
Dec 16 13:16:25.940308 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 13:16:25.940315 kernel: PCI: Using ACPI for IRQ routing
Dec 16 13:16:25.940322 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 13:16:25.940329 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Dec 16 13:16:25.940336 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Dec 16 13:16:25.943623 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 16 13:16:25.943759 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 16 13:16:25.943879 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 13:16:25.943889 kernel: vgaarb: loaded
Dec 16 13:16:25.943897 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 16 13:16:25.943904 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 16 13:16:25.943911 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 13:16:25.943918 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 13:16:25.943925 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 13:16:25.943932 kernel: pnp: PnP ACPI init
Dec 16 13:16:25.944068 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 16 13:16:25.944079 kernel: pnp: PnP ACPI: found 5 devices
Dec 16 13:16:25.944087 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 13:16:25.944094 kernel: NET: Registered PF_INET protocol family
Dec 16 13:16:25.944101 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 13:16:25.944108 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 13:16:25.944115 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 13:16:25.944122 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 13:16:25.944132 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 13:16:25.944139 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 13:16:25.944146 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 13:16:25.944153 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 13:16:25.944160 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 13:16:25.944167 kernel: NET: Registered PF_XDP protocol family
Dec 16 13:16:25.944278 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 16 13:16:25.944388 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 16 13:16:25.946520 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 16 13:16:25.946647 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 16 13:16:25.946761 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 16 13:16:25.946873 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Dec 16 13:16:25.946883 kernel: PCI: CLS 0 bytes, default 64
Dec 16 13:16:25.946890 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 16 13:16:25.946897 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Dec 16 13:16:25.946905 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Dec 16 13:16:25.946912 kernel: Initialise system trusted keyrings
Dec 16 13:16:25.946922 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 13:16:25.946930 kernel: Key type asymmetric registered
Dec 16 13:16:25.946936 kernel: Asymmetric key parser 'x509' registered
Dec 16 13:16:25.946944 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 13:16:25.946951 kernel: io scheduler mq-deadline registered
Dec 16 13:16:25.946957 kernel: io scheduler kyber registered
Dec 16 13:16:25.946964 kernel: io scheduler bfq registered
Dec 16 13:16:25.946971 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 13:16:25.946979 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 16 13:16:25.946989 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 16 13:16:25.946995 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 13:16:25.947002 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 13:16:25.947010 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 16 13:16:25.947017 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 16 13:16:25.947024 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 16 13:16:25.947031 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 16 13:16:25.947162 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 16 13:16:25.947281 kernel: rtc_cmos 00:03: registered as rtc0
Dec 16 13:16:25.947400 kernel: rtc_cmos 00:03: setting system clock to 2025-12-16T13:16:25 UTC (1765890985)
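[Editor's note: the rtc_cmos line above pairs a human-readable time with its Unix epoch value; the two are consistent. A one-line check, purely illustrative:

    from datetime import datetime, timezone

    print(datetime.fromtimestamp(1765890985, tz=timezone.utc).isoformat())
    # -> 2025-12-16T13:16:25+00:00, matching "2025-12-16T13:16:25 UTC (1765890985)"
]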
Dec 16 13:16:25.948750 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 16 13:16:25.948762 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 16 13:16:25.948770 kernel: NET: Registered PF_INET6 protocol family
Dec 16 13:16:25.948777 kernel: Segment Routing with IPv6
Dec 16 13:16:25.948784 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 13:16:25.948791 kernel: NET: Registered PF_PACKET protocol family
Dec 16 13:16:25.948798 kernel: Key type dns_resolver registered
Dec 16 13:16:25.948810 kernel: IPI shorthand broadcast: enabled
Dec 16 13:16:25.948817 kernel: sched_clock: Marking stable (2873004720, 335944940)->(3300460930, -91511270)
Dec 16 13:16:25.948824 kernel: registered taskstats version 1
Dec 16 13:16:25.948831 kernel: Loading compiled-in X.509 certificates
Dec 16 13:16:25.948838 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 16 13:16:25.948845 kernel: Demotion targets for Node 0: null
Dec 16 13:16:25.948852 kernel: Key type .fscrypt registered
Dec 16 13:16:25.948859 kernel: Key type fscrypt-provisioning registered
Dec 16 13:16:25.948866 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 13:16:25.948876 kernel: ima: Allocated hash algorithm: sha1
Dec 16 13:16:25.948883 kernel: ima: No architecture policies found
Dec 16 13:16:25.948890 kernel: clk: Disabling unused clocks
Dec 16 13:16:25.948897 kernel: Warning: unable to open an initial console.
Dec 16 13:16:25.948904 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 16 13:16:25.948912 kernel: Write protecting the kernel read-only data: 40960k
Dec 16 13:16:25.948919 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 16 13:16:25.948926 kernel: Run /init as init process
Dec 16 13:16:25.948933 kernel: with arguments:
Dec 16 13:16:25.948942 kernel: /init
Dec 16 13:16:25.948949 kernel: with environment:
Dec 16 13:16:25.948974 kernel: HOME=/
Dec 16 13:16:25.948984 kernel: TERM=linux
Dec 16 13:16:25.948992 systemd[1]: Successfully made /usr/ read-only.
Dec 16 13:16:25.949003 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:16:25.949011 systemd[1]: Detected virtualization kvm.
Dec 16 13:16:25.949021 systemd[1]: Detected architecture x86-64.
Dec 16 13:16:25.949029 systemd[1]: Running in initrd.
Dec 16 13:16:25.949036 systemd[1]: No hostname configured, using default hostname.
Dec 16 13:16:25.949044 systemd[1]: Hostname set to .
Dec 16 13:16:25.949052 systemd[1]: Initializing machine ID from random generator.
Dec 16 13:16:25.949059 systemd[1]: Queued start job for default target initrd.target.
Dec 16 13:16:25.949067 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:16:25.949075 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:16:25.949086 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 13:16:25.949094 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:16:25.949101 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 13:16:25.949110 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 13:16:25.949118 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 13:16:25.949126 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 13:16:25.949134 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:16:25.949144 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:16:25.949152 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:16:25.949160 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:16:25.949167 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:16:25.949175 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:16:25.949183 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:16:25.949190 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:16:25.949198 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 13:16:25.949206 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 13:16:25.949216 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:16:25.949224 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:16:25.949234 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:16:25.949242 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:16:25.949250 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 13:16:25.949261 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:16:25.949269 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 13:16:25.949277 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 13:16:25.949284 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 13:16:25.949292 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:16:25.949300 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:16:25.949308 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:16:25.949315 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 13:16:25.949349 systemd-journald[187]: Collecting audit messages is disabled.
Dec 16 13:16:25.949371 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:16:25.949379 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 13:16:25.949387 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 13:16:25.949395 systemd-journald[187]: Journal started
Dec 16 13:16:25.949412 systemd-journald[187]: Runtime Journal (/run/log/journal/d1e6e181c3b2418fb8f907477cfd07ec) is 8M, max 78.2M, 70.2M free.
Dec 16 13:16:25.954032 systemd-modules-load[188]: Inserted module 'overlay'
Dec 16 13:16:26.045176 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
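[Editor's note: the bridge warning above is the kernel telling you that bridged traffic is no longer filterable until br_netfilter is loaded; the very next lines show systemd-modules-load inserting it. A minimal sketch of doing the same by hand, with the standard modules-load.d drop-in path (illustrative, not something the log runs):

    from pathlib import Path
    import subprocess

    # Persist the module for future boots via systemd-modules-load.
    conf = Path("/etc/modules-load.d/br_netfilter.conf")
    conf.write_text("br_netfilter\n")

    # Load it now and confirm it is present.
    subprocess.run(["modprobe", "br_netfilter"], check=True)
    print("br_netfilter loaded:", Path("/sys/module/br_netfilter").exists())
]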
Dec 16 13:16:26.045200 kernel: Bridge firewalling registered
Dec 16 13:16:25.984722 systemd-modules-load[188]: Inserted module 'br_netfilter'
Dec 16 13:16:26.070198 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:16:26.071266 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:16:26.073297 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:16:26.075079 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:16:26.079755 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 13:16:26.082544 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:16:26.086546 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:16:26.090550 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:16:26.106739 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:16:26.110456 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:16:26.113936 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:16:26.114185 systemd-tmpfiles[208]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 13:16:26.117354 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 13:16:26.121959 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:16:26.125626 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:16:26.142242 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:16:26.172380 systemd-resolved[225]: Positive Trust Anchors:
Dec 16 13:16:26.173253 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:16:26.173281 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:16:26.179168 systemd-resolved[225]: Defaulting to hostname 'linux'.
Dec 16 13:16:26.180270 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:16:26.181402 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:16:26.240471 kernel: SCSI subsystem initialized
Dec 16 13:16:26.250497 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 13:16:26.260448 kernel: iscsi: registered transport (tcp)
Dec 16 13:16:26.282133 kernel: iscsi: registered transport (qla4xxx)
Dec 16 13:16:26.282172 kernel: QLogic iSCSI HBA Driver
Dec 16 13:16:26.303360 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:16:26.317862 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:16:26.321068 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:16:26.374221 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:16:26.377163 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 13:16:26.424498 kernel: raid6: avx2x4 gen() 30112 MB/s
Dec 16 13:16:26.442448 kernel: raid6: avx2x2 gen() 28034 MB/s
Dec 16 13:16:26.460689 kernel: raid6: avx2x1 gen() 20932 MB/s
Dec 16 13:16:26.460710 kernel: raid6: using algorithm avx2x4 gen() 30112 MB/s
Dec 16 13:16:26.482254 kernel: raid6: .... xor() 4600 MB/s, rmw enabled
Dec 16 13:16:26.482277 kernel: raid6: using avx2x2 recovery algorithm
Dec 16 13:16:26.502460 kernel: xor: automatically using best checksumming function avx
Dec 16 13:16:26.635471 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 13:16:26.643948 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:16:26.646530 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:16:26.668212 systemd-udevd[434]: Using default interface naming scheme 'v255'.
Dec 16 13:16:26.673991 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:16:26.677176 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 13:16:26.705968 dracut-pre-trigger[440]: rd.md=0: removing MD RAID activation
Dec 16 13:16:26.734496 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:16:26.736760 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:16:26.808959 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:16:26.812599 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 13:16:26.885458 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Dec 16 13:16:27.055592 kernel: cryptd: max_cpu_qlen set to 1000
Dec 16 13:16:27.053866 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:16:27.056633 kernel: libata version 3.00 loaded.
Dec 16 13:16:27.054237 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:16:27.061134 kernel: scsi host0: Virtio SCSI HBA
Dec 16 13:16:27.061355 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Dec 16 13:16:27.057713 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:16:27.066890 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:16:27.069675 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:16:27.083489 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Dec 16 13:16:27.092459 kernel: ahci 0000:00:1f.2: version 3.0
Dec 16 13:16:27.096479 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 16 13:16:27.105894 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Dec 16 13:16:27.106104 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Dec 16 13:16:27.106306 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 16 13:16:27.113446 kernel: scsi host1: ahci
Dec 16 13:16:27.113486 kernel: AES CTR mode by8 optimization enabled
Dec 16 13:16:27.118456 kernel: sd 0:0:0:0: Power-on or device reset occurred
Dec 16 13:16:27.120495 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Dec 16 13:16:27.120677 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 16 13:16:27.120831 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Dec 16 13:16:27.121799 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 16 13:16:27.125471 kernel: scsi host2: ahci
Dec 16 13:16:27.128474 kernel: scsi host3: ahci
Dec 16 13:16:27.129487 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 16 13:16:27.129514 kernel: GPT:9289727 != 167739391
Dec 16 13:16:27.129525 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 16 13:16:27.129535 kernel: GPT:9289727 != 167739391
Dec 16 13:16:27.129544 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 16 13:16:27.129553 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 13:16:27.132448 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 16 13:16:27.134496 kernel: scsi host4: ahci
Dec 16 13:16:27.134703 kernel: scsi host5: ahci
Dec 16 13:16:27.136859 kernel: scsi host6: ahci
Dec 16 13:16:27.137043 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 1
Dec 16 13:16:27.137055 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 1
Dec 16 13:16:27.137066 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 1
Dec 16 13:16:27.137075 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 1
Dec 16 13:16:27.137085 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 1
Dec 16 13:16:27.137094 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 1
Dec 16 13:16:27.227537 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Dec 16 13:16:27.323949 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:16:27.339152 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Dec 16 13:16:27.346790 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Dec 16 13:16:27.347611 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Dec 16 13:16:27.357165 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 16 13:16:27.359887 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 13:16:27.378378 disk-uuid[596]: Primary Header is updated.
Dec 16 13:16:27.378378 disk-uuid[596]: Secondary Entries is updated.
Dec 16 13:16:27.378378 disk-uuid[596]: Secondary Header is updated.
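[Editor's note: the GPT warning above fires because the backup GPT header written at image-build time sits at LBA 9289727, but the virtual disk has since been grown to 167739392 sectors, so the backup header is no longer at the last LBA. The disk-uuid step then rewrites the secondary entries and header at the new end of the disk. A sketch of the arithmetic, using only numbers from the log:

    SECTOR = 512
    image_last_lba = 9289727      # "GPT:9289727 != 167739391"
    disk_sectors = 167739392      # "[sda] 167739392 512-byte logical blocks"

    print((image_last_lba + 1) * SECTOR / 10**9)   # ~4.76 GB: original image size
    print(disk_sectors * SECTOR / 10**9)           # 85.88 GB -> "(85.9 GB/80.0 GiB)"
    print(disk_sectors * SECTOR / 2**30)           # ~79.98 GiB
]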
Dec 16 13:16:27.387487 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 13:16:27.402497 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 13:16:27.450185 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 16 13:16:27.450314 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 16 13:16:27.454473 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 16 13:16:27.454498 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 16 13:16:27.458511 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 16 13:16:27.462443 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 16 13:16:27.569267 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:16:27.595657 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:16:27.596663 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:16:27.598527 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:16:27.601633 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 13:16:27.623209 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:16:28.407879 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 13:16:28.408809 disk-uuid[597]: The operation has completed successfully.
Dec 16 13:16:28.461494 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 13:16:28.461624 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 13:16:28.486132 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 16 13:16:28.498977 sh[636]: Success
Dec 16 13:16:28.518772 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 13:16:28.518813 kernel: device-mapper: uevent: version 1.0.3
Dec 16 13:16:28.519647 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 13:16:28.531492 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Dec 16 13:16:28.570800 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 13:16:28.574498 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 16 13:16:28.583405 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 16 13:16:28.594443 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (648)
Dec 16 13:16:28.599001 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8
Dec 16 13:16:28.599021 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:16:28.608644 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 16 13:16:28.608666 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 13:16:28.612757 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 13:16:28.614476 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 16 13:16:28.616201 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:16:28.617909 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 13:16:28.618600 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 13:16:28.621191 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 13:16:28.647444 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (680)
Dec 16 13:16:28.651464 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:16:28.655455 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:16:28.659582 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 16 13:16:28.659612 kernel: BTRFS info (device sda6): turning on async discard
Dec 16 13:16:28.661666 kernel: BTRFS info (device sda6): enabling free space tree
Dec 16 13:16:28.670468 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:16:28.671466 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 13:16:28.675542 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 13:16:28.729250 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:16:28.734545 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:16:28.790911 ignition[744]: Ignition 2.22.0
Dec 16 13:16:28.790928 ignition[744]: Stage: fetch-offline
Dec 16 13:16:28.793642 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:16:28.790964 ignition[744]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:16:28.790974 ignition[744]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 16 13:16:28.791057 ignition[744]: parsed url from cmdline: ""
Dec 16 13:16:28.791062 ignition[744]: no config URL provided
Dec 16 13:16:28.791067 ignition[744]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 13:16:28.791075 ignition[744]: no config at "/usr/lib/ignition/user.ign"
Dec 16 13:16:28.798734 systemd-networkd[817]: lo: Link UP
Dec 16 13:16:28.791081 ignition[744]: failed to fetch config: resource requires networking
Dec 16 13:16:28.798739 systemd-networkd[817]: lo: Gained carrier
Dec 16 13:16:28.791358 ignition[744]: Ignition finished successfully
Dec 16 13:16:28.800331 systemd-networkd[817]: Enumeration completed
Dec 16 13:16:28.800400 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:16:28.800790 systemd-networkd[817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:16:28.800795 systemd-networkd[817]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:16:28.802130 systemd-networkd[817]: eth0: Link UP
Dec 16 13:16:28.802277 systemd-networkd[817]: eth0: Gained carrier
Dec 16 13:16:28.802286 systemd-networkd[817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:16:28.802980 systemd[1]: Reached target network.target - Network.
Dec 16 13:16:28.807539 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 16 13:16:28.833438 ignition[825]: Ignition 2.22.0
Dec 16 13:16:28.834323 ignition[825]: Stage: fetch
Dec 16 13:16:28.835105 ignition[825]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:16:28.835862 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 16 13:16:28.836850 ignition[825]: parsed url from cmdline: ""
Dec 16 13:16:28.836899 ignition[825]: no config URL provided
Dec 16 13:16:28.837571 ignition[825]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 13:16:28.837584 ignition[825]: no config at "/usr/lib/ignition/user.ign"
Dec 16 13:16:28.837610 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #1
Dec 16 13:16:28.837757 ignition[825]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 16 13:16:29.038650 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #2
Dec 16 13:16:29.038796 ignition[825]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 16 13:16:29.439030 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #3
Dec 16 13:16:29.439228 ignition[825]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 16 13:16:29.559495 systemd-networkd[817]: eth0: DHCPv4 address 172.232.20.218/24, gateway 172.232.20.1 acquired from 23.213.15.250
Dec 16 13:16:29.829641 systemd-networkd[817]: eth0: Gained IPv6LL
Dec 16 13:16:30.240128 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #4
Dec 16 13:16:30.332744 ignition[825]: PUT result: OK
Dec 16 13:16:30.333467 ignition[825]: GET http://169.254.169.254/v1/user-data: attempt #1
Dec 16 13:16:30.444314 ignition[825]: GET result: OK
Dec 16 13:16:30.444741 ignition[825]: parsing config with SHA512: 3f4cc99eac94df85ec883124b71a6705ef331a8c1af168467c4c0adb765467de7ceabc5959b6278daf55cc8430d75fd2122b8b73dd02c72037c6a9eaa35c030a
Dec 16 13:16:30.452457 unknown[825]: fetched base config from "system"
Dec 16 13:16:30.452723 ignition[825]: fetch: fetch complete
Dec 16 13:16:30.452464 unknown[825]: fetched base config from "system"
Dec 16 13:16:30.452728 ignition[825]: fetch: fetch passed
Dec 16 13:16:30.452470 unknown[825]: fetched user config from "akamai"
Dec 16 13:16:30.452769 ignition[825]: Ignition finished successfully
Dec 16 13:16:30.456349 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 16 13:16:30.469542 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 13:16:30.502236 ignition[834]: Ignition 2.22.0
Dec 16 13:16:30.502253 ignition[834]: Stage: kargs
Dec 16 13:16:30.502589 ignition[834]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:16:30.502600 ignition[834]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 16 13:16:30.503313 ignition[834]: kargs: kargs passed
Dec 16 13:16:30.503358 ignition[834]: Ignition finished successfully
Dec 16 13:16:30.508195 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 13:16:30.511560 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
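[Editor's note: the fetch stage above shows Ignition racing DHCP: it PUTs to the metadata service for a token, retrying with growing delays until networking is up, then GETs the user data. A minimal sketch of that loop; the URLs come from the log, but the header names follow the Linode/Akamai metadata API and are an assumption here, not something the log itself shows:

    import time
    import urllib.request

    BASE = "http://169.254.169.254/v1"

    def fetch_user_data(max_attempts=10, delay=0.1):
        for attempt in range(1, max_attempts + 1):
            try:
                req = urllib.request.Request(
                    f"{BASE}/token", method="PUT",
                    headers={"Metadata-Token-Expiry-Seconds": "300"})  # assumed header
                token = urllib.request.urlopen(req, timeout=5).read().decode()
                break
            except OSError as e:                 # e.g. "network is unreachable"
                print(f"PUT {BASE}/token: attempt #{attempt} failed: {e}")
                time.sleep(delay)
                delay = min(delay * 2, 5.0)      # capped exponential backoff
        else:
            raise RuntimeError("metadata service never became reachable")

        req = urllib.request.Request(
            f"{BASE}/user-data", headers={"Metadata-Token": token})    # assumed header
        return urllib.request.urlopen(req, timeout=5).read()

    # fetch_user_data() is only meaningful from inside a Linode/Akamai instance.
]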
Dec 16 13:16:30.540998 ignition[840]: Ignition 2.22.0
Dec 16 13:16:30.541013 ignition[840]: Stage: disks
Dec 16 13:16:30.541129 ignition[840]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:16:30.541139 ignition[840]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 16 13:16:30.541827 ignition[840]: disks: disks passed
Dec 16 13:16:30.541865 ignition[840]: Ignition finished successfully
Dec 16 13:16:30.545452 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 13:16:30.546518 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 13:16:30.547675 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 13:16:30.549103 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:16:30.550695 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:16:30.552244 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:16:30.554447 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 13:16:30.582262 systemd-fsck[848]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 16 13:16:30.585566 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 13:16:30.588491 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 13:16:30.687460 kernel: EXT4-fs (sda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none.
Dec 16 13:16:30.688327 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 13:16:30.689391 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:16:30.691516 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:16:30.693718 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 13:16:30.696510 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 16 13:16:30.696557 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 13:16:30.696581 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:16:30.703304 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 13:16:30.705725 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 13:16:30.713752 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (856)
Dec 16 13:16:30.713777 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:16:30.718161 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:16:30.727982 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 16 13:16:30.728005 kernel: BTRFS info (device sda6): turning on async discard
Dec 16 13:16:30.728016 kernel: BTRFS info (device sda6): enabling free space tree
Dec 16 13:16:30.730590 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:16:30.758752 initrd-setup-root[880]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 13:16:30.764052 initrd-setup-root[887]: cut: /sysroot/etc/group: No such file or directory
Dec 16 13:16:30.768460 initrd-setup-root[894]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 13:16:30.773446 initrd-setup-root[901]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 13:16:30.861266 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 13:16:30.863255 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 13:16:30.864567 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 13:16:30.882464 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 13:16:30.885921 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:16:30.900576 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 13:16:30.918554 ignition[969]: INFO : Ignition 2.22.0
Dec 16 13:16:30.918554 ignition[969]: INFO : Stage: mount
Dec 16 13:16:30.920556 ignition[969]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:16:30.920556 ignition[969]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 16 13:16:30.920556 ignition[969]: INFO : mount: mount passed
Dec 16 13:16:30.920556 ignition[969]: INFO : Ignition finished successfully
Dec 16 13:16:30.921671 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 13:16:30.925492 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 13:16:31.689933 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:16:31.719456 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (980)
Dec 16 13:16:31.719485 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:16:31.724792 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:16:31.729872 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 16 13:16:31.729898 kernel: BTRFS info (device sda6): turning on async discard
Dec 16 13:16:31.733682 kernel: BTRFS info (device sda6): enabling free space tree
Dec 16 13:16:31.735905 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
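
The four cut errors near the top of this stage come from initrd-setup-root seeding the user database: on first boot /sysroot/etc/passwd, group, shadow, and gshadow do not exist yet, so extracting entry names fails harmlessly. A hedged sketch of the equivalent of cut -d: -f1 on a passwd-format file (the exact script logic is an assumption; only the path and the error text come from the log):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // firstFields mimics `cut -d: -f1 <path>`: the first colon-separated
    // field of each line, which for /etc/passwd is the user name.
    func firstFields(path string) ([]string, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err // on first boot: "No such file or directory"
        }
        defer f.Close()
        var out []string
        s := bufio.NewScanner(f)
        for s.Scan() {
            if line := s.Text(); line != "" {
                out = append(out, strings.SplitN(line, ":", 2)[0])
            }
        }
        return out, s.Err()
    }

    func main() {
        users, err := firstFields("/sysroot/etc/passwd")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(users)
    }
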
Dec 16 13:16:31.763145 ignition[996]: INFO : Ignition 2.22.0
Dec 16 13:16:31.763145 ignition[996]: INFO : Stage: files
Dec 16 13:16:31.764802 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:16:31.764802 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 16 13:16:31.764802 ignition[996]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 13:16:31.764802 ignition[996]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 13:16:31.764802 ignition[996]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 13:16:31.769866 ignition[996]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 13:16:31.769866 ignition[996]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 13:16:31.769866 ignition[996]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 13:16:31.768586 unknown[996]: wrote ssh authorized keys file for user: core
Dec 16 13:16:31.773602 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Dec 16 13:16:31.773602 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Dec 16 13:16:31.965023 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 13:16:32.061008 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Dec 16 13:16:32.062719 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 13:16:32.062719 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 13:16:32.062719 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:16:32.062719 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:16:32.062719 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:16:32.062719 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:16:32.062719 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:16:32.062719 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:16:32.071668 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:16:32.071668 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:16:32.071668 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 13:16:32.071668 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 13:16:32.071668 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 13:16:32.071668 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Dec 16 13:16:32.609978 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 16 13:16:32.825734 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 13:16:32.825734 ignition[996]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 16 13:16:32.828847 ignition[996]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:16:32.828847 ignition[996]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:16:32.828847 ignition[996]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 16 13:16:32.828847 ignition[996]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 16 13:16:32.828847 ignition[996]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 16 13:16:32.828847 ignition[996]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 16 13:16:32.828847 ignition[996]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 16 13:16:32.828847 ignition[996]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 13:16:32.828847 ignition[996]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 13:16:32.828847 ignition[996]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:16:32.843979 ignition[996]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:16:32.843979 ignition[996]: INFO : files: files passed
Dec 16 13:16:32.843979 ignition[996]: INFO : Ignition finished successfully
Dec 16 13:16:32.832070 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 13:16:32.836627 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 13:16:32.846735 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 13:16:32.849107 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 13:16:32.850499 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 13:16:32.867062 initrd-setup-root-after-ignition[1026]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:16:32.867062 initrd-setup-root-after-ignition[1026]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:16:32.870162 initrd-setup-root-after-ignition[1030]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:16:32.872035 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:16:32.874589 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 13:16:32.876265 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 13:16:32.921866 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 13:16:32.921993 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 13:16:32.923921 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 13:16:32.925214 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 13:16:32.926855 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 13:16:32.927584 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 13:16:32.950942 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:16:32.954824 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 13:16:32.978745 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:16:32.980702 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:16:32.981708 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 13:16:32.983477 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 13:16:32.983731 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:16:32.985488 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 13:16:32.986677 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 13:16:32.988342 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 13:16:32.989941 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:16:32.991330 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 13:16:32.993078 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:16:32.994827 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 13:16:32.996391 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:16:32.998189 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 13:16:32.999836 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 13:16:33.001974 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 13:16:33.003704 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 13:16:33.003842 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:16:33.005768 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:16:33.006972 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:16:33.008374 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 13:16:33.008499 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:16:33.010104 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 13:16:33.010240 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:16:33.012237 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 13:16:33.012344 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:16:33.013365 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 13:16:33.013521 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 13:16:33.016488 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 13:16:33.019599 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 13:16:33.021103 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 13:16:33.021253 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:16:33.022592 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 13:16:33.022689 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:16:33.031226 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 13:16:33.031482 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 13:16:33.057458 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 13:16:33.067073 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 13:16:33.067457 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 13:16:33.072244 ignition[1050]: INFO : Ignition 2.22.0
Dec 16 13:16:33.072244 ignition[1050]: INFO : Stage: umount
Dec 16 13:16:33.072244 ignition[1050]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:16:33.072244 ignition[1050]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 16 13:16:33.072244 ignition[1050]: INFO : umount: umount passed
Dec 16 13:16:33.072244 ignition[1050]: INFO : Ignition finished successfully
Dec 16 13:16:33.073100 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 13:16:33.073240 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 13:16:33.074763 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 13:16:33.074813 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 13:16:33.076072 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 13:16:33.076129 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 13:16:33.077861 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 13:16:33.077907 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 13:16:33.079222 systemd[1]: Stopped target network.target - Network.
Dec 16 13:16:33.080905 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 13:16:33.080960 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:16:33.082332 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 13:16:33.083691 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 13:16:33.088460 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:16:33.089302 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 13:16:33.090995 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 13:16:33.092618 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 13:16:33.092665 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:16:33.094154 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 13:16:33.094194 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:16:33.095546 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 13:16:33.095599 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 13:16:33.097275 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 13:16:33.097321 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 13:16:33.099080 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 13:16:33.099128 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 13:16:33.100777 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 13:16:33.102058 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 13:16:33.105775 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 13:16:33.105897 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 13:16:33.110412 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 13:16:33.110761 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 13:16:33.110887 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 13:16:33.113363 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 13:16:33.113919 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 13:16:33.115552 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 13:16:33.115594 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:16:33.117684 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 13:16:33.118894 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 13:16:33.118946 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:16:33.121921 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 13:16:33.121969 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:16:33.124866 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 13:16:33.124918 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:16:33.125657 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 13:16:33.125705 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:16:33.129606 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:16:33.132161 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 13:16:33.132225 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:16:33.146324 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 13:16:33.146471 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 13:16:33.148462 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 13:16:33.148807 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:16:33.150213 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 13:16:33.150284 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:16:33.151570 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 13:16:33.151606 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:16:33.152949 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 13:16:33.152999 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:16:33.155180 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 13:16:33.155229 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:16:33.156724 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 13:16:33.156773 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:16:33.159626 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 13:16:33.162857 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 13:16:33.162909 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:16:33.164851 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 13:16:33.165271 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:16:33.167141 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 16 13:16:33.167190 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:16:33.168663 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 13:16:33.168709 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:16:33.170312 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:16:33.170361 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:16:33.172300 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 16 13:16:33.172357 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Dec 16 13:16:33.172401 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 16 13:16:33.172464 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:16:33.178934 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 13:16:33.179043 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 13:16:33.180198 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 13:16:33.182204 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 13:16:33.203047 systemd[1]: Switching root.
Dec 16 13:16:33.235830 systemd-journald[187]: Journal stopped
Dec 16 13:16:34.458146 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Dec 16 13:16:34.458177 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 13:16:34.458190 kernel: SELinux: policy capability open_perms=1
Dec 16 13:16:34.458200 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 13:16:34.458208 kernel: SELinux: policy capability always_check_network=0
Dec 16 13:16:34.458220 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 13:16:34.458229 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 13:16:34.458239 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 13:16:34.458248 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 13:16:34.458258 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 13:16:34.458267 kernel: audit: type=1403 audit(1765890993.392:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 13:16:34.458278 systemd[1]: Successfully loaded SELinux policy in 71.067ms.
Dec 16 13:16:34.458291 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.689ms.
Dec 16 13:16:34.458302 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:16:34.458313 systemd[1]: Detected virtualization kvm.
Dec 16 13:16:34.458322 systemd[1]: Detected architecture x86-64.
Dec 16 13:16:34.458335 systemd[1]: Detected first boot.
Dec 16 13:16:34.458345 systemd[1]: Initializing machine ID from random generator.
Dec 16 13:16:34.458355 zram_generator::config[1094]: No configuration found.
Dec 16 13:16:34.458366 kernel: Guest personality initialized and is inactive
Dec 16 13:16:34.458375 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 16 13:16:34.458385 kernel: Initialized host personality
Dec 16 13:16:34.458394 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 13:16:34.458404 systemd[1]: Populated /etc with preset unit settings.
Dec 16 13:16:34.458417 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 16 13:16:34.458451 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 13:16:34.458461 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 13:16:34.458471 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:16:34.458481 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 13:16:34.458492 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 13:16:34.458502 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 13:16:34.458515 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 13:16:34.458526 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 13:16:34.458536 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 13:16:34.458546 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 13:16:34.458556 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 13:16:34.458566 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
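
"Initializing machine ID from random generator" is the first-boot path described in machine-id(5): 16 random bytes, stamped with UUID-v4 version and variant bits, printed as 32 lowercase hex characters. A small Go sketch of that formatting (the bit-stamping mirrors what sd_id128_randomize is documented to do; treat it as an illustration, not systemd's code):

    package main

    import (
        "crypto/rand"
        "encoding/hex"
        "fmt"
    )

    func newMachineID() (string, error) {
        var b [16]byte
        if _, err := rand.Read(b[:]); err != nil {
            return "", err
        }
        // Force UUID version 4 and RFC 4122 variant bits, then render as
        // 32 lowercase hex characters (the /etc/machine-id format).
        b[6] = (b[6] & 0x0f) | 0x40
        b[8] = (b[8] & 0x3f) | 0x80
        return hex.EncodeToString(b[:]), nil
    }

    func main() {
        id, err := newMachineID()
        if err != nil {
            panic(err)
        }
        fmt.Println(id) // e.g. something like 69a8d8bd50f146beb39f5cd0f88dd281
    }
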
Dec 16 13:16:34.458577 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:16:34.458587 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 13:16:34.458599 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 13:16:34.458612 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 13:16:34.458623 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:16:34.458633 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 16 13:16:34.458644 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:16:34.458654 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:16:34.458664 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 13:16:34.458677 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 13:16:34.458687 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:16:34.458697 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 13:16:34.458709 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:16:34.458719 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:16:34.458730 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:16:34.458740 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:16:34.458751 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 13:16:34.458761 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 13:16:34.458774 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 13:16:34.458784 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:16:34.458795 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:16:34.458806 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:16:34.458818 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 13:16:34.458829 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 13:16:34.458839 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 13:16:34.458850 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 13:16:34.458860 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:16:34.458871 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 13:16:34.458881 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 13:16:34.458892 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 13:16:34.458905 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 13:16:34.458915 systemd[1]: Reached target machines.target - Containers.
Dec 16 13:16:34.458925 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 13:16:34.458936 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:16:34.458947 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:16:34.458957 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 13:16:34.458968 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:16:34.458978 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:16:34.458988 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:16:34.459001 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 13:16:34.459011 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:16:34.459022 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 13:16:34.459032 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 13:16:34.459043 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 13:16:34.459053 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 13:16:34.459064 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 13:16:34.459074 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:16:34.459087 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:16:34.459097 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:16:34.459108 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:16:34.459118 kernel: loop: module loaded
Dec 16 13:16:34.459130 kernel: ACPI: bus type drm_connector registered
Dec 16 13:16:34.459140 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 13:16:34.459150 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 13:16:34.459161 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:16:34.459173 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 13:16:34.459183 systemd[1]: Stopped verity-setup.service.
Dec 16 13:16:34.459194 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:16:34.459228 systemd-journald[1171]: Collecting audit messages is disabled.
Dec 16 13:16:34.459252 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 13:16:34.459263 kernel: fuse: init (API version 7.41)
Dec 16 13:16:34.459273 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 13:16:34.459283 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 13:16:34.459294 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 13:16:34.459304 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 13:16:34.459314 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 13:16:34.459325 systemd-journald[1171]: Journal started
Dec 16 13:16:34.459347 systemd-journald[1171]: Runtime Journal (/run/log/journal/69a8d8bd50f146beb39f5cd0f88dd281) is 8M, max 78.2M, 70.2M free.
Dec 16 13:16:34.459699 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:16:34.066354 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 13:16:34.092520 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 16 13:16:34.093410 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 13:16:34.466516 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:16:34.470520 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 13:16:34.470747 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 13:16:34.474014 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 13:16:34.475202 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:16:34.475723 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:16:34.476983 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:16:34.477240 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:16:34.478329 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:16:34.479047 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:16:34.480161 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 13:16:34.480418 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 13:16:34.481548 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:16:34.481804 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:16:34.482974 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:16:34.484253 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:16:34.485703 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 13:16:34.486962 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 13:16:34.502230 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:16:34.507515 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 13:16:34.510566 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 13:16:34.511316 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 13:16:34.511342 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:16:34.513006 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 13:16:34.517545 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 13:16:34.519784 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:16:34.522864 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 13:16:34.527739 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 13:16:34.530532 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:16:34.531571 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 13:16:34.534167 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:16:34.536532 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:16:34.538676 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 13:16:34.543897 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 13:16:34.546351 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 13:16:34.549009 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 13:16:34.582291 systemd-journald[1171]: Time spent on flushing to /var/log/journal/69a8d8bd50f146beb39f5cd0f88dd281 is 79.947ms for 1010 entries.
Dec 16 13:16:34.582291 systemd-journald[1171]: System Journal (/var/log/journal/69a8d8bd50f146beb39f5cd0f88dd281) is 8M, max 195.6M, 187.6M free.
Dec 16 13:16:34.689952 systemd-journald[1171]: Received client request to flush runtime journal.
Dec 16 13:16:34.689998 kernel: loop0: detected capacity change from 0 to 128560
Dec 16 13:16:34.690015 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 13:16:34.690044 kernel: loop1: detected capacity change from 0 to 8
Dec 16 13:16:34.595496 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 13:16:34.598182 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 13:16:34.603743 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 13:16:34.628127 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:16:34.642527 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:16:34.651086 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 13:16:34.666336 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Dec 16 13:16:34.666349 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Dec 16 13:16:34.682253 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:16:34.687771 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 13:16:34.693965 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 13:16:34.711454 kernel: loop2: detected capacity change from 0 to 224512
Dec 16 13:16:34.732065 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 13:16:34.739323 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:16:34.757445 kernel: loop3: detected capacity change from 0 to 110984
Dec 16 13:16:34.780910 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Dec 16 13:16:34.781168 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Dec 16 13:16:34.786862 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:16:34.799471 kernel: loop4: detected capacity change from 0 to 128560
Dec 16 13:16:34.822369 kernel: loop5: detected capacity change from 0 to 8
Dec 16 13:16:34.827558 kernel: loop6: detected capacity change from 0 to 224512
Dec 16 13:16:34.849448 kernel: loop7: detected capacity change from 0 to 110984
Dec 16 13:16:34.860887 (sd-merge)[1247]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Dec 16 13:16:34.862536 (sd-merge)[1247]: Merged extensions into '/usr'.
Dec 16 13:16:34.872520 systemd[1]: Reload requested from client PID 1219 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 13:16:34.872762 systemd[1]: Reloading...
Dec 16 13:16:34.983482 zram_generator::config[1276]: No configuration found.
Dec 16 13:16:35.080452 ldconfig[1214]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 13:16:35.220348 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 13:16:35.220635 systemd[1]: Reloading finished in 347 ms.
Dec 16 13:16:35.249992 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 13:16:35.251314 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 13:16:35.252594 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 13:16:35.263104 systemd[1]: Starting ensure-sysext.service...
Dec 16 13:16:35.266531 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:16:35.274676 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:16:35.291952 systemd[1]: Reload requested from client PID 1317 ('systemctl') (unit ensure-sysext.service)...
Dec 16 13:16:35.291971 systemd[1]: Reloading...
Dec 16 13:16:35.292622 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 13:16:35.292876 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 13:16:35.293193 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 13:16:35.293498 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 16 13:16:35.294567 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 16 13:16:35.294807 systemd-tmpfiles[1318]: ACLs are not supported, ignoring.
Dec 16 13:16:35.294875 systemd-tmpfiles[1318]: ACLs are not supported, ignoring.
Dec 16 13:16:35.301534 systemd-tmpfiles[1318]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:16:35.301552 systemd-tmpfiles[1318]: Skipping /boot
Dec 16 13:16:35.326931 systemd-tmpfiles[1318]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:16:35.326950 systemd-tmpfiles[1318]: Skipping /boot
Dec 16 13:16:35.336228 systemd-udevd[1319]: Using default interface naming scheme 'v255'.
Dec 16 13:16:35.381772 zram_generator::config[1348]: No configuration found.
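
The (sd-merge) lines above show systemd-sysext stacking the four extension images over /usr, which it does with a read-only overlayfs mount whose lowerdirs are the extensions plus the base /usr. A Go sketch of such a mount (the /run/sysext staging paths are hypothetical; systemd's real staging directories differ):

    package main

    import (
        "fmt"
        "strings"

        "golang.org/x/sys/unix"
    )

    func main() {
        // Hypothetical per-extension hierarchies for the four names in the log.
        lowers := []string{
            "/run/sysext/containerd-flatcar/usr",
            "/run/sysext/docker-flatcar/usr",
            "/run/sysext/kubernetes/usr",
            "/run/sysext/oem-akamai/usr",
            "/usr", // the base image forms the bottom layer
        }
        opts := "lowerdir=" + strings.Join(lowers, ":")
        // A read-only overlay needs no upperdir/workdir; the merged view
        // replaces /usr, which is what "Merged extensions into '/usr'" records.
        if err := unix.Mount("overlay", "/usr", "overlay", unix.MS_RDONLY, opts); err != nil {
            fmt.Println("mount:", err) // requires root (CAP_SYS_ADMIN)
        }
    }
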
Dec 16 13:16:35.617457 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 13:16:35.635443 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 16 13:16:35.646448 kernel: ACPI: button: Power Button [PWRF]
Dec 16 13:16:35.652444 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 16 13:16:35.652690 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 16 13:16:35.671036 systemd[1]: Reloading finished in 377 ms.
Dec 16 13:16:35.678916 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:16:35.681963 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:16:35.704821 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 16 13:16:35.710588 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 13:16:35.715822 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 13:16:35.720251 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 13:16:35.726513 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:16:35.733372 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:16:35.742918 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 13:16:35.748683 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:16:35.748849 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:16:35.751804 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:16:35.756832 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:16:35.760649 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:16:35.762461 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:16:35.762561 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:16:35.762643 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:16:35.770130 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 13:16:35.774306 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:16:35.774488 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:16:35.774646 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:16:35.774729 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:16:35.774808 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:16:35.779295 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:16:35.779551 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:16:35.782661 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:16:35.783526 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:16:35.783623 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:16:35.783744 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:16:35.791637 systemd[1]: Finished ensure-sysext.service.
Dec 16 13:16:35.803529 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 16 13:16:35.807469 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 13:16:35.821979 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 13:16:35.825086 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 13:16:35.833747 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 13:16:35.838546 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 13:16:35.845589 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:16:35.847496 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:16:35.849849 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:16:35.854667 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:16:35.857915 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:16:35.875165 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:16:35.875450 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:16:35.877588 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:16:35.877854 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:16:35.879110 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 13:16:35.881805 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:16:35.893742 augenrules[1480]: No rules
Dec 16 13:16:35.897153 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 13:16:35.897538 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 13:16:35.904457 kernel: EDAC MC: Ver: 3.0.0
Dec 16 13:16:35.945671 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 16 13:16:35.954614 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 13:16:35.958131 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:16:35.992477 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 13:16:36.019391 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 13:16:36.160202 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:16:36.187479 systemd-resolved[1440]: Positive Trust Anchors:
Dec 16 13:16:36.187768 systemd-resolved[1440]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:16:36.187837 systemd-resolved[1440]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:16:36.188055 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 16 13:16:36.188900 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 13:16:36.190123 systemd-networkd[1439]: lo: Link UP
Dec 16 13:16:36.190137 systemd-networkd[1439]: lo: Gained carrier
Dec 16 13:16:36.193302 systemd-resolved[1440]: Defaulting to hostname 'linux'.
Dec 16 13:16:36.193668 systemd-timesyncd[1454]: No network connectivity, watching for changes.
Dec 16 13:16:36.193790 systemd-networkd[1439]: Enumeration completed
Dec 16 13:16:36.193859 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:16:36.195116 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:16:36.195131 systemd-networkd[1439]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:16:36.195542 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:16:36.196375 systemd[1]: Reached target network.target - Network.
Dec 16 13:16:36.197631 systemd-networkd[1439]: eth0: Link UP
Dec 16 13:16:36.197789 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:16:36.198660 systemd-networkd[1439]: eth0: Gained carrier
Dec 16 13:16:36.198683 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:16:36.199024 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:16:36.199912 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 13:16:36.200713 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 13:16:36.201478 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 16 13:16:36.202364 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 13:16:36.203314 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 13:16:36.204257 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 13:16:36.205016 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 13:16:36.205047 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:16:36.205863 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:16:36.207614 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 13:16:36.210026 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 13:16:36.233934 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 13:16:36.234840 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 13:16:36.235602 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 13:16:36.252219 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 13:16:36.253714 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 13:16:36.255733 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 13:16:36.257464 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 13:16:36.259956 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 13:16:36.262132 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:16:36.263104 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:16:36.264033 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:16:36.264069 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:16:36.275393 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 13:16:36.277641 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 16 13:16:36.282621 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 13:16:36.290520 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 13:16:36.292908 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 13:16:36.297162 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 13:16:36.297892 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 13:16:36.300633 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 16 13:16:36.308750 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 13:16:36.312582 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 13:16:36.319086 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 13:16:36.322009 jq[1515]: false
Dec 16 13:16:36.322954 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 13:16:36.350275 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 13:16:36.351992 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
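
The "Listening on ... Socket" units above hold the sockets on the daemons' behalf; a socket-activated service starts on first connection and inherits its listener via the sd_listen_fds(3) protocol: LISTEN_PID names the intended receiver, LISTEN_FDS counts the sockets, and the inherited file descriptors start at 3. A minimal socket-activated service in Go:

    package main

    import (
        "fmt"
        "net"
        "os"
        "strconv"
    )

    // listenFDs implements the receiving side of sd_listen_fds(3):
    // systemd passes inherited sockets starting at fd 3 and records the
    // count in LISTEN_FDS; LISTEN_PID guards against inheriting fds
    // meant for another process.
    func listenFDs() ([]net.Listener, error) {
        if pid, _ := strconv.Atoi(os.Getenv("LISTEN_PID")); pid != os.Getpid() {
            return nil, fmt.Errorf("not socket-activated")
        }
        n, err := strconv.Atoi(os.Getenv("LISTEN_FDS"))
        if err != nil {
            return nil, err
        }
        var ls []net.Listener
        for fd := 3; fd < 3+n; fd++ {
            f := os.NewFile(uintptr(fd), "listen-fd")
            l, err := net.FileListener(f) // dups the fd internally
            f.Close()
            if err != nil {
                return nil, err
            }
            ls = append(ls, l)
        }
        return ls, nil
    }

    func main() {
        ls, err := listenFDs()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        conn, err := ls[0].Accept() // serve the first inherited socket
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Fprintln(conn, "hello from a socket-activated service")
        conn.Close()
    }
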
Dec 16 13:16:36.353652 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 13:16:36.354253 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 13:16:36.357331 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 13:16:36.365219 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 13:16:36.369034 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 13:16:36.370280 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 13:16:36.371155 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 13:16:36.381516 update_engine[1528]: I20251216 13:16:36.381456 1528 main.cc:92] Flatcar Update Engine starting Dec 16 13:16:36.386207 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 13:16:36.387208 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 13:16:36.388364 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Refreshing passwd entry cache Dec 16 13:16:36.388376 oslogin_cache_refresh[1517]: Refreshing passwd entry cache Dec 16 13:16:36.404900 coreos-metadata[1512]: Dec 16 13:16:36.404 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Dec 16 13:16:36.406391 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Failure getting users, quitting Dec 16 13:16:36.407819 oslogin_cache_refresh[1517]: Failure getting users, quitting Dec 16 13:16:36.407899 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:16:36.407929 oslogin_cache_refresh[1517]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:16:36.408020 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Refreshing group entry cache Dec 16 13:16:36.408372 oslogin_cache_refresh[1517]: Refreshing group entry cache Dec 16 13:16:36.408945 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Failure getting groups, quitting Dec 16 13:16:36.410489 oslogin_cache_refresh[1517]: Failure getting groups, quitting Dec 16 13:16:36.410594 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:16:36.410625 oslogin_cache_refresh[1517]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:16:36.417090 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 13:16:36.417367 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 13:16:36.421108 extend-filesystems[1516]: Found /dev/sda6 Dec 16 13:16:36.432958 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 13:16:36.433239 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 13:16:36.438968 jq[1529]: true Dec 16 13:16:36.445758 extend-filesystems[1516]: Found /dev/sda9 Dec 16 13:16:36.448808 (ntainerd)[1552]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 13:16:36.454698 dbus-daemon[1513]: [system] SELinux support is enabled Dec 16 13:16:36.454872 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
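The oslogin_cache_refresh messages above show a refresh that fails to fetch users and groups, writes empty cache files, and removes the stale `.bak` copies. A rough sketch of that rotate-then-discard pattern; the helper below is hypothetical, and the real google_oslogin_nss_cache logic may differ:

import os

# Hypothetical sketch: rebuild a cache file, keeping the old copy as a
# .bak fallback, but drop the fallback if the fresh cache came out empty
# ("Produced empty passwd cache file, removing ...bak" above).
def refresh_cache(path: str, entries: list[str]) -> None:
    bak = path + ".bak"
    if os.path.exists(path):
        os.replace(path, bak)  # preserve the previous cache
    with open(path, "w") as f:
        f.write("\n".join(entries))
    if not entries and os.path.exists(bak):
        os.remove(bak)  # an empty result invalidates the stale fallback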
Dec 16 13:16:36.455991 tar[1536]: linux-amd64/LICENSE Dec 16 13:16:36.463769 tar[1536]: linux-amd64/helm Dec 16 13:16:36.460768 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 13:16:36.463931 extend-filesystems[1516]: Checking size of /dev/sda9 Dec 16 13:16:36.460797 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 13:16:36.462375 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 13:16:36.462391 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 13:16:36.466195 jq[1556]: true Dec 16 13:16:36.480116 systemd[1]: Started update-engine.service - Update Engine. Dec 16 13:16:36.483479 update_engine[1528]: I20251216 13:16:36.483232 1528 update_check_scheduler.cc:74] Next update check in 9m55s Dec 16 13:16:36.502649 extend-filesystems[1516]: Resized partition /dev/sda9 Dec 16 13:16:36.507260 extend-filesystems[1565]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 13:16:36.519303 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks Dec 16 13:16:36.514681 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 13:16:36.615804 systemd-logind[1525]: Watching system buttons on /dev/input/event2 (Power Button) Dec 16 13:16:36.615842 systemd-logind[1525]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 13:16:36.622053 systemd-logind[1525]: New seat seat0. Dec 16 13:16:36.625651 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 13:16:36.653009 bash[1581]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:16:36.652828 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 13:16:36.660130 systemd[1]: Starting sshkeys.service... Dec 16 13:16:36.668972 sshd_keygen[1547]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 13:16:36.747134 containerd[1552]: time="2025-12-16T13:16:36Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 13:16:36.748252 containerd[1552]: time="2025-12-16T13:16:36.747643810Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 13:16:36.761619 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 13:16:36.766519 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
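update_engine above schedules its first check at a randomized offset ("Next update check in 9m55s"). A sketch of such jittered polling, under the assumption of a base period with symmetric fuzz; the actual constants and distribution in Flatcar's update_engine may differ:

import random

# Assumed scheduling model: a fixed base period plus bounded jitter, so
# fleets of machines do not poll the update server in lockstep.
def next_check_seconds(base: int = 600, fuzz: int = 60) -> int:
    return base + random.randint(-fuzz // 2, fuzz // 2)

print(next_check_seconds())  # e.g. 595 (9m55s), as in the log above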
Dec 16 13:16:36.769567 containerd[1552]: time="2025-12-16T13:16:36.766791830Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.29µs" Dec 16 13:16:36.769567 containerd[1552]: time="2025-12-16T13:16:36.766810590Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 13:16:36.769567 containerd[1552]: time="2025-12-16T13:16:36.766825500Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 13:16:36.769567 containerd[1552]: time="2025-12-16T13:16:36.767005820Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 13:16:36.769567 containerd[1552]: time="2025-12-16T13:16:36.767020920Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 13:16:36.769567 containerd[1552]: time="2025-12-16T13:16:36.767041050Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:16:36.769567 containerd[1552]: time="2025-12-16T13:16:36.767099040Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:16:36.769567 containerd[1552]: time="2025-12-16T13:16:36.767110020Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:16:36.769567 containerd[1552]: time="2025-12-16T13:16:36.767333950Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:16:36.769567 containerd[1552]: time="2025-12-16T13:16:36.767346410Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:16:36.769567 containerd[1552]: time="2025-12-16T13:16:36.767355690Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:16:36.769567 containerd[1552]: time="2025-12-16T13:16:36.767363150Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 13:16:36.775190 containerd[1552]: time="2025-12-16T13:16:36.767479000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 13:16:36.775190 containerd[1552]: time="2025-12-16T13:16:36.767701810Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:16:36.775190 containerd[1552]: time="2025-12-16T13:16:36.767739360Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:16:36.775190 containerd[1552]: time="2025-12-16T13:16:36.767749390Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 13:16:36.775190 containerd[1552]: time="2025-12-16T13:16:36.767791840Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 13:16:36.775190 containerd[1552]: 
time="2025-12-16T13:16:36.768043020Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 13:16:36.775190 containerd[1552]: time="2025-12-16T13:16:36.768125900Z" level=info msg="metadata content store policy set" policy=shared Dec 16 13:16:36.783657 containerd[1552]: time="2025-12-16T13:16:36.783624820Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 13:16:36.783754 containerd[1552]: time="2025-12-16T13:16:36.783674170Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 13:16:36.783754 containerd[1552]: time="2025-12-16T13:16:36.783762260Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 13:16:36.783827 containerd[1552]: time="2025-12-16T13:16:36.783775990Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 13:16:36.783827 containerd[1552]: time="2025-12-16T13:16:36.783799170Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 13:16:36.783827 containerd[1552]: time="2025-12-16T13:16:36.783808080Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 13:16:36.783827 containerd[1552]: time="2025-12-16T13:16:36.783817840Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 13:16:36.783827 containerd[1552]: time="2025-12-16T13:16:36.783827430Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 13:16:36.783952 containerd[1552]: time="2025-12-16T13:16:36.783837650Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 13:16:36.783952 containerd[1552]: time="2025-12-16T13:16:36.783846690Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 13:16:36.783952 containerd[1552]: time="2025-12-16T13:16:36.783855110Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 13:16:36.783952 containerd[1552]: time="2025-12-16T13:16:36.783865340Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 13:16:36.784362 containerd[1552]: time="2025-12-16T13:16:36.783970270Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 13:16:36.784362 containerd[1552]: time="2025-12-16T13:16:36.783987340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 13:16:36.784362 containerd[1552]: time="2025-12-16T13:16:36.783999810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 13:16:36.784362 containerd[1552]: time="2025-12-16T13:16:36.784009790Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 13:16:36.784362 containerd[1552]: time="2025-12-16T13:16:36.784019520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 13:16:36.784362 containerd[1552]: time="2025-12-16T13:16:36.784028660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 13:16:36.784362 containerd[1552]: 
time="2025-12-16T13:16:36.784037970Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 13:16:36.784362 containerd[1552]: time="2025-12-16T13:16:36.784046810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 13:16:36.784362 containerd[1552]: time="2025-12-16T13:16:36.784056350Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 13:16:36.784362 containerd[1552]: time="2025-12-16T13:16:36.784065240Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 13:16:36.784362 containerd[1552]: time="2025-12-16T13:16:36.784074620Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 13:16:36.784362 containerd[1552]: time="2025-12-16T13:16:36.784114510Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 13:16:36.784362 containerd[1552]: time="2025-12-16T13:16:36.784124820Z" level=info msg="Start snapshots syncer" Dec 16 13:16:36.784362 containerd[1552]: time="2025-12-16T13:16:36.784144400Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 13:16:36.785294 containerd[1552]: time="2025-12-16T13:16:36.785203720Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 13:16:36.786140 locksmithd[1562]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 13:16:36.786867 containerd[1552]: time="2025-12-16T13:16:36.786543260Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 13:16:36.786867 containerd[1552]: time="2025-12-16T13:16:36.786694970Z" 
level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 13:16:36.787482 containerd[1552]: time="2025-12-16T13:16:36.787293930Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 13:16:36.787482 containerd[1552]: time="2025-12-16T13:16:36.787325380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 13:16:36.787482 containerd[1552]: time="2025-12-16T13:16:36.787338630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 13:16:36.788564 containerd[1552]: time="2025-12-16T13:16:36.788529320Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 13:16:36.788596 containerd[1552]: time="2025-12-16T13:16:36.788564880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 13:16:36.788596 containerd[1552]: time="2025-12-16T13:16:36.788581180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 13:16:36.788632 containerd[1552]: time="2025-12-16T13:16:36.788603810Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 13:16:36.788936 containerd[1552]: time="2025-12-16T13:16:36.788628680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 13:16:36.788936 containerd[1552]: time="2025-12-16T13:16:36.788642300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 13:16:36.788936 containerd[1552]: time="2025-12-16T13:16:36.788654710Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 13:16:36.788936 containerd[1552]: time="2025-12-16T13:16:36.788690510Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:16:36.788936 containerd[1552]: time="2025-12-16T13:16:36.788703650Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:16:36.788936 containerd[1552]: time="2025-12-16T13:16:36.788714200Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:16:36.788936 containerd[1552]: time="2025-12-16T13:16:36.788725830Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:16:36.788936 containerd[1552]: time="2025-12-16T13:16:36.788735940Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 13:16:36.788936 containerd[1552]: time="2025-12-16T13:16:36.788744960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 13:16:36.788936 containerd[1552]: time="2025-12-16T13:16:36.788763470Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 13:16:36.788936 containerd[1552]: time="2025-12-16T13:16:36.788781190Z" level=info msg="runtime interface created" Dec 16 13:16:36.788936 containerd[1552]: time="2025-12-16T13:16:36.788786840Z" level=info msg="created NRI interface" Dec 16 13:16:36.788936 containerd[1552]: 
time="2025-12-16T13:16:36.788795250Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 13:16:36.788936 containerd[1552]: time="2025-12-16T13:16:36.788808120Z" level=info msg="Connect containerd service" Dec 16 13:16:36.788936 containerd[1552]: time="2025-12-16T13:16:36.788827380Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 13:16:36.791201 containerd[1552]: time="2025-12-16T13:16:36.791144920Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:16:36.806453 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 13:16:36.813281 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 13:16:36.838199 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 13:16:36.839570 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 13:16:36.844180 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 13:16:36.872898 coreos-metadata[1600]: Dec 16 13:16:36.872 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Dec 16 13:16:36.875826 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 13:16:36.881268 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 13:16:36.884745 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 13:16:36.886635 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 13:16:36.913375 containerd[1552]: time="2025-12-16T13:16:36.913328060Z" level=info msg="Start subscribing containerd event" Dec 16 13:16:36.913442 containerd[1552]: time="2025-12-16T13:16:36.913381320Z" level=info msg="Start recovering state" Dec 16 13:16:36.913536 containerd[1552]: time="2025-12-16T13:16:36.913512270Z" level=info msg="Start event monitor" Dec 16 13:16:36.913773 containerd[1552]: time="2025-12-16T13:16:36.913731410Z" level=info msg="Start cni network conf syncer for default" Dec 16 13:16:36.913773 containerd[1552]: time="2025-12-16T13:16:36.913741750Z" level=info msg="Start streaming server" Dec 16 13:16:36.913773 containerd[1552]: time="2025-12-16T13:16:36.913755040Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 13:16:36.913773 containerd[1552]: time="2025-12-16T13:16:36.913761680Z" level=info msg="runtime interface starting up..." Dec 16 13:16:36.913773 containerd[1552]: time="2025-12-16T13:16:36.913767670Z" level=info msg="starting plugins..." Dec 16 13:16:36.913864 containerd[1552]: time="2025-12-16T13:16:36.913781080Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 13:16:36.914289 containerd[1552]: time="2025-12-16T13:16:36.914181190Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 13:16:36.914289 containerd[1552]: time="2025-12-16T13:16:36.914235450Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 13:16:36.914357 systemd[1]: Started containerd.service - containerd container runtime. 
Dec 16 13:16:36.915438 containerd[1552]: time="2025-12-16T13:16:36.915374350Z" level=info msg="containerd successfully booted in 0.169096s" Dec 16 13:16:36.932570 dbus-daemon[1513]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1439 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 16 13:16:36.933285 systemd-networkd[1439]: eth0: DHCPv4 address 172.232.20.218/24, gateway 172.232.20.1 acquired from 23.213.15.250 Dec 16 13:16:36.937662 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 16 13:16:36.940080 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection. Dec 16 13:16:36.951466 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Dec 16 13:16:36.962024 extend-filesystems[1565]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 16 13:16:36.962024 extend-filesystems[1565]: old_desc_blocks = 1, new_desc_blocks = 10 Dec 16 13:16:36.962024 extend-filesystems[1565]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Dec 16 13:16:36.968857 extend-filesystems[1516]: Resized filesystem in /dev/sda9 Dec 16 13:16:36.964217 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 13:16:36.965051 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 13:16:37.015186 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 16 13:16:37.016341 dbus-daemon[1513]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 16 13:16:37.017757 dbus-daemon[1513]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1628 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 16 13:16:37.023667 systemd[1]: Starting polkit.service - Authorization Manager... Dec 16 13:16:37.771926 systemd-timesyncd[1454]: Contacted time server 192.155.94.72:123 (1.flatcar.pool.ntp.org). Dec 16 13:16:37.772144 systemd-timesyncd[1454]: Initial clock synchronization to Tue 2025-12-16 13:16:37.771292 UTC. Dec 16 13:16:37.772673 systemd-resolved[1440]: Clock change detected. Flushing caches. Dec 16 13:16:37.811868 polkitd[1631]: Started polkitd version 126 Dec 16 13:16:37.816107 polkitd[1631]: Loading rules from directory /etc/polkit-1/rules.d Dec 16 13:16:37.816359 polkitd[1631]: Loading rules from directory /run/polkit-1/rules.d Dec 16 13:16:37.816402 polkitd[1631]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 13:16:37.816622 polkitd[1631]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 16 13:16:37.816655 polkitd[1631]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 13:16:37.816690 polkitd[1631]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 16 13:16:37.817422 polkitd[1631]: Finished loading, compiling and executing 2 rules Dec 16 13:16:37.818985 systemd[1]: Started polkit.service - Authorization Manager. 
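The extend-filesystems output above grows /dev/sda9 online from 553472 to 20360187 blocks of 4 KiB each, i.e. from roughly 2.1 GiB to roughly 77.7 GiB, as this small check confirms:

BLOCK = 4096  # 4 KiB ext4 block size, per the resize2fs output above

before = 553_472 * BLOCK     # 2,267,021,312 bytes ~ 2.11 GiB
after = 20_360_187 * BLOCK   # 83,395,325,952 bytes ~ 77.67 GiB

print(before / 2**30, after / 2**30)  # ~2.11, ~77.67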
Dec 16 13:16:37.819178 dbus-daemon[1513]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 16 13:16:37.819508 polkitd[1631]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 16 13:16:37.824149 tar[1536]: linux-amd64/README.md Dec 16 13:16:37.831529 systemd-resolved[1440]: System hostname changed to '172-232-20-218'. Dec 16 13:16:37.831631 systemd-hostnamed[1628]: Hostname set to <172-232-20-218> (transient) Dec 16 13:16:37.842404 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 13:16:38.126610 coreos-metadata[1512]: Dec 16 13:16:38.126 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Dec 16 13:16:38.229487 coreos-metadata[1512]: Dec 16 13:16:38.229 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Dec 16 13:16:38.430995 coreos-metadata[1512]: Dec 16 13:16:38.430 INFO Fetch successful Dec 16 13:16:38.430995 coreos-metadata[1512]: Dec 16 13:16:38.430 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Dec 16 13:16:38.477735 systemd-networkd[1439]: eth0: Gained IPv6LL Dec 16 13:16:38.480899 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 13:16:38.482310 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 13:16:38.485250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:16:38.488769 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 13:16:38.515270 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 13:16:38.594240 coreos-metadata[1600]: Dec 16 13:16:38.594 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Dec 16 13:16:38.685330 coreos-metadata[1600]: Dec 16 13:16:38.685 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Dec 16 13:16:38.689498 coreos-metadata[1512]: Dec 16 13:16:38.689 INFO Fetch successful Dec 16 13:16:38.785098 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 13:16:38.786553 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 13:16:38.819103 coreos-metadata[1600]: Dec 16 13:16:38.819 INFO Fetch successful Dec 16 13:16:38.840427 update-ssh-keys[1678]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:16:38.842040 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 13:16:38.844424 systemd[1]: Finished sshkeys.service. Dec 16 13:16:39.345904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:16:39.347048 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 13:16:39.349228 systemd[1]: Startup finished in 2.942s (kernel) + 7.732s (initrd) + 5.314s (userspace) = 15.988s. Dec 16 13:16:39.353067 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:16:39.829176 kubelet[1687]: E1216 13:16:39.828869 1687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:16:39.832229 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:16:39.832422 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
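The coreos-metadata entries above follow a token-then-fetch pattern against the link-local metadata service: PUT /v1/token, then GET /v1/instance, /v1/network, and /v1/ssh-keys. A hedged sketch of that flow; the header names below (Metadata-Token-Expiry-Seconds, Metadata-Token) are assumptions about the Linode metadata API, not taken from the log:

import urllib.request

BASE = "http://169.254.169.254/v1"

# Assumed header names -- verify against the metadata service docs.
req = urllib.request.Request(
    f"{BASE}/token", method="PUT",
    headers={"Metadata-Token-Expiry-Seconds": "3600"})
token = urllib.request.urlopen(req).read().decode()

req = urllib.request.Request(
    f"{BASE}/instance", headers={"Metadata-Token": token})
instance = urllib.request.urlopen(req).read().decode()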
Dec 16 13:16:39.833044 systemd[1]: kubelet.service: Consumed 829ms CPU time, 265.2M memory peak. Dec 16 13:16:41.132050 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 13:16:41.133501 systemd[1]: Started sshd@0-172.232.20.218:22-139.178.89.65:50708.service - OpenSSH per-connection server daemon (139.178.89.65:50708). Dec 16 13:16:41.495624 sshd[1698]: Accepted publickey for core from 139.178.89.65 port 50708 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:16:41.497464 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:16:41.504482 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 13:16:41.506087 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 13:16:41.514515 systemd-logind[1525]: New session 1 of user core. Dec 16 13:16:41.526312 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 13:16:41.529363 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 13:16:41.540511 (systemd)[1703]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 13:16:41.543165 systemd-logind[1525]: New session c1 of user core. Dec 16 13:16:41.666743 systemd[1703]: Queued start job for default target default.target. Dec 16 13:16:41.674018 systemd[1703]: Created slice app.slice - User Application Slice. Dec 16 13:16:41.674046 systemd[1703]: Reached target paths.target - Paths. Dec 16 13:16:41.674089 systemd[1703]: Reached target timers.target - Timers. Dec 16 13:16:41.675475 systemd[1703]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 13:16:41.703945 systemd[1703]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 13:16:41.704062 systemd[1703]: Reached target sockets.target - Sockets. Dec 16 13:16:41.704099 systemd[1703]: Reached target basic.target - Basic System. Dec 16 13:16:41.704142 systemd[1703]: Reached target default.target - Main User Target. Dec 16 13:16:41.704176 systemd[1703]: Startup finished in 154ms. Dec 16 13:16:41.704314 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 13:16:41.719684 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 13:16:41.981402 systemd[1]: Started sshd@1-172.232.20.218:22-139.178.89.65:50712.service - OpenSSH per-connection server daemon (139.178.89.65:50712). Dec 16 13:16:42.325296 sshd[1714]: Accepted publickey for core from 139.178.89.65 port 50712 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:16:42.328830 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:16:42.335606 systemd-logind[1525]: New session 2 of user core. Dec 16 13:16:42.342676 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 13:16:42.580820 sshd[1717]: Connection closed by 139.178.89.65 port 50712 Dec 16 13:16:42.581612 sshd-session[1714]: pam_unix(sshd:session): session closed for user core Dec 16 13:16:42.586380 systemd-logind[1525]: Session 2 logged out. Waiting for processes to exit. Dec 16 13:16:42.587318 systemd[1]: sshd@1-172.232.20.218:22-139.178.89.65:50712.service: Deactivated successfully. Dec 16 13:16:42.590090 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 13:16:42.592184 systemd-logind[1525]: Removed session 2. 
Dec 16 13:16:42.641516 systemd[1]: Started sshd@2-172.232.20.218:22-139.178.89.65:50720.service - OpenSSH per-connection server daemon (139.178.89.65:50720). Dec 16 13:16:42.988735 sshd[1723]: Accepted publickey for core from 139.178.89.65 port 50720 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:16:42.989919 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:16:42.994676 systemd-logind[1525]: New session 3 of user core. Dec 16 13:16:43.001675 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 13:16:43.235491 sshd[1726]: Connection closed by 139.178.89.65 port 50720 Dec 16 13:16:43.236130 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Dec 16 13:16:43.240381 systemd-logind[1525]: Session 3 logged out. Waiting for processes to exit. Dec 16 13:16:43.241350 systemd[1]: sshd@2-172.232.20.218:22-139.178.89.65:50720.service: Deactivated successfully. Dec 16 13:16:43.243367 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 13:16:43.244962 systemd-logind[1525]: Removed session 3. Dec 16 13:16:43.307795 systemd[1]: Started sshd@3-172.232.20.218:22-139.178.89.65:50722.service - OpenSSH per-connection server daemon (139.178.89.65:50722). Dec 16 13:16:43.655381 sshd[1733]: Accepted publickey for core from 139.178.89.65 port 50722 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:16:43.656837 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:16:43.661083 systemd-logind[1525]: New session 4 of user core. Dec 16 13:16:43.675689 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 13:16:43.911355 sshd[1736]: Connection closed by 139.178.89.65 port 50722 Dec 16 13:16:43.912208 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Dec 16 13:16:43.922192 systemd[1]: sshd@3-172.232.20.218:22-139.178.89.65:50722.service: Deactivated successfully. Dec 16 13:16:43.924329 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 13:16:43.925167 systemd-logind[1525]: Session 4 logged out. Waiting for processes to exit. Dec 16 13:16:43.926763 systemd-logind[1525]: Removed session 4. Dec 16 13:16:43.971292 systemd[1]: Started sshd@4-172.232.20.218:22-139.178.89.65:50734.service - OpenSSH per-connection server daemon (139.178.89.65:50734). Dec 16 13:16:44.309225 sshd[1742]: Accepted publickey for core from 139.178.89.65 port 50734 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:16:44.310824 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:16:44.318434 systemd-logind[1525]: New session 5 of user core. Dec 16 13:16:44.324681 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 13:16:44.511214 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 13:16:44.511647 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:16:44.532227 sudo[1746]: pam_unix(sudo:session): session closed for user root Dec 16 13:16:44.582830 sshd[1745]: Connection closed by 139.178.89.65 port 50734 Dec 16 13:16:44.583366 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Dec 16 13:16:44.587382 systemd[1]: sshd@4-172.232.20.218:22-139.178.89.65:50734.service: Deactivated successfully. Dec 16 13:16:44.589226 systemd[1]: session-5.scope: Deactivated successfully. 
Dec 16 13:16:44.591450 systemd-logind[1525]: Session 5 logged out. Waiting for processes to exit. Dec 16 13:16:44.592716 systemd-logind[1525]: Removed session 5. Dec 16 13:16:44.642780 systemd[1]: Started sshd@5-172.232.20.218:22-139.178.89.65:50746.service - OpenSSH per-connection server daemon (139.178.89.65:50746). Dec 16 13:16:44.977440 sshd[1752]: Accepted publickey for core from 139.178.89.65 port 50746 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:16:44.979300 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:16:44.984213 systemd-logind[1525]: New session 6 of user core. Dec 16 13:16:44.986675 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 13:16:45.176028 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 13:16:45.176369 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:16:45.181143 sudo[1757]: pam_unix(sudo:session): session closed for user root Dec 16 13:16:45.186852 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 13:16:45.187158 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:16:45.198216 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:16:45.235860 augenrules[1779]: No rules Dec 16 13:16:45.237342 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:16:45.237690 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:16:45.238615 sudo[1756]: pam_unix(sudo:session): session closed for user root Dec 16 13:16:45.289473 sshd[1755]: Connection closed by 139.178.89.65 port 50746 Dec 16 13:16:45.289942 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Dec 16 13:16:45.293378 systemd[1]: sshd@5-172.232.20.218:22-139.178.89.65:50746.service: Deactivated successfully. Dec 16 13:16:45.295426 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 13:16:45.296746 systemd-logind[1525]: Session 6 logged out. Waiting for processes to exit. Dec 16 13:16:45.298602 systemd-logind[1525]: Removed session 6. Dec 16 13:16:45.355148 systemd[1]: Started sshd@6-172.232.20.218:22-139.178.89.65:50754.service - OpenSSH per-connection server daemon (139.178.89.65:50754). Dec 16 13:16:45.717358 sshd[1788]: Accepted publickey for core from 139.178.89.65 port 50754 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:16:45.718918 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:16:45.723951 systemd-logind[1525]: New session 7 of user core. Dec 16 13:16:45.728687 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 13:16:45.923464 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 13:16:45.923803 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:16:46.202306 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 16 13:16:46.216894 (dockerd)[1810]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 13:16:46.416049 dockerd[1810]: time="2025-12-16T13:16:46.415792998Z" level=info msg="Starting up" Dec 16 13:16:46.416826 dockerd[1810]: time="2025-12-16T13:16:46.416799818Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 13:16:46.427538 dockerd[1810]: time="2025-12-16T13:16:46.427504138Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 13:16:46.469921 dockerd[1810]: time="2025-12-16T13:16:46.469835788Z" level=info msg="Loading containers: start." Dec 16 13:16:46.480578 kernel: Initializing XFRM netlink socket Dec 16 13:16:46.718820 systemd-networkd[1439]: docker0: Link UP Dec 16 13:16:46.721474 dockerd[1810]: time="2025-12-16T13:16:46.721397598Z" level=info msg="Loading containers: done." Dec 16 13:16:46.736136 dockerd[1810]: time="2025-12-16T13:16:46.736093778Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 13:16:46.736269 dockerd[1810]: time="2025-12-16T13:16:46.736157348Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 13:16:46.736269 dockerd[1810]: time="2025-12-16T13:16:46.736234678Z" level=info msg="Initializing buildkit" Dec 16 13:16:46.755189 dockerd[1810]: time="2025-12-16T13:16:46.755161498Z" level=info msg="Completed buildkit initialization" Dec 16 13:16:46.761130 dockerd[1810]: time="2025-12-16T13:16:46.761090628Z" level=info msg="Daemon has completed initialization" Dec 16 13:16:46.761458 dockerd[1810]: time="2025-12-16T13:16:46.761202278Z" level=info msg="API listen on /run/docker.sock" Dec 16 13:16:46.761407 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 13:16:47.543694 containerd[1552]: time="2025-12-16T13:16:47.543651138Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 16 13:16:48.207610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount353450368.mount: Deactivated successfully. 
Dec 16 13:16:49.368719 containerd[1552]: time="2025-12-16T13:16:49.368650518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:16:49.369926 containerd[1552]: time="2025-12-16T13:16:49.369523728Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=29072183" Dec 16 13:16:49.370515 containerd[1552]: time="2025-12-16T13:16:49.370491438Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:16:49.372671 containerd[1552]: time="2025-12-16T13:16:49.372645258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:16:49.373536 containerd[1552]: time="2025-12-16T13:16:49.373510898Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 1.82982839s" Dec 16 13:16:49.373600 containerd[1552]: time="2025-12-16T13:16:49.373540658Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\"" Dec 16 13:16:49.374164 containerd[1552]: time="2025-12-16T13:16:49.374147438Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 16 13:16:49.856702 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 13:16:49.858778 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:16:50.044515 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:16:50.050872 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:16:50.097003 kubelet[2088]: E1216 13:16:50.096897 2088 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:16:50.101796 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:16:50.102021 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:16:50.103680 systemd[1]: kubelet.service: Consumed 198ms CPU time, 109M memory peak. 
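The pull above reads 29,072,183 bytes of the kube-apiserver image in about 1.83 s, which works out to roughly 15 MiB/s:

bytes_read = 29_072_183  # from "bytes read=29072183" above
seconds = 1.82982839     # from the reported pull duration

print(bytes_read / seconds / 2**20)  # ~15.2 MiB/s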
Dec 16 13:16:50.815289 containerd[1552]: time="2025-12-16T13:16:50.815197308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:16:50.816445 containerd[1552]: time="2025-12-16T13:16:50.816169818Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24992010" Dec 16 13:16:50.816650 containerd[1552]: time="2025-12-16T13:16:50.816628868Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:16:50.819235 containerd[1552]: time="2025-12-16T13:16:50.819213148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:16:50.820102 containerd[1552]: time="2025-12-16T13:16:50.820068438Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 1.44584759s" Dec 16 13:16:50.820149 containerd[1552]: time="2025-12-16T13:16:50.820104638Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\"" Dec 16 13:16:50.821135 containerd[1552]: time="2025-12-16T13:16:50.821106668Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 16 13:16:52.020481 containerd[1552]: time="2025-12-16T13:16:52.020419338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:16:52.021433 containerd[1552]: time="2025-12-16T13:16:52.021285608Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19404248" Dec 16 13:16:52.021991 containerd[1552]: time="2025-12-16T13:16:52.021965298Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:16:52.024153 containerd[1552]: time="2025-12-16T13:16:52.024113598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:16:52.024970 containerd[1552]: time="2025-12-16T13:16:52.024949878Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 1.20381347s" Dec 16 13:16:52.025048 containerd[1552]: time="2025-12-16T13:16:52.025034888Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\"" Dec 16 13:16:52.025673 
containerd[1552]: time="2025-12-16T13:16:52.025636808Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 16 13:16:53.148588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount509312850.mount: Deactivated successfully. Dec 16 13:16:53.508142 containerd[1552]: time="2025-12-16T13:16:53.507982748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:16:53.509224 containerd[1552]: time="2025-12-16T13:16:53.509191338Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31161423" Dec 16 13:16:53.509919 containerd[1552]: time="2025-12-16T13:16:53.509860248Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:16:53.512112 containerd[1552]: time="2025-12-16T13:16:53.512090118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:16:53.512794 containerd[1552]: time="2025-12-16T13:16:53.512773508Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 1.48704063s" Dec 16 13:16:53.512879 containerd[1552]: time="2025-12-16T13:16:53.512864388Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\"" Dec 16 13:16:53.513750 containerd[1552]: time="2025-12-16T13:16:53.513613108Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 16 13:16:54.140361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3909627016.mount: Deactivated successfully. 
Dec 16 13:16:54.773914 containerd[1552]: time="2025-12-16T13:16:54.773667238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:16:54.774774 containerd[1552]: time="2025-12-16T13:16:54.774708538Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Dec 16 13:16:54.775462 containerd[1552]: time="2025-12-16T13:16:54.775437198Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:16:54.781260 containerd[1552]: time="2025-12-16T13:16:54.781065898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:16:54.782869 containerd[1552]: time="2025-12-16T13:16:54.782843948Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.26919084s" Dec 16 13:16:54.782920 containerd[1552]: time="2025-12-16T13:16:54.782870108Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Dec 16 13:16:54.783486 containerd[1552]: time="2025-12-16T13:16:54.783456018Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 13:16:55.349221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1761587897.mount: Deactivated successfully. 
Dec 16 13:16:55.353336 containerd[1552]: time="2025-12-16T13:16:55.353289578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:16:55.353926 containerd[1552]: time="2025-12-16T13:16:55.353902408Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 16 13:16:55.355327 containerd[1552]: time="2025-12-16T13:16:55.354393218Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:16:55.355817 containerd[1552]: time="2025-12-16T13:16:55.355793228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:16:55.356463 containerd[1552]: time="2025-12-16T13:16:55.356441478Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 572.95433ms" Dec 16 13:16:55.356532 containerd[1552]: time="2025-12-16T13:16:55.356518978Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 13:16:55.357003 containerd[1552]: time="2025-12-16T13:16:55.356980538Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 16 13:16:56.007687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3376650859.mount: Deactivated successfully. 
Dec 16 13:16:57.386653 containerd[1552]: time="2025-12-16T13:16:57.386593678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:16:57.387890 containerd[1552]: time="2025-12-16T13:16:57.387672518Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Dec 16 13:16:57.388532 containerd[1552]: time="2025-12-16T13:16:57.388503038Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:16:57.391103 containerd[1552]: time="2025-12-16T13:16:57.391070428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:16:57.392371 containerd[1552]: time="2025-12-16T13:16:57.392334018Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.03532834s" Dec 16 13:16:57.392418 containerd[1552]: time="2025-12-16T13:16:57.392374138Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Dec 16 13:16:59.678470 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:16:59.679259 systemd[1]: kubelet.service: Consumed 198ms CPU time, 109M memory peak. Dec 16 13:16:59.681294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:16:59.708620 systemd[1]: Reload requested from client PID 2248 ('systemctl') (unit session-7.scope)... Dec 16 13:16:59.708715 systemd[1]: Reloading... Dec 16 13:16:59.840627 zram_generator::config[2292]: No configuration found. Dec 16 13:17:00.066395 systemd[1]: Reloading finished in 357 ms. Dec 16 13:17:00.125226 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 13:17:00.125427 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 13:17:00.126025 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:17:00.126137 systemd[1]: kubelet.service: Consumed 141ms CPU time, 98.1M memory peak. Dec 16 13:17:00.128010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:17:00.316318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:17:00.327213 (kubelet)[2346]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:17:00.365366 kubelet[2346]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:17:00.365366 kubelet[2346]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:17:00.365366 kubelet[2346]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:17:00.365786 kubelet[2346]: I1216 13:17:00.365437 2346 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:17:00.799165 kubelet[2346]: I1216 13:17:00.799092 2346 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 13:17:00.799165 kubelet[2346]: I1216 13:17:00.799121 2346 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:17:00.799409 kubelet[2346]: I1216 13:17:00.799382 2346 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 13:17:00.827511 kubelet[2346]: E1216 13:17:00.827471 2346 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.232.20.218:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.232.20.218:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:17:00.828443 kubelet[2346]: I1216 13:17:00.828427 2346 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:17:00.837281 kubelet[2346]: I1216 13:17:00.837245 2346 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:17:00.842583 kubelet[2346]: I1216 13:17:00.842337 2346 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 13:17:00.844035 kubelet[2346]: I1216 13:17:00.843963 2346 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:17:00.844403 kubelet[2346]: I1216 13:17:00.844222 2346 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-20-218","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:17:00.844552 kubelet[2346]: I1216 
13:17:00.844411 2346 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:17:00.844552 kubelet[2346]: I1216 13:17:00.844424 2346 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 13:17:00.844552 kubelet[2346]: I1216 13:17:00.844548 2346 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:17:00.848908 kubelet[2346]: I1216 13:17:00.848816 2346 kubelet.go:446] "Attempting to sync node with API server" Dec 16 13:17:00.848908 kubelet[2346]: I1216 13:17:00.848861 2346 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:17:00.848908 kubelet[2346]: I1216 13:17:00.848884 2346 kubelet.go:352] "Adding apiserver pod source" Dec 16 13:17:00.848908 kubelet[2346]: I1216 13:17:00.848896 2346 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:17:00.856419 kubelet[2346]: W1216 13:17:00.856278 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.232.20.218:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-20-218&limit=500&resourceVersion=0": dial tcp 172.232.20.218:6443: connect: connection refused Dec 16 13:17:00.856539 kubelet[2346]: E1216 13:17:00.856519 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.232.20.218:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-20-218&limit=500&resourceVersion=0\": dial tcp 172.232.20.218:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:17:00.856956 kubelet[2346]: I1216 13:17:00.856689 2346 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:17:00.857133 kubelet[2346]: I1216 13:17:00.857085 2346 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 13:17:00.858486 kubelet[2346]: W1216 13:17:00.858190 2346 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
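
The deprecation warnings above ask for --container-runtime-endpoint and --volume-plugin-dir to move into the file passed via the kubelet's --config flag. A hedged sketch of generating that KubeletConfiguration, with the cgroup driver and hard-eviction thresholds copied from the nodeConfig dump above; the exact field set this node uses is an assumption, since the log never shows the config file itself:

// Emits a KubeletConfiguration matching the values visible in the log.
package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletv1beta1.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		// Matches "Using cgroup driver setting received from the CRI runtime".
		CgroupDriver:             "systemd",
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
		// Mirrors HardEvictionThresholds in the nodeConfig dump above.
		EvictionHard: map[string]string{
			"memory.available":   "100Mi",
			"nodefs.available":   "10%",
			"nodefs.inodesFree":  "5%",
			"imagefs.available":  "15%",
			"imagefs.inodesFree": "5%",
		},
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
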
Dec 16 13:17:00.858486 kubelet[2346]: W1216 13:17:00.858383 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.232.20.218:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.232.20.218:6443: connect: connection refused Dec 16 13:17:00.858486 kubelet[2346]: E1216 13:17:00.858422 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.232.20.218:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.232.20.218:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:17:00.860554 kubelet[2346]: I1216 13:17:00.860521 2346 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:17:00.860554 kubelet[2346]: I1216 13:17:00.860574 2346 server.go:1287] "Started kubelet" Dec 16 13:17:00.863661 kubelet[2346]: I1216 13:17:00.863628 2346 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:17:00.867420 kubelet[2346]: E1216 13:17:00.866298 2346 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.232.20.218:6443/api/v1/namespaces/default/events\": dial tcp 172.232.20.218:6443: connect: connection refused" event="&Event{ObjectMeta:{172-232-20-218.1881b4870271bedc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-232-20-218,UID:172-232-20-218,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-232-20-218,},FirstTimestamp:2025-12-16 13:17:00.860538588 +0000 UTC m=+0.528902741,LastTimestamp:2025-12-16 13:17:00.860538588 +0000 UTC m=+0.528902741,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-232-20-218,}" Dec 16 13:17:00.871659 kubelet[2346]: E1216 13:17:00.870377 2346 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:17:00.871659 kubelet[2346]: I1216 13:17:00.870423 2346 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:17:00.871659 kubelet[2346]: I1216 13:17:00.870958 2346 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:17:00.871659 kubelet[2346]: E1216 13:17:00.871150 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-232-20-218\" not found" Dec 16 13:17:00.871659 kubelet[2346]: I1216 13:17:00.871302 2346 server.go:479] "Adding debug handlers to kubelet server" Dec 16 13:17:00.872368 kubelet[2346]: I1216 13:17:00.872331 2346 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:17:00.872640 kubelet[2346]: I1216 13:17:00.872625 2346 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:17:00.872876 kubelet[2346]: I1216 13:17:00.872860 2346 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:17:00.873871 kubelet[2346]: I1216 13:17:00.873843 2346 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:17:00.873918 kubelet[2346]: I1216 13:17:00.873895 2346 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:17:00.875576 kubelet[2346]: I1216 13:17:00.875543 2346 factory.go:221] Registration of the systemd container factory successfully Dec 16 13:17:00.875728 kubelet[2346]: I1216 13:17:00.875712 2346 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:17:00.876382 kubelet[2346]: E1216 13:17:00.876354 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.20.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-20-218?timeout=10s\": dial tcp 172.232.20.218:6443: connect: connection refused" interval="200ms" Dec 16 13:17:00.877525 kubelet[2346]: W1216 13:17:00.877496 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.232.20.218:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.232.20.218:6443: connect: connection refused Dec 16 13:17:00.877786 kubelet[2346]: E1216 13:17:00.877768 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.232.20.218:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.232.20.218:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:17:00.877996 kubelet[2346]: I1216 13:17:00.877983 2346 factory.go:221] Registration of the containerd container factory successfully Dec 16 13:17:00.886544 kubelet[2346]: I1216 13:17:00.886516 2346 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 13:17:00.888100 kubelet[2346]: I1216 13:17:00.888085 2346 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 16 13:17:00.888167 kubelet[2346]: I1216 13:17:00.888158 2346 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 13:17:00.888230 kubelet[2346]: I1216 13:17:00.888219 2346 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:17:00.888275 kubelet[2346]: I1216 13:17:00.888268 2346 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 13:17:00.888360 kubelet[2346]: E1216 13:17:00.888345 2346 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:17:00.896647 kubelet[2346]: W1216 13:17:00.896616 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.232.20.218:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.232.20.218:6443: connect: connection refused Dec 16 13:17:00.896731 kubelet[2346]: E1216 13:17:00.896716 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.232.20.218:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.232.20.218:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:17:00.905260 kubelet[2346]: I1216 13:17:00.905234 2346 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:17:00.905260 kubelet[2346]: I1216 13:17:00.905254 2346 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:17:00.905351 kubelet[2346]: I1216 13:17:00.905270 2346 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:17:00.907054 kubelet[2346]: I1216 13:17:00.907034 2346 policy_none.go:49] "None policy: Start" Dec 16 13:17:00.907054 kubelet[2346]: I1216 13:17:00.907056 2346 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:17:00.907219 kubelet[2346]: I1216 13:17:00.907068 2346 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:17:00.916174 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 13:17:00.931719 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 13:17:00.947411 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 13:17:00.949339 kubelet[2346]: I1216 13:17:00.949322 2346 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 13:17:00.950044 kubelet[2346]: I1216 13:17:00.950030 2346 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:17:00.950130 kubelet[2346]: I1216 13:17:00.950100 2346 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:17:00.950623 kubelet[2346]: I1216 13:17:00.950611 2346 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:17:00.952118 kubelet[2346]: E1216 13:17:00.951870 2346 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 13:17:00.952170 kubelet[2346]: E1216 13:17:00.952122 2346 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-232-20-218\" not found" Dec 16 13:17:00.999321 systemd[1]: Created slice kubepods-burstable-podbaaf86d1c8052220b06e3f82af24126a.slice - libcontainer container kubepods-burstable-podbaaf86d1c8052220b06e3f82af24126a.slice. Dec 16 13:17:01.008441 kubelet[2346]: E1216 13:17:01.008401 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-20-218\" not found" node="172-232-20-218" Dec 16 13:17:01.011641 systemd[1]: Created slice kubepods-burstable-poda9b1d45f418eb216cc0b8904df537af2.slice - libcontainer container kubepods-burstable-poda9b1d45f418eb216cc0b8904df537af2.slice. Dec 16 13:17:01.013541 kubelet[2346]: E1216 13:17:01.013521 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-20-218\" not found" node="172-232-20-218" Dec 16 13:17:01.022332 systemd[1]: Created slice kubepods-burstable-pod654f437cb7e673cd8c7007124b342e1c.slice - libcontainer container kubepods-burstable-pod654f437cb7e673cd8c7007124b342e1c.slice. Dec 16 13:17:01.024294 kubelet[2346]: E1216 13:17:01.024273 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-20-218\" not found" node="172-232-20-218" Dec 16 13:17:01.052191 kubelet[2346]: I1216 13:17:01.051906 2346 kubelet_node_status.go:75] "Attempting to register node" node="172-232-20-218" Dec 16 13:17:01.052364 kubelet[2346]: E1216 13:17:01.052290 2346 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.20.218:6443/api/v1/nodes\": dial tcp 172.232.20.218:6443: connect: connection refused" node="172-232-20-218" Dec 16 13:17:01.075155 kubelet[2346]: I1216 13:17:01.075076 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/baaf86d1c8052220b06e3f82af24126a-kubeconfig\") pod \"kube-scheduler-172-232-20-218\" (UID: \"baaf86d1c8052220b06e3f82af24126a\") " pod="kube-system/kube-scheduler-172-232-20-218" Dec 16 13:17:01.077191 kubelet[2346]: E1216 13:17:01.077152 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.20.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-20-218?timeout=10s\": dial tcp 172.232.20.218:6443: connect: connection refused" interval="400ms" Dec 16 13:17:01.176014 kubelet[2346]: I1216 13:17:01.175753 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/654f437cb7e673cd8c7007124b342e1c-flexvolume-dir\") pod \"kube-controller-manager-172-232-20-218\" (UID: \"654f437cb7e673cd8c7007124b342e1c\") " pod="kube-system/kube-controller-manager-172-232-20-218" Dec 16 13:17:01.176014 kubelet[2346]: I1216 13:17:01.176015 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/654f437cb7e673cd8c7007124b342e1c-k8s-certs\") pod \"kube-controller-manager-172-232-20-218\" (UID: \"654f437cb7e673cd8c7007124b342e1c\") " pod="kube-system/kube-controller-manager-172-232-20-218" Dec 16 13:17:01.176184 kubelet[2346]: I1216 13:17:01.176035 2346 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/654f437cb7e673cd8c7007124b342e1c-kubeconfig\") pod \"kube-controller-manager-172-232-20-218\" (UID: \"654f437cb7e673cd8c7007124b342e1c\") " pod="kube-system/kube-controller-manager-172-232-20-218" Dec 16 13:17:01.176184 kubelet[2346]: I1216 13:17:01.176053 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/654f437cb7e673cd8c7007124b342e1c-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-20-218\" (UID: \"654f437cb7e673cd8c7007124b342e1c\") " pod="kube-system/kube-controller-manager-172-232-20-218" Dec 16 13:17:01.176184 kubelet[2346]: I1216 13:17:01.176092 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9b1d45f418eb216cc0b8904df537af2-k8s-certs\") pod \"kube-apiserver-172-232-20-218\" (UID: \"a9b1d45f418eb216cc0b8904df537af2\") " pod="kube-system/kube-apiserver-172-232-20-218" Dec 16 13:17:01.176184 kubelet[2346]: I1216 13:17:01.176110 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/654f437cb7e673cd8c7007124b342e1c-ca-certs\") pod \"kube-controller-manager-172-232-20-218\" (UID: \"654f437cb7e673cd8c7007124b342e1c\") " pod="kube-system/kube-controller-manager-172-232-20-218" Dec 16 13:17:01.176184 kubelet[2346]: I1216 13:17:01.176125 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9b1d45f418eb216cc0b8904df537af2-ca-certs\") pod \"kube-apiserver-172-232-20-218\" (UID: \"a9b1d45f418eb216cc0b8904df537af2\") " pod="kube-system/kube-apiserver-172-232-20-218" Dec 16 13:17:01.176362 kubelet[2346]: I1216 13:17:01.176139 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9b1d45f418eb216cc0b8904df537af2-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-20-218\" (UID: \"a9b1d45f418eb216cc0b8904df537af2\") " pod="kube-system/kube-apiserver-172-232-20-218" Dec 16 13:17:01.254751 kubelet[2346]: I1216 13:17:01.254718 2346 kubelet_node_status.go:75] "Attempting to register node" node="172-232-20-218" Dec 16 13:17:01.254990 kubelet[2346]: E1216 13:17:01.254970 2346 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.20.218:6443/api/v1/nodes\": dial tcp 172.232.20.218:6443: connect: connection refused" node="172-232-20-218" Dec 16 13:17:01.309204 kubelet[2346]: E1216 13:17:01.309091 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:01.309910 containerd[1552]: time="2025-12-16T13:17:01.309860728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-20-218,Uid:baaf86d1c8052220b06e3f82af24126a,Namespace:kube-system,Attempt:0,}" Dec 16 13:17:01.314132 kubelet[2346]: E1216 13:17:01.314073 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 
172.232.0.17" Dec 16 13:17:01.314531 containerd[1552]: time="2025-12-16T13:17:01.314509058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-20-218,Uid:a9b1d45f418eb216cc0b8904df537af2,Namespace:kube-system,Attempt:0,}" Dec 16 13:17:01.325134 kubelet[2346]: E1216 13:17:01.324946 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:01.327109 containerd[1552]: time="2025-12-16T13:17:01.325430318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-20-218,Uid:654f437cb7e673cd8c7007124b342e1c,Namespace:kube-system,Attempt:0,}" Dec 16 13:17:01.332780 containerd[1552]: time="2025-12-16T13:17:01.332759008Z" level=info msg="connecting to shim 6047f4eafc0e8f70d0a8f7cd177c84dfb9ac8084e72332dcb92dda69cb837b51" address="unix:///run/containerd/s/1f4ebf57fcd51d94fdfca9c8160acef730a3ee3cfa69fcb8c5a101d62c266a29" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:17:01.351353 containerd[1552]: time="2025-12-16T13:17:01.351319398Z" level=info msg="connecting to shim f3d283d1c836e2870aea6063fe8285eeda7a493be62862442e3c82235be66d27" address="unix:///run/containerd/s/8466597d8b3cac01922ebda727346b2a0fa975fdf663a4390a1b85ebab9b778c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:17:01.365517 containerd[1552]: time="2025-12-16T13:17:01.365481308Z" level=info msg="connecting to shim 434eb4383e37016855ccc964439313196e17e712921a57972ac1a764d6328c42" address="unix:///run/containerd/s/5e6b4c7a15a4b4a42b76c20a73e01a2b46f4b83dfe5e9dfdb57273d7bc4642fb" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:17:01.383701 systemd[1]: Started cri-containerd-6047f4eafc0e8f70d0a8f7cd177c84dfb9ac8084e72332dcb92dda69cb837b51.scope - libcontainer container 6047f4eafc0e8f70d0a8f7cd177c84dfb9ac8084e72332dcb92dda69cb837b51. Dec 16 13:17:01.399750 systemd[1]: Started cri-containerd-f3d283d1c836e2870aea6063fe8285eeda7a493be62862442e3c82235be66d27.scope - libcontainer container f3d283d1c836e2870aea6063fe8285eeda7a493be62862442e3c82235be66d27. Dec 16 13:17:01.416693 systemd[1]: Started cri-containerd-434eb4383e37016855ccc964439313196e17e712921a57972ac1a764d6328c42.scope - libcontainer container 434eb4383e37016855ccc964439313196e17e712921a57972ac1a764d6328c42. 
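
The "connecting to shim ... address=unix:///run/containerd/s/..." and "Started cri-containerd-<id>.scope" entries above are containerd answering a CRI RunPodSandbox call for each static pod: one shim process and one systemd scope per sandbox. A minimal sketch of the kubelet's side of that call; the metadata values are copied from the kube-scheduler entry, everything else (socket path, empty remaining config) is assumed:

// Sketch of the CRI call behind the "RunPodSandbox ... returns sandbox id"
// entries. A real kubelet also fills in DNS, port mappings, and linux
// sandbox options here.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			// Values from the kube-scheduler RunPodSandbox entry above.
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-scheduler-172-232-20-218",
				Namespace: "kube-system",
				Uid:       "baaf86d1c8052220b06e3f82af24126a",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId)
}
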
Dec 16 13:17:01.474017 containerd[1552]: time="2025-12-16T13:17:01.473969428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-20-218,Uid:a9b1d45f418eb216cc0b8904df537af2,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3d283d1c836e2870aea6063fe8285eeda7a493be62862442e3c82235be66d27\"" Dec 16 13:17:01.475323 kubelet[2346]: E1216 13:17:01.475303 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:01.480175 kubelet[2346]: E1216 13:17:01.479977 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.20.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-20-218?timeout=10s\": dial tcp 172.232.20.218:6443: connect: connection refused" interval="800ms" Dec 16 13:17:01.482131 containerd[1552]: time="2025-12-16T13:17:01.482109388Z" level=info msg="CreateContainer within sandbox \"f3d283d1c836e2870aea6063fe8285eeda7a493be62862442e3c82235be66d27\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 13:17:01.494441 containerd[1552]: time="2025-12-16T13:17:01.494409418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-20-218,Uid:654f437cb7e673cd8c7007124b342e1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"434eb4383e37016855ccc964439313196e17e712921a57972ac1a764d6328c42\"" Dec 16 13:17:01.495779 kubelet[2346]: E1216 13:17:01.495666 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:01.498244 containerd[1552]: time="2025-12-16T13:17:01.497895208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-20-218,Uid:baaf86d1c8052220b06e3f82af24126a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6047f4eafc0e8f70d0a8f7cd177c84dfb9ac8084e72332dcb92dda69cb837b51\"" Dec 16 13:17:01.498402 containerd[1552]: time="2025-12-16T13:17:01.498371298Z" level=info msg="CreateContainer within sandbox \"434eb4383e37016855ccc964439313196e17e712921a57972ac1a764d6328c42\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 13:17:01.499259 containerd[1552]: time="2025-12-16T13:17:01.499232948Z" level=info msg="Container c885de663cc5835ee5212c40b81dc29463c0f28f978ee469af3a060d61a2ec92: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:17:01.499579 kubelet[2346]: E1216 13:17:01.499551 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:01.502054 containerd[1552]: time="2025-12-16T13:17:01.502004908Z" level=info msg="CreateContainer within sandbox \"6047f4eafc0e8f70d0a8f7cd177c84dfb9ac8084e72332dcb92dda69cb837b51\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 13:17:01.508043 containerd[1552]: time="2025-12-16T13:17:01.508016108Z" level=info msg="CreateContainer within sandbox \"f3d283d1c836e2870aea6063fe8285eeda7a493be62862442e3c82235be66d27\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c885de663cc5835ee5212c40b81dc29463c0f28f978ee469af3a060d61a2ec92\"" Dec 16 13:17:01.509852 containerd[1552]: time="2025-12-16T13:17:01.509073678Z" level=info msg="Container 
b6b02e8719d95391fc9b13ab7113679883b341d9f80501f89828462da635fd02: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:17:01.509852 containerd[1552]: time="2025-12-16T13:17:01.509122928Z" level=info msg="StartContainer for \"c885de663cc5835ee5212c40b81dc29463c0f28f978ee469af3a060d61a2ec92\"" Dec 16 13:17:01.511054 containerd[1552]: time="2025-12-16T13:17:01.511032078Z" level=info msg="Container 4277a32723ee380b86ffa130461c6ffd419a1cf87e434b853aeabaf56ca40063: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:17:01.511528 containerd[1552]: time="2025-12-16T13:17:01.511491088Z" level=info msg="connecting to shim c885de663cc5835ee5212c40b81dc29463c0f28f978ee469af3a060d61a2ec92" address="unix:///run/containerd/s/8466597d8b3cac01922ebda727346b2a0fa975fdf663a4390a1b85ebab9b778c" protocol=ttrpc version=3 Dec 16 13:17:01.514541 containerd[1552]: time="2025-12-16T13:17:01.514493798Z" level=info msg="CreateContainer within sandbox \"434eb4383e37016855ccc964439313196e17e712921a57972ac1a764d6328c42\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b6b02e8719d95391fc9b13ab7113679883b341d9f80501f89828462da635fd02\"" Dec 16 13:17:01.515091 containerd[1552]: time="2025-12-16T13:17:01.514822378Z" level=info msg="StartContainer for \"b6b02e8719d95391fc9b13ab7113679883b341d9f80501f89828462da635fd02\"" Dec 16 13:17:01.520674 containerd[1552]: time="2025-12-16T13:17:01.519763368Z" level=info msg="connecting to shim b6b02e8719d95391fc9b13ab7113679883b341d9f80501f89828462da635fd02" address="unix:///run/containerd/s/5e6b4c7a15a4b4a42b76c20a73e01a2b46f4b83dfe5e9dfdb57273d7bc4642fb" protocol=ttrpc version=3 Dec 16 13:17:01.523366 containerd[1552]: time="2025-12-16T13:17:01.523322818Z" level=info msg="CreateContainer within sandbox \"6047f4eafc0e8f70d0a8f7cd177c84dfb9ac8084e72332dcb92dda69cb837b51\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4277a32723ee380b86ffa130461c6ffd419a1cf87e434b853aeabaf56ca40063\"" Dec 16 13:17:01.526515 containerd[1552]: time="2025-12-16T13:17:01.526487778Z" level=info msg="StartContainer for \"4277a32723ee380b86ffa130461c6ffd419a1cf87e434b853aeabaf56ca40063\"" Dec 16 13:17:01.531445 containerd[1552]: time="2025-12-16T13:17:01.530640998Z" level=info msg="connecting to shim 4277a32723ee380b86ffa130461c6ffd419a1cf87e434b853aeabaf56ca40063" address="unix:///run/containerd/s/1f4ebf57fcd51d94fdfca9c8160acef730a3ee3cfa69fcb8c5a101d62c266a29" protocol=ttrpc version=3 Dec 16 13:17:01.532680 systemd[1]: Started cri-containerd-c885de663cc5835ee5212c40b81dc29463c0f28f978ee469af3a060d61a2ec92.scope - libcontainer container c885de663cc5835ee5212c40b81dc29463c0f28f978ee469af3a060d61a2ec92. Dec 16 13:17:01.565672 systemd[1]: Started cri-containerd-b6b02e8719d95391fc9b13ab7113679883b341d9f80501f89828462da635fd02.scope - libcontainer container b6b02e8719d95391fc9b13ab7113679883b341d9f80501f89828462da635fd02. Dec 16 13:17:01.577633 systemd[1]: Started cri-containerd-4277a32723ee380b86ffa130461c6ffd419a1cf87e434b853aeabaf56ca40063.scope - libcontainer container 4277a32723ee380b86ffa130461c6ffd419a1cf87e434b853aeabaf56ca40063. 
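
Once a sandbox id comes back, the "CreateContainer within sandbox ... returns container id" and "StartContainer" entries above follow as a create/start pair against the same RuntimeService. A hedged sketch of that pair; the sandbox id is copied from the kube-scheduler entry, while the image reference is an assumption (the log never names the control-plane images):

// Sketch of the create/start pair that follows sandbox creation. A real
// kubelet also wires mounts, env vars, and security context here.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Sandbox id from the kube-scheduler entries above; the sandbox config
	// must be the same one passed to RunPodSandbox.
	sandboxID := "6047f4eafc0e8f70d0a8f7cd177c84dfb9ac8084e72332dcb92dda69cb837b51"
	sandboxCfg := &runtimeapi.PodSandboxConfig{ /* same config as RunPodSandbox */ }

	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler", Attempt: 0},
			// Assumed image; not shown in the log.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.32.4"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	// This is what produces the "StartContainer for ... returns successfully"
	// entries below.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
}
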
Dec 16 13:17:01.603672 containerd[1552]: time="2025-12-16T13:17:01.603501128Z" level=info msg="StartContainer for \"c885de663cc5835ee5212c40b81dc29463c0f28f978ee469af3a060d61a2ec92\" returns successfully" Dec 16 13:17:01.658795 kubelet[2346]: I1216 13:17:01.658760 2346 kubelet_node_status.go:75] "Attempting to register node" node="172-232-20-218" Dec 16 13:17:01.667435 containerd[1552]: time="2025-12-16T13:17:01.667380468Z" level=info msg="StartContainer for \"b6b02e8719d95391fc9b13ab7113679883b341d9f80501f89828462da635fd02\" returns successfully" Dec 16 13:17:01.724845 containerd[1552]: time="2025-12-16T13:17:01.724801978Z" level=info msg="StartContainer for \"4277a32723ee380b86ffa130461c6ffd419a1cf87e434b853aeabaf56ca40063\" returns successfully" Dec 16 13:17:01.907580 kubelet[2346]: E1216 13:17:01.906369 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-20-218\" not found" node="172-232-20-218" Dec 16 13:17:01.908226 kubelet[2346]: E1216 13:17:01.908212 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:01.918440 kubelet[2346]: E1216 13:17:01.918372 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-20-218\" not found" node="172-232-20-218" Dec 16 13:17:01.918911 kubelet[2346]: E1216 13:17:01.918856 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:01.921590 kubelet[2346]: E1216 13:17:01.921481 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-20-218\" not found" node="172-232-20-218" Dec 16 13:17:01.921658 kubelet[2346]: E1216 13:17:01.921646 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:02.925132 kubelet[2346]: E1216 13:17:02.925094 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-20-218\" not found" node="172-232-20-218" Dec 16 13:17:02.925614 kubelet[2346]: E1216 13:17:02.925212 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:02.925965 kubelet[2346]: E1216 13:17:02.925942 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-20-218\" not found" node="172-232-20-218" Dec 16 13:17:02.926054 kubelet[2346]: E1216 13:17:02.926032 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:02.980927 kubelet[2346]: E1216 13:17:02.980897 2346 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-232-20-218\" not found" node="172-232-20-218" Dec 16 13:17:03.103225 kubelet[2346]: I1216 13:17:03.103156 2346 kubelet_node_status.go:78] "Successfully registered node" node="172-232-20-218" Dec 16 13:17:03.103225 kubelet[2346]: E1216 
13:17:03.103185 2346 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-232-20-218\": node \"172-232-20-218\" not found" Dec 16 13:17:03.171864 kubelet[2346]: I1216 13:17:03.171836 2346 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-20-218" Dec 16 13:17:03.181604 kubelet[2346]: E1216 13:17:03.181335 2346 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-232-20-218\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-232-20-218" Dec 16 13:17:03.181604 kubelet[2346]: I1216 13:17:03.181352 2346 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-20-218" Dec 16 13:17:03.184959 kubelet[2346]: E1216 13:17:03.184932 2346 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-232-20-218\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-232-20-218" Dec 16 13:17:03.184959 kubelet[2346]: I1216 13:17:03.184954 2346 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-232-20-218" Dec 16 13:17:03.188256 kubelet[2346]: E1216 13:17:03.188203 2346 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-232-20-218\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-232-20-218" Dec 16 13:17:03.860307 kubelet[2346]: I1216 13:17:03.860239 2346 apiserver.go:52] "Watching apiserver" Dec 16 13:17:03.874716 kubelet[2346]: I1216 13:17:03.874669 2346 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 13:17:04.773455 systemd[1]: Reload requested from client PID 2615 ('systemctl') (unit session-7.scope)... Dec 16 13:17:04.773474 systemd[1]: Reloading... Dec 16 13:17:04.873606 zram_generator::config[2659]: No configuration found. Dec 16 13:17:05.095097 systemd[1]: Reloading finished in 321 ms. Dec 16 13:17:05.120917 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:17:05.136841 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 13:17:05.137296 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:17:05.137353 systemd[1]: kubelet.service: Consumed 910ms CPU time, 129.7M memory peak. Dec 16 13:17:05.140159 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:17:05.310188 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:17:05.319063 (kubelet)[2710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:17:05.363342 kubelet[2710]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:17:05.363342 kubelet[2710]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:17:05.363342 kubelet[2710]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:17:05.363709 kubelet[2710]: I1216 13:17:05.363388 2710 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:17:05.371769 kubelet[2710]: I1216 13:17:05.371064 2710 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 13:17:05.371769 kubelet[2710]: I1216 13:17:05.371083 2710 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:17:05.371769 kubelet[2710]: I1216 13:17:05.371507 2710 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 13:17:05.373285 kubelet[2710]: I1216 13:17:05.373265 2710 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 16 13:17:05.375498 kubelet[2710]: I1216 13:17:05.375478 2710 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:17:05.378372 kubelet[2710]: I1216 13:17:05.378357 2710 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:17:05.382248 kubelet[2710]: I1216 13:17:05.382235 2710 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 13:17:05.382474 kubelet[2710]: I1216 13:17:05.382449 2710 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:17:05.382623 kubelet[2710]: I1216 13:17:05.382474 2710 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-20-218","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:17:05.382740 kubelet[2710]: I1216 13:17:05.382632 2710 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:17:05.382740 kubelet[2710]: I1216 13:17:05.382641 2710 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 13:17:05.382740 kubelet[2710]: I1216 13:17:05.382682 2710 
state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:17:05.382842 kubelet[2710]: I1216 13:17:05.382829 2710 kubelet.go:446] "Attempting to sync node with API server" Dec 16 13:17:05.382868 kubelet[2710]: I1216 13:17:05.382852 2710 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:17:05.383233 kubelet[2710]: I1216 13:17:05.383215 2710 kubelet.go:352] "Adding apiserver pod source" Dec 16 13:17:05.383233 kubelet[2710]: I1216 13:17:05.383234 2710 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:17:05.386832 kubelet[2710]: I1216 13:17:05.386808 2710 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:17:05.387103 kubelet[2710]: I1216 13:17:05.387082 2710 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 13:17:05.387422 kubelet[2710]: I1216 13:17:05.387404 2710 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:17:05.387452 kubelet[2710]: I1216 13:17:05.387430 2710 server.go:1287] "Started kubelet" Dec 16 13:17:05.388122 kubelet[2710]: I1216 13:17:05.388073 2710 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:17:05.388425 kubelet[2710]: I1216 13:17:05.388409 2710 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:17:05.395002 kubelet[2710]: I1216 13:17:05.394961 2710 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:17:05.400581 kubelet[2710]: I1216 13:17:05.399173 2710 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:17:05.401651 kubelet[2710]: I1216 13:17:05.401638 2710 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:17:05.401933 kubelet[2710]: E1216 13:17:05.401885 2710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-232-20-218\" not found" Dec 16 13:17:05.402695 kubelet[2710]: I1216 13:17:05.402681 2710 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:17:05.403206 kubelet[2710]: I1216 13:17:05.403031 2710 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:17:05.405125 kubelet[2710]: I1216 13:17:05.398790 2710 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:17:05.406096 kubelet[2710]: I1216 13:17:05.406076 2710 server.go:479] "Adding debug handlers to kubelet server" Dec 16 13:17:05.408773 kubelet[2710]: I1216 13:17:05.408745 2710 factory.go:221] Registration of the systemd container factory successfully Dec 16 13:17:05.408835 kubelet[2710]: I1216 13:17:05.408813 2710 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:17:05.411189 kubelet[2710]: E1216 13:17:05.411123 2710 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:17:05.412286 kubelet[2710]: I1216 13:17:05.411735 2710 factory.go:221] Registration of the containerd container factory successfully Dec 16 13:17:05.416590 kubelet[2710]: I1216 13:17:05.416540 2710 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 16 13:17:05.417831 kubelet[2710]: I1216 13:17:05.417807 2710 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 13:17:05.417927 kubelet[2710]: I1216 13:17:05.417916 2710 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 13:17:05.418009 kubelet[2710]: I1216 13:17:05.417990 2710 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:17:05.418057 kubelet[2710]: I1216 13:17:05.418049 2710 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 13:17:05.418181 kubelet[2710]: E1216 13:17:05.418153 2710 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:17:05.465766 kubelet[2710]: I1216 13:17:05.465746 2710 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:17:05.465766 kubelet[2710]: I1216 13:17:05.465760 2710 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:17:05.465924 kubelet[2710]: I1216 13:17:05.465777 2710 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:17:05.465924 kubelet[2710]: I1216 13:17:05.465895 2710 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 13:17:05.465924 kubelet[2710]: I1216 13:17:05.465905 2710 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 13:17:05.465924 kubelet[2710]: I1216 13:17:05.465919 2710 policy_none.go:49] "None policy: Start" Dec 16 13:17:05.466050 kubelet[2710]: I1216 13:17:05.465927 2710 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:17:05.466050 kubelet[2710]: I1216 13:17:05.465937 2710 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:17:05.466050 kubelet[2710]: I1216 13:17:05.466016 2710 state_mem.go:75] "Updated machine memory state" Dec 16 13:17:05.470126 kubelet[2710]: I1216 13:17:05.470094 2710 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 13:17:05.470257 kubelet[2710]: I1216 13:17:05.470236 2710 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:17:05.470301 kubelet[2710]: I1216 13:17:05.470253 2710 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:17:05.471306 kubelet[2710]: E1216 13:17:05.471277 2710 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 13:17:05.471655 kubelet[2710]: I1216 13:17:05.471642 2710 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:17:05.519531 kubelet[2710]: I1216 13:17:05.519342 2710 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-232-20-218" Dec 16 13:17:05.519531 kubelet[2710]: I1216 13:17:05.519429 2710 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-20-218" Dec 16 13:17:05.519961 kubelet[2710]: I1216 13:17:05.519923 2710 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-20-218" Dec 16 13:17:05.573495 kubelet[2710]: I1216 13:17:05.573476 2710 kubelet_node_status.go:75] "Attempting to register node" node="172-232-20-218" Dec 16 13:17:05.580096 kubelet[2710]: I1216 13:17:05.580034 2710 kubelet_node_status.go:124] "Node was previously registered" node="172-232-20-218" Dec 16 13:17:05.580300 kubelet[2710]: I1216 13:17:05.580170 2710 kubelet_node_status.go:78] "Successfully registered node" node="172-232-20-218" Dec 16 13:17:05.604377 kubelet[2710]: I1216 13:17:05.604336 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9b1d45f418eb216cc0b8904df537af2-k8s-certs\") pod \"kube-apiserver-172-232-20-218\" (UID: \"a9b1d45f418eb216cc0b8904df537af2\") " pod="kube-system/kube-apiserver-172-232-20-218" Dec 16 13:17:05.604377 kubelet[2710]: I1216 13:17:05.604371 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/654f437cb7e673cd8c7007124b342e1c-flexvolume-dir\") pod \"kube-controller-manager-172-232-20-218\" (UID: \"654f437cb7e673cd8c7007124b342e1c\") " pod="kube-system/kube-controller-manager-172-232-20-218" Dec 16 13:17:05.604377 kubelet[2710]: I1216 13:17:05.604394 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/654f437cb7e673cd8c7007124b342e1c-k8s-certs\") pod \"kube-controller-manager-172-232-20-218\" (UID: \"654f437cb7e673cd8c7007124b342e1c\") " pod="kube-system/kube-controller-manager-172-232-20-218" Dec 16 13:17:05.604601 kubelet[2710]: I1216 13:17:05.604411 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/654f437cb7e673cd8c7007124b342e1c-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-20-218\" (UID: \"654f437cb7e673cd8c7007124b342e1c\") " pod="kube-system/kube-controller-manager-172-232-20-218" Dec 16 13:17:05.604601 kubelet[2710]: I1216 13:17:05.604429 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9b1d45f418eb216cc0b8904df537af2-ca-certs\") pod \"kube-apiserver-172-232-20-218\" (UID: \"a9b1d45f418eb216cc0b8904df537af2\") " pod="kube-system/kube-apiserver-172-232-20-218" Dec 16 13:17:05.604601 kubelet[2710]: I1216 13:17:05.604444 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9b1d45f418eb216cc0b8904df537af2-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-20-218\" (UID: \"a9b1d45f418eb216cc0b8904df537af2\") " 
pod="kube-system/kube-apiserver-172-232-20-218" Dec 16 13:17:05.604601 kubelet[2710]: I1216 13:17:05.604459 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/654f437cb7e673cd8c7007124b342e1c-ca-certs\") pod \"kube-controller-manager-172-232-20-218\" (UID: \"654f437cb7e673cd8c7007124b342e1c\") " pod="kube-system/kube-controller-manager-172-232-20-218" Dec 16 13:17:05.604601 kubelet[2710]: I1216 13:17:05.604474 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/654f437cb7e673cd8c7007124b342e1c-kubeconfig\") pod \"kube-controller-manager-172-232-20-218\" (UID: \"654f437cb7e673cd8c7007124b342e1c\") " pod="kube-system/kube-controller-manager-172-232-20-218" Dec 16 13:17:05.604706 kubelet[2710]: I1216 13:17:05.604490 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/baaf86d1c8052220b06e3f82af24126a-kubeconfig\") pod \"kube-scheduler-172-232-20-218\" (UID: \"baaf86d1c8052220b06e3f82af24126a\") " pod="kube-system/kube-scheduler-172-232-20-218" Dec 16 13:17:05.826355 kubelet[2710]: E1216 13:17:05.825403 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:05.826507 kubelet[2710]: E1216 13:17:05.825524 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:05.826588 kubelet[2710]: E1216 13:17:05.825608 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:06.385334 kubelet[2710]: I1216 13:17:06.385288 2710 apiserver.go:52] "Watching apiserver" Dec 16 13:17:06.404695 kubelet[2710]: I1216 13:17:06.404656 2710 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 13:17:06.440814 kubelet[2710]: I1216 13:17:06.440759 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-232-20-218" podStartSLOduration=1.440740718 podStartE2EDuration="1.440740718s" podCreationTimestamp="2025-12-16 13:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:17:06.431753158 +0000 UTC m=+1.108040861" watchObservedRunningTime="2025-12-16 13:17:06.440740718 +0000 UTC m=+1.117028421" Dec 16 13:17:06.450529 kubelet[2710]: E1216 13:17:06.450496 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:06.453916 kubelet[2710]: E1216 13:17:06.453891 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:06.454440 kubelet[2710]: E1216 13:17:06.454416 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:06.457827 kubelet[2710]: I1216 13:17:06.457632 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-232-20-218" podStartSLOduration=1.457623498 podStartE2EDuration="1.457623498s" podCreationTimestamp="2025-12-16 13:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:17:06.444170268 +0000 UTC m=+1.120457971" watchObservedRunningTime="2025-12-16 13:17:06.457623498 +0000 UTC m=+1.133911201" Dec 16 13:17:06.468422 kubelet[2710]: I1216 13:17:06.468115 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-232-20-218" podStartSLOduration=1.468099498 podStartE2EDuration="1.468099498s" podCreationTimestamp="2025-12-16 13:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:17:06.458127938 +0000 UTC m=+1.134415641" watchObservedRunningTime="2025-12-16 13:17:06.468099498 +0000 UTC m=+1.144387211" Dec 16 13:17:07.452389 kubelet[2710]: E1216 13:17:07.452342 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:07.452841 kubelet[2710]: E1216 13:17:07.452822 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:07.849482 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 16 13:17:09.715231 kubelet[2710]: E1216 13:17:09.714978 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:11.676243 kubelet[2710]: I1216 13:17:11.676149 2710 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 13:17:11.676730 containerd[1552]: time="2025-12-16T13:17:11.676704493Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 13:17:11.677009 kubelet[2710]: I1216 13:17:11.676910 2710 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 13:17:11.843220 systemd[1]: Created slice kubepods-besteffort-pod169d6596_4607_489c_9fdd_654a69234e8b.slice - libcontainer container kubepods-besteffort-pod169d6596_4607_489c_9fdd_654a69234e8b.slice. 
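
The recurring dns.go:153 "Nameserver limits exceeded" errors above come from the kubelet capping resolv.conf at three nameservers (glibc's resolver limit): the node's resolv.conf lists more, and the kubelet keeps 172.232.0.15, 172.232.0.18, and 172.232.0.17 while dropping the rest. A hedged, standalone reimplementation of that check for illustration; this is not the kubelet's actual code:

// Reads /etc/resolv.conf and reports the same truncation the kubelet logs.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc resolver limit, enforced by the kubelet

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	if len(servers) > maxNameservers {
		kept := servers[:maxNameservers]
		// Mirrors the log entry: extra nameservers are dropped, not fatal.
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(kept, " "))
	}
}
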
Dec 16 13:17:11.846667 kubelet[2710]: I1216 13:17:11.846603 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/169d6596-4607-489c-9fdd-654a69234e8b-kube-proxy\") pod \"kube-proxy-zg8xl\" (UID: \"169d6596-4607-489c-9fdd-654a69234e8b\") " pod="kube-system/kube-proxy-zg8xl" Dec 16 13:17:11.846667 kubelet[2710]: I1216 13:17:11.846632 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/169d6596-4607-489c-9fdd-654a69234e8b-lib-modules\") pod \"kube-proxy-zg8xl\" (UID: \"169d6596-4607-489c-9fdd-654a69234e8b\") " pod="kube-system/kube-proxy-zg8xl" Dec 16 13:17:11.846877 kubelet[2710]: I1216 13:17:11.846782 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxb2h\" (UniqueName: \"kubernetes.io/projected/169d6596-4607-489c-9fdd-654a69234e8b-kube-api-access-mxb2h\") pod \"kube-proxy-zg8xl\" (UID: \"169d6596-4607-489c-9fdd-654a69234e8b\") " pod="kube-system/kube-proxy-zg8xl" Dec 16 13:17:11.846877 kubelet[2710]: I1216 13:17:11.846802 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/169d6596-4607-489c-9fdd-654a69234e8b-xtables-lock\") pod \"kube-proxy-zg8xl\" (UID: \"169d6596-4607-489c-9fdd-654a69234e8b\") " pod="kube-system/kube-proxy-zg8xl" Dec 16 13:17:11.974424 kubelet[2710]: E1216 13:17:11.973204 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:12.151750 kubelet[2710]: E1216 13:17:12.151711 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:12.152757 containerd[1552]: time="2025-12-16T13:17:12.152690242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zg8xl,Uid:169d6596-4607-489c-9fdd-654a69234e8b,Namespace:kube-system,Attempt:0,}" Dec 16 13:17:12.171033 containerd[1552]: time="2025-12-16T13:17:12.171003229Z" level=info msg="connecting to shim 1188a6ab13616acda59ec39dec7b78b1a3eb3cdcfd502b50384de7c456690b69" address="unix:///run/containerd/s/3a41304d3e302e73a90c4a8518e003daa9562fd0f3f551df4cdcb43b89707b4f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:17:12.207700 systemd[1]: Started cri-containerd-1188a6ab13616acda59ec39dec7b78b1a3eb3cdcfd502b50384de7c456690b69.scope - libcontainer container 1188a6ab13616acda59ec39dec7b78b1a3eb3cdcfd502b50384de7c456690b69. 
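
The reconciler_common.go:251 entries above are the kubelet's volume manager walking its desired state of world and starting a VerifyControllerAttachedVolume operation for each volume a pending pod needs (here kube-proxy's configmap, lib-modules, xtables-lock, and projected token volumes). A rough sketch of that desired-vs-actual reconciliation pattern, with invented types standing in for the kubelet's own (illustrative only):

    // reconciler_sketch.go - illustrative-only sketch of the desired-state vs
    // actual-state volume reconciliation behind the
    // "operationExecutor.VerifyControllerAttachedVolume started" entries above.
    // Types and names here are invented; the real logic lives in the kubelet's
    // volume manager reconciler.
    package main

    import "fmt"

    type volume struct {
        UniqueName string // e.g. "kubernetes.io/host-path/<uid>-xtables-lock"
        Pod        string // e.g. "kube-system/kube-proxy-zg8xl"
    }

    func main() {
        desired := []volume{
            {"kubernetes.io/configmap/<uid>-kube-proxy", "kube-system/kube-proxy-zg8xl"},
            {"kubernetes.io/host-path/<uid>-xtables-lock", "kube-system/kube-proxy-zg8xl"},
        }
        mounted := map[string]bool{} // actual state of world: nothing mounted yet

        for _, v := range desired {
            if !mounted[v.UniqueName] {
                // The kubelet logs a line like the ones above and kicks off an
                // asynchronous verify/attach/mount operation per volume.
                fmt.Printf("VerifyControllerAttachedVolume started for %q (pod %q)\n",
                    v.UniqueName, v.Pod)
            }
        }
    }

In the real kubelet these operations run asynchronously through an operation executor, and the pod's containers start only once every required volume has reached the mounted state.
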
Dec 16 13:17:12.237345 containerd[1552]: time="2025-12-16T13:17:12.237088243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zg8xl,Uid:169d6596-4607-489c-9fdd-654a69234e8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1188a6ab13616acda59ec39dec7b78b1a3eb3cdcfd502b50384de7c456690b69\"" Dec 16 13:17:12.238267 kubelet[2710]: E1216 13:17:12.238165 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:12.241101 containerd[1552]: time="2025-12-16T13:17:12.241079763Z" level=info msg="CreateContainer within sandbox \"1188a6ab13616acda59ec39dec7b78b1a3eb3cdcfd502b50384de7c456690b69\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 13:17:12.249770 containerd[1552]: time="2025-12-16T13:17:12.249746790Z" level=info msg="Container c7cd990f636f7834b7f782964795be5f966c5201b09663436e79ca29dc3fe080: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:17:12.256782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2659453358.mount: Deactivated successfully. Dec 16 13:17:12.260621 containerd[1552]: time="2025-12-16T13:17:12.260592409Z" level=info msg="CreateContainer within sandbox \"1188a6ab13616acda59ec39dec7b78b1a3eb3cdcfd502b50384de7c456690b69\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c7cd990f636f7834b7f782964795be5f966c5201b09663436e79ca29dc3fe080\"" Dec 16 13:17:12.261251 containerd[1552]: time="2025-12-16T13:17:12.261234807Z" level=info msg="StartContainer for \"c7cd990f636f7834b7f782964795be5f966c5201b09663436e79ca29dc3fe080\"" Dec 16 13:17:12.262588 containerd[1552]: time="2025-12-16T13:17:12.262473745Z" level=info msg="connecting to shim c7cd990f636f7834b7f782964795be5f966c5201b09663436e79ca29dc3fe080" address="unix:///run/containerd/s/3a41304d3e302e73a90c4a8518e003daa9562fd0f3f551df4cdcb43b89707b4f" protocol=ttrpc version=3 Dec 16 13:17:12.284810 systemd[1]: Started cri-containerd-c7cd990f636f7834b7f782964795be5f966c5201b09663436e79ca29dc3fe080.scope - libcontainer container c7cd990f636f7834b7f782964795be5f966c5201b09663436e79ca29dc3fe080. 
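
Each "connecting to shim ..." line above shows containerd reaching a per-sandbox shim over a unix socket under /run/containerd/s/ and then speaking ttrpc (version=3) on it; note that the pause sandbox and the kube-proxy container above connect to the same shim address. A minimal sketch of just the dialing step, with an illustrative socket path (the real client layers a ttrpc client on top of the connection):

    // shim_dial.go - minimal sketch of the "connecting to shim ...
    // address=unix://..." step above: containerd reaches a container's shim
    // over a unix socket and then speaks ttrpc on it. This example only dials
    // the socket; the path is illustrative.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/run/containerd/s/example" // illustrative path
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintln(os.Stderr, "dial shim:", err)
            os.Exit(1)
        }
        defer conn.Close()
        fmt.Println("connected to shim at", sock)
    }
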
Dec 16 13:17:12.380093 containerd[1552]: time="2025-12-16T13:17:12.380026412Z" level=info msg="StartContainer for \"c7cd990f636f7834b7f782964795be5f966c5201b09663436e79ca29dc3fe080\" returns successfully" Dec 16 13:17:12.384585 kubelet[2710]: E1216 13:17:12.384335 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:12.464997 kubelet[2710]: E1216 13:17:12.464963 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:12.465749 kubelet[2710]: E1216 13:17:12.465703 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:12.465943 kubelet[2710]: E1216 13:17:12.465921 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:12.760609 kubelet[2710]: I1216 13:17:12.760242 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zg8xl" podStartSLOduration=1.760216354 podStartE2EDuration="1.760216354s" podCreationTimestamp="2025-12-16 13:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:17:12.496021185 +0000 UTC m=+7.172308888" watchObservedRunningTime="2025-12-16 13:17:12.760216354 +0000 UTC m=+7.436504057" Dec 16 13:17:12.770311 systemd[1]: Created slice kubepods-besteffort-pod83d38a14_43ad_4d20_b420_83a41397a418.slice - libcontainer container kubepods-besteffort-pod83d38a14_43ad_4d20_b420_83a41397a418.slice. 
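
The systemd "Created slice kubepods-besteffort-pod....slice" entries above reflect the systemd cgroup driver: each BestEffort pod gets a slice named from its QoS class plus its UID, with the UID's dashes rewritten to underscores because systemd reserves "-" for expressing slice hierarchy. A small sketch of that name derivation (the mapping is visible directly in the log):

    // pod_slice.go - sketch of the cgroup slice naming visible in the systemd
    // entries above: for a BestEffort pod the kubelet derives
    // kubepods-besteffort-pod<uid>.slice, mapping the UID's dashes to
    // underscores (systemd uses "-" to denote slice nesting).
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        uid := "83d38a14-43ad-4d20-b420-83a41397a418" // tigera-operator pod UID from the log
        slice := "kubepods-besteffort-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
        fmt.Println(slice) // kubepods-besteffort-pod83d38a14_43ad_4d20_b420_83a41397a418.slice
    }
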
Dec 16 13:17:12.851471 kubelet[2710]: I1216 13:17:12.851435 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/83d38a14-43ad-4d20-b420-83a41397a418-var-lib-calico\") pod \"tigera-operator-7dcd859c48-lmplp\" (UID: \"83d38a14-43ad-4d20-b420-83a41397a418\") " pod="tigera-operator/tigera-operator-7dcd859c48-lmplp" Dec 16 13:17:12.851620 kubelet[2710]: I1216 13:17:12.851485 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rxx9\" (UniqueName: \"kubernetes.io/projected/83d38a14-43ad-4d20-b420-83a41397a418-kube-api-access-7rxx9\") pod \"tigera-operator-7dcd859c48-lmplp\" (UID: \"83d38a14-43ad-4d20-b420-83a41397a418\") " pod="tigera-operator/tigera-operator-7dcd859c48-lmplp" Dec 16 13:17:13.074045 containerd[1552]: time="2025-12-16T13:17:13.073935767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-lmplp,Uid:83d38a14-43ad-4d20-b420-83a41397a418,Namespace:tigera-operator,Attempt:0,}" Dec 16 13:17:13.091578 containerd[1552]: time="2025-12-16T13:17:13.090331356Z" level=info msg="connecting to shim fb9f3351765b2c84add96f01a3d4b5f6789920f07a404596e8a36a92e0f61a08" address="unix:///run/containerd/s/d4cb98f9afef252fe2bdc60d7eea6d36267d1eb0500b9fcc1b4c8ff19058e81c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:17:13.119688 systemd[1]: Started cri-containerd-fb9f3351765b2c84add96f01a3d4b5f6789920f07a404596e8a36a92e0f61a08.scope - libcontainer container fb9f3351765b2c84add96f01a3d4b5f6789920f07a404596e8a36a92e0f61a08. Dec 16 13:17:13.171691 containerd[1552]: time="2025-12-16T13:17:13.171658511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-lmplp,Uid:83d38a14-43ad-4d20-b420-83a41397a418,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fb9f3351765b2c84add96f01a3d4b5f6789920f07a404596e8a36a92e0f61a08\"" Dec 16 13:17:13.173844 containerd[1552]: time="2025-12-16T13:17:13.173803346Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 13:17:14.434675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1062846298.mount: Deactivated successfully. 
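
The PullImage line above starts fetching quay.io/tigera/operator:v1.38.7 by tag; the later "Pulled image" entry records the resolved repo digest. As a simplified illustration (not containerd's actual reference parser) of how such a reference splits into registry, repository, and tag:

    // image_ref.go - simplified breakdown of a pull reference like the one
    // above ("quay.io/tigera/operator:v1.38.7") into registry, repository and
    // tag, before the registry resolves it to a sha256 repo digest.
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        ref := "quay.io/tigera/operator:v1.38.7"

        name, tag := ref, "latest"
        // A ":" after the last "/" separates the tag (this also keeps
        // registry ports like "host:5000/repo" intact).
        if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
            name, tag = ref[:i], ref[i+1:]
        }
        registry, repo := name, ""
        if i := strings.Index(name, "/"); i >= 0 {
            registry, repo = name[:i], name[i+1:]
        }
        fmt.Printf("registry=%s repository=%s tag=%s\n", registry, repo, tag)
    }
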
Dec 16 13:17:15.131126 containerd[1552]: time="2025-12-16T13:17:15.131068000Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:17:15.132221 containerd[1552]: time="2025-12-16T13:17:15.132066415Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Dec 16 13:17:15.132762 containerd[1552]: time="2025-12-16T13:17:15.132732855Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:17:15.134298 containerd[1552]: time="2025-12-16T13:17:15.134267143Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:17:15.134969 containerd[1552]: time="2025-12-16T13:17:15.134941473Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.961113538s" Dec 16 13:17:15.135030 containerd[1552]: time="2025-12-16T13:17:15.135017992Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 13:17:15.137452 containerd[1552]: time="2025-12-16T13:17:15.136749117Z" level=info msg="CreateContainer within sandbox \"fb9f3351765b2c84add96f01a3d4b5f6789920f07a404596e8a36a92e0f61a08\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 13:17:15.142437 containerd[1552]: time="2025-12-16T13:17:15.142417714Z" level=info msg="Container 1867792b91e32ffc7ac28ef7a910e999bb44c64bbb2669bc329ddeb5bbb6aef2: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:17:15.153343 containerd[1552]: time="2025-12-16T13:17:15.153312426Z" level=info msg="CreateContainer within sandbox \"fb9f3351765b2c84add96f01a3d4b5f6789920f07a404596e8a36a92e0f61a08\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1867792b91e32ffc7ac28ef7a910e999bb44c64bbb2669bc329ddeb5bbb6aef2\"" Dec 16 13:17:15.153876 containerd[1552]: time="2025-12-16T13:17:15.153839028Z" level=info msg="StartContainer for \"1867792b91e32ffc7ac28ef7a910e999bb44c64bbb2669bc329ddeb5bbb6aef2\"" Dec 16 13:17:15.154891 containerd[1552]: time="2025-12-16T13:17:15.154802794Z" level=info msg="connecting to shim 1867792b91e32ffc7ac28ef7a910e999bb44c64bbb2669bc329ddeb5bbb6aef2" address="unix:///run/containerd/s/d4cb98f9afef252fe2bdc60d7eea6d36267d1eb0500b9fcc1b4c8ff19058e81c" protocol=ttrpc version=3 Dec 16 13:17:15.179695 systemd[1]: Started cri-containerd-1867792b91e32ffc7ac28ef7a910e999bb44c64bbb2669bc329ddeb5bbb6aef2.scope - libcontainer container 1867792b91e32ffc7ac28ef7a910e999bb44c64bbb2669bc329ddeb5bbb6aef2. 
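
The pull statistics above allow a quick sanity check: 25,061,691 bytes read in 1.961113538s works out to roughly 12.8 MB/s from quay.io (the reported image size, 25,057,686 bytes, differs slightly from bytes read over the wire):

    // pull_rate.go - back-of-envelope check of the pull above: 25,061,691
    // bytes read in 1.961113538s is about 12.8 MB/s.
    package main

    import "fmt"

    func main() {
        const bytesRead = 25061691  // from "active requests=0, bytes read=..."
        const seconds = 1.961113538 // from "in 1.961113538s"

        mbps := float64(bytesRead) / seconds / 1e6
        fmt.Printf("%.1f MB/s\n", mbps) // ~12.8 MB/s
    }
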
Dec 16 13:17:15.218539 containerd[1552]: time="2025-12-16T13:17:15.218485839Z" level=info msg="StartContainer for \"1867792b91e32ffc7ac28ef7a910e999bb44c64bbb2669bc329ddeb5bbb6aef2\" returns successfully" Dec 16 13:17:15.481765 kubelet[2710]: I1216 13:17:15.481581 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-lmplp" podStartSLOduration=1.5185570369999999 podStartE2EDuration="3.481544935s" podCreationTimestamp="2025-12-16 13:17:12 +0000 UTC" firstStartedPulling="2025-12-16 13:17:13.172774463 +0000 UTC m=+7.849062166" lastFinishedPulling="2025-12-16 13:17:15.135762361 +0000 UTC m=+9.812050064" observedRunningTime="2025-12-16 13:17:15.48049165 +0000 UTC m=+10.156779363" watchObservedRunningTime="2025-12-16 13:17:15.481544935 +0000 UTC m=+10.157832638" Dec 16 13:17:19.719779 kubelet[2710]: E1216 13:17:19.719732 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:20.480726 kubelet[2710]: E1216 13:17:20.480693 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:20.558256 sudo[1792]: pam_unix(sudo:session): session closed for user root Dec 16 13:17:20.612348 sshd[1791]: Connection closed by 139.178.89.65 port 50754 Dec 16 13:17:20.614791 sshd-session[1788]: pam_unix(sshd:session): session closed for user core Dec 16 13:17:20.620339 systemd[1]: sshd@6-172.232.20.218:22-139.178.89.65:50754.service: Deactivated successfully. Dec 16 13:17:20.623931 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 13:17:20.624384 systemd[1]: session-7.scope: Consumed 4.056s CPU time, 228.9M memory peak. Dec 16 13:17:20.626544 systemd-logind[1525]: Session 7 logged out. Waiting for processes to exit. Dec 16 13:17:20.628451 systemd-logind[1525]: Removed session 7. Dec 16 13:17:22.362286 update_engine[1528]: I20251216 13:17:22.361627 1528 update_attempter.cc:509] Updating boot flags... Dec 16 13:17:25.195346 systemd[1]: Created slice kubepods-besteffort-podf969c387_d3c0_42a3_bdc3_bbd1a4972c63.slice - libcontainer container kubepods-besteffort-podf969c387_d3c0_42a3_bdc3_bbd1a4972c63.slice. 
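
The tigera-operator startup entry above is a case where podStartSLOduration (~1.519s) and podStartE2EDuration (~3.482s) diverge: the SLO figure excludes the image-pull window bounded by firstStartedPulling and lastFinishedPulling, while E2E runs from pod creation to the watch-observed running time. Recomputing both from the timestamps in that entry:

    // slo_vs_e2e.go - reproduces the arithmetic in the tigera-operator
    // "Observed pod startup duration" entry above: E2E is creation -> observed
    // running, and the SLO figure subtracts the image-pull window.
    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse(time.RFC3339Nano, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-12-16T13:17:12Z")
        firstPull := mustParse("2025-12-16T13:17:13.172774463Z")
        lastPull := mustParse("2025-12-16T13:17:15.135762361Z")
        running := mustParse("2025-12-16T13:17:15.481544935Z") // watch-observed

        e2e := running.Sub(created)
        slo := e2e - lastPull.Sub(firstPull) // pull window excluded
        fmt.Printf("e2e=%v slo=%v\n", e2e, slo) // e2e=3.481544935s slo=1.518557037s
    }

For the earlier kube-proxy and control-plane pods, firstStartedPulling and lastFinishedPulling are the zero time (images were already present), which is why their SLO and E2E durations are identical.
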
Dec 16 13:17:25.228701 kubelet[2710]: I1216 13:17:25.228668 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f969c387-d3c0-42a3-bdc3-bbd1a4972c63-typha-certs\") pod \"calico-typha-7f56c6b68f-xbqth\" (UID: \"f969c387-d3c0-42a3-bdc3-bbd1a4972c63\") " pod="calico-system/calico-typha-7f56c6b68f-xbqth" Dec 16 13:17:25.229201 kubelet[2710]: I1216 13:17:25.228705 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f969c387-d3c0-42a3-bdc3-bbd1a4972c63-tigera-ca-bundle\") pod \"calico-typha-7f56c6b68f-xbqth\" (UID: \"f969c387-d3c0-42a3-bdc3-bbd1a4972c63\") " pod="calico-system/calico-typha-7f56c6b68f-xbqth" Dec 16 13:17:25.229201 kubelet[2710]: I1216 13:17:25.228725 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jpv4\" (UniqueName: \"kubernetes.io/projected/f969c387-d3c0-42a3-bdc3-bbd1a4972c63-kube-api-access-7jpv4\") pod \"calico-typha-7f56c6b68f-xbqth\" (UID: \"f969c387-d3c0-42a3-bdc3-bbd1a4972c63\") " pod="calico-system/calico-typha-7f56c6b68f-xbqth" Dec 16 13:17:25.390856 systemd[1]: Created slice kubepods-besteffort-pod00326b13_5e1c_47d9_b97c_f557ddabda7d.slice - libcontainer container kubepods-besteffort-pod00326b13_5e1c_47d9_b97c_f557ddabda7d.slice. Dec 16 13:17:25.430512 kubelet[2710]: I1216 13:17:25.430463 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/00326b13-5e1c-47d9-b97c-f557ddabda7d-policysync\") pod \"calico-node-sfwkq\" (UID: \"00326b13-5e1c-47d9-b97c-f557ddabda7d\") " pod="calico-system/calico-node-sfwkq" Dec 16 13:17:25.430512 kubelet[2710]: I1216 13:17:25.430497 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/00326b13-5e1c-47d9-b97c-f557ddabda7d-var-run-calico\") pod \"calico-node-sfwkq\" (UID: \"00326b13-5e1c-47d9-b97c-f557ddabda7d\") " pod="calico-system/calico-node-sfwkq" Dec 16 13:17:25.430512 kubelet[2710]: I1216 13:17:25.430514 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00326b13-5e1c-47d9-b97c-f557ddabda7d-lib-modules\") pod \"calico-node-sfwkq\" (UID: \"00326b13-5e1c-47d9-b97c-f557ddabda7d\") " pod="calico-system/calico-node-sfwkq" Dec 16 13:17:25.430837 kubelet[2710]: I1216 13:17:25.430529 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00326b13-5e1c-47d9-b97c-f557ddabda7d-tigera-ca-bundle\") pod \"calico-node-sfwkq\" (UID: \"00326b13-5e1c-47d9-b97c-f557ddabda7d\") " pod="calico-system/calico-node-sfwkq" Dec 16 13:17:25.430837 kubelet[2710]: I1216 13:17:25.430546 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/00326b13-5e1c-47d9-b97c-f557ddabda7d-cni-net-dir\") pod \"calico-node-sfwkq\" (UID: \"00326b13-5e1c-47d9-b97c-f557ddabda7d\") " pod="calico-system/calico-node-sfwkq" Dec 16 13:17:25.430837 kubelet[2710]: I1216 13:17:25.430576 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" 
(UniqueName: \"kubernetes.io/host-path/00326b13-5e1c-47d9-b97c-f557ddabda7d-cni-log-dir\") pod \"calico-node-sfwkq\" (UID: \"00326b13-5e1c-47d9-b97c-f557ddabda7d\") " pod="calico-system/calico-node-sfwkq" Dec 16 13:17:25.430837 kubelet[2710]: I1216 13:17:25.430592 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/00326b13-5e1c-47d9-b97c-f557ddabda7d-flexvol-driver-host\") pod \"calico-node-sfwkq\" (UID: \"00326b13-5e1c-47d9-b97c-f557ddabda7d\") " pod="calico-system/calico-node-sfwkq" Dec 16 13:17:25.430837 kubelet[2710]: I1216 13:17:25.430634 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/00326b13-5e1c-47d9-b97c-f557ddabda7d-var-lib-calico\") pod \"calico-node-sfwkq\" (UID: \"00326b13-5e1c-47d9-b97c-f557ddabda7d\") " pod="calico-system/calico-node-sfwkq" Dec 16 13:17:25.430970 kubelet[2710]: I1216 13:17:25.430682 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/00326b13-5e1c-47d9-b97c-f557ddabda7d-cni-bin-dir\") pod \"calico-node-sfwkq\" (UID: \"00326b13-5e1c-47d9-b97c-f557ddabda7d\") " pod="calico-system/calico-node-sfwkq" Dec 16 13:17:25.430970 kubelet[2710]: I1216 13:17:25.430701 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/00326b13-5e1c-47d9-b97c-f557ddabda7d-node-certs\") pod \"calico-node-sfwkq\" (UID: \"00326b13-5e1c-47d9-b97c-f557ddabda7d\") " pod="calico-system/calico-node-sfwkq" Dec 16 13:17:25.430970 kubelet[2710]: I1216 13:17:25.430723 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00326b13-5e1c-47d9-b97c-f557ddabda7d-xtables-lock\") pod \"calico-node-sfwkq\" (UID: \"00326b13-5e1c-47d9-b97c-f557ddabda7d\") " pod="calico-system/calico-node-sfwkq" Dec 16 13:17:25.430970 kubelet[2710]: I1216 13:17:25.430744 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8npp\" (UniqueName: \"kubernetes.io/projected/00326b13-5e1c-47d9-b97c-f557ddabda7d-kube-api-access-d8npp\") pod \"calico-node-sfwkq\" (UID: \"00326b13-5e1c-47d9-b97c-f557ddabda7d\") " pod="calico-system/calico-node-sfwkq" Dec 16 13:17:25.501881 kubelet[2710]: E1216 13:17:25.501504 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:25.503060 containerd[1552]: time="2025-12-16T13:17:25.503031818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f56c6b68f-xbqth,Uid:f969c387-d3c0-42a3-bdc3-bbd1a4972c63,Namespace:calico-system,Attempt:0,}" Dec 16 13:17:25.525421 containerd[1552]: time="2025-12-16T13:17:25.525319588Z" level=info msg="connecting to shim 143d420fd951a51f5aa7353e6c2a347025460a4a93478a6ee64b5dace707f9d0" address="unix:///run/containerd/s/04b16bc63e1837e77115206fa4687afbb019d58ca9fa0115f560d0c14aeb19ac" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:17:25.539590 kubelet[2710]: E1216 13:17:25.537460 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.539590 
kubelet[2710]: W1216 13:17:25.537482 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.539590 kubelet[2710]: E1216 13:17:25.537508 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.539590 kubelet[2710]: E1216 13:17:25.537724 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.539590 kubelet[2710]: W1216 13:17:25.537732 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.539590 kubelet[2710]: E1216 13:17:25.537740 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.539590 kubelet[2710]: E1216 13:17:25.537895 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.539590 kubelet[2710]: W1216 13:17:25.537902 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.539590 kubelet[2710]: E1216 13:17:25.537910 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.539590 kubelet[2710]: E1216 13:17:25.538304 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.539967 kubelet[2710]: W1216 13:17:25.538311 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.539967 kubelet[2710]: E1216 13:17:25.538320 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.540727 kubelet[2710]: E1216 13:17:25.540702 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.540727 kubelet[2710]: W1216 13:17:25.540721 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.540727 kubelet[2710]: E1216 13:17:25.540732 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:17:25.558537 kubelet[2710]: E1216 13:17:25.558510 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.558537 kubelet[2710]: W1216 13:17:25.558530 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.558630 kubelet[2710]: E1216 13:17:25.558544 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.577890 systemd[1]: Started cri-containerd-143d420fd951a51f5aa7353e6c2a347025460a4a93478a6ee64b5dace707f9d0.scope - libcontainer container 143d420fd951a51f5aa7353e6c2a347025460a4a93478a6ee64b5dace707f9d0. Dec 16 13:17:25.587522 kubelet[2710]: E1216 13:17:25.587394 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dfzr8" podUID="5fcd65a8-90ec-479e-a0e4-707e3c32e3f8" Dec 16 13:17:25.613691 kubelet[2710]: E1216 13:17:25.613658 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.613858 kubelet[2710]: W1216 13:17:25.613788 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.613858 kubelet[2710]: E1216 13:17:25.613810 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.614125 kubelet[2710]: E1216 13:17:25.614091 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.614125 kubelet[2710]: W1216 13:17:25.614101 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.614240 kubelet[2710]: E1216 13:17:25.614110 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.614478 kubelet[2710]: E1216 13:17:25.614442 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.614478 kubelet[2710]: W1216 13:17:25.614453 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.614619 kubelet[2710]: E1216 13:17:25.614461 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:17:25.615136 kubelet[2710]: E1216 13:17:25.615041 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.615136 kubelet[2710]: W1216 13:17:25.615069 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.615136 kubelet[2710]: E1216 13:17:25.615078 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.615477 kubelet[2710]: E1216 13:17:25.615439 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.615477 kubelet[2710]: W1216 13:17:25.615449 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.615616 kubelet[2710]: E1216 13:17:25.615459 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.616213 kubelet[2710]: E1216 13:17:25.616156 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.616213 kubelet[2710]: W1216 13:17:25.616167 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.616213 kubelet[2710]: E1216 13:17:25.616175 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.616662 kubelet[2710]: E1216 13:17:25.616606 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.616662 kubelet[2710]: W1216 13:17:25.616616 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.616662 kubelet[2710]: E1216 13:17:25.616624 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.617247 kubelet[2710]: E1216 13:17:25.617148 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.617247 kubelet[2710]: W1216 13:17:25.617158 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.617247 kubelet[2710]: E1216 13:17:25.617166 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:17:25.617795 kubelet[2710]: E1216 13:17:25.617762 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.617795 kubelet[2710]: W1216 13:17:25.617773 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.617917 kubelet[2710]: E1216 13:17:25.617873 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.618373 kubelet[2710]: E1216 13:17:25.618362 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.618442 kubelet[2710]: W1216 13:17:25.618413 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.618487 kubelet[2710]: E1216 13:17:25.618477 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.619245 kubelet[2710]: E1216 13:17:25.619234 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.619364 kubelet[2710]: W1216 13:17:25.619287 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.619364 kubelet[2710]: E1216 13:17:25.619316 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.620156 kubelet[2710]: E1216 13:17:25.619926 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.620156 kubelet[2710]: W1216 13:17:25.619937 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.620232 kubelet[2710]: E1216 13:17:25.620221 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.620460 kubelet[2710]: E1216 13:17:25.620450 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.620516 kubelet[2710]: W1216 13:17:25.620507 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.620592 kubelet[2710]: E1216 13:17:25.620554 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:17:25.620801 kubelet[2710]: E1216 13:17:25.620791 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.620884 kubelet[2710]: W1216 13:17:25.620849 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.620884 kubelet[2710]: E1216 13:17:25.620860 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.621152 kubelet[2710]: E1216 13:17:25.621132 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.621586 kubelet[2710]: W1216 13:17:25.621536 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.621586 kubelet[2710]: E1216 13:17:25.621548 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.621829 kubelet[2710]: E1216 13:17:25.621818 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.621940 kubelet[2710]: W1216 13:17:25.621882 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.621940 kubelet[2710]: E1216 13:17:25.621895 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.622213 kubelet[2710]: E1216 13:17:25.622157 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.622213 kubelet[2710]: W1216 13:17:25.622168 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.622213 kubelet[2710]: E1216 13:17:25.622178 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.622478 kubelet[2710]: E1216 13:17:25.622426 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.622478 kubelet[2710]: W1216 13:17:25.622435 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.622478 kubelet[2710]: E1216 13:17:25.622443 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:17:25.622782 kubelet[2710]: E1216 13:17:25.622727 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.622782 kubelet[2710]: W1216 13:17:25.622737 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.622782 kubelet[2710]: E1216 13:17:25.622745 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.623091 kubelet[2710]: E1216 13:17:25.623022 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.623091 kubelet[2710]: W1216 13:17:25.623032 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.623091 kubelet[2710]: E1216 13:17:25.623040 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.632837 kubelet[2710]: E1216 13:17:25.632658 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.632837 kubelet[2710]: W1216 13:17:25.632675 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.632837 kubelet[2710]: E1216 13:17:25.632686 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.632837 kubelet[2710]: I1216 13:17:25.632703 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5fcd65a8-90ec-479e-a0e4-707e3c32e3f8-kubelet-dir\") pod \"csi-node-driver-dfzr8\" (UID: \"5fcd65a8-90ec-479e-a0e4-707e3c32e3f8\") " pod="calico-system/csi-node-driver-dfzr8" Dec 16 13:17:25.634604 kubelet[2710]: E1216 13:17:25.633327 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.635133 kubelet[2710]: W1216 13:17:25.634747 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.635133 kubelet[2710]: E1216 13:17:25.634796 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:17:25.635133 kubelet[2710]: I1216 13:17:25.634826 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5fcd65a8-90ec-479e-a0e4-707e3c32e3f8-registration-dir\") pod \"csi-node-driver-dfzr8\" (UID: \"5fcd65a8-90ec-479e-a0e4-707e3c32e3f8\") " pod="calico-system/csi-node-driver-dfzr8" Dec 16 13:17:25.635434 kubelet[2710]: E1216 13:17:25.635417 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.635486 kubelet[2710]: W1216 13:17:25.635475 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.635669 kubelet[2710]: E1216 13:17:25.635551 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.635712 kubelet[2710]: I1216 13:17:25.635675 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5fcd65a8-90ec-479e-a0e4-707e3c32e3f8-socket-dir\") pod \"csi-node-driver-dfzr8\" (UID: \"5fcd65a8-90ec-479e-a0e4-707e3c32e3f8\") " pod="calico-system/csi-node-driver-dfzr8" Dec 16 13:17:25.635993 kubelet[2710]: E1216 13:17:25.635981 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.636047 kubelet[2710]: W1216 13:17:25.636037 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.636141 kubelet[2710]: E1216 13:17:25.636113 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.636473 kubelet[2710]: E1216 13:17:25.636461 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.636524 kubelet[2710]: W1216 13:17:25.636514 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.636610 kubelet[2710]: E1216 13:17:25.636597 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.637090 kubelet[2710]: E1216 13:17:25.637079 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.637141 kubelet[2710]: W1216 13:17:25.637131 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.637193 kubelet[2710]: E1216 13:17:25.637183 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:17:25.637257 kubelet[2710]: I1216 13:17:25.637243 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5fcd65a8-90ec-479e-a0e4-707e3c32e3f8-varrun\") pod \"csi-node-driver-dfzr8\" (UID: \"5fcd65a8-90ec-479e-a0e4-707e3c32e3f8\") " pod="calico-system/csi-node-driver-dfzr8" Dec 16 13:17:25.637487 kubelet[2710]: E1216 13:17:25.637440 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.637487 kubelet[2710]: W1216 13:17:25.637453 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.637551 kubelet[2710]: E1216 13:17:25.637491 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.637879 kubelet[2710]: E1216 13:17:25.637775 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.637879 kubelet[2710]: W1216 13:17:25.637874 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.637934 kubelet[2710]: E1216 13:17:25.637886 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.638621 kubelet[2710]: E1216 13:17:25.638586 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.638621 kubelet[2710]: W1216 13:17:25.638614 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.638706 kubelet[2710]: E1216 13:17:25.638632 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.639507 kubelet[2710]: E1216 13:17:25.639360 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.639507 kubelet[2710]: W1216 13:17:25.639373 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.639507 kubelet[2710]: E1216 13:17:25.639384 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:17:25.639828 kubelet[2710]: E1216 13:17:25.639817 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.639888 kubelet[2710]: W1216 13:17:25.639876 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.639938 kubelet[2710]: E1216 13:17:25.639927 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.641790 kubelet[2710]: E1216 13:17:25.641619 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.641790 kubelet[2710]: W1216 13:17:25.641633 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.641790 kubelet[2710]: E1216 13:17:25.641665 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.642540 kubelet[2710]: E1216 13:17:25.642511 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.642540 kubelet[2710]: W1216 13:17:25.642530 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.642540 kubelet[2710]: E1216 13:17:25.642541 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.642650 kubelet[2710]: I1216 13:17:25.642610 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2nr5\" (UniqueName: \"kubernetes.io/projected/5fcd65a8-90ec-479e-a0e4-707e3c32e3f8-kube-api-access-c2nr5\") pod \"csi-node-driver-dfzr8\" (UID: \"5fcd65a8-90ec-479e-a0e4-707e3c32e3f8\") " pod="calico-system/csi-node-driver-dfzr8" Dec 16 13:17:25.643214 kubelet[2710]: E1216 13:17:25.643186 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.643214 kubelet[2710]: W1216 13:17:25.643203 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.643214 kubelet[2710]: E1216 13:17:25.643213 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:17:25.643877 kubelet[2710]: E1216 13:17:25.643602 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.643877 kubelet[2710]: W1216 13:17:25.643613 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.643877 kubelet[2710]: E1216 13:17:25.643621 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.694461 kubelet[2710]: E1216 13:17:25.694400 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:25.698585 containerd[1552]: time="2025-12-16T13:17:25.697666694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sfwkq,Uid:00326b13-5e1c-47d9-b97c-f557ddabda7d,Namespace:calico-system,Attempt:0,}" Dec 16 13:17:25.718364 containerd[1552]: time="2025-12-16T13:17:25.718292317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f56c6b68f-xbqth,Uid:f969c387-d3c0-42a3-bdc3-bbd1a4972c63,Namespace:calico-system,Attempt:0,} returns sandbox id \"143d420fd951a51f5aa7353e6c2a347025460a4a93478a6ee64b5dace707f9d0\"" Dec 16 13:17:25.720598 kubelet[2710]: E1216 13:17:25.720453 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:25.721980 containerd[1552]: time="2025-12-16T13:17:25.721941569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 13:17:25.730607 containerd[1552]: time="2025-12-16T13:17:25.728607438Z" level=info msg="connecting to shim 26170ca8fa4bae5c9f211f45626b10d1055af119eb2ba464afe7c720ff8e90c3" address="unix:///run/containerd/s/e7f319e095d76e33e78108874f81811204efd05d9f0029e3ac16a9c5c5ba939a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:17:25.745034 kubelet[2710]: E1216 13:17:25.744512 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.745034 kubelet[2710]: W1216 13:17:25.744530 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.745034 kubelet[2710]: E1216 13:17:25.744548 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:17:25.745034 kubelet[2710]: E1216 13:17:25.745007 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.745034 kubelet[2710]: W1216 13:17:25.745017 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.745265 kubelet[2710]: E1216 13:17:25.745211 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.745499 kubelet[2710]: E1216 13:17:25.745488 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.745582 kubelet[2710]: W1216 13:17:25.745550 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.745649 kubelet[2710]: E1216 13:17:25.745638 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.745914 kubelet[2710]: E1216 13:17:25.745904 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.746204 kubelet[2710]: W1216 13:17:25.746192 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.746255 kubelet[2710]: E1216 13:17:25.746245 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.746487 kubelet[2710]: E1216 13:17:25.746477 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.746624 kubelet[2710]: W1216 13:17:25.746610 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.746692 kubelet[2710]: E1216 13:17:25.746682 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.746947 kubelet[2710]: E1216 13:17:25.746936 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.747004 kubelet[2710]: W1216 13:17:25.746994 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.747060 kubelet[2710]: E1216 13:17:25.747050 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:17:25.747344 kubelet[2710]: E1216 13:17:25.747334 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.747452 kubelet[2710]: W1216 13:17:25.747393 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.747513 kubelet[2710]: E1216 13:17:25.747502 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.747783 kubelet[2710]: E1216 13:17:25.747761 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.747783 kubelet[2710]: W1216 13:17:25.747770 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.747899 kubelet[2710]: E1216 13:17:25.747888 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.748106 kubelet[2710]: E1216 13:17:25.748083 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.748106 kubelet[2710]: W1216 13:17:25.748094 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.748263 kubelet[2710]: E1216 13:17:25.748245 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.748431 kubelet[2710]: E1216 13:17:25.748408 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.748431 kubelet[2710]: W1216 13:17:25.748418 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.748593 kubelet[2710]: E1216 13:17:25.748575 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.748780 kubelet[2710]: E1216 13:17:25.748757 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.748780 kubelet[2710]: W1216 13:17:25.748767 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.748932 kubelet[2710]: E1216 13:17:25.748912 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:17:25.749106 kubelet[2710]: E1216 13:17:25.749084 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.749106 kubelet[2710]: W1216 13:17:25.749093 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.749251 kubelet[2710]: E1216 13:17:25.749233 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.749427 kubelet[2710]: E1216 13:17:25.749404 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.749427 kubelet[2710]: W1216 13:17:25.749414 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.749785 kubelet[2710]: E1216 13:17:25.749766 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.750222 kubelet[2710]: E1216 13:17:25.750049 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.750222 kubelet[2710]: W1216 13:17:25.750058 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.750333 kubelet[2710]: E1216 13:17:25.750282 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.750389 kubelet[2710]: E1216 13:17:25.750380 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.750480 kubelet[2710]: W1216 13:17:25.750435 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.750554 kubelet[2710]: E1216 13:17:25.750521 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.750801 kubelet[2710]: E1216 13:17:25.750779 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.750801 kubelet[2710]: W1216 13:17:25.750788 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.751007 kubelet[2710]: E1216 13:17:25.750987 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:17:25.751112 kubelet[2710]: E1216 13:17:25.751091 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.751112 kubelet[2710]: W1216 13:17:25.751100 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.751346 kubelet[2710]: E1216 13:17:25.751327 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.751442 kubelet[2710]: E1216 13:17:25.751422 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.751442 kubelet[2710]: W1216 13:17:25.751430 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.751644 kubelet[2710]: E1216 13:17:25.751571 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.751825 kubelet[2710]: E1216 13:17:25.751816 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.752213 kubelet[2710]: W1216 13:17:25.751875 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.752267 kubelet[2710]: E1216 13:17:25.752256 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.752641 kubelet[2710]: E1216 13:17:25.752630 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.752696 kubelet[2710]: W1216 13:17:25.752686 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.752810 kubelet[2710]: E1216 13:17:25.752799 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.753140 kubelet[2710]: E1216 13:17:25.753130 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.753211 kubelet[2710]: W1216 13:17:25.753200 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.753479 kubelet[2710]: E1216 13:17:25.753393 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:17:25.753603 kubelet[2710]: E1216 13:17:25.753552 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.753666 kubelet[2710]: W1216 13:17:25.753654 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.753799 kubelet[2710]: E1216 13:17:25.753777 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.754008 kubelet[2710]: E1216 13:17:25.753985 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.754008 kubelet[2710]: W1216 13:17:25.753996 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.754259 kubelet[2710]: E1216 13:17:25.754235 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.754366 kubelet[2710]: E1216 13:17:25.754344 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.754366 kubelet[2710]: W1216 13:17:25.754354 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.754488 kubelet[2710]: E1216 13:17:25.754430 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.754712 kubelet[2710]: E1216 13:17:25.754701 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.754997 kubelet[2710]: W1216 13:17:25.754766 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.754997 kubelet[2710]: E1216 13:17:25.754780 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:25.772525 kubelet[2710]: E1216 13:17:25.772509 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:25.772653 kubelet[2710]: W1216 13:17:25.772634 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:25.772797 kubelet[2710]: E1216 13:17:25.772778 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:17:25.774719 systemd[1]: Started cri-containerd-26170ca8fa4bae5c9f211f45626b10d1055af119eb2ba464afe7c720ff8e90c3.scope - libcontainer container 26170ca8fa4bae5c9f211f45626b10d1055af119eb2ba464afe7c720ff8e90c3. Dec 16 13:17:25.810351 containerd[1552]: time="2025-12-16T13:17:25.810225106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sfwkq,Uid:00326b13-5e1c-47d9-b97c-f557ddabda7d,Namespace:calico-system,Attempt:0,} returns sandbox id \"26170ca8fa4bae5c9f211f45626b10d1055af119eb2ba464afe7c720ff8e90c3\"" Dec 16 13:17:25.811188 kubelet[2710]: E1216 13:17:25.811160 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:26.682356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3610283130.mount: Deactivated successfully. Dec 16 13:17:27.179541 containerd[1552]: time="2025-12-16T13:17:27.179497469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:17:27.180432 containerd[1552]: time="2025-12-16T13:17:27.180289784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Dec 16 13:17:27.180930 containerd[1552]: time="2025-12-16T13:17:27.180901040Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:17:27.182582 containerd[1552]: time="2025-12-16T13:17:27.182529719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:17:27.183183 containerd[1552]: time="2025-12-16T13:17:27.183134875Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.461132906s" Dec 16 13:17:27.183288 containerd[1552]: time="2025-12-16T13:17:27.183266694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 13:17:27.185132 containerd[1552]: time="2025-12-16T13:17:27.184922183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 13:17:27.200236 containerd[1552]: time="2025-12-16T13:17:27.198210744Z" level=info msg="CreateContainer within sandbox \"143d420fd951a51f5aa7353e6c2a347025460a4a93478a6ee64b5dace707f9d0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 13:17:27.206332 containerd[1552]: time="2025-12-16T13:17:27.206290190Z" level=info msg="Container 4dc7f63f1317c6478519e5a54dcea27cef6c7263f2b3891203db90e0e4a28592: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:17:27.211145 containerd[1552]: time="2025-12-16T13:17:27.211108897Z" level=info msg="CreateContainer within sandbox \"143d420fd951a51f5aa7353e6c2a347025460a4a93478a6ee64b5dace707f9d0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"4dc7f63f1317c6478519e5a54dcea27cef6c7263f2b3891203db90e0e4a28592\"" Dec 16 13:17:27.211829 containerd[1552]: time="2025-12-16T13:17:27.211806673Z" level=info msg="StartContainer for \"4dc7f63f1317c6478519e5a54dcea27cef6c7263f2b3891203db90e0e4a28592\"" Dec 16 13:17:27.216391 containerd[1552]: time="2025-12-16T13:17:27.216330982Z" level=info msg="connecting to shim 4dc7f63f1317c6478519e5a54dcea27cef6c7263f2b3891203db90e0e4a28592" address="unix:///run/containerd/s/04b16bc63e1837e77115206fa4687afbb019d58ca9fa0115f560d0c14aeb19ac" protocol=ttrpc version=3 Dec 16 13:17:27.236706 systemd[1]: Started cri-containerd-4dc7f63f1317c6478519e5a54dcea27cef6c7263f2b3891203db90e0e4a28592.scope - libcontainer container 4dc7f63f1317c6478519e5a54dcea27cef6c7263f2b3891203db90e0e4a28592. Dec 16 13:17:27.306050 containerd[1552]: time="2025-12-16T13:17:27.305954812Z" level=info msg="StartContainer for \"4dc7f63f1317c6478519e5a54dcea27cef6c7263f2b3891203db90e0e4a28592\" returns successfully" Dec 16 13:17:27.418972 kubelet[2710]: E1216 13:17:27.418927 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dfzr8" podUID="5fcd65a8-90ec-479e-a0e4-707e3c32e3f8" Dec 16 13:17:27.517089 kubelet[2710]: E1216 13:17:27.516978 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:27.534261 kubelet[2710]: E1216 13:17:27.534223 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.534261 kubelet[2710]: W1216 13:17:27.534246 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.534261 kubelet[2710]: E1216 13:17:27.534263 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.534508 kubelet[2710]: E1216 13:17:27.534468 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.534508 kubelet[2710]: W1216 13:17:27.534481 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.534508 kubelet[2710]: E1216 13:17:27.534509 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:17:27.534839 kubelet[2710]: E1216 13:17:27.534724 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.534839 kubelet[2710]: W1216 13:17:27.534740 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.534839 kubelet[2710]: E1216 13:17:27.534748 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.535019 kubelet[2710]: E1216 13:17:27.535000 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.535019 kubelet[2710]: W1216 13:17:27.535014 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.535078 kubelet[2710]: E1216 13:17:27.535023 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.535272 kubelet[2710]: E1216 13:17:27.535244 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.535272 kubelet[2710]: W1216 13:17:27.535258 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.535362 kubelet[2710]: E1216 13:17:27.535286 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.535524 kubelet[2710]: E1216 13:17:27.535490 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.535524 kubelet[2710]: W1216 13:17:27.535503 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.536637 kubelet[2710]: E1216 13:17:27.535529 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.536637 kubelet[2710]: E1216 13:17:27.535724 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.536637 kubelet[2710]: W1216 13:17:27.535732 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.536637 kubelet[2710]: E1216 13:17:27.535739 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:17:27.536637 kubelet[2710]: E1216 13:17:27.535927 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.536637 kubelet[2710]: W1216 13:17:27.535934 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.536637 kubelet[2710]: E1216 13:17:27.535942 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.536637 kubelet[2710]: E1216 13:17:27.536194 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.536637 kubelet[2710]: W1216 13:17:27.536202 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.536637 kubelet[2710]: E1216 13:17:27.536210 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.536845 kubelet[2710]: E1216 13:17:27.536732 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.536845 kubelet[2710]: W1216 13:17:27.536740 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.536845 kubelet[2710]: E1216 13:17:27.536747 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.536969 kubelet[2710]: E1216 13:17:27.536952 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.536969 kubelet[2710]: W1216 13:17:27.536966 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.537025 kubelet[2710]: E1216 13:17:27.536974 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.537240 kubelet[2710]: E1216 13:17:27.537221 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.537240 kubelet[2710]: W1216 13:17:27.537235 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.537288 kubelet[2710]: E1216 13:17:27.537243 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:17:27.537493 kubelet[2710]: E1216 13:17:27.537475 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.537493 kubelet[2710]: W1216 13:17:27.537489 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.538631 kubelet[2710]: E1216 13:17:27.538605 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.538897 kubelet[2710]: E1216 13:17:27.538878 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.538897 kubelet[2710]: W1216 13:17:27.538893 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.538965 kubelet[2710]: E1216 13:17:27.538902 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.539104 kubelet[2710]: E1216 13:17:27.539086 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.539104 kubelet[2710]: W1216 13:17:27.539099 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.539179 kubelet[2710]: E1216 13:17:27.539107 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.560314 kubelet[2710]: E1216 13:17:27.560287 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.560388 kubelet[2710]: W1216 13:17:27.560324 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.560388 kubelet[2710]: E1216 13:17:27.560340 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.560661 kubelet[2710]: E1216 13:17:27.560617 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.560661 kubelet[2710]: W1216 13:17:27.560630 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.560746 kubelet[2710]: E1216 13:17:27.560684 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:17:27.560998 kubelet[2710]: E1216 13:17:27.560968 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.560998 kubelet[2710]: W1216 13:17:27.560983 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.561137 kubelet[2710]: E1216 13:17:27.561026 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.561491 kubelet[2710]: E1216 13:17:27.561254 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.561491 kubelet[2710]: W1216 13:17:27.561270 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.561491 kubelet[2710]: E1216 13:17:27.561278 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.561491 kubelet[2710]: E1216 13:17:27.561490 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.561688 kubelet[2710]: W1216 13:17:27.561498 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.561688 kubelet[2710]: E1216 13:17:27.561534 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.562203 kubelet[2710]: E1216 13:17:27.561751 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.562203 kubelet[2710]: W1216 13:17:27.561762 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.562203 kubelet[2710]: E1216 13:17:27.561897 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.562203 kubelet[2710]: E1216 13:17:27.561954 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.562203 kubelet[2710]: W1216 13:17:27.561960 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.562203 kubelet[2710]: E1216 13:17:27.562011 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:17:27.562366 kubelet[2710]: E1216 13:17:27.562240 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.562366 kubelet[2710]: W1216 13:17:27.562248 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.562366 kubelet[2710]: E1216 13:17:27.562356 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.562576 kubelet[2710]: E1216 13:17:27.562534 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.562576 kubelet[2710]: W1216 13:17:27.562548 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.562641 kubelet[2710]: E1216 13:17:27.562578 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.562957 kubelet[2710]: E1216 13:17:27.562931 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.563708 kubelet[2710]: W1216 13:17:27.563663 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.563833 kubelet[2710]: E1216 13:17:27.563789 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.563933 kubelet[2710]: E1216 13:17:27.563914 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.563933 kubelet[2710]: W1216 13:17:27.563927 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.564020 kubelet[2710]: E1216 13:17:27.564008 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.564190 kubelet[2710]: E1216 13:17:27.564150 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.564190 kubelet[2710]: W1216 13:17:27.564176 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.564338 kubelet[2710]: E1216 13:17:27.564198 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:17:27.564425 kubelet[2710]: E1216 13:17:27.564391 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.564425 kubelet[2710]: W1216 13:17:27.564405 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.564425 kubelet[2710]: E1216 13:17:27.564421 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.564730 kubelet[2710]: E1216 13:17:27.564685 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.564730 kubelet[2710]: W1216 13:17:27.564727 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.564798 kubelet[2710]: E1216 13:17:27.564759 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.565656 kubelet[2710]: E1216 13:17:27.565631 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.565656 kubelet[2710]: W1216 13:17:27.565649 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.565749 kubelet[2710]: E1216 13:17:27.565663 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.566600 kubelet[2710]: E1216 13:17:27.565849 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.566600 kubelet[2710]: W1216 13:17:27.565861 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.566600 kubelet[2710]: E1216 13:17:27.565869 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.566600 kubelet[2710]: E1216 13:17:27.566156 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.566600 kubelet[2710]: W1216 13:17:27.566164 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.566600 kubelet[2710]: E1216 13:17:27.566172 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:17:27.566600 kubelet[2710]: E1216 13:17:27.566337 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:17:27.566600 kubelet[2710]: W1216 13:17:27.566344 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:17:27.566600 kubelet[2710]: E1216 13:17:27.566352 2710 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:17:27.579154 kubelet[2710]: I1216 13:17:27.579089 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f56c6b68f-xbqth" podStartSLOduration=1.11578783 podStartE2EDuration="2.578978462s" podCreationTimestamp="2025-12-16 13:17:25 +0000 UTC" firstStartedPulling="2025-12-16 13:17:25.721614831 +0000 UTC m=+20.397902534" lastFinishedPulling="2025-12-16 13:17:27.184805463 +0000 UTC m=+21.861093166" observedRunningTime="2025-12-16 13:17:27.578350097 +0000 UTC m=+22.254637800" watchObservedRunningTime="2025-12-16 13:17:27.578978462 +0000 UTC m=+22.255266165" Dec 16 13:17:27.791780 containerd[1552]: time="2025-12-16T13:17:27.791224630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:17:27.792119 containerd[1552]: time="2025-12-16T13:17:27.792008885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Dec 16 13:17:27.793388 containerd[1552]: time="2025-12-16T13:17:27.793128448Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:17:27.794998 containerd[1552]: time="2025-12-16T13:17:27.794936365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:17:27.795556 containerd[1552]: time="2025-12-16T13:17:27.795521682Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 610.575199ms" Dec 16 13:17:27.795556 containerd[1552]: time="2025-12-16T13:17:27.795576621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 13:17:27.798862 containerd[1552]: time="2025-12-16T13:17:27.798838649Z" level=info msg="CreateContainer within sandbox \"26170ca8fa4bae5c9f211f45626b10d1055af119eb2ba464afe7c720ff8e90c3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 13:17:27.806726 containerd[1552]: time="2025-12-16T13:17:27.806703497Z" level=info msg="Container 94165383a7af8c89448904b03c13c2021d02063dc547676f979333357d9b661e: CDI devices from CRI Config.CDIDevices: []" Dec 16 
Dec 16 13:17:27.810475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2219466370.mount: Deactivated successfully.
Dec 16 13:17:27.824180 containerd[1552]: time="2025-12-16T13:17:27.824136670Z" level=info msg="CreateContainer within sandbox \"26170ca8fa4bae5c9f211f45626b10d1055af119eb2ba464afe7c720ff8e90c3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"94165383a7af8c89448904b03c13c2021d02063dc547676f979333357d9b661e\""
Dec 16 13:17:27.824864 containerd[1552]: time="2025-12-16T13:17:27.824823465Z" level=info msg="StartContainer for \"94165383a7af8c89448904b03c13c2021d02063dc547676f979333357d9b661e\""
Dec 16 13:17:27.826491 containerd[1552]: time="2025-12-16T13:17:27.826455104Z" level=info msg="connecting to shim 94165383a7af8c89448904b03c13c2021d02063dc547676f979333357d9b661e" address="unix:///run/containerd/s/e7f319e095d76e33e78108874f81811204efd05d9f0029e3ac16a9c5c5ba939a" protocol=ttrpc version=3
Dec 16 13:17:27.849747 systemd[1]: Started cri-containerd-94165383a7af8c89448904b03c13c2021d02063dc547676f979333357d9b661e.scope - libcontainer container 94165383a7af8c89448904b03c13c2021d02063dc547676f979333357d9b661e.
Dec 16 13:17:27.913522 containerd[1552]: time="2025-12-16T13:17:27.913103044Z" level=info msg="StartContainer for \"94165383a7af8c89448904b03c13c2021d02063dc547676f979333357d9b661e\" returns successfully"
Dec 16 13:17:27.930720 systemd[1]: cri-containerd-94165383a7af8c89448904b03c13c2021d02063dc547676f979333357d9b661e.scope: Deactivated successfully.
Dec 16 13:17:27.935835 containerd[1552]: time="2025-12-16T13:17:27.935793842Z" level=info msg="received container exit event container_id:\"94165383a7af8c89448904b03c13c2021d02063dc547676f979333357d9b661e\" id:\"94165383a7af8c89448904b03c13c2021d02063dc547676f979333357d9b661e\" pid:3400 exited_at:{seconds:1765891047 nanos:935169546}"
Dec 16 13:17:27.965412 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94165383a7af8c89448904b03c13c2021d02063dc547676f979333357d9b661e-rootfs.mount: Deactivated successfully.
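The escaped mount unit name above (containerd\x2dmount2219466370) is systemd's unit-name encoding at work: "/" maps to "-", and any byte outside [a-zA-Z0-9:_.], including a literal "-", becomes \xXX. A sketch of that transform (simplified; the leading-dot and empty-path edge cases from systemd.unit(5) are not handled):

```go
// Sketch of systemd path escaping, enough to reproduce the mount unit name
// logged above. Not a full reimplementation of systemd-escape.
package main

import (
	"fmt"
	"strings"
)

func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-') // path separators become dashes
		case c == '_' || c == '.' || c == ':' ||
			(c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || (c >= '0' && c <= '9'):
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // everything else, including '-', is hex-escaped
		}
	}
	return b.String()
}

func main() {
	fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount2219466370") + ".mount")
	// -> var-lib-containerd-tmpmounts-containerd\x2dmount2219466370.mount
}
```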
Dec 16 13:17:28.519918 kubelet[2710]: I1216 13:17:28.519864 2710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 16 13:17:28.521230 kubelet[2710]: E1216 13:17:28.521205 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Dec 16 13:17:28.521337 kubelet[2710]: E1216 13:17:28.521321 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Dec 16 13:17:28.522269 containerd[1552]: time="2025-12-16T13:17:28.522034312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Dec 16 13:17:29.422148 kubelet[2710]: E1216 13:17:29.422114 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dfzr8" podUID="5fcd65a8-90ec-479e-a0e4-707e3c32e3f8"
Dec 16 13:17:30.211651 containerd[1552]: time="2025-12-16T13:17:30.211611582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:17:30.212386 containerd[1552]: time="2025-12-16T13:17:30.212359058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Dec 16 13:17:30.212730 containerd[1552]: time="2025-12-16T13:17:30.212706226Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:17:30.214609 containerd[1552]: time="2025-12-16T13:17:30.214584555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:17:30.215157 containerd[1552]: time="2025-12-16T13:17:30.215136422Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 1.69307068s"
Dec 16 13:17:30.215236 containerd[1552]: time="2025-12-16T13:17:30.215215692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Dec 16 13:17:30.218119 containerd[1552]: time="2025-12-16T13:17:30.217813897Z" level=info msg="CreateContainer within sandbox \"26170ca8fa4bae5c9f211f45626b10d1055af119eb2ba464afe7c720ff8e90c3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 16 13:17:30.224780 containerd[1552]: time="2025-12-16T13:17:30.224758879Z" level=info msg="Container c376ec866ce8bb8edcbfccfbcf9d1bf59f6860e1e0623f8ed5aabcfd417b883e: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:17:30.231133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2818095464.mount: Deactivated successfully.
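The "cni plugin not initialized" errors here and the sandbox setup failures at the end of this log share one root cause: /var/lib/calico/nodename does not exist until the calico/node container is running and has mounted /var/lib/calico/. A hypothetical pre-flight check for that condition, with the path and the hint text taken from the error message itself:

```go
// Hypothetical readiness probe for the Calico nodename precondition; the
// path and advisory text come from the error lines later in this log.
package main

import (
	"fmt"
	"os"
	"strings"
)

func calicoNodeReady() error {
	data, err := os.ReadFile("/var/lib/calico/nodename")
	if err != nil {
		return fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	fmt.Printf("calico node name: %s\n", strings.TrimSpace(string(data)))
	return nil
}

func main() {
	if err := calicoNodeReady(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

Once the install-cni container below finishes and calico/node writes that file, sandbox creation stops failing with the "stat /var/lib/calico/nodename" error.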
Dec 16 13:17:30.234334 containerd[1552]: time="2025-12-16T13:17:30.234293596Z" level=info msg="CreateContainer within sandbox \"26170ca8fa4bae5c9f211f45626b10d1055af119eb2ba464afe7c720ff8e90c3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c376ec866ce8bb8edcbfccfbcf9d1bf59f6860e1e0623f8ed5aabcfd417b883e\""
Dec 16 13:17:30.234751 containerd[1552]: time="2025-12-16T13:17:30.234726504Z" level=info msg="StartContainer for \"c376ec866ce8bb8edcbfccfbcf9d1bf59f6860e1e0623f8ed5aabcfd417b883e\""
Dec 16 13:17:30.237098 containerd[1552]: time="2025-12-16T13:17:30.237018461Z" level=info msg="connecting to shim c376ec866ce8bb8edcbfccfbcf9d1bf59f6860e1e0623f8ed5aabcfd417b883e" address="unix:///run/containerd/s/e7f319e095d76e33e78108874f81811204efd05d9f0029e3ac16a9c5c5ba939a" protocol=ttrpc version=3
Dec 16 13:17:30.265688 systemd[1]: Started cri-containerd-c376ec866ce8bb8edcbfccfbcf9d1bf59f6860e1e0623f8ed5aabcfd417b883e.scope - libcontainer container c376ec866ce8bb8edcbfccfbcf9d1bf59f6860e1e0623f8ed5aabcfd417b883e.
Dec 16 13:17:30.349373 containerd[1552]: time="2025-12-16T13:17:30.349344781Z" level=info msg="StartContainer for \"c376ec866ce8bb8edcbfccfbcf9d1bf59f6860e1e0623f8ed5aabcfd417b883e\" returns successfully"
Dec 16 13:17:30.527700 kubelet[2710]: E1216 13:17:30.527258 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Dec 16 13:17:30.808240 systemd[1]: cri-containerd-c376ec866ce8bb8edcbfccfbcf9d1bf59f6860e1e0623f8ed5aabcfd417b883e.scope: Deactivated successfully.
Dec 16 13:17:30.808657 containerd[1552]: time="2025-12-16T13:17:30.808415507Z" level=info msg="received container exit event container_id:\"c376ec866ce8bb8edcbfccfbcf9d1bf59f6860e1e0623f8ed5aabcfd417b883e\" id:\"c376ec866ce8bb8edcbfccfbcf9d1bf59f6860e1e0623f8ed5aabcfd417b883e\" pid:3456 exited_at:{seconds:1765891050 nanos:808036409}"
Dec 16 13:17:30.808539 systemd[1]: cri-containerd-c376ec866ce8bb8edcbfccfbcf9d1bf59f6860e1e0623f8ed5aabcfd417b883e.scope: Consumed 482ms CPU time, 202.9M memory peak, 171.3M written to disk.
Dec 16 13:17:30.828530 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c376ec866ce8bb8edcbfccfbcf9d1bf59f6860e1e0623f8ed5aabcfd417b883e-rootfs.mount: Deactivated successfully.
Dec 16 13:17:30.887306 kubelet[2710]: I1216 13:17:30.884986 2710 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Dec 16 13:17:30.921457 systemd[1]: Created slice kubepods-besteffort-podfdeecd87_a4fb_4f16_a917_506e0f06769a.slice - libcontainer container kubepods-besteffort-podfdeecd87_a4fb_4f16_a917_506e0f06769a.slice.
Dec 16 13:17:30.935599 systemd[1]: Created slice kubepods-besteffort-pod12ede457_05ac_48b3_a0cb_fee957a57d7a.slice - libcontainer container kubepods-besteffort-pod12ede457_05ac_48b3_a0cb_fee957a57d7a.slice.
Dec 16 13:17:30.944316 systemd[1]: Created slice kubepods-besteffort-podec01f64e_62ff_448c_858d_eb1dc0f9f12f.slice - libcontainer container kubepods-besteffort-podec01f64e_62ff_448c_858d_eb1dc0f9f12f.slice.
Dec 16 13:17:30.953611 systemd[1]: Created slice kubepods-burstable-pode3ebb4c2_f22e_4173_95bd_50b79113c15a.slice - libcontainer container kubepods-burstable-pode3ebb4c2_f22e_4173_95bd_50b79113c15a.slice.
Dec 16 13:17:30.962986 systemd[1]: Created slice kubepods-burstable-pod032417a5_379f_4884_98d0_f7fd27faf6c6.slice - libcontainer container kubepods-burstable-pod032417a5_379f_4884_98d0_f7fd27faf6c6.slice.
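Each "Created slice" entry encodes a pod's QoS class and UID into a systemd slice name, with the UID's dashes replaced by underscores so the UID survives unit-name escaping. The mapping as inferred from these entries (treat the exact convention as an assumption):

```go
// Sketch of the pod-UID-to-slice-name mapping visible in the "Created slice"
// lines above; inferred from the log, not taken from kubelet source.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qos, uid string) string {
	// Dashes in the UID would be hex-escaped by systemd, so they are
	// replaced with underscores before embedding the UID in the unit name.
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("besteffort", "fdeecd87-a4fb-4f16-a917-506e0f06769a"))
	// -> kubepods-besteffort-podfdeecd87_a4fb_4f16_a917_506e0f06769a.slice
}
```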
Dec 16 13:17:30.972846 systemd[1]: Created slice kubepods-besteffort-podece17f0c_d3a3_4fcd_aa90_acc8b03f2f59.slice - libcontainer container kubepods-besteffort-podece17f0c_d3a3_4fcd_aa90_acc8b03f2f59.slice.
Dec 16 13:17:30.981173 systemd[1]: Created slice kubepods-besteffort-pod43dfa291_6618_4cc2_b9da_24c903da3b7c.slice - libcontainer container kubepods-besteffort-pod43dfa291_6618_4cc2_b9da_24c903da3b7c.slice.
Dec 16 13:17:30.983606 kubelet[2710]: I1216 13:17:30.983472 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwwn7\" (UniqueName: \"kubernetes.io/projected/e3ebb4c2-f22e-4173-95bd-50b79113c15a-kube-api-access-dwwn7\") pod \"coredns-668d6bf9bc-tq7np\" (UID: \"e3ebb4c2-f22e-4173-95bd-50b79113c15a\") " pod="kube-system/coredns-668d6bf9bc-tq7np"
Dec 16 13:17:30.984606 kubelet[2710]: I1216 13:17:30.983690 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43dfa291-6618-4cc2-b9da-24c903da3b7c-tigera-ca-bundle\") pod \"calico-kube-controllers-5f4767f495-r8f85\" (UID: \"43dfa291-6618-4cc2-b9da-24c903da3b7c\") " pod="calico-system/calico-kube-controllers-5f4767f495-r8f85"
Dec 16 13:17:30.984606 kubelet[2710]: I1216 13:17:30.983742 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lvg6\" (UniqueName: \"kubernetes.io/projected/43dfa291-6618-4cc2-b9da-24c903da3b7c-kube-api-access-7lvg6\") pod \"calico-kube-controllers-5f4767f495-r8f85\" (UID: \"43dfa291-6618-4cc2-b9da-24c903da3b7c\") " pod="calico-system/calico-kube-controllers-5f4767f495-r8f85"
Dec 16 13:17:30.984606 kubelet[2710]: I1216 13:17:30.983762 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/032417a5-379f-4884-98d0-f7fd27faf6c6-config-volume\") pod \"coredns-668d6bf9bc-jmjwt\" (UID: \"032417a5-379f-4884-98d0-f7fd27faf6c6\") " pod="kube-system/coredns-668d6bf9bc-jmjwt"
Dec 16 13:17:30.984606 kubelet[2710]: I1216 13:17:30.983779 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59-calico-apiserver-certs\") pod \"calico-apiserver-6c5f65448b-76gbh\" (UID: \"ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59\") " pod="calico-apiserver/calico-apiserver-6c5f65448b-76gbh"
Dec 16 13:17:30.984606 kubelet[2710]: I1216 13:17:30.983793 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ec01f64e-62ff-448c-858d-eb1dc0f9f12f-goldmane-key-pair\") pod \"goldmane-666569f655-4xlch\" (UID: \"ec01f64e-62ff-448c-858d-eb1dc0f9f12f\") " pod="calico-system/goldmane-666569f655-4xlch"
Dec 16 13:17:30.984763 kubelet[2710]: I1216 13:17:30.983829 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec01f64e-62ff-448c-858d-eb1dc0f9f12f-config\") pod \"goldmane-666569f655-4xlch\" (UID: \"ec01f64e-62ff-448c-858d-eb1dc0f9f12f\") " pod="calico-system/goldmane-666569f655-4xlch"
Dec 16 13:17:30.984763 kubelet[2710]: I1216 13:17:30.983851 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpp8t\" (UniqueName: \"kubernetes.io/projected/ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59-kube-api-access-tpp8t\") pod \"calico-apiserver-6c5f65448b-76gbh\" (UID: \"ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59\") " pod="calico-apiserver/calico-apiserver-6c5f65448b-76gbh"
Dec 16 13:17:30.984763 kubelet[2710]: I1216 13:17:30.983867 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klwlj\" (UniqueName: \"kubernetes.io/projected/ec01f64e-62ff-448c-858d-eb1dc0f9f12f-kube-api-access-klwlj\") pod \"goldmane-666569f655-4xlch\" (UID: \"ec01f64e-62ff-448c-858d-eb1dc0f9f12f\") " pod="calico-system/goldmane-666569f655-4xlch"
Dec 16 13:17:30.984763 kubelet[2710]: I1216 13:17:30.983881 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95xc6\" (UniqueName: \"kubernetes.io/projected/12ede457-05ac-48b3-a0cb-fee957a57d7a-kube-api-access-95xc6\") pod \"calico-apiserver-6c5f65448b-tfn5z\" (UID: \"12ede457-05ac-48b3-a0cb-fee957a57d7a\") " pod="calico-apiserver/calico-apiserver-6c5f65448b-tfn5z"
Dec 16 13:17:30.984763 kubelet[2710]: I1216 13:17:30.983914 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3ebb4c2-f22e-4173-95bd-50b79113c15a-config-volume\") pod \"coredns-668d6bf9bc-tq7np\" (UID: \"e3ebb4c2-f22e-4173-95bd-50b79113c15a\") " pod="kube-system/coredns-668d6bf9bc-tq7np"
Dec 16 13:17:30.984878 kubelet[2710]: I1216 13:17:30.983928 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdeecd87-a4fb-4f16-a917-506e0f06769a-whisker-ca-bundle\") pod \"whisker-66b448c5d6-gv4dp\" (UID: \"fdeecd87-a4fb-4f16-a917-506e0f06769a\") " pod="calico-system/whisker-66b448c5d6-gv4dp"
Dec 16 13:17:30.984878 kubelet[2710]: I1216 13:17:30.983946 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec01f64e-62ff-448c-858d-eb1dc0f9f12f-goldmane-ca-bundle\") pod \"goldmane-666569f655-4xlch\" (UID: \"ec01f64e-62ff-448c-858d-eb1dc0f9f12f\") " pod="calico-system/goldmane-666569f655-4xlch"
Dec 16 13:17:30.984878 kubelet[2710]: I1216 13:17:30.983979 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/12ede457-05ac-48b3-a0cb-fee957a57d7a-calico-apiserver-certs\") pod \"calico-apiserver-6c5f65448b-tfn5z\" (UID: \"12ede457-05ac-48b3-a0cb-fee957a57d7a\") " pod="calico-apiserver/calico-apiserver-6c5f65448b-tfn5z"
Dec 16 13:17:30.984878 kubelet[2710]: I1216 13:17:30.983994 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fdeecd87-a4fb-4f16-a917-506e0f06769a-whisker-backend-key-pair\") pod \"whisker-66b448c5d6-gv4dp\" (UID: \"fdeecd87-a4fb-4f16-a917-506e0f06769a\") " pod="calico-system/whisker-66b448c5d6-gv4dp"
Dec 16 13:17:30.984878 kubelet[2710]: I1216 13:17:30.984010 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l96kr\" (UniqueName: \"kubernetes.io/projected/032417a5-379f-4884-98d0-f7fd27faf6c6-kube-api-access-l96kr\") pod \"coredns-668d6bf9bc-jmjwt\" (UID: \"032417a5-379f-4884-98d0-f7fd27faf6c6\") " pod="kube-system/coredns-668d6bf9bc-jmjwt"
Dec 16 13:17:30.984987 kubelet[2710]: I1216 13:17:30.984029 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv686\" (UniqueName: \"kubernetes.io/projected/fdeecd87-a4fb-4f16-a917-506e0f06769a-kube-api-access-hv686\") pod \"whisker-66b448c5d6-gv4dp\" (UID: \"fdeecd87-a4fb-4f16-a917-506e0f06769a\") " pod="calico-system/whisker-66b448c5d6-gv4dp"
Dec 16 13:17:31.227815 containerd[1552]: time="2025-12-16T13:17:31.227782610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66b448c5d6-gv4dp,Uid:fdeecd87-a4fb-4f16-a917-506e0f06769a,Namespace:calico-system,Attempt:0,}"
Dec 16 13:17:31.245652 containerd[1552]: time="2025-12-16T13:17:31.245614477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c5f65448b-tfn5z,Uid:12ede457-05ac-48b3-a0cb-fee957a57d7a,Namespace:calico-apiserver,Attempt:0,}"
Dec 16 13:17:31.252656 containerd[1552]: time="2025-12-16T13:17:31.250462292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4xlch,Uid:ec01f64e-62ff-448c-858d-eb1dc0f9f12f,Namespace:calico-system,Attempt:0,}"
Dec 16 13:17:31.259745 kubelet[2710]: E1216 13:17:31.259366 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Dec 16 13:17:31.260740 containerd[1552]: time="2025-12-16T13:17:31.260681809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tq7np,Uid:e3ebb4c2-f22e-4173-95bd-50b79113c15a,Namespace:kube-system,Attempt:0,}"
Dec 16 13:17:31.268771 kubelet[2710]: E1216 13:17:31.268736 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Dec 16 13:17:31.277938 containerd[1552]: time="2025-12-16T13:17:31.277910310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jmjwt,Uid:032417a5-379f-4884-98d0-f7fd27faf6c6,Namespace:kube-system,Attempt:0,}"
Dec 16 13:17:31.278467 containerd[1552]: time="2025-12-16T13:17:31.278425497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c5f65448b-76gbh,Uid:ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59,Namespace:calico-apiserver,Attempt:0,}"
Dec 16 13:17:31.287940 containerd[1552]: time="2025-12-16T13:17:31.285939479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f4767f495-r8f85,Uid:43dfa291-6618-4cc2-b9da-24c903da3b7c,Namespace:calico-system,Attempt:0,}"
Dec 16 13:17:31.412108 containerd[1552]: time="2025-12-16T13:17:31.412037266Z" level=error msg="Failed to destroy network for sandbox \"feb3eaa53fca8ac72c9af4f40ec0a2f67b3c4d216b35d880aa65780d4eed6fad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 13:17:31.414709 containerd[1552]: time="2025-12-16T13:17:31.414671162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66b448c5d6-gv4dp,Uid:fdeecd87-a4fb-4f16-a917-506e0f06769a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"feb3eaa53fca8ac72c9af4f40ec0a2f67b3c4d216b35d880aa65780d4eed6fad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:17:31.415214 kubelet[2710]: E1216 13:17:31.415172 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feb3eaa53fca8ac72c9af4f40ec0a2f67b3c4d216b35d880aa65780d4eed6fad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:17:31.415847 kubelet[2710]: E1216 13:17:31.415265 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feb3eaa53fca8ac72c9af4f40ec0a2f67b3c4d216b35d880aa65780d4eed6fad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66b448c5d6-gv4dp" Dec 16 13:17:31.415847 kubelet[2710]: E1216 13:17:31.415315 2710 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feb3eaa53fca8ac72c9af4f40ec0a2f67b3c4d216b35d880aa65780d4eed6fad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66b448c5d6-gv4dp" Dec 16 13:17:31.415847 kubelet[2710]: E1216 13:17:31.415357 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-66b448c5d6-gv4dp_calico-system(fdeecd87-a4fb-4f16-a917-506e0f06769a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-66b448c5d6-gv4dp_calico-system(fdeecd87-a4fb-4f16-a917-506e0f06769a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"feb3eaa53fca8ac72c9af4f40ec0a2f67b3c4d216b35d880aa65780d4eed6fad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66b448c5d6-gv4dp" podUID="fdeecd87-a4fb-4f16-a917-506e0f06769a" Dec 16 13:17:31.429276 containerd[1552]: time="2025-12-16T13:17:31.429224037Z" level=error msg="Failed to destroy network for sandbox \"ff85773704975c356a53132375c9f6a17cb0d7a9347569cfb6df2a3ae87491ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:17:31.433244 systemd[1]: Created slice kubepods-besteffort-pod5fcd65a8_90ec_479e_a0e4_707e3c32e3f8.slice - libcontainer container kubepods-besteffort-pod5fcd65a8_90ec_479e_a0e4_707e3c32e3f8.slice. 
Dec 16 13:17:31.445012 containerd[1552]: time="2025-12-16T13:17:31.444652277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dfzr8,Uid:5fcd65a8-90ec-479e-a0e4-707e3c32e3f8,Namespace:calico-system,Attempt:0,}" Dec 16 13:17:31.445715 containerd[1552]: time="2025-12-16T13:17:31.445676272Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jmjwt,Uid:032417a5-379f-4884-98d0-f7fd27faf6c6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff85773704975c356a53132375c9f6a17cb0d7a9347569cfb6df2a3ae87491ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:17:31.449537 kubelet[2710]: E1216 13:17:31.449367 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff85773704975c356a53132375c9f6a17cb0d7a9347569cfb6df2a3ae87491ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:17:31.449613 kubelet[2710]: E1216 13:17:31.449545 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff85773704975c356a53132375c9f6a17cb0d7a9347569cfb6df2a3ae87491ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jmjwt" Dec 16 13:17:31.449783 kubelet[2710]: E1216 13:17:31.449754 2710 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff85773704975c356a53132375c9f6a17cb0d7a9347569cfb6df2a3ae87491ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jmjwt" Dec 16 13:17:31.450662 kubelet[2710]: E1216 13:17:31.449802 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jmjwt_kube-system(032417a5-379f-4884-98d0-f7fd27faf6c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jmjwt_kube-system(032417a5-379f-4884-98d0-f7fd27faf6c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff85773704975c356a53132375c9f6a17cb0d7a9347569cfb6df2a3ae87491ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jmjwt" podUID="032417a5-379f-4884-98d0-f7fd27faf6c6" Dec 16 13:17:31.510619 containerd[1552]: time="2025-12-16T13:17:31.508760795Z" level=error msg="Failed to destroy network for sandbox \"1baf82126c2933c08f5370c845ecfa254afefa0b3cbc941ce731286c2f9174b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:17:31.510619 containerd[1552]: time="2025-12-16T13:17:31.510364987Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-4xlch,Uid:ec01f64e-62ff-448c-858d-eb1dc0f9f12f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1baf82126c2933c08f5370c845ecfa254afefa0b3cbc941ce731286c2f9174b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:17:31.510768 kubelet[2710]: E1216 13:17:31.510691 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1baf82126c2933c08f5370c845ecfa254afefa0b3cbc941ce731286c2f9174b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:17:31.510816 kubelet[2710]: E1216 13:17:31.510770 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1baf82126c2933c08f5370c845ecfa254afefa0b3cbc941ce731286c2f9174b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-4xlch" Dec 16 13:17:31.510816 kubelet[2710]: E1216 13:17:31.510792 2710 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1baf82126c2933c08f5370c845ecfa254afefa0b3cbc941ce731286c2f9174b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-4xlch" Dec 16 13:17:31.510897 kubelet[2710]: E1216 13:17:31.510855 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-4xlch_calico-system(ec01f64e-62ff-448c-858d-eb1dc0f9f12f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-4xlch_calico-system(ec01f64e-62ff-448c-858d-eb1dc0f9f12f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1baf82126c2933c08f5370c845ecfa254afefa0b3cbc941ce731286c2f9174b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-4xlch" podUID="ec01f64e-62ff-448c-858d-eb1dc0f9f12f" Dec 16 13:17:31.513720 containerd[1552]: time="2025-12-16T13:17:31.513685330Z" level=error msg="Failed to destroy network for sandbox \"549a2d799bb8960b875025e18610c6e595d6e0fa8485328a421fe690f1f78e35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:17:31.516079 containerd[1552]: time="2025-12-16T13:17:31.516040458Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c5f65448b-76gbh,Uid:ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"549a2d799bb8960b875025e18610c6e595d6e0fa8485328a421fe690f1f78e35\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:17:31.516357 kubelet[2710]: E1216 13:17:31.516330 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"549a2d799bb8960b875025e18610c6e595d6e0fa8485328a421fe690f1f78e35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:17:31.516545 kubelet[2710]: E1216 13:17:31.516440 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"549a2d799bb8960b875025e18610c6e595d6e0fa8485328a421fe690f1f78e35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c5f65448b-76gbh" Dec 16 13:17:31.516545 kubelet[2710]: E1216 13:17:31.516464 2710 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"549a2d799bb8960b875025e18610c6e595d6e0fa8485328a421fe690f1f78e35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c5f65448b-76gbh" Dec 16 13:17:31.516545 kubelet[2710]: E1216 13:17:31.516511 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c5f65448b-76gbh_calico-apiserver(ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c5f65448b-76gbh_calico-apiserver(ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"549a2d799bb8960b875025e18610c6e595d6e0fa8485328a421fe690f1f78e35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-76gbh" podUID="ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59" Dec 16 13:17:31.517533 containerd[1552]: time="2025-12-16T13:17:31.517514440Z" level=error msg="Failed to destroy network for sandbox \"63ae76ee2b6c306ab978fb1695d8f8f5548cea4cdf7f465d6909113196da0dd2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:17:31.518345 containerd[1552]: time="2025-12-16T13:17:31.518322856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c5f65448b-tfn5z,Uid:12ede457-05ac-48b3-a0cb-fee957a57d7a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"63ae76ee2b6c306ab978fb1695d8f8f5548cea4cdf7f465d6909113196da0dd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:17:31.518673 kubelet[2710]: E1216 13:17:31.518491 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"63ae76ee2b6c306ab978fb1695d8f8f5548cea4cdf7f465d6909113196da0dd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:17:31.518673 kubelet[2710]: E1216 13:17:31.518527 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63ae76ee2b6c306ab978fb1695d8f8f5548cea4cdf7f465d6909113196da0dd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c5f65448b-tfn5z" Dec 16 13:17:31.518673 kubelet[2710]: E1216 13:17:31.518541 2710 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63ae76ee2b6c306ab978fb1695d8f8f5548cea4cdf7f465d6909113196da0dd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c5f65448b-tfn5z" Dec 16 13:17:31.519920 kubelet[2710]: E1216 13:17:31.519497 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c5f65448b-tfn5z_calico-apiserver(12ede457-05ac-48b3-a0cb-fee957a57d7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c5f65448b-tfn5z_calico-apiserver(12ede457-05ac-48b3-a0cb-fee957a57d7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63ae76ee2b6c306ab978fb1695d8f8f5548cea4cdf7f465d6909113196da0dd2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-tfn5z" podUID="12ede457-05ac-48b3-a0cb-fee957a57d7a" Dec 16 13:17:31.522376 containerd[1552]: time="2025-12-16T13:17:31.522337845Z" level=error msg="Failed to destroy network for sandbox \"f85d2e8477faad1f92104f6ac75ba8e6d8f34e93858848bc0c3932bcedb35091\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:17:31.523949 containerd[1552]: time="2025-12-16T13:17:31.523646348Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f4767f495-r8f85,Uid:43dfa291-6618-4cc2-b9da-24c903da3b7c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f85d2e8477faad1f92104f6ac75ba8e6d8f34e93858848bc0c3932bcedb35091\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:17:31.524111 kubelet[2710]: E1216 13:17:31.524072 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f85d2e8477faad1f92104f6ac75ba8e6d8f34e93858848bc0c3932bcedb35091\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Dec 16 13:17:31.524155 kubelet[2710]: E1216 13:17:31.524116 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f85d2e8477faad1f92104f6ac75ba8e6d8f34e93858848bc0c3932bcedb35091\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f4767f495-r8f85" Dec 16 13:17:31.524155 kubelet[2710]: E1216 13:17:31.524134 2710 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f85d2e8477faad1f92104f6ac75ba8e6d8f34e93858848bc0c3932bcedb35091\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f4767f495-r8f85" Dec 16 13:17:31.524203 kubelet[2710]: E1216 13:17:31.524168 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f4767f495-r8f85_calico-system(43dfa291-6618-4cc2-b9da-24c903da3b7c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f4767f495-r8f85_calico-system(43dfa291-6618-4cc2-b9da-24c903da3b7c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f85d2e8477faad1f92104f6ac75ba8e6d8f34e93858848bc0c3932bcedb35091\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f4767f495-r8f85" podUID="43dfa291-6618-4cc2-b9da-24c903da3b7c" Dec 16 13:17:31.536055 kubelet[2710]: E1216 13:17:31.535918 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:31.541123 containerd[1552]: time="2025-12-16T13:17:31.540211422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 13:17:31.541732 containerd[1552]: time="2025-12-16T13:17:31.541639715Z" level=error msg="Failed to destroy network for sandbox \"8eb150615c9113e951adb123e928e1040b72c439ca034b5b978c27d69b521c50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:17:31.543335 containerd[1552]: time="2025-12-16T13:17:31.543296737Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tq7np,Uid:e3ebb4c2-f22e-4173-95bd-50b79113c15a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eb150615c9113e951adb123e928e1040b72c439ca034b5b978c27d69b521c50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:17:31.543711 kubelet[2710]: E1216 13:17:31.543546 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eb150615c9113e951adb123e928e1040b72c439ca034b5b978c27d69b521c50\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:17:31.543711 kubelet[2710]: E1216 13:17:31.543629 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eb150615c9113e951adb123e928e1040b72c439ca034b5b978c27d69b521c50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tq7np" Dec 16 13:17:31.543711 kubelet[2710]: E1216 13:17:31.543656 2710 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eb150615c9113e951adb123e928e1040b72c439ca034b5b978c27d69b521c50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tq7np" Dec 16 13:17:31.544089 kubelet[2710]: E1216 13:17:31.543925 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tq7np_kube-system(e3ebb4c2-f22e-4173-95bd-50b79113c15a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tq7np_kube-system(e3ebb4c2-f22e-4173-95bd-50b79113c15a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8eb150615c9113e951adb123e928e1040b72c439ca034b5b978c27d69b521c50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tq7np" podUID="e3ebb4c2-f22e-4173-95bd-50b79113c15a" Dec 16 13:17:31.580124 containerd[1552]: time="2025-12-16T13:17:31.579968757Z" level=error msg="Failed to destroy network for sandbox \"30882a040bcfd775f43409d3115d993bc32ce7db3b2f307d9239dfa6337ff88a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:17:31.581084 containerd[1552]: time="2025-12-16T13:17:31.581031591Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dfzr8,Uid:5fcd65a8-90ec-479e-a0e4-707e3c32e3f8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"30882a040bcfd775f43409d3115d993bc32ce7db3b2f307d9239dfa6337ff88a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:17:31.581346 kubelet[2710]: E1216 13:17:31.581214 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30882a040bcfd775f43409d3115d993bc32ce7db3b2f307d9239dfa6337ff88a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:17:31.581346 kubelet[2710]: E1216 13:17:31.581282 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"30882a040bcfd775f43409d3115d993bc32ce7db3b2f307d9239dfa6337ff88a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dfzr8" Dec 16 13:17:31.581346 kubelet[2710]: E1216 13:17:31.581301 2710 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30882a040bcfd775f43409d3115d993bc32ce7db3b2f307d9239dfa6337ff88a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dfzr8" Dec 16 13:17:31.581645 kubelet[2710]: E1216 13:17:31.581337 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dfzr8_calico-system(5fcd65a8-90ec-479e-a0e4-707e3c32e3f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dfzr8_calico-system(5fcd65a8-90ec-479e-a0e4-707e3c32e3f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30882a040bcfd775f43409d3115d993bc32ce7db3b2f307d9239dfa6337ff88a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dfzr8" podUID="5fcd65a8-90ec-479e-a0e4-707e3c32e3f8" Dec 16 13:17:32.228318 systemd[1]: run-netns-cni\x2df1e4e06b\x2da506\x2d9ceb\x2dc844\x2dbb159f100c97.mount: Deactivated successfully. Dec 16 13:17:32.228421 systemd[1]: run-netns-cni\x2ddef919fc\x2d2b62\x2d4d2b\x2d572b\x2dba929a0fadc6.mount: Deactivated successfully. Dec 16 13:17:32.228488 systemd[1]: run-netns-cni\x2db5029002\x2d695c\x2db052\x2d0974\x2dfbc27eb174b3.mount: Deactivated successfully. Dec 16 13:17:32.228715 systemd[1]: run-netns-cni\x2d270dcbf3\x2d0b9b\x2d5def\x2de10e\x2d2d61f2783efb.mount: Deactivated successfully. Dec 16 13:17:34.992366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount699677244.mount: Deactivated successfully. 
Dec 16 13:17:35.020941 containerd[1552]: time="2025-12-16T13:17:35.020871452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:17:35.022310 containerd[1552]: time="2025-12-16T13:17:35.021605439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Dec 16 13:17:35.022751 containerd[1552]: time="2025-12-16T13:17:35.022689985Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:17:35.024032 containerd[1552]: time="2025-12-16T13:17:35.023993380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:17:35.024845 containerd[1552]: time="2025-12-16T13:17:35.024506638Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 3.48344988s" Dec 16 13:17:35.024845 containerd[1552]: time="2025-12-16T13:17:35.024535368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 13:17:35.043522 containerd[1552]: time="2025-12-16T13:17:35.043490352Z" level=info msg="CreateContainer within sandbox \"26170ca8fa4bae5c9f211f45626b10d1055af119eb2ba464afe7c720ff8e90c3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 13:17:35.057790 containerd[1552]: time="2025-12-16T13:17:35.057739575Z" level=info msg="Container d9277bc4a39a8c91a5f7ac3bd7e0afe394fdae8d822a4500b7cfffd5e32c1c61: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:17:35.065305 containerd[1552]: time="2025-12-16T13:17:35.065264265Z" level=info msg="CreateContainer within sandbox \"26170ca8fa4bae5c9f211f45626b10d1055af119eb2ba464afe7c720ff8e90c3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d9277bc4a39a8c91a5f7ac3bd7e0afe394fdae8d822a4500b7cfffd5e32c1c61\"" Dec 16 13:17:35.065752 containerd[1552]: time="2025-12-16T13:17:35.065730103Z" level=info msg="StartContainer for \"d9277bc4a39a8c91a5f7ac3bd7e0afe394fdae8d822a4500b7cfffd5e32c1c61\"" Dec 16 13:17:35.067411 containerd[1552]: time="2025-12-16T13:17:35.067377746Z" level=info msg="connecting to shim d9277bc4a39a8c91a5f7ac3bd7e0afe394fdae8d822a4500b7cfffd5e32c1c61" address="unix:///run/containerd/s/e7f319e095d76e33e78108874f81811204efd05d9f0029e3ac16a9c5c5ba939a" protocol=ttrpc version=3 Dec 16 13:17:35.113693 systemd[1]: Started cri-containerd-d9277bc4a39a8c91a5f7ac3bd7e0afe394fdae8d822a4500b7cfffd5e32c1c61.scope - libcontainer container d9277bc4a39a8c91a5f7ac3bd7e0afe394fdae8d822a4500b7cfffd5e32c1c61. Dec 16 13:17:35.191499 containerd[1552]: time="2025-12-16T13:17:35.191418210Z" level=info msg="StartContainer for \"d9277bc4a39a8c91a5f7ac3bd7e0afe394fdae8d822a4500b7cfffd5e32c1c61\" returns successfully" Dec 16 13:17:35.278859 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 13:17:35.278986 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>.
All Rights Reserved. Dec 16 13:17:35.413591 kubelet[2710]: I1216 13:17:35.413462 2710 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdeecd87-a4fb-4f16-a917-506e0f06769a-whisker-ca-bundle\") pod \"fdeecd87-a4fb-4f16-a917-506e0f06769a\" (UID: \"fdeecd87-a4fb-4f16-a917-506e0f06769a\") " Dec 16 13:17:35.413591 kubelet[2710]: I1216 13:17:35.413497 2710 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fdeecd87-a4fb-4f16-a917-506e0f06769a-whisker-backend-key-pair\") pod \"fdeecd87-a4fb-4f16-a917-506e0f06769a\" (UID: \"fdeecd87-a4fb-4f16-a917-506e0f06769a\") " Dec 16 13:17:35.413591 kubelet[2710]: I1216 13:17:35.413531 2710 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv686\" (UniqueName: \"kubernetes.io/projected/fdeecd87-a4fb-4f16-a917-506e0f06769a-kube-api-access-hv686\") pod \"fdeecd87-a4fb-4f16-a917-506e0f06769a\" (UID: \"fdeecd87-a4fb-4f16-a917-506e0f06769a\") " Dec 16 13:17:35.414632 kubelet[2710]: I1216 13:17:35.414401 2710 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdeecd87-a4fb-4f16-a917-506e0f06769a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "fdeecd87-a4fb-4f16-a917-506e0f06769a" (UID: "fdeecd87-a4fb-4f16-a917-506e0f06769a"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 13:17:35.414826 kubelet[2710]: I1216 13:17:35.414811 2710 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdeecd87-a4fb-4f16-a917-506e0f06769a-whisker-ca-bundle\") on node \"172-232-20-218\" DevicePath \"\"" Dec 16 13:17:35.421903 kubelet[2710]: I1216 13:17:35.421877 2710 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdeecd87-a4fb-4f16-a917-506e0f06769a-kube-api-access-hv686" (OuterVolumeSpecName: "kube-api-access-hv686") pod "fdeecd87-a4fb-4f16-a917-506e0f06769a" (UID: "fdeecd87-a4fb-4f16-a917-506e0f06769a"). InnerVolumeSpecName "kube-api-access-hv686". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:17:35.422125 kubelet[2710]: I1216 13:17:35.422065 2710 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdeecd87-a4fb-4f16-a917-506e0f06769a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fdeecd87-a4fb-4f16-a917-506e0f06769a" (UID: "fdeecd87-a4fb-4f16-a917-506e0f06769a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 13:17:35.431437 systemd[1]: Removed slice kubepods-besteffort-podfdeecd87_a4fb_4f16_a917_506e0f06769a.slice - libcontainer container kubepods-besteffort-podfdeecd87_a4fb_4f16_a917_506e0f06769a.slice. 
Dec 16 13:17:35.515882 kubelet[2710]: I1216 13:17:35.515826 2710 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hv686\" (UniqueName: \"kubernetes.io/projected/fdeecd87-a4fb-4f16-a917-506e0f06769a-kube-api-access-hv686\") on node \"172-232-20-218\" DevicePath \"\"" Dec 16 13:17:35.515882 kubelet[2710]: I1216 13:17:35.515851 2710 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fdeecd87-a4fb-4f16-a917-506e0f06769a-whisker-backend-key-pair\") on node \"172-232-20-218\" DevicePath \"\"" Dec 16 13:17:35.556314 kubelet[2710]: E1216 13:17:35.556193 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:35.586478 kubelet[2710]: I1216 13:17:35.586093 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sfwkq" podStartSLOduration=1.374524598 podStartE2EDuration="10.586077553s" podCreationTimestamp="2025-12-16 13:17:25 +0000 UTC" firstStartedPulling="2025-12-16 13:17:25.81363883 +0000 UTC m=+20.489926543" lastFinishedPulling="2025-12-16 13:17:35.025191795 +0000 UTC m=+29.701479498" observedRunningTime="2025-12-16 13:17:35.585436065 +0000 UTC m=+30.261723768" watchObservedRunningTime="2025-12-16 13:17:35.586077553 +0000 UTC m=+30.262365256" Dec 16 13:17:35.646156 systemd[1]: Created slice kubepods-besteffort-pod1fe93d8b_57a3_4524_abb4_58c7f5835720.slice - libcontainer container kubepods-besteffort-pod1fe93d8b_57a3_4524_abb4_58c7f5835720.slice. Dec 16 13:17:35.718618 kubelet[2710]: I1216 13:17:35.718127 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fe93d8b-57a3-4524-abb4-58c7f5835720-whisker-ca-bundle\") pod \"whisker-76bc8cc6dd-vx8fr\" (UID: \"1fe93d8b-57a3-4524-abb4-58c7f5835720\") " pod="calico-system/whisker-76bc8cc6dd-vx8fr" Dec 16 13:17:35.718618 kubelet[2710]: I1216 13:17:35.718180 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1fe93d8b-57a3-4524-abb4-58c7f5835720-whisker-backend-key-pair\") pod \"whisker-76bc8cc6dd-vx8fr\" (UID: \"1fe93d8b-57a3-4524-abb4-58c7f5835720\") " pod="calico-system/whisker-76bc8cc6dd-vx8fr" Dec 16 13:17:35.718618 kubelet[2710]: I1216 13:17:35.718203 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lblq\" (UniqueName: \"kubernetes.io/projected/1fe93d8b-57a3-4524-abb4-58c7f5835720-kube-api-access-9lblq\") pod \"whisker-76bc8cc6dd-vx8fr\" (UID: \"1fe93d8b-57a3-4524-abb4-58c7f5835720\") " pod="calico-system/whisker-76bc8cc6dd-vx8fr" Dec 16 13:17:35.949518 containerd[1552]: time="2025-12-16T13:17:35.949423790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76bc8cc6dd-vx8fr,Uid:1fe93d8b-57a3-4524-abb4-58c7f5835720,Namespace:calico-system,Attempt:0,}" Dec 16 13:17:35.994650 systemd[1]: var-lib-kubelet-pods-fdeecd87\x2da4fb\x2d4f16\x2da917\x2d506e0f06769a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhv686.mount: Deactivated successfully. Dec 16 13:17:35.995250 systemd[1]: var-lib-kubelet-pods-fdeecd87\x2da4fb\x2d4f16\x2da917\x2d506e0f06769a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Dec 16 13:17:36.089579 systemd-networkd[1439]: cali28d42cdeb6c: Link UP Dec 16 13:17:36.090474 systemd-networkd[1439]: cali28d42cdeb6c: Gained carrier Dec 16 13:17:36.106807 containerd[1552]: 2025-12-16 13:17:35.971 [INFO][3782] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:17:36.106807 containerd[1552]: 2025-12-16 13:17:36.019 [INFO][3782] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--20--218-k8s-whisker--76bc8cc6dd--vx8fr-eth0 whisker-76bc8cc6dd- calico-system 1fe93d8b-57a3-4524-abb4-58c7f5835720 880 0 2025-12-16 13:17:35 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:76bc8cc6dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-232-20-218 whisker-76bc8cc6dd-vx8fr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali28d42cdeb6c [] [] }} ContainerID="7a6a4c1981231a4cda6a5eaae0cbbffaaf1ed2c92c4b85b4f0661f6f0cf8b626" Namespace="calico-system" Pod="whisker-76bc8cc6dd-vx8fr" WorkloadEndpoint="172--232--20--218-k8s-whisker--76bc8cc6dd--vx8fr-" Dec 16 13:17:36.106807 containerd[1552]: 2025-12-16 13:17:36.019 [INFO][3782] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a6a4c1981231a4cda6a5eaae0cbbffaaf1ed2c92c4b85b4f0661f6f0cf8b626" Namespace="calico-system" Pod="whisker-76bc8cc6dd-vx8fr" WorkloadEndpoint="172--232--20--218-k8s-whisker--76bc8cc6dd--vx8fr-eth0" Dec 16 13:17:36.106807 containerd[1552]: 2025-12-16 13:17:36.046 [INFO][3794] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a6a4c1981231a4cda6a5eaae0cbbffaaf1ed2c92c4b85b4f0661f6f0cf8b626" HandleID="k8s-pod-network.7a6a4c1981231a4cda6a5eaae0cbbffaaf1ed2c92c4b85b4f0661f6f0cf8b626" Workload="172--232--20--218-k8s-whisker--76bc8cc6dd--vx8fr-eth0" Dec 16 13:17:36.107926 containerd[1552]: 2025-12-16 13:17:36.046 [INFO][3794] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7a6a4c1981231a4cda6a5eaae0cbbffaaf1ed2c92c4b85b4f0661f6f0cf8b626" HandleID="k8s-pod-network.7a6a4c1981231a4cda6a5eaae0cbbffaaf1ed2c92c4b85b4f0661f6f0cf8b626" Workload="172--232--20--218-k8s-whisker--76bc8cc6dd--vx8fr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032d3b0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-20-218", "pod":"whisker-76bc8cc6dd-vx8fr", "timestamp":"2025-12-16 13:17:36.046677643 +0000 UTC"}, Hostname:"172-232-20-218", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:17:36.107926 containerd[1552]: 2025-12-16 13:17:36.046 [INFO][3794] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:17:36.107926 containerd[1552]: 2025-12-16 13:17:36.046 [INFO][3794] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 16 13:17:36.107926 containerd[1552]: 2025-12-16 13:17:36.046 [INFO][3794] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-20-218' Dec 16 13:17:36.107926 containerd[1552]: 2025-12-16 13:17:36.053 [INFO][3794] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7a6a4c1981231a4cda6a5eaae0cbbffaaf1ed2c92c4b85b4f0661f6f0cf8b626" host="172-232-20-218" Dec 16 13:17:36.107926 containerd[1552]: 2025-12-16 13:17:36.058 [INFO][3794] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-20-218" Dec 16 13:17:36.107926 containerd[1552]: 2025-12-16 13:17:36.061 [INFO][3794] ipam/ipam.go 511: Trying affinity for 192.168.36.128/26 host="172-232-20-218" Dec 16 13:17:36.107926 containerd[1552]: 2025-12-16 13:17:36.063 [INFO][3794] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.128/26 host="172-232-20-218" Dec 16 13:17:36.107926 containerd[1552]: 2025-12-16 13:17:36.065 [INFO][3794] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.128/26 host="172-232-20-218" Dec 16 13:17:36.107926 containerd[1552]: 2025-12-16 13:17:36.065 [INFO][3794] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.36.128/26 handle="k8s-pod-network.7a6a4c1981231a4cda6a5eaae0cbbffaaf1ed2c92c4b85b4f0661f6f0cf8b626" host="172-232-20-218" Dec 16 13:17:36.108136 containerd[1552]: 2025-12-16 13:17:36.066 [INFO][3794] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7a6a4c1981231a4cda6a5eaae0cbbffaaf1ed2c92c4b85b4f0661f6f0cf8b626 Dec 16 13:17:36.108136 containerd[1552]: 2025-12-16 13:17:36.071 [INFO][3794] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.36.128/26 handle="k8s-pod-network.7a6a4c1981231a4cda6a5eaae0cbbffaaf1ed2c92c4b85b4f0661f6f0cf8b626" host="172-232-20-218" Dec 16 13:17:36.108136 containerd[1552]: 2025-12-16 13:17:36.075 [INFO][3794] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.36.129/26] block=192.168.36.128/26 handle="k8s-pod-network.7a6a4c1981231a4cda6a5eaae0cbbffaaf1ed2c92c4b85b4f0661f6f0cf8b626" host="172-232-20-218" Dec 16 13:17:36.108136 containerd[1552]: 2025-12-16 13:17:36.075 [INFO][3794] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.129/26] handle="k8s-pod-network.7a6a4c1981231a4cda6a5eaae0cbbffaaf1ed2c92c4b85b4f0661f6f0cf8b626" host="172-232-20-218" Dec 16 13:17:36.108136 containerd[1552]: 2025-12-16 13:17:36.075 [INFO][3794] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:17:36.108136 containerd[1552]: 2025-12-16 13:17:36.075 [INFO][3794] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.36.129/26] IPv6=[] ContainerID="7a6a4c1981231a4cda6a5eaae0cbbffaaf1ed2c92c4b85b4f0661f6f0cf8b626" HandleID="k8s-pod-network.7a6a4c1981231a4cda6a5eaae0cbbffaaf1ed2c92c4b85b4f0661f6f0cf8b626" Workload="172--232--20--218-k8s-whisker--76bc8cc6dd--vx8fr-eth0" Dec 16 13:17:36.108248 containerd[1552]: 2025-12-16 13:17:36.079 [INFO][3782] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a6a4c1981231a4cda6a5eaae0cbbffaaf1ed2c92c4b85b4f0661f6f0cf8b626" Namespace="calico-system" Pod="whisker-76bc8cc6dd-vx8fr" WorkloadEndpoint="172--232--20--218-k8s-whisker--76bc8cc6dd--vx8fr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--20--218-k8s-whisker--76bc8cc6dd--vx8fr-eth0", GenerateName:"whisker-76bc8cc6dd-", Namespace:"calico-system", SelfLink:"", UID:"1fe93d8b-57a3-4524-abb4-58c7f5835720", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 17, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76bc8cc6dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-20-218", ContainerID:"", Pod:"whisker-76bc8cc6dd-vx8fr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.36.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali28d42cdeb6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:17:36.108248 containerd[1552]: 2025-12-16 13:17:36.079 [INFO][3782] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.129/32] ContainerID="7a6a4c1981231a4cda6a5eaae0cbbffaaf1ed2c92c4b85b4f0661f6f0cf8b626" Namespace="calico-system" Pod="whisker-76bc8cc6dd-vx8fr" WorkloadEndpoint="172--232--20--218-k8s-whisker--76bc8cc6dd--vx8fr-eth0" Dec 16 13:17:36.108316 containerd[1552]: 2025-12-16 13:17:36.079 [INFO][3782] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28d42cdeb6c ContainerID="7a6a4c1981231a4cda6a5eaae0cbbffaaf1ed2c92c4b85b4f0661f6f0cf8b626" Namespace="calico-system" Pod="whisker-76bc8cc6dd-vx8fr" WorkloadEndpoint="172--232--20--218-k8s-whisker--76bc8cc6dd--vx8fr-eth0" Dec 16 13:17:36.108316 containerd[1552]: 2025-12-16 13:17:36.091 [INFO][3782] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a6a4c1981231a4cda6a5eaae0cbbffaaf1ed2c92c4b85b4f0661f6f0cf8b626" Namespace="calico-system" Pod="whisker-76bc8cc6dd-vx8fr" WorkloadEndpoint="172--232--20--218-k8s-whisker--76bc8cc6dd--vx8fr-eth0" Dec 16 13:17:36.108361 containerd[1552]: 2025-12-16 13:17:36.091 [INFO][3782] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a6a4c1981231a4cda6a5eaae0cbbffaaf1ed2c92c4b85b4f0661f6f0cf8b626" Namespace="calico-system" Pod="whisker-76bc8cc6dd-vx8fr"
WorkloadEndpoint="172--232--20--218-k8s-whisker--76bc8cc6dd--vx8fr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--20--218-k8s-whisker--76bc8cc6dd--vx8fr-eth0", GenerateName:"whisker-76bc8cc6dd-", Namespace:"calico-system", SelfLink:"", UID:"1fe93d8b-57a3-4524-abb4-58c7f5835720", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76bc8cc6dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-20-218", ContainerID:"7a6a4c1981231a4cda6a5eaae0cbbffaaf1ed2c92c4b85b4f0661f6f0cf8b626", Pod:"whisker-76bc8cc6dd-vx8fr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.36.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali28d42cdeb6c", MAC:"be:80:cb:3a:90:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:17:36.108414 containerd[1552]: 2025-12-16 13:17:36.102 [INFO][3782] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a6a4c1981231a4cda6a5eaae0cbbffaaf1ed2c92c4b85b4f0661f6f0cf8b626" Namespace="calico-system" Pod="whisker-76bc8cc6dd-vx8fr" WorkloadEndpoint="172--232--20--218-k8s-whisker--76bc8cc6dd--vx8fr-eth0" Dec 16 13:17:36.147124 containerd[1552]: time="2025-12-16T13:17:36.147027817Z" level=info msg="connecting to shim 7a6a4c1981231a4cda6a5eaae0cbbffaaf1ed2c92c4b85b4f0661f6f0cf8b626" address="unix:///run/containerd/s/efb5078d63e6e5ab08f746942dd574d62ca4924d640808fc7af99ee27550b92d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:17:36.179683 systemd[1]: Started cri-containerd-7a6a4c1981231a4cda6a5eaae0cbbffaaf1ed2c92c4b85b4f0661f6f0cf8b626.scope - libcontainer container 7a6a4c1981231a4cda6a5eaae0cbbffaaf1ed2c92c4b85b4f0661f6f0cf8b626. 
Dec 16 13:17:36.230149 containerd[1552]: time="2025-12-16T13:17:36.230122075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76bc8cc6dd-vx8fr,Uid:1fe93d8b-57a3-4524-abb4-58c7f5835720,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a6a4c1981231a4cda6a5eaae0cbbffaaf1ed2c92c4b85b4f0661f6f0cf8b626\"" Dec 16 13:17:36.232028 containerd[1552]: time="2025-12-16T13:17:36.231986858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:17:36.376037 containerd[1552]: time="2025-12-16T13:17:36.375971359Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:17:36.376734 containerd[1552]: time="2025-12-16T13:17:36.376685406Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:17:36.376799 containerd[1552]: time="2025-12-16T13:17:36.376766066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:17:36.376948 kubelet[2710]: E1216 13:17:36.376911 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:17:36.377027 kubelet[2710]: E1216 13:17:36.376963 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:17:36.377124 kubelet[2710]: E1216 13:17:36.377090 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:31c573bb53ca4c21a7bf808bed1d26b9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9lblq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76bc8cc6dd-vx8fr_calico-system(1fe93d8b-57a3-4524-abb4-58c7f5835720): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:17:36.379239 containerd[1552]: time="2025-12-16T13:17:36.379215966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:17:36.502159 containerd[1552]: time="2025-12-16T13:17:36.501933326Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:17:36.503005 containerd[1552]: time="2025-12-16T13:17:36.502937893Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:17:36.503097 containerd[1552]: time="2025-12-16T13:17:36.503028672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:17:36.503281 kubelet[2710]: E1216 13:17:36.503225 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:17:36.503726 kubelet[2710]: E1216 13:17:36.503306 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:17:36.503768 kubelet[2710]: E1216 13:17:36.503625 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lblq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76bc8cc6dd-vx8fr_calico-system(1fe93d8b-57a3-4524-abb4-58c7f5835720): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:17:36.505029 kubelet[2710]: E1216 13:17:36.504973 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76bc8cc6dd-vx8fr" podUID="1fe93d8b-57a3-4524-abb4-58c7f5835720" Dec 16 13:17:36.560250 kubelet[2710]: I1216 13:17:36.560192 2710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:17:36.560787 kubelet[2710]: E1216 13:17:36.560757 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:36.563601 kubelet[2710]: E1216 13:17:36.562984 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76bc8cc6dd-vx8fr" podUID="1fe93d8b-57a3-4524-abb4-58c7f5835720" Dec 16 13:17:36.589989 kubelet[2710]: I1216 13:17:36.589935 2710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:17:36.590880 kubelet[2710]: E1216 13:17:36.590548 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:37.320306 systemd-networkd[1439]: vxlan.calico: Link UP Dec 16 13:17:37.320320 systemd-networkd[1439]: vxlan.calico: Gained carrier Dec 16 13:17:37.422444 kubelet[2710]: I1216 13:17:37.422063 2710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdeecd87-a4fb-4f16-a917-506e0f06769a" path="/var/lib/kubelet/pods/fdeecd87-a4fb-4f16-a917-506e0f06769a/volumes" Dec 16 13:17:37.562519 kubelet[2710]: E1216 13:17:37.561879 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:37.565013 kubelet[2710]: E1216 13:17:37.564976 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76bc8cc6dd-vx8fr" podUID="1fe93d8b-57a3-4524-abb4-58c7f5835720" Dec 16 13:17:37.742173 systemd-networkd[1439]: cali28d42cdeb6c: Gained IPv6LL Dec 16 13:17:38.381789 systemd-networkd[1439]: vxlan.calico: Gained IPv6LL Dec 16 13:17:42.420373 containerd[1552]: time="2025-12-16T13:17:42.419816732Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6c5f65448b-tfn5z,Uid:12ede457-05ac-48b3-a0cb-fee957a57d7a,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:17:42.521901 systemd-networkd[1439]: cali57f340ca888: Link UP Dec 16 13:17:42.522099 systemd-networkd[1439]: cali57f340ca888: Gained carrier Dec 16 13:17:42.541185 containerd[1552]: 2025-12-16 13:17:42.458 [INFO][4052] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--20--218-k8s-calico--apiserver--6c5f65448b--tfn5z-eth0 calico-apiserver-6c5f65448b- calico-apiserver 12ede457-05ac-48b3-a0cb-fee957a57d7a 809 0 2025-12-16 13:17:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c5f65448b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-232-20-218 calico-apiserver-6c5f65448b-tfn5z eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali57f340ca888 [] [] }} ContainerID="03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17" Namespace="calico-apiserver" Pod="calico-apiserver-6c5f65448b-tfn5z" WorkloadEndpoint="172--232--20--218-k8s-calico--apiserver--6c5f65448b--tfn5z-" Dec 16 13:17:42.541185 containerd[1552]: 2025-12-16 13:17:42.458 [INFO][4052] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17" Namespace="calico-apiserver" Pod="calico-apiserver-6c5f65448b-tfn5z" WorkloadEndpoint="172--232--20--218-k8s-calico--apiserver--6c5f65448b--tfn5z-eth0" Dec 16 13:17:42.541185 containerd[1552]: 2025-12-16 13:17:42.483 [INFO][4065] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17" HandleID="k8s-pod-network.03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17" Workload="172--232--20--218-k8s-calico--apiserver--6c5f65448b--tfn5z-eth0" Dec 16 13:17:42.541426 containerd[1552]: 2025-12-16 13:17:42.484 [INFO][4065] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17" HandleID="k8s-pod-network.03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17" Workload="172--232--20--218-k8s-calico--apiserver--6c5f65448b--tfn5z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ef90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-232-20-218", "pod":"calico-apiserver-6c5f65448b-tfn5z", "timestamp":"2025-12-16 13:17:42.483968469 +0000 UTC"}, Hostname:"172-232-20-218", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:17:42.541426 containerd[1552]: 2025-12-16 13:17:42.484 [INFO][4065] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:17:42.541426 containerd[1552]: 2025-12-16 13:17:42.484 [INFO][4065] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
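[Annotation] The ipam_plugin.go records above show the shape of the request Calico's CNI plugin hands to IPAM: exactly one IPv4 address, no IPv6, a handle derived from the sandbox container ID, and attributes naming the namespace, node, and pod. A minimal sketch of that request shape, mirroring the logged ipam.AutoAssignArgs fields (an illustrative reconstruction for reading the dump, not Calico's actual package):

    package main

    import "fmt"

    // AutoAssignArgs mirrors the fields visible in the log dump above; the
    // real type lives in Calico's ipam package, this copy is illustration only.
    type AutoAssignArgs struct {
        Num4, Num6 int               // how many IPv4/IPv6 addresses to assign
        HandleID   string            // "k8s-pod-network." + sandbox container ID
        Attrs      map[string]string // namespace, node, pod, timestamp
        Hostname   string            // node requesting the assignment
    }

    func main() {
        req := AutoAssignArgs{
            Num4:     1,
            HandleID: "k8s-pod-network.03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17",
            Attrs: map[string]string{
                "namespace": "calico-apiserver",
                "node":      "172-232-20-218",
                "pod":       "calico-apiserver-6c5f65448b-tfn5z",
            },
            Hostname: "172-232-20-218",
        }
        fmt.Printf("requesting %d IPv4 address(es) for pod %s\n", req.Num4, req.Attrs["pod"])
    }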
Dec 16 13:17:42.541426 containerd[1552]: 2025-12-16 13:17:42.484 [INFO][4065] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-20-218' Dec 16 13:17:42.541426 containerd[1552]: 2025-12-16 13:17:42.489 [INFO][4065] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17" host="172-232-20-218" Dec 16 13:17:42.541426 containerd[1552]: 2025-12-16 13:17:42.494 [INFO][4065] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-20-218" Dec 16 13:17:42.541426 containerd[1552]: 2025-12-16 13:17:42.498 [INFO][4065] ipam/ipam.go 511: Trying affinity for 192.168.36.128/26 host="172-232-20-218" Dec 16 13:17:42.541426 containerd[1552]: 2025-12-16 13:17:42.502 [INFO][4065] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.128/26 host="172-232-20-218" Dec 16 13:17:42.541426 containerd[1552]: 2025-12-16 13:17:42.504 [INFO][4065] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.128/26 host="172-232-20-218" Dec 16 13:17:42.542489 containerd[1552]: 2025-12-16 13:17:42.504 [INFO][4065] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.36.128/26 handle="k8s-pod-network.03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17" host="172-232-20-218" Dec 16 13:17:42.542489 containerd[1552]: 2025-12-16 13:17:42.506 [INFO][4065] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17 Dec 16 13:17:42.542489 containerd[1552]: 2025-12-16 13:17:42.510 [INFO][4065] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.36.128/26 handle="k8s-pod-network.03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17" host="172-232-20-218" Dec 16 13:17:42.542489 containerd[1552]: 2025-12-16 13:17:42.514 [INFO][4065] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.36.130/26] block=192.168.36.128/26 handle="k8s-pod-network.03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17" host="172-232-20-218" Dec 16 13:17:42.542489 containerd[1552]: 2025-12-16 13:17:42.514 [INFO][4065] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.130/26] handle="k8s-pod-network.03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17" host="172-232-20-218" Dec 16 13:17:42.542489 containerd[1552]: 2025-12-16 13:17:42.514 [INFO][4065] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
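[Annotation] The allocation above lands on 192.168.36.130/26 because the node already holds an affinity for block 192.168.36.128/26, which spans the 64 addresses .128 through .191. A quick containment check with the standard library confirms the arithmetic:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.36.128/26") // node-affine block from the log
        ip := netip.MustParseAddr("192.168.36.130")         // address claimed for the apiserver pod
        fmt.Println(block.Contains(ip))                     // true: a /26 holds 2^(32-26) = 64 addresses
    }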
Dec 16 13:17:42.542489 containerd[1552]: 2025-12-16 13:17:42.514 [INFO][4065] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.36.130/26] IPv6=[] ContainerID="03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17" HandleID="k8s-pod-network.03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17" Workload="172--232--20--218-k8s-calico--apiserver--6c5f65448b--tfn5z-eth0" Dec 16 13:17:42.542738 containerd[1552]: 2025-12-16 13:17:42.518 [INFO][4052] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17" Namespace="calico-apiserver" Pod="calico-apiserver-6c5f65448b-tfn5z" WorkloadEndpoint="172--232--20--218-k8s-calico--apiserver--6c5f65448b--tfn5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--20--218-k8s-calico--apiserver--6c5f65448b--tfn5z-eth0", GenerateName:"calico-apiserver-6c5f65448b-", Namespace:"calico-apiserver", SelfLink:"", UID:"12ede457-05ac-48b3-a0cb-fee957a57d7a", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 17, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c5f65448b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-20-218", ContainerID:"", Pod:"calico-apiserver-6c5f65448b-tfn5z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali57f340ca888", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:17:42.542891 containerd[1552]: 2025-12-16 13:17:42.518 [INFO][4052] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.130/32] ContainerID="03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17" Namespace="calico-apiserver" Pod="calico-apiserver-6c5f65448b-tfn5z" WorkloadEndpoint="172--232--20--218-k8s-calico--apiserver--6c5f65448b--tfn5z-eth0" Dec 16 13:17:42.542891 containerd[1552]: 2025-12-16 13:17:42.518 [INFO][4052] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57f340ca888 ContainerID="03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17" Namespace="calico-apiserver" Pod="calico-apiserver-6c5f65448b-tfn5z" WorkloadEndpoint="172--232--20--218-k8s-calico--apiserver--6c5f65448b--tfn5z-eth0" Dec 16 13:17:42.542891 containerd[1552]: 2025-12-16 13:17:42.521 [INFO][4052] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17" Namespace="calico-apiserver" Pod="calico-apiserver-6c5f65448b-tfn5z" WorkloadEndpoint="172--232--20--218-k8s-calico--apiserver--6c5f65448b--tfn5z-eth0" Dec 16 13:17:42.543625 containerd[1552]: 2025-12-16 13:17:42.522 [INFO][4052] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17" Namespace="calico-apiserver" Pod="calico-apiserver-6c5f65448b-tfn5z" WorkloadEndpoint="172--232--20--218-k8s-calico--apiserver--6c5f65448b--tfn5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--20--218-k8s-calico--apiserver--6c5f65448b--tfn5z-eth0", GenerateName:"calico-apiserver-6c5f65448b-", Namespace:"calico-apiserver", SelfLink:"", UID:"12ede457-05ac-48b3-a0cb-fee957a57d7a", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 17, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c5f65448b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-20-218", ContainerID:"03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17", Pod:"calico-apiserver-6c5f65448b-tfn5z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali57f340ca888", MAC:"ca:38:f8:12:5f:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:17:42.543684 containerd[1552]: 2025-12-16 13:17:42.529 [INFO][4052] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17" Namespace="calico-apiserver" Pod="calico-apiserver-6c5f65448b-tfn5z" WorkloadEndpoint="172--232--20--218-k8s-calico--apiserver--6c5f65448b--tfn5z-eth0" Dec 16 13:17:42.576066 containerd[1552]: time="2025-12-16T13:17:42.576033145Z" level=info msg="connecting to shim 03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17" address="unix:///run/containerd/s/507eeb984ddc1908806d0e51dfb2c03ef0efafbb62b7bb2764207c369cc93250" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:17:42.605688 systemd[1]: Started cri-containerd-03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17.scope - libcontainer container 03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17. 
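[Annotation] The "connecting to shim" record shows containerd reaching the per-pod shim over an AF_UNIX socket under /run/containerd/s/, speaking ttrpc, while systemd tracks the container in a transient cri-containerd-<id>.scope unit. A minimal liveness probe, under the assumption that dialing the socket without the ttrpc handshake is enough to confirm a listener (the path below is copied verbatim from the log):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Shim socket path taken from the "connecting to shim" record above.
        const sock = "/run/containerd/s/507eeb984ddc1908806d0e51dfb2c03ef0efafbb62b7bb2764207c369cc93250"
        conn, err := net.Dial("unix", sock)
        if err != nil {
            fmt.Println("shim not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("shim socket is accepting connections (ttrpc handshake not attempted)")
    }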
Dec 16 13:17:42.671376 containerd[1552]: time="2025-12-16T13:17:42.671155133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c5f65448b-tfn5z,Uid:12ede457-05ac-48b3-a0cb-fee957a57d7a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"03363facc4ed7a49f1e7529612e41ea2e73f1f038ee146c4667cfd5610afdf17\"" Dec 16 13:17:42.673628 containerd[1552]: time="2025-12-16T13:17:42.673601137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:17:42.820147 containerd[1552]: time="2025-12-16T13:17:42.820094724Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:17:42.821089 containerd[1552]: time="2025-12-16T13:17:42.821048101Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:17:42.821157 containerd[1552]: time="2025-12-16T13:17:42.821128201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:17:42.821326 kubelet[2710]: E1216 13:17:42.821279 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:17:42.821326 kubelet[2710]: E1216 13:17:42.821330 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:17:42.821920 kubelet[2710]: E1216 13:17:42.821448 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-95xc6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c5f65448b-tfn5z_calico-apiserver(12ede457-05ac-48b3-a0cb-fee957a57d7a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:17:42.822985 kubelet[2710]: E1216 13:17:42.822946 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-tfn5z" podUID="12ede457-05ac-48b3-a0cb-fee957a57d7a" Dec 16 13:17:43.577436 kubelet[2710]: E1216 13:17:43.577253 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-tfn5z" podUID="12ede457-05ac-48b3-a0cb-fee957a57d7a" Dec 16 13:17:44.270163 systemd-networkd[1439]: cali57f340ca888: Gained IPv6LL Dec 16 13:17:44.419108 kubelet[2710]: E1216 13:17:44.419069 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:44.419826 containerd[1552]: time="2025-12-16T13:17:44.419667451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jmjwt,Uid:032417a5-379f-4884-98d0-f7fd27faf6c6,Namespace:kube-system,Attempt:0,}" Dec 16 13:17:44.419826 containerd[1552]: time="2025-12-16T13:17:44.419673231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f4767f495-r8f85,Uid:43dfa291-6618-4cc2-b9da-24c903da3b7c,Namespace:calico-system,Attempt:0,}" Dec 16 13:17:44.537025 systemd-networkd[1439]: calibfe31ef85dd: Link UP Dec 16 13:17:44.537746 systemd-networkd[1439]: calibfe31ef85dd: Gained carrier Dec 16 13:17:44.561636 containerd[1552]: 2025-12-16 13:17:44.474 [INFO][4139] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {172--232--20--218-k8s-calico--kube--controllers--5f4767f495--r8f85-eth0 calico-kube-controllers-5f4767f495- calico-system 43dfa291-6618-4cc2-b9da-24c903da3b7c 801 0 2025-12-16 13:17:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f4767f495 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-232-20-218 calico-kube-controllers-5f4767f495-r8f85 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibfe31ef85dd [] [] }} ContainerID="c64d3e86324e261eee9f92935c1b2a31b200b120a422ed1031ab26909e435041" Namespace="calico-system" Pod="calico-kube-controllers-5f4767f495-r8f85" WorkloadEndpoint="172--232--20--218-k8s-calico--kube--controllers--5f4767f495--r8f85-" Dec 16 13:17:44.561636 containerd[1552]: 2025-12-16 13:17:44.474 [INFO][4139] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c64d3e86324e261eee9f92935c1b2a31b200b120a422ed1031ab26909e435041" Namespace="calico-system" Pod="calico-kube-controllers-5f4767f495-r8f85" WorkloadEndpoint="172--232--20--218-k8s-calico--kube--controllers--5f4767f495--r8f85-eth0" Dec 16 13:17:44.561636 containerd[1552]: 2025-12-16 13:17:44.502 [INFO][4163] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c64d3e86324e261eee9f92935c1b2a31b200b120a422ed1031ab26909e435041" HandleID="k8s-pod-network.c64d3e86324e261eee9f92935c1b2a31b200b120a422ed1031ab26909e435041" Workload="172--232--20--218-k8s-calico--kube--controllers--5f4767f495--r8f85-eth0" Dec 16 13:17:44.561830 containerd[1552]: 2025-12-16 13:17:44.502 [INFO][4163] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c64d3e86324e261eee9f92935c1b2a31b200b120a422ed1031ab26909e435041" HandleID="k8s-pod-network.c64d3e86324e261eee9f92935c1b2a31b200b120a422ed1031ab26909e435041" Workload="172--232--20--218-k8s-calico--kube--controllers--5f4767f495--r8f85-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00048c090), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-20-218", "pod":"calico-kube-controllers-5f4767f495-r8f85", "timestamp":"2025-12-16 13:17:44.502261887 +0000 UTC"}, Hostname:"172-232-20-218", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:17:44.561830 containerd[1552]: 2025-12-16 13:17:44.502 [INFO][4163] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:17:44.561830 containerd[1552]: 2025-12-16 13:17:44.502 [INFO][4163] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:17:44.561830 containerd[1552]: 2025-12-16 13:17:44.502 [INFO][4163] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-20-218' Dec 16 13:17:44.561830 containerd[1552]: 2025-12-16 13:17:44.508 [INFO][4163] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c64d3e86324e261eee9f92935c1b2a31b200b120a422ed1031ab26909e435041" host="172-232-20-218" Dec 16 13:17:44.561830 containerd[1552]: 2025-12-16 13:17:44.512 [INFO][4163] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-20-218" Dec 16 13:17:44.561830 containerd[1552]: 2025-12-16 13:17:44.516 [INFO][4163] ipam/ipam.go 511: Trying affinity for 192.168.36.128/26 host="172-232-20-218" Dec 16 13:17:44.561830 containerd[1552]: 2025-12-16 13:17:44.518 [INFO][4163] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.128/26 host="172-232-20-218" Dec 16 13:17:44.561830 containerd[1552]: 2025-12-16 13:17:44.520 [INFO][4163] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.128/26 host="172-232-20-218" Dec 16 13:17:44.562064 containerd[1552]: 2025-12-16 13:17:44.520 [INFO][4163] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.36.128/26 handle="k8s-pod-network.c64d3e86324e261eee9f92935c1b2a31b200b120a422ed1031ab26909e435041" host="172-232-20-218" Dec 16 13:17:44.562064 containerd[1552]: 2025-12-16 13:17:44.522 [INFO][4163] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c64d3e86324e261eee9f92935c1b2a31b200b120a422ed1031ab26909e435041 Dec 16 13:17:44.562064 containerd[1552]: 2025-12-16 13:17:44.525 [INFO][4163] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.36.128/26 handle="k8s-pod-network.c64d3e86324e261eee9f92935c1b2a31b200b120a422ed1031ab26909e435041" host="172-232-20-218" Dec 16 13:17:44.562064 containerd[1552]: 2025-12-16 13:17:44.529 [INFO][4163] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.36.131/26] block=192.168.36.128/26 handle="k8s-pod-network.c64d3e86324e261eee9f92935c1b2a31b200b120a422ed1031ab26909e435041" host="172-232-20-218" Dec 16 13:17:44.562064 containerd[1552]: 2025-12-16 13:17:44.529 [INFO][4163] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.131/26] handle="k8s-pod-network.c64d3e86324e261eee9f92935c1b2a31b200b120a422ed1031ab26909e435041" host="172-232-20-218" Dec 16 13:17:44.562064 containerd[1552]: 2025-12-16 13:17:44.529 [INFO][4163] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
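[Annotation] Every assignment in this section logs the same bracket, "About to acquire host-wide IPAM lock" ... "Released host-wide IPAM lock", so per-node allocations are strictly serialized, which is why the claimed addresses step monotonically: .130 for the apiserver pod, .131 here for kube-controllers, and .132 for coredns below. A toy model of that serialization (not Calico's implementation, which persists block state in the datastore rather than a counter; the starting offset assumes .128 and .129 were consumed by allocations before this excerpt):

    package main

    import (
        "fmt"
        "sync"
    )

    // hostAllocator is a deliberately simplified stand-in: one mutex per node,
    // addresses handed out in order from the node's /26 block.
    type hostAllocator struct {
        mu   sync.Mutex
        next int // next free offset within 192.168.36.128/26
    }

    func (a *hostAllocator) assign() string {
        a.mu.Lock()         // "Acquired host-wide IPAM lock."
        defer a.mu.Unlock() // "Released host-wide IPAM lock."
        ip := fmt.Sprintf("192.168.36.%d/26", 128+a.next)
        a.next++
        return ip
    }

    func main() {
        a := &hostAllocator{next: 2} // assumes offsets 0 and 1 were used before this excerpt
        for _, pod := range []string{
            "calico-apiserver-6c5f65448b-tfn5z",
            "calico-kube-controllers-5f4767f495-r8f85",
            "coredns-668d6bf9bc-jmjwt",
        } {
            fmt.Println(pod, "->", a.assign())
        }
    }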
Dec 16 13:17:44.562064 containerd[1552]: 2025-12-16 13:17:44.529 [INFO][4163] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.36.131/26] IPv6=[] ContainerID="c64d3e86324e261eee9f92935c1b2a31b200b120a422ed1031ab26909e435041" HandleID="k8s-pod-network.c64d3e86324e261eee9f92935c1b2a31b200b120a422ed1031ab26909e435041" Workload="172--232--20--218-k8s-calico--kube--controllers--5f4767f495--r8f85-eth0" Dec 16 13:17:44.562315 containerd[1552]: 2025-12-16 13:17:44.532 [INFO][4139] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c64d3e86324e261eee9f92935c1b2a31b200b120a422ed1031ab26909e435041" Namespace="calico-system" Pod="calico-kube-controllers-5f4767f495-r8f85" WorkloadEndpoint="172--232--20--218-k8s-calico--kube--controllers--5f4767f495--r8f85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--20--218-k8s-calico--kube--controllers--5f4767f495--r8f85-eth0", GenerateName:"calico-kube-controllers-5f4767f495-", Namespace:"calico-system", SelfLink:"", UID:"43dfa291-6618-4cc2-b9da-24c903da3b7c", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 17, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f4767f495", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-20-218", ContainerID:"", Pod:"calico-kube-controllers-5f4767f495-r8f85", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibfe31ef85dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:17:44.562375 containerd[1552]: 2025-12-16 13:17:44.532 [INFO][4139] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.131/32] ContainerID="c64d3e86324e261eee9f92935c1b2a31b200b120a422ed1031ab26909e435041" Namespace="calico-system" Pod="calico-kube-controllers-5f4767f495-r8f85" WorkloadEndpoint="172--232--20--218-k8s-calico--kube--controllers--5f4767f495--r8f85-eth0" Dec 16 13:17:44.562375 containerd[1552]: 2025-12-16 13:17:44.532 [INFO][4139] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibfe31ef85dd ContainerID="c64d3e86324e261eee9f92935c1b2a31b200b120a422ed1031ab26909e435041" Namespace="calico-system" Pod="calico-kube-controllers-5f4767f495-r8f85" WorkloadEndpoint="172--232--20--218-k8s-calico--kube--controllers--5f4767f495--r8f85-eth0" Dec 16 13:17:44.562375 containerd[1552]: 2025-12-16 13:17:44.539 [INFO][4139] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c64d3e86324e261eee9f92935c1b2a31b200b120a422ed1031ab26909e435041" Namespace="calico-system" Pod="calico-kube-controllers-5f4767f495-r8f85" WorkloadEndpoint="172--232--20--218-k8s-calico--kube--controllers--5f4767f495--r8f85-eth0" Dec 16 13:17:44.562439 containerd[1552]: 2025-12-16 
13:17:44.539 [INFO][4139] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c64d3e86324e261eee9f92935c1b2a31b200b120a422ed1031ab26909e435041" Namespace="calico-system" Pod="calico-kube-controllers-5f4767f495-r8f85" WorkloadEndpoint="172--232--20--218-k8s-calico--kube--controllers--5f4767f495--r8f85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--20--218-k8s-calico--kube--controllers--5f4767f495--r8f85-eth0", GenerateName:"calico-kube-controllers-5f4767f495-", Namespace:"calico-system", SelfLink:"", UID:"43dfa291-6618-4cc2-b9da-24c903da3b7c", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 17, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f4767f495", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-20-218", ContainerID:"c64d3e86324e261eee9f92935c1b2a31b200b120a422ed1031ab26909e435041", Pod:"calico-kube-controllers-5f4767f495-r8f85", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibfe31ef85dd", MAC:"32:6e:58:ac:c3:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:17:44.562488 containerd[1552]: 2025-12-16 13:17:44.556 [INFO][4139] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c64d3e86324e261eee9f92935c1b2a31b200b120a422ed1031ab26909e435041" Namespace="calico-system" Pod="calico-kube-controllers-5f4767f495-r8f85" WorkloadEndpoint="172--232--20--218-k8s-calico--kube--controllers--5f4767f495--r8f85-eth0" Dec 16 13:17:44.583192 kubelet[2710]: E1216 13:17:44.582781 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-tfn5z" podUID="12ede457-05ac-48b3-a0cb-fee957a57d7a" Dec 16 13:17:44.593887 containerd[1552]: time="2025-12-16T13:17:44.593824462Z" level=info msg="connecting to shim c64d3e86324e261eee9f92935c1b2a31b200b120a422ed1031ab26909e435041" address="unix:///run/containerd/s/76df293fd41e2be07a125fc32cac3cca97384122cc9bc14a4edb67196db5fd64" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:17:44.630830 systemd[1]: Started cri-containerd-c64d3e86324e261eee9f92935c1b2a31b200b120a422ed1031ab26909e435041.scope - libcontainer container c64d3e86324e261eee9f92935c1b2a31b200b120a422ed1031ab26909e435041. 
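[Annotation] Each failed pull in this log follows the same progression: containerd gets 404 Not Found from ghcr.io, kubelet surfaces the first failure as ErrImagePull, and later sync attempts report ImagePullBackOff while kubelet waits an exponentially growing delay before retrying. A sketch of that cadence; the 10-second initial delay and 5-minute cap match stock kubelet defaults in recent releases, but treat them as assumptions rather than a spec:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 10 * time.Second        // assumed initial back-off
        maxDelay := 5 * time.Minute      // assumed cap
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d: wait %v before retrying pull\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

The back-off only governs retry timing; the pulls here can never succeed until the ghcr.io/flatcar/calico/*:v3.30.4 tags actually exist in the registry.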
Dec 16 13:17:44.646486 systemd-networkd[1439]: calib1b379e743e: Link UP Dec 16 13:17:44.646763 systemd-networkd[1439]: calib1b379e743e: Gained carrier Dec 16 13:17:44.664202 containerd[1552]: 2025-12-16 13:17:44.472 [INFO][4142] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--20--218-k8s-coredns--668d6bf9bc--jmjwt-eth0 coredns-668d6bf9bc- kube-system 032417a5-379f-4884-98d0-f7fd27faf6c6 810 0 2025-12-16 13:17:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-232-20-218 coredns-668d6bf9bc-jmjwt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib1b379e743e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2" Namespace="kube-system" Pod="coredns-668d6bf9bc-jmjwt" WorkloadEndpoint="172--232--20--218-k8s-coredns--668d6bf9bc--jmjwt-" Dec 16 13:17:44.664202 containerd[1552]: 2025-12-16 13:17:44.472 [INFO][4142] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2" Namespace="kube-system" Pod="coredns-668d6bf9bc-jmjwt" WorkloadEndpoint="172--232--20--218-k8s-coredns--668d6bf9bc--jmjwt-eth0" Dec 16 13:17:44.664202 containerd[1552]: 2025-12-16 13:17:44.506 [INFO][4161] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2" HandleID="k8s-pod-network.55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2" Workload="172--232--20--218-k8s-coredns--668d6bf9bc--jmjwt-eth0" Dec 16 13:17:44.664345 containerd[1552]: 2025-12-16 13:17:44.506 [INFO][4161] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2" HandleID="k8s-pod-network.55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2" Workload="172--232--20--218-k8s-coredns--668d6bf9bc--jmjwt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5870), Attrs:map[string]string{"namespace":"kube-system", "node":"172-232-20-218", "pod":"coredns-668d6bf9bc-jmjwt", "timestamp":"2025-12-16 13:17:44.506238968 +0000 UTC"}, Hostname:"172-232-20-218", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:17:44.664345 containerd[1552]: 2025-12-16 13:17:44.506 [INFO][4161] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:17:44.664345 containerd[1552]: 2025-12-16 13:17:44.529 [INFO][4161] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:17:44.664345 containerd[1552]: 2025-12-16 13:17:44.529 [INFO][4161] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-20-218' Dec 16 13:17:44.664345 containerd[1552]: 2025-12-16 13:17:44.610 [INFO][4161] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2" host="172-232-20-218" Dec 16 13:17:44.664345 containerd[1552]: 2025-12-16 13:17:44.616 [INFO][4161] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-20-218" Dec 16 13:17:44.664345 containerd[1552]: 2025-12-16 13:17:44.620 [INFO][4161] ipam/ipam.go 511: Trying affinity for 192.168.36.128/26 host="172-232-20-218" Dec 16 13:17:44.664345 containerd[1552]: 2025-12-16 13:17:44.623 [INFO][4161] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.128/26 host="172-232-20-218" Dec 16 13:17:44.664345 containerd[1552]: 2025-12-16 13:17:44.626 [INFO][4161] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.128/26 host="172-232-20-218" Dec 16 13:17:44.664345 containerd[1552]: 2025-12-16 13:17:44.626 [INFO][4161] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.36.128/26 handle="k8s-pod-network.55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2" host="172-232-20-218" Dec 16 13:17:44.664604 containerd[1552]: 2025-12-16 13:17:44.628 [INFO][4161] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2 Dec 16 13:17:44.664604 containerd[1552]: 2025-12-16 13:17:44.632 [INFO][4161] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.36.128/26 handle="k8s-pod-network.55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2" host="172-232-20-218" Dec 16 13:17:44.664604 containerd[1552]: 2025-12-16 13:17:44.637 [INFO][4161] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.36.132/26] block=192.168.36.128/26 handle="k8s-pod-network.55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2" host="172-232-20-218" Dec 16 13:17:44.664604 containerd[1552]: 2025-12-16 13:17:44.637 [INFO][4161] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.132/26] handle="k8s-pod-network.55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2" host="172-232-20-218" Dec 16 13:17:44.664604 containerd[1552]: 2025-12-16 13:17:44.637 [INFO][4161] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:17:44.664604 containerd[1552]: 2025-12-16 13:17:44.637 [INFO][4161] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.36.132/26] IPv6=[] ContainerID="55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2" HandleID="k8s-pod-network.55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2" Workload="172--232--20--218-k8s-coredns--668d6bf9bc--jmjwt-eth0" Dec 16 13:17:44.664731 containerd[1552]: 2025-12-16 13:17:44.642 [INFO][4142] cni-plugin/k8s.go 418: Populated endpoint ContainerID="55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2" Namespace="kube-system" Pod="coredns-668d6bf9bc-jmjwt" WorkloadEndpoint="172--232--20--218-k8s-coredns--668d6bf9bc--jmjwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--20--218-k8s-coredns--668d6bf9bc--jmjwt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"032417a5-379f-4884-98d0-f7fd27faf6c6", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 17, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-20-218", ContainerID:"", Pod:"coredns-668d6bf9bc-jmjwt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1b379e743e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:17:44.664731 containerd[1552]: 2025-12-16 13:17:44.642 [INFO][4142] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.132/32] ContainerID="55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2" Namespace="kube-system" Pod="coredns-668d6bf9bc-jmjwt" WorkloadEndpoint="172--232--20--218-k8s-coredns--668d6bf9bc--jmjwt-eth0" Dec 16 13:17:44.664731 containerd[1552]: 2025-12-16 13:17:44.642 [INFO][4142] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib1b379e743e ContainerID="55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2" Namespace="kube-system" Pod="coredns-668d6bf9bc-jmjwt" WorkloadEndpoint="172--232--20--218-k8s-coredns--668d6bf9bc--jmjwt-eth0" Dec 16 13:17:44.664731 containerd[1552]: 2025-12-16 13:17:44.646 [INFO][4142] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2" Namespace="kube-system" Pod="coredns-668d6bf9bc-jmjwt" 
WorkloadEndpoint="172--232--20--218-k8s-coredns--668d6bf9bc--jmjwt-eth0" Dec 16 13:17:44.664731 containerd[1552]: 2025-12-16 13:17:44.646 [INFO][4142] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2" Namespace="kube-system" Pod="coredns-668d6bf9bc-jmjwt" WorkloadEndpoint="172--232--20--218-k8s-coredns--668d6bf9bc--jmjwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--20--218-k8s-coredns--668d6bf9bc--jmjwt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"032417a5-379f-4884-98d0-f7fd27faf6c6", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 17, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-20-218", ContainerID:"55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2", Pod:"coredns-668d6bf9bc-jmjwt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1b379e743e", MAC:"22:7d:23:d7:fe:aa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:17:44.664731 containerd[1552]: 2025-12-16 13:17:44.657 [INFO][4142] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2" Namespace="kube-system" Pod="coredns-668d6bf9bc-jmjwt" WorkloadEndpoint="172--232--20--218-k8s-coredns--668d6bf9bc--jmjwt-eth0" Dec 16 13:17:44.696303 containerd[1552]: time="2025-12-16T13:17:44.696214343Z" level=info msg="connecting to shim 55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2" address="unix:///run/containerd/s/e8c77ea96cd02d4d185b98d4ca82acc107c8dd94c5b88e4154577ffa0be59cb2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:17:44.720814 systemd[1]: Started cri-containerd-55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2.scope - libcontainer container 55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2. 
Dec 16 13:17:44.758053 containerd[1552]: time="2025-12-16T13:17:44.758017675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f4767f495-r8f85,Uid:43dfa291-6618-4cc2-b9da-24c903da3b7c,Namespace:calico-system,Attempt:0,} returns sandbox id \"c64d3e86324e261eee9f92935c1b2a31b200b120a422ed1031ab26909e435041\"" Dec 16 13:17:44.761590 containerd[1552]: time="2025-12-16T13:17:44.761546207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:17:44.781241 containerd[1552]: time="2025-12-16T13:17:44.781215763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jmjwt,Uid:032417a5-379f-4884-98d0-f7fd27faf6c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2\"" Dec 16 13:17:44.782284 kubelet[2710]: E1216 13:17:44.782262 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:44.786852 containerd[1552]: time="2025-12-16T13:17:44.786790910Z" level=info msg="CreateContainer within sandbox \"55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:17:44.794637 containerd[1552]: time="2025-12-16T13:17:44.794041564Z" level=info msg="Container c98a37cc675340b1c45fbaae54153ccd04f74ddf50f5df017a0b53cf85d784be: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:17:44.797282 containerd[1552]: time="2025-12-16T13:17:44.797250037Z" level=info msg="CreateContainer within sandbox \"55d06f26a89e32aea443c73eb2774b5b77db952651fe0b63fde2fba5c43611a2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c98a37cc675340b1c45fbaae54153ccd04f74ddf50f5df017a0b53cf85d784be\"" Dec 16 13:17:44.797768 containerd[1552]: time="2025-12-16T13:17:44.797738936Z" level=info msg="StartContainer for \"c98a37cc675340b1c45fbaae54153ccd04f74ddf50f5df017a0b53cf85d784be\"" Dec 16 13:17:44.798450 containerd[1552]: time="2025-12-16T13:17:44.798417634Z" level=info msg="connecting to shim c98a37cc675340b1c45fbaae54153ccd04f74ddf50f5df017a0b53cf85d784be" address="unix:///run/containerd/s/e8c77ea96cd02d4d185b98d4ca82acc107c8dd94c5b88e4154577ffa0be59cb2" protocol=ttrpc version=3 Dec 16 13:17:44.819720 systemd[1]: Started cri-containerd-c98a37cc675340b1c45fbaae54153ccd04f74ddf50f5df017a0b53cf85d784be.scope - libcontainer container c98a37cc675340b1c45fbaae54153ccd04f74ddf50f5df017a0b53cf85d784be. 
Dec 16 13:17:44.854407 containerd[1552]: time="2025-12-16T13:17:44.854371409Z" level=info msg="StartContainer for \"c98a37cc675340b1c45fbaae54153ccd04f74ddf50f5df017a0b53cf85d784be\" returns successfully" Dec 16 13:17:45.007786 containerd[1552]: time="2025-12-16T13:17:45.007727627Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:17:45.008690 containerd[1552]: time="2025-12-16T13:17:45.008613215Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:17:45.008690 containerd[1552]: time="2025-12-16T13:17:45.008654755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 13:17:45.008965 kubelet[2710]: E1216 13:17:45.008908 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:17:45.008965 kubelet[2710]: E1216 13:17:45.008963 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:17:45.009186 kubelet[2710]: E1216 13:17:45.009085 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lvg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5f4767f495-r8f85_calico-system(43dfa291-6618-4cc2-b9da-24c903da3b7c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:17:45.010333 kubelet[2710]: E1216 13:17:45.010296 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f4767f495-r8f85" podUID="43dfa291-6618-4cc2-b9da-24c903da3b7c" Dec 16 13:17:45.426594 containerd[1552]: time="2025-12-16T13:17:45.425756450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4xlch,Uid:ec01f64e-62ff-448c-858d-eb1dc0f9f12f,Namespace:calico-system,Attempt:0,}" Dec 16 13:17:45.426594 containerd[1552]: time="2025-12-16T13:17:45.426115230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dfzr8,Uid:5fcd65a8-90ec-479e-a0e4-707e3c32e3f8,Namespace:calico-system,Attempt:0,}" Dec 16 13:17:45.426594 containerd[1552]: time="2025-12-16T13:17:45.426224359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c5f65448b-76gbh,Uid:ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:17:45.550900 systemd-networkd[1439]: calibfe31ef85dd: Gained IPv6LL Dec 16 13:17:45.592528 kubelet[2710]: E1216 13:17:45.592128 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:45.600968 kubelet[2710]: E1216 13:17:45.600930 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f4767f495-r8f85" podUID="43dfa291-6618-4cc2-b9da-24c903da3b7c" Dec 16 13:17:45.604898 systemd-networkd[1439]: cali8e063a46a5b: Link UP Dec 16 13:17:45.607661 systemd-networkd[1439]: cali8e063a46a5b: Gained carrier Dec 16 13:17:45.637180 kubelet[2710]: I1216 13:17:45.636337 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jmjwt" podStartSLOduration=33.636319459 podStartE2EDuration="33.636319459s" podCreationTimestamp="2025-12-16 13:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:17:45.610740152 +0000 UTC m=+40.287027875" watchObservedRunningTime="2025-12-16 13:17:45.636319459 +0000 UTC m=+40.312607162" Dec 16 13:17:45.644200 containerd[1552]: 2025-12-16 13:17:45.483 [INFO][4320] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--20--218-k8s-csi--node--driver--dfzr8-eth0 csi-node-driver- calico-system 5fcd65a8-90ec-479e-a0e4-707e3c32e3f8 704 0 2025-12-16 13:17:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-232-20-218 csi-node-driver-dfzr8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8e063a46a5b [] [] }} ContainerID="58f67b50ffb78be7f80a5cc92bd21f0b8e80796a1a0b5719bdd01d875966d389" Namespace="calico-system" Pod="csi-node-driver-dfzr8" WorkloadEndpoint="172--232--20--218-k8s-csi--node--driver--dfzr8-" Dec 16 13:17:45.644200 containerd[1552]: 2025-12-16 13:17:45.483 [INFO][4320] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="58f67b50ffb78be7f80a5cc92bd21f0b8e80796a1a0b5719bdd01d875966d389" Namespace="calico-system" Pod="csi-node-driver-dfzr8" WorkloadEndpoint="172--232--20--218-k8s-csi--node--driver--dfzr8-eth0" Dec 16 13:17:45.644200 containerd[1552]: 2025-12-16 13:17:45.536 [INFO][4358] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58f67b50ffb78be7f80a5cc92bd21f0b8e80796a1a0b5719bdd01d875966d389" HandleID="k8s-pod-network.58f67b50ffb78be7f80a5cc92bd21f0b8e80796a1a0b5719bdd01d875966d389" Workload="172--232--20--218-k8s-csi--node--driver--dfzr8-eth0" Dec 16 13:17:45.644200 containerd[1552]: 2025-12-16 13:17:45.536 [INFO][4358] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="58f67b50ffb78be7f80a5cc92bd21f0b8e80796a1a0b5719bdd01d875966d389" HandleID="k8s-pod-network.58f67b50ffb78be7f80a5cc92bd21f0b8e80796a1a0b5719bdd01d875966d389" Workload="172--232--20--218-k8s-csi--node--driver--dfzr8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5000), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-20-218", "pod":"csi-node-driver-dfzr8", "timestamp":"2025-12-16 13:17:45.536695648 +0000 UTC"}, Hostname:"172-232-20-218", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} 
Dec 16 13:17:45.644200 containerd[1552]: 2025-12-16 13:17:45.536 [INFO][4358] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:17:45.644200 containerd[1552]: 2025-12-16 13:17:45.536 [INFO][4358] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:17:45.644200 containerd[1552]: 2025-12-16 13:17:45.536 [INFO][4358] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-20-218' Dec 16 13:17:45.644200 containerd[1552]: 2025-12-16 13:17:45.548 [INFO][4358] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.58f67b50ffb78be7f80a5cc92bd21f0b8e80796a1a0b5719bdd01d875966d389" host="172-232-20-218" Dec 16 13:17:45.644200 containerd[1552]: 2025-12-16 13:17:45.556 [INFO][4358] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-20-218" Dec 16 13:17:45.644200 containerd[1552]: 2025-12-16 13:17:45.562 [INFO][4358] ipam/ipam.go 511: Trying affinity for 192.168.36.128/26 host="172-232-20-218" Dec 16 13:17:45.644200 containerd[1552]: 2025-12-16 13:17:45.568 [INFO][4358] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.128/26 host="172-232-20-218" Dec 16 13:17:45.644200 containerd[1552]: 2025-12-16 13:17:45.570 [INFO][4358] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.128/26 host="172-232-20-218" Dec 16 13:17:45.644200 containerd[1552]: 2025-12-16 13:17:45.570 [INFO][4358] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.36.128/26 handle="k8s-pod-network.58f67b50ffb78be7f80a5cc92bd21f0b8e80796a1a0b5719bdd01d875966d389" host="172-232-20-218" Dec 16 13:17:45.644200 containerd[1552]: 2025-12-16 13:17:45.572 [INFO][4358] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.58f67b50ffb78be7f80a5cc92bd21f0b8e80796a1a0b5719bdd01d875966d389 Dec 16 13:17:45.644200 containerd[1552]: 2025-12-16 13:17:45.575 [INFO][4358] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.36.128/26 handle="k8s-pod-network.58f67b50ffb78be7f80a5cc92bd21f0b8e80796a1a0b5719bdd01d875966d389" host="172-232-20-218" Dec 16 13:17:45.644200 containerd[1552]: 2025-12-16 13:17:45.579 [INFO][4358] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.36.133/26] block=192.168.36.128/26 handle="k8s-pod-network.58f67b50ffb78be7f80a5cc92bd21f0b8e80796a1a0b5719bdd01d875966d389" host="172-232-20-218" Dec 16 13:17:45.644200 containerd[1552]: 2025-12-16 13:17:45.579 [INFO][4358] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.133/26] handle="k8s-pod-network.58f67b50ffb78be7f80a5cc92bd21f0b8e80796a1a0b5719bdd01d875966d389" host="172-232-20-218" Dec 16 13:17:45.644200 containerd[1552]: 2025-12-16 13:17:45.579 [INFO][4358] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
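[Annotation] The IPAM walk above confirms the host's affinity for block 192.168.36.128/26 (64 addresses) and claims the next free one, 192.168.36.133, for csi-node-driver-dfzr8. The sketch below reproduces only the block arithmetic with the standard library; Calico's real allocator additionally tracks handles and does compare-and-swap writes to the datastore, none of which is modeled here.

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first address in the block not yet marked used.
func nextFree(prefix netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := prefix.Addr(); prefix.Contains(a); a = a.Next() {
		if !used[a] {
			used[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.36.128/26")
	used := map[netip.Addr]bool{}
	// Per the log, .128 through .132 were already claimed earlier in boot.
	for a, n := block.Addr(), 0; n < 5; a, n = a.Next(), n+1 {
		used[a] = true
	}
	ip, _ := nextFree(block, used)
	fmt.Println(ip) // 192.168.36.133, matching csi-node-driver-dfzr8
}
```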
Dec 16 13:17:45.644200 containerd[1552]: 2025-12-16 13:17:45.579 [INFO][4358] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.36.133/26] IPv6=[] ContainerID="58f67b50ffb78be7f80a5cc92bd21f0b8e80796a1a0b5719bdd01d875966d389" HandleID="k8s-pod-network.58f67b50ffb78be7f80a5cc92bd21f0b8e80796a1a0b5719bdd01d875966d389" Workload="172--232--20--218-k8s-csi--node--driver--dfzr8-eth0" Dec 16 13:17:45.645438 containerd[1552]: 2025-12-16 13:17:45.585 [INFO][4320] cni-plugin/k8s.go 418: Populated endpoint ContainerID="58f67b50ffb78be7f80a5cc92bd21f0b8e80796a1a0b5719bdd01d875966d389" Namespace="calico-system" Pod="csi-node-driver-dfzr8" WorkloadEndpoint="172--232--20--218-k8s-csi--node--driver--dfzr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--20--218-k8s-csi--node--driver--dfzr8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5fcd65a8-90ec-479e-a0e4-707e3c32e3f8", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 17, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-20-218", ContainerID:"", Pod:"csi-node-driver-dfzr8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8e063a46a5b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:17:45.645438 containerd[1552]: 2025-12-16 13:17:45.586 [INFO][4320] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.133/32] ContainerID="58f67b50ffb78be7f80a5cc92bd21f0b8e80796a1a0b5719bdd01d875966d389" Namespace="calico-system" Pod="csi-node-driver-dfzr8" WorkloadEndpoint="172--232--20--218-k8s-csi--node--driver--dfzr8-eth0" Dec 16 13:17:45.645438 containerd[1552]: 2025-12-16 13:17:45.586 [INFO][4320] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e063a46a5b ContainerID="58f67b50ffb78be7f80a5cc92bd21f0b8e80796a1a0b5719bdd01d875966d389" Namespace="calico-system" Pod="csi-node-driver-dfzr8" WorkloadEndpoint="172--232--20--218-k8s-csi--node--driver--dfzr8-eth0" Dec 16 13:17:45.645438 containerd[1552]: 2025-12-16 13:17:45.612 [INFO][4320] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58f67b50ffb78be7f80a5cc92bd21f0b8e80796a1a0b5719bdd01d875966d389" Namespace="calico-system" Pod="csi-node-driver-dfzr8" WorkloadEndpoint="172--232--20--218-k8s-csi--node--driver--dfzr8-eth0" Dec 16 13:17:45.645438 containerd[1552]: 2025-12-16 13:17:45.620 [INFO][4320] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="58f67b50ffb78be7f80a5cc92bd21f0b8e80796a1a0b5719bdd01d875966d389" Namespace="calico-system" 
Pod="csi-node-driver-dfzr8" WorkloadEndpoint="172--232--20--218-k8s-csi--node--driver--dfzr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--20--218-k8s-csi--node--driver--dfzr8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5fcd65a8-90ec-479e-a0e4-707e3c32e3f8", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 17, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-20-218", ContainerID:"58f67b50ffb78be7f80a5cc92bd21f0b8e80796a1a0b5719bdd01d875966d389", Pod:"csi-node-driver-dfzr8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8e063a46a5b", MAC:"2e:ad:ce:0c:a3:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:17:45.645438 containerd[1552]: 2025-12-16 13:17:45.640 [INFO][4320] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="58f67b50ffb78be7f80a5cc92bd21f0b8e80796a1a0b5719bdd01d875966d389" Namespace="calico-system" Pod="csi-node-driver-dfzr8" WorkloadEndpoint="172--232--20--218-k8s-csi--node--driver--dfzr8-eth0" Dec 16 13:17:45.696873 containerd[1552]: time="2025-12-16T13:17:45.695469305Z" level=info msg="connecting to shim 58f67b50ffb78be7f80a5cc92bd21f0b8e80796a1a0b5719bdd01d875966d389" address="unix:///run/containerd/s/097f5e28b8411ad71abbd56dadeb2687ee8302876579ffa365ed6abb67670283" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:17:45.734459 systemd-networkd[1439]: calib52b0c98908: Link UP Dec 16 13:17:45.735532 systemd-networkd[1439]: calib52b0c98908: Gained carrier Dec 16 13:17:45.757958 systemd[1]: Started cri-containerd-58f67b50ffb78be7f80a5cc92bd21f0b8e80796a1a0b5719bdd01d875966d389.scope - libcontainer container 58f67b50ffb78be7f80a5cc92bd21f0b8e80796a1a0b5719bdd01d875966d389. 
Dec 16 13:17:45.763864 containerd[1552]: 2025-12-16 13:17:45.499 [INFO][4319] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--20--218-k8s-calico--apiserver--6c5f65448b--76gbh-eth0 calico-apiserver-6c5f65448b- calico-apiserver ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59 807 0 2025-12-16 13:17:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c5f65448b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-232-20-218 calico-apiserver-6c5f65448b-76gbh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib52b0c98908 [] [] }} ContainerID="76fc4aa3d3bdcaa7e6c22ced96671c8a2703a0ab098587567839e3ff8eddf5e5" Namespace="calico-apiserver" Pod="calico-apiserver-6c5f65448b-76gbh" WorkloadEndpoint="172--232--20--218-k8s-calico--apiserver--6c5f65448b--76gbh-" Dec 16 13:17:45.763864 containerd[1552]: 2025-12-16 13:17:45.499 [INFO][4319] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="76fc4aa3d3bdcaa7e6c22ced96671c8a2703a0ab098587567839e3ff8eddf5e5" Namespace="calico-apiserver" Pod="calico-apiserver-6c5f65448b-76gbh" WorkloadEndpoint="172--232--20--218-k8s-calico--apiserver--6c5f65448b--76gbh-eth0" Dec 16 13:17:45.763864 containerd[1552]: 2025-12-16 13:17:45.544 [INFO][4366] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="76fc4aa3d3bdcaa7e6c22ced96671c8a2703a0ab098587567839e3ff8eddf5e5" HandleID="k8s-pod-network.76fc4aa3d3bdcaa7e6c22ced96671c8a2703a0ab098587567839e3ff8eddf5e5" Workload="172--232--20--218-k8s-calico--apiserver--6c5f65448b--76gbh-eth0" Dec 16 13:17:45.763864 containerd[1552]: 2025-12-16 13:17:45.545 [INFO][4366] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="76fc4aa3d3bdcaa7e6c22ced96671c8a2703a0ab098587567839e3ff8eddf5e5" HandleID="k8s-pod-network.76fc4aa3d3bdcaa7e6c22ced96671c8a2703a0ab098587567839e3ff8eddf5e5" Workload="172--232--20--218-k8s-calico--apiserver--6c5f65448b--76gbh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5940), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-232-20-218", "pod":"calico-apiserver-6c5f65448b-76gbh", "timestamp":"2025-12-16 13:17:45.544217272 +0000 UTC"}, Hostname:"172-232-20-218", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:17:45.763864 containerd[1552]: 2025-12-16 13:17:45.545 [INFO][4366] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:17:45.763864 containerd[1552]: 2025-12-16 13:17:45.579 [INFO][4366] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:17:45.763864 containerd[1552]: 2025-12-16 13:17:45.579 [INFO][4366] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-20-218' Dec 16 13:17:45.763864 containerd[1552]: 2025-12-16 13:17:45.648 [INFO][4366] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.76fc4aa3d3bdcaa7e6c22ced96671c8a2703a0ab098587567839e3ff8eddf5e5" host="172-232-20-218" Dec 16 13:17:45.763864 containerd[1552]: 2025-12-16 13:17:45.663 [INFO][4366] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-20-218" Dec 16 13:17:45.763864 containerd[1552]: 2025-12-16 13:17:45.671 [INFO][4366] ipam/ipam.go 511: Trying affinity for 192.168.36.128/26 host="172-232-20-218" Dec 16 13:17:45.763864 containerd[1552]: 2025-12-16 13:17:45.681 [INFO][4366] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.128/26 host="172-232-20-218" Dec 16 13:17:45.763864 containerd[1552]: 2025-12-16 13:17:45.685 [INFO][4366] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.128/26 host="172-232-20-218" Dec 16 13:17:45.763864 containerd[1552]: 2025-12-16 13:17:45.685 [INFO][4366] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.36.128/26 handle="k8s-pod-network.76fc4aa3d3bdcaa7e6c22ced96671c8a2703a0ab098587567839e3ff8eddf5e5" host="172-232-20-218" Dec 16 13:17:45.763864 containerd[1552]: 2025-12-16 13:17:45.687 [INFO][4366] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.76fc4aa3d3bdcaa7e6c22ced96671c8a2703a0ab098587567839e3ff8eddf5e5 Dec 16 13:17:45.763864 containerd[1552]: 2025-12-16 13:17:45.693 [INFO][4366] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.36.128/26 handle="k8s-pod-network.76fc4aa3d3bdcaa7e6c22ced96671c8a2703a0ab098587567839e3ff8eddf5e5" host="172-232-20-218" Dec 16 13:17:45.763864 containerd[1552]: 2025-12-16 13:17:45.699 [INFO][4366] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.36.134/26] block=192.168.36.128/26 handle="k8s-pod-network.76fc4aa3d3bdcaa7e6c22ced96671c8a2703a0ab098587567839e3ff8eddf5e5" host="172-232-20-218" Dec 16 13:17:45.763864 containerd[1552]: 2025-12-16 13:17:45.700 [INFO][4366] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.134/26] handle="k8s-pod-network.76fc4aa3d3bdcaa7e6c22ced96671c8a2703a0ab098587567839e3ff8eddf5e5" host="172-232-20-218" Dec 16 13:17:45.763864 containerd[1552]: 2025-12-16 13:17:45.700 [INFO][4366] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
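[Annotation] Note the timing in the two IPAM walks: request [4366] logs "About to acquire host-wide IPAM lock" at 13:17:45.545 but only acquires it at .579, the instant [4358] releases it, and then claims the next consecutive address (.134 after .133). Concurrent CNI ADDs serialize on that lock. The sketch below shows the pattern with a mutex and a plain counter, an assumption standing in for Calico's datastore-backed allocator; which goroutine wins each round is scheduling-dependent, just as pod-to-IP ordering is here.

```go
package main

import (
	"fmt"
	"sync"
)

type allocator struct {
	mu   sync.Mutex // plays the role of the host-wide IPAM lock
	next int
}

func (a *allocator) claim() string {
	a.mu.Lock() // "Acquired host-wide IPAM lock."
	defer a.mu.Unlock()
	ip := fmt.Sprintf("192.168.36.%d", a.next)
	a.next++
	return ip
}

func main() {
	a := &allocator{next: 133}
	pods := []string{"csi-node-driver-dfzr8", "calico-apiserver-6c5f65448b-76gbh", "goldmane-666569f655-4xlch"}
	var wg sync.WaitGroup
	for _, pod := range pods {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			fmt.Println(p, "->", a.claim()) // .133, .134, .135 handed out in turn
		}(pod)
	}
	wg.Wait()
}
```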
Dec 16 13:17:45.763864 containerd[1552]: 2025-12-16 13:17:45.700 [INFO][4366] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.36.134/26] IPv6=[] ContainerID="76fc4aa3d3bdcaa7e6c22ced96671c8a2703a0ab098587567839e3ff8eddf5e5" HandleID="k8s-pod-network.76fc4aa3d3bdcaa7e6c22ced96671c8a2703a0ab098587567839e3ff8eddf5e5" Workload="172--232--20--218-k8s-calico--apiserver--6c5f65448b--76gbh-eth0" Dec 16 13:17:45.764431 containerd[1552]: 2025-12-16 13:17:45.711 [INFO][4319] cni-plugin/k8s.go 418: Populated endpoint ContainerID="76fc4aa3d3bdcaa7e6c22ced96671c8a2703a0ab098587567839e3ff8eddf5e5" Namespace="calico-apiserver" Pod="calico-apiserver-6c5f65448b-76gbh" WorkloadEndpoint="172--232--20--218-k8s-calico--apiserver--6c5f65448b--76gbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--20--218-k8s-calico--apiserver--6c5f65448b--76gbh-eth0", GenerateName:"calico-apiserver-6c5f65448b-", Namespace:"calico-apiserver", SelfLink:"", UID:"ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 17, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c5f65448b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-20-218", ContainerID:"", Pod:"calico-apiserver-6c5f65448b-76gbh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib52b0c98908", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:17:45.764431 containerd[1552]: 2025-12-16 13:17:45.716 [INFO][4319] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.134/32] ContainerID="76fc4aa3d3bdcaa7e6c22ced96671c8a2703a0ab098587567839e3ff8eddf5e5" Namespace="calico-apiserver" Pod="calico-apiserver-6c5f65448b-76gbh" WorkloadEndpoint="172--232--20--218-k8s-calico--apiserver--6c5f65448b--76gbh-eth0" Dec 16 13:17:45.764431 containerd[1552]: 2025-12-16 13:17:45.716 [INFO][4319] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib52b0c98908 ContainerID="76fc4aa3d3bdcaa7e6c22ced96671c8a2703a0ab098587567839e3ff8eddf5e5" Namespace="calico-apiserver" Pod="calico-apiserver-6c5f65448b-76gbh" WorkloadEndpoint="172--232--20--218-k8s-calico--apiserver--6c5f65448b--76gbh-eth0" Dec 16 13:17:45.764431 containerd[1552]: 2025-12-16 13:17:45.735 [INFO][4319] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="76fc4aa3d3bdcaa7e6c22ced96671c8a2703a0ab098587567839e3ff8eddf5e5" Namespace="calico-apiserver" Pod="calico-apiserver-6c5f65448b-76gbh" WorkloadEndpoint="172--232--20--218-k8s-calico--apiserver--6c5f65448b--76gbh-eth0" Dec 16 13:17:45.764431 containerd[1552]: 2025-12-16 13:17:45.736 [INFO][4319] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="76fc4aa3d3bdcaa7e6c22ced96671c8a2703a0ab098587567839e3ff8eddf5e5" Namespace="calico-apiserver" Pod="calico-apiserver-6c5f65448b-76gbh" WorkloadEndpoint="172--232--20--218-k8s-calico--apiserver--6c5f65448b--76gbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--20--218-k8s-calico--apiserver--6c5f65448b--76gbh-eth0", GenerateName:"calico-apiserver-6c5f65448b-", Namespace:"calico-apiserver", SelfLink:"", UID:"ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 17, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c5f65448b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-20-218", ContainerID:"76fc4aa3d3bdcaa7e6c22ced96671c8a2703a0ab098587567839e3ff8eddf5e5", Pod:"calico-apiserver-6c5f65448b-76gbh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib52b0c98908", MAC:"a2:83:3b:30:8d:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:17:45.764431 containerd[1552]: 2025-12-16 13:17:45.759 [INFO][4319] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="76fc4aa3d3bdcaa7e6c22ced96671c8a2703a0ab098587567839e3ff8eddf5e5" Namespace="calico-apiserver" Pod="calico-apiserver-6c5f65448b-76gbh" WorkloadEndpoint="172--232--20--218-k8s-calico--apiserver--6c5f65448b--76gbh-eth0" Dec 16 13:17:45.784701 containerd[1552]: time="2025-12-16T13:17:45.784673198Z" level=info msg="connecting to shim 76fc4aa3d3bdcaa7e6c22ced96671c8a2703a0ab098587567839e3ff8eddf5e5" address="unix:///run/containerd/s/d583db15e6078dd02527647981d7d194aa31ce8aac67016661c9133bfbfe523f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:17:45.828467 systemd[1]: Started cri-containerd-76fc4aa3d3bdcaa7e6c22ced96671c8a2703a0ab098587567839e3ff8eddf5e5.scope - libcontainer container 76fc4aa3d3bdcaa7e6c22ced96671c8a2703a0ab098587567839e3ff8eddf5e5. 
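[Annotation] The "connecting to shim ... address=unix:///run/containerd/s/..." lines mean the CRI side dials the per-container shim over a Unix socket (the log notes the actual transport is ttrpc, protocol version 3, layered on that connection). The sketch below only demonstrates the addressing scheme with a bare dial, using the socket path from the record above; it needs root and a live shim to succeed and does not speak ttrpc.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path taken from the containerd record above; net.Dial takes
	// the filesystem path without the unix:// scheme prefix.
	path := "/run/containerd/s/d583db15e6078dd02527647981d7d194aa31ce8aac67016661c9133bfbfe523f"
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to shim socket (ttrpc would be spoken here)")
}
```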
Dec 16 13:17:45.864439 systemd-networkd[1439]: cali629dd8652a7: Link UP Dec 16 13:17:45.864743 systemd-networkd[1439]: cali629dd8652a7: Gained carrier Dec 16 13:17:45.902806 containerd[1552]: time="2025-12-16T13:17:45.902739310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dfzr8,Uid:5fcd65a8-90ec-479e-a0e4-707e3c32e3f8,Namespace:calico-system,Attempt:0,} returns sandbox id \"58f67b50ffb78be7f80a5cc92bd21f0b8e80796a1a0b5719bdd01d875966d389\"" Dec 16 13:17:45.903336 containerd[1552]: 2025-12-16 13:17:45.497 [INFO][4335] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--20--218-k8s-goldmane--666569f655--4xlch-eth0 goldmane-666569f655- calico-system ec01f64e-62ff-448c-858d-eb1dc0f9f12f 808 0 2025-12-16 13:17:23 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-232-20-218 goldmane-666569f655-4xlch eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali629dd8652a7 [] [] }} ContainerID="bc773ace669352694a177236b30b80eaeabb8720979287f50d1e124816192a9e" Namespace="calico-system" Pod="goldmane-666569f655-4xlch" WorkloadEndpoint="172--232--20--218-k8s-goldmane--666569f655--4xlch-" Dec 16 13:17:45.903336 containerd[1552]: 2025-12-16 13:17:45.497 [INFO][4335] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc773ace669352694a177236b30b80eaeabb8720979287f50d1e124816192a9e" Namespace="calico-system" Pod="goldmane-666569f655-4xlch" WorkloadEndpoint="172--232--20--218-k8s-goldmane--666569f655--4xlch-eth0" Dec 16 13:17:45.903336 containerd[1552]: 2025-12-16 13:17:45.561 [INFO][4364] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc773ace669352694a177236b30b80eaeabb8720979287f50d1e124816192a9e" HandleID="k8s-pod-network.bc773ace669352694a177236b30b80eaeabb8720979287f50d1e124816192a9e" Workload="172--232--20--218-k8s-goldmane--666569f655--4xlch-eth0" Dec 16 13:17:45.903336 containerd[1552]: 2025-12-16 13:17:45.562 [INFO][4364] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bc773ace669352694a177236b30b80eaeabb8720979287f50d1e124816192a9e" HandleID="k8s-pod-network.bc773ace669352694a177236b30b80eaeabb8720979287f50d1e124816192a9e" Workload="172--232--20--218-k8s-goldmane--666569f655--4xlch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000101850), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-20-218", "pod":"goldmane-666569f655-4xlch", "timestamp":"2025-12-16 13:17:45.561270016 +0000 UTC"}, Hostname:"172-232-20-218", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:17:45.903336 containerd[1552]: 2025-12-16 13:17:45.562 [INFO][4364] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:17:45.903336 containerd[1552]: 2025-12-16 13:17:45.700 [INFO][4364] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:17:45.903336 containerd[1552]: 2025-12-16 13:17:45.700 [INFO][4364] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-20-218' Dec 16 13:17:45.903336 containerd[1552]: 2025-12-16 13:17:45.751 [INFO][4364] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bc773ace669352694a177236b30b80eaeabb8720979287f50d1e124816192a9e" host="172-232-20-218" Dec 16 13:17:45.903336 containerd[1552]: 2025-12-16 13:17:45.777 [INFO][4364] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-20-218" Dec 16 13:17:45.903336 containerd[1552]: 2025-12-16 13:17:45.794 [INFO][4364] ipam/ipam.go 511: Trying affinity for 192.168.36.128/26 host="172-232-20-218" Dec 16 13:17:45.903336 containerd[1552]: 2025-12-16 13:17:45.800 [INFO][4364] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.128/26 host="172-232-20-218" Dec 16 13:17:45.903336 containerd[1552]: 2025-12-16 13:17:45.807 [INFO][4364] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.128/26 host="172-232-20-218" Dec 16 13:17:45.903336 containerd[1552]: 2025-12-16 13:17:45.807 [INFO][4364] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.36.128/26 handle="k8s-pod-network.bc773ace669352694a177236b30b80eaeabb8720979287f50d1e124816192a9e" host="172-232-20-218" Dec 16 13:17:45.903336 containerd[1552]: 2025-12-16 13:17:45.811 [INFO][4364] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bc773ace669352694a177236b30b80eaeabb8720979287f50d1e124816192a9e Dec 16 13:17:45.903336 containerd[1552]: 2025-12-16 13:17:45.820 [INFO][4364] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.36.128/26 handle="k8s-pod-network.bc773ace669352694a177236b30b80eaeabb8720979287f50d1e124816192a9e" host="172-232-20-218" Dec 16 13:17:45.903336 containerd[1552]: 2025-12-16 13:17:45.833 [INFO][4364] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.36.135/26] block=192.168.36.128/26 handle="k8s-pod-network.bc773ace669352694a177236b30b80eaeabb8720979287f50d1e124816192a9e" host="172-232-20-218" Dec 16 13:17:45.903336 containerd[1552]: 2025-12-16 13:17:45.836 [INFO][4364] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.135/26] handle="k8s-pod-network.bc773ace669352694a177236b30b80eaeabb8720979287f50d1e124816192a9e" host="172-232-20-218" Dec 16 13:17:45.903336 containerd[1552]: 2025-12-16 13:17:45.836 [INFO][4364] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:17:45.903336 containerd[1552]: 2025-12-16 13:17:45.836 [INFO][4364] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.36.135/26] IPv6=[] ContainerID="bc773ace669352694a177236b30b80eaeabb8720979287f50d1e124816192a9e" HandleID="k8s-pod-network.bc773ace669352694a177236b30b80eaeabb8720979287f50d1e124816192a9e" Workload="172--232--20--218-k8s-goldmane--666569f655--4xlch-eth0" Dec 16 13:17:45.903831 containerd[1552]: 2025-12-16 13:17:45.854 [INFO][4335] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc773ace669352694a177236b30b80eaeabb8720979287f50d1e124816192a9e" Namespace="calico-system" Pod="goldmane-666569f655-4xlch" WorkloadEndpoint="172--232--20--218-k8s-goldmane--666569f655--4xlch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--20--218-k8s-goldmane--666569f655--4xlch-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ec01f64e-62ff-448c-858d-eb1dc0f9f12f", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 17, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-20-218", ContainerID:"", Pod:"goldmane-666569f655-4xlch", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.36.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali629dd8652a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:17:45.903831 containerd[1552]: 2025-12-16 13:17:45.856 [INFO][4335] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.135/32] ContainerID="bc773ace669352694a177236b30b80eaeabb8720979287f50d1e124816192a9e" Namespace="calico-system" Pod="goldmane-666569f655-4xlch" WorkloadEndpoint="172--232--20--218-k8s-goldmane--666569f655--4xlch-eth0" Dec 16 13:17:45.903831 containerd[1552]: 2025-12-16 13:17:45.856 [INFO][4335] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali629dd8652a7 ContainerID="bc773ace669352694a177236b30b80eaeabb8720979287f50d1e124816192a9e" Namespace="calico-system" Pod="goldmane-666569f655-4xlch" WorkloadEndpoint="172--232--20--218-k8s-goldmane--666569f655--4xlch-eth0" Dec 16 13:17:45.903831 containerd[1552]: 2025-12-16 13:17:45.865 [INFO][4335] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc773ace669352694a177236b30b80eaeabb8720979287f50d1e124816192a9e" Namespace="calico-system" Pod="goldmane-666569f655-4xlch" WorkloadEndpoint="172--232--20--218-k8s-goldmane--666569f655--4xlch-eth0" Dec 16 13:17:45.903831 containerd[1552]: 2025-12-16 13:17:45.866 [INFO][4335] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bc773ace669352694a177236b30b80eaeabb8720979287f50d1e124816192a9e" Namespace="calico-system" Pod="goldmane-666569f655-4xlch" 
WorkloadEndpoint="172--232--20--218-k8s-goldmane--666569f655--4xlch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--20--218-k8s-goldmane--666569f655--4xlch-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ec01f64e-62ff-448c-858d-eb1dc0f9f12f", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 17, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-20-218", ContainerID:"bc773ace669352694a177236b30b80eaeabb8720979287f50d1e124816192a9e", Pod:"goldmane-666569f655-4xlch", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.36.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali629dd8652a7", MAC:"c2:35:aa:a9:12:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:17:45.903831 containerd[1552]: 2025-12-16 13:17:45.894 [INFO][4335] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc773ace669352694a177236b30b80eaeabb8720979287f50d1e124816192a9e" Namespace="calico-system" Pod="goldmane-666569f655-4xlch" WorkloadEndpoint="172--232--20--218-k8s-goldmane--666569f655--4xlch-eth0" Dec 16 13:17:45.907621 containerd[1552]: time="2025-12-16T13:17:45.907543200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:17:45.928741 containerd[1552]: time="2025-12-16T13:17:45.928714806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c5f65448b-76gbh,Uid:ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"76fc4aa3d3bdcaa7e6c22ced96671c8a2703a0ab098587567839e3ff8eddf5e5\"" Dec 16 13:17:45.935007 containerd[1552]: time="2025-12-16T13:17:45.934869483Z" level=info msg="connecting to shim bc773ace669352694a177236b30b80eaeabb8720979287f50d1e124816192a9e" address="unix:///run/containerd/s/faa28aa2b39252b0546ce8b9644ed70c0b7683f5382b326814664e5550fc96a1" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:17:45.975703 systemd[1]: Started cri-containerd-bc773ace669352694a177236b30b80eaeabb8720979287f50d1e124816192a9e.scope - libcontainer container bc773ace669352694a177236b30b80eaeabb8720979287f50d1e124816192a9e. 
Dec 16 13:17:46.029867 containerd[1552]: time="2025-12-16T13:17:46.029395518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4xlch,Uid:ec01f64e-62ff-448c-858d-eb1dc0f9f12f,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc773ace669352694a177236b30b80eaeabb8720979287f50d1e124816192a9e\"" Dec 16 13:17:46.044611 containerd[1552]: time="2025-12-16T13:17:46.044516459Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:17:46.045340 containerd[1552]: time="2025-12-16T13:17:46.045313227Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:17:46.045469 containerd[1552]: time="2025-12-16T13:17:46.045368737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:17:46.045517 kubelet[2710]: E1216 13:17:46.045477 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:17:46.045517 kubelet[2710]: E1216 13:17:46.045514 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:17:46.045825 kubelet[2710]: E1216 13:17:46.045784 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2nr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dfzr8_calico-system(5fcd65a8-90ec-479e-a0e4-707e3c32e3f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:17:46.047162 containerd[1552]: time="2025-12-16T13:17:46.046743514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:17:46.178861 containerd[1552]: time="2025-12-16T13:17:46.178783974Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:17:46.179695 containerd[1552]: time="2025-12-16T13:17:46.179655803Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:17:46.179755 containerd[1552]: time="2025-12-16T13:17:46.179732403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:17:46.180076 kubelet[2710]: E1216 13:17:46.180025 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:17:46.180150 kubelet[2710]: E1216 13:17:46.180082 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:17:46.180387 kubelet[2710]: E1216 13:17:46.180323 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tpp8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c5f65448b-76gbh_calico-apiserver(ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:17:46.181003 containerd[1552]: time="2025-12-16T13:17:46.180901510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:17:46.182258 kubelet[2710]: E1216 13:17:46.182226 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-76gbh" podUID="ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59" Dec 16 13:17:46.190371 systemd-networkd[1439]: calib1b379e743e: Gained IPv6LL Dec 16 13:17:46.325979 
containerd[1552]: time="2025-12-16T13:17:46.325841906Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:17:46.327186 containerd[1552]: time="2025-12-16T13:17:46.327145173Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:17:46.327285 containerd[1552]: time="2025-12-16T13:17:46.327166173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:17:46.327490 kubelet[2710]: E1216 13:17:46.327432 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:17:46.327656 kubelet[2710]: E1216 13:17:46.327497 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:17:46.327853 containerd[1552]: time="2025-12-16T13:17:46.327832202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:17:46.329451 kubelet[2710]: E1216 13:17:46.329110 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-klwlj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4xlch_calico-system(ec01f64e-62ff-448c-858d-eb1dc0f9f12f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:17:46.330630 kubelet[2710]: E1216 13:17:46.330466 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4xlch" podUID="ec01f64e-62ff-448c-858d-eb1dc0f9f12f" Dec 16 13:17:46.419260 kubelet[2710]: E1216 13:17:46.419217 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:46.420317 containerd[1552]: time="2025-12-16T13:17:46.420214620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tq7np,Uid:e3ebb4c2-f22e-4173-95bd-50b79113c15a,Namespace:kube-system,Attempt:0,}" Dec 16 13:17:46.470331 containerd[1552]: time="2025-12-16T13:17:46.470265072Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:17:46.471955 containerd[1552]: time="2025-12-16T13:17:46.471918288Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:17:46.472051 containerd[1552]: time="2025-12-16T13:17:46.472003888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:17:46.472260 kubelet[2710]: E1216 13:17:46.472226 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:17:46.472371 kubelet[2710]: E1216 13:17:46.472355 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:17:46.472554 kubelet[2710]: E1216 13:17:46.472517 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2nr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dfzr8_calico-system(5fcd65a8-90ec-479e-a0e4-707e3c32e3f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:17:46.474398 kubelet[2710]: E1216 13:17:46.474365 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dfzr8" podUID="5fcd65a8-90ec-479e-a0e4-707e3c32e3f8" Dec 16 13:17:46.535802 systemd-networkd[1439]: cali7b01cbeca34: Link UP Dec 16 13:17:46.536788 systemd-networkd[1439]: cali7b01cbeca34: Gained carrier Dec 16 13:17:46.566291 containerd[1552]: 2025-12-16 13:17:46.467 [INFO][4555] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--20--218-k8s-coredns--668d6bf9bc--tq7np-eth0 coredns-668d6bf9bc- kube-system e3ebb4c2-f22e-4173-95bd-50b79113c15a 806 0 2025-12-16 13:17:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-232-20-218 coredns-668d6bf9bc-tq7np eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7b01cbeca34 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69" Namespace="kube-system" Pod="coredns-668d6bf9bc-tq7np" WorkloadEndpoint="172--232--20--218-k8s-coredns--668d6bf9bc--tq7np-" Dec 16 13:17:46.566291 containerd[1552]: 2025-12-16 13:17:46.467 [INFO][4555] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69" Namespace="kube-system" Pod="coredns-668d6bf9bc-tq7np" WorkloadEndpoint="172--232--20--218-k8s-coredns--668d6bf9bc--tq7np-eth0" Dec 16 13:17:46.566291 containerd[1552]: 2025-12-16 13:17:46.493 [INFO][4567] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69" HandleID="k8s-pod-network.5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69" Workload="172--232--20--218-k8s-coredns--668d6bf9bc--tq7np-eth0" Dec 16 13:17:46.566291 containerd[1552]: 2025-12-16 13:17:46.493 [INFO][4567] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69" HandleID="k8s-pod-network.5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69" Workload="172--232--20--218-k8s-coredns--668d6bf9bc--tq7np-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-232-20-218", "pod":"coredns-668d6bf9bc-tq7np", "timestamp":"2025-12-16 13:17:46.493350766 +0000 UTC"}, Hostname:"172-232-20-218", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:17:46.566291 containerd[1552]: 2025-12-16 13:17:46.493 [INFO][4567] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:17:46.566291 containerd[1552]: 2025-12-16 13:17:46.493 [INFO][4567] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
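[Annotation] By this point five images (kube-controllers, csi, apiserver, goldmane, node-driver-registrar) have each failed with ErrImagePull, and kube-controllers has already moved to ImagePullBackOff. Kubelet retries such pulls with exponential delays; the 10s initial delay and 5m cap used in the sketch below match the commonly documented defaults but should be read as assumptions, and the loop is a sketch of the policy, not kubelet's implementation.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second     // assumed initial backoff
	const maxDelay = 5 * time.Minute // assumed cap
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("pull ghcr.io/flatcar/calico/csi:v3.30.4 attempt %d failed; ImagePullBackOff for %v\n",
			attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```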
Dec 16 13:17:46.566291 containerd[1552]: 2025-12-16 13:17:46.493 [INFO][4567] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-20-218' Dec 16 13:17:46.566291 containerd[1552]: 2025-12-16 13:17:46.503 [INFO][4567] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69" host="172-232-20-218" Dec 16 13:17:46.566291 containerd[1552]: 2025-12-16 13:17:46.508 [INFO][4567] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-20-218" Dec 16 13:17:46.566291 containerd[1552]: 2025-12-16 13:17:46.512 [INFO][4567] ipam/ipam.go 511: Trying affinity for 192.168.36.128/26 host="172-232-20-218" Dec 16 13:17:46.566291 containerd[1552]: 2025-12-16 13:17:46.513 [INFO][4567] ipam/ipam.go 158: Attempting to load block cidr=192.168.36.128/26 host="172-232-20-218" Dec 16 13:17:46.566291 containerd[1552]: 2025-12-16 13:17:46.515 [INFO][4567] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.36.128/26 host="172-232-20-218" Dec 16 13:17:46.566291 containerd[1552]: 2025-12-16 13:17:46.515 [INFO][4567] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.36.128/26 handle="k8s-pod-network.5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69" host="172-232-20-218" Dec 16 13:17:46.566291 containerd[1552]: 2025-12-16 13:17:46.516 [INFO][4567] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69 Dec 16 13:17:46.566291 containerd[1552]: 2025-12-16 13:17:46.519 [INFO][4567] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.36.128/26 handle="k8s-pod-network.5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69" host="172-232-20-218" Dec 16 13:17:46.566291 containerd[1552]: 2025-12-16 13:17:46.527 [INFO][4567] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.36.136/26] block=192.168.36.128/26 handle="k8s-pod-network.5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69" host="172-232-20-218" Dec 16 13:17:46.566291 containerd[1552]: 2025-12-16 13:17:46.527 [INFO][4567] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.36.136/26] handle="k8s-pod-network.5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69" host="172-232-20-218" Dec 16 13:17:46.566291 containerd[1552]: 2025-12-16 13:17:46.527 [INFO][4567] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
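The IPAM transaction above is self-contained: Calico confirms this node's affinity for block 192.168.36.128/26, claims 192.168.36.136 from it, and releases the host-wide lock that serialized the assignment. The containment check and the block's capacity can be replayed from the logged values with Python's stdlib ipaddress module (this only redoes the arithmetic; it is not Calico's IPAM code):

    import ipaddress

    # Values taken verbatim from the ipam/ipam.go entries above.
    block = ipaddress.ip_network("192.168.36.128/26")
    claimed = ipaddress.ip_address("192.168.36.136")

    print(claimed in block)     # True: the claimed IP sits inside the affine block
    print(block.num_addresses)  # 64: a /26 holds 2**(32-26) addresses
    print(block[0], block[-1])  # 192.168.36.128 192.168.36.191: block bounds

Calico IPAM hands out /26 affinity blocks per node by default, which matches the block negotiated here; the host-wide lock is why every assignment in this log is bracketed by an "Acquired"/"Released" pair.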
Dec 16 13:17:46.566291 containerd[1552]: 2025-12-16 13:17:46.527 [INFO][4567] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.36.136/26] IPv6=[] ContainerID="5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69" HandleID="k8s-pod-network.5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69" Workload="172--232--20--218-k8s-coredns--668d6bf9bc--tq7np-eth0" Dec 16 13:17:46.567326 containerd[1552]: 2025-12-16 13:17:46.530 [INFO][4555] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69" Namespace="kube-system" Pod="coredns-668d6bf9bc-tq7np" WorkloadEndpoint="172--232--20--218-k8s-coredns--668d6bf9bc--tq7np-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--20--218-k8s-coredns--668d6bf9bc--tq7np-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e3ebb4c2-f22e-4173-95bd-50b79113c15a", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 17, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-20-218", ContainerID:"", Pod:"coredns-668d6bf9bc-tq7np", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b01cbeca34", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:17:46.567326 containerd[1552]: 2025-12-16 13:17:46.530 [INFO][4555] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.136/32] ContainerID="5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69" Namespace="kube-system" Pod="coredns-668d6bf9bc-tq7np" WorkloadEndpoint="172--232--20--218-k8s-coredns--668d6bf9bc--tq7np-eth0" Dec 16 13:17:46.567326 containerd[1552]: 2025-12-16 13:17:46.531 [INFO][4555] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b01cbeca34 ContainerID="5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69" Namespace="kube-system" Pod="coredns-668d6bf9bc-tq7np" WorkloadEndpoint="172--232--20--218-k8s-coredns--668d6bf9bc--tq7np-eth0" Dec 16 13:17:46.567326 containerd[1552]: 2025-12-16 13:17:46.534 [INFO][4555] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69" Namespace="kube-system" Pod="coredns-668d6bf9bc-tq7np" 
WorkloadEndpoint="172--232--20--218-k8s-coredns--668d6bf9bc--tq7np-eth0" Dec 16 13:17:46.567326 containerd[1552]: 2025-12-16 13:17:46.534 [INFO][4555] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69" Namespace="kube-system" Pod="coredns-668d6bf9bc-tq7np" WorkloadEndpoint="172--232--20--218-k8s-coredns--668d6bf9bc--tq7np-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--20--218-k8s-coredns--668d6bf9bc--tq7np-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e3ebb4c2-f22e-4173-95bd-50b79113c15a", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 17, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-20-218", ContainerID:"5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69", Pod:"coredns-668d6bf9bc-tq7np", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b01cbeca34", MAC:"7e:7f:a3:0e:a7:f9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:17:46.567326 containerd[1552]: 2025-12-16 13:17:46.548 [INFO][4555] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69" Namespace="kube-system" Pod="coredns-668d6bf9bc-tq7np" WorkloadEndpoint="172--232--20--218-k8s-coredns--668d6bf9bc--tq7np-eth0" Dec 16 13:17:46.599951 containerd[1552]: time="2025-12-16T13:17:46.599817317Z" level=info msg="connecting to shim 5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69" address="unix:///run/containerd/s/c083baa0e557466e85f30a903d19b62ad40bc0a96b34dfb35cb55122f012207d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:17:46.609056 kubelet[2710]: E1216 13:17:46.609025 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4xlch" 
podUID="ec01f64e-62ff-448c-858d-eb1dc0f9f12f" Dec 16 13:17:46.620752 kubelet[2710]: E1216 13:17:46.620724 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-76gbh" podUID="ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59" Dec 16 13:17:46.634801 kubelet[2710]: E1216 13:17:46.634746 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:46.638785 kubelet[2710]: E1216 13:17:46.638758 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f4767f495-r8f85" podUID="43dfa291-6618-4cc2-b9da-24c903da3b7c" Dec 16 13:17:46.641030 kubelet[2710]: E1216 13:17:46.640994 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dfzr8" podUID="5fcd65a8-90ec-479e-a0e4-707e3c32e3f8" Dec 16 13:17:46.658695 systemd[1]: Started cri-containerd-5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69.scope - libcontainer container 5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69. 
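Every failed pull in this stretch has the same shape: containerd logs "fetch failed after status: 404 Not Found" against ghcr.io, and kubelet surfaces it as ErrImagePull and then ImagePullBackOff, once per Calico image (csi, node-driver-registrar, goldmane, apiserver, kube-controllers, whisker, whisker-backend — all at v3.30.4). With a journal this repetitive, a per-image tally makes the pattern easier to see than reading entries one by one. A minimal sketch that keys off the image="..." field kubelet appends to its pull-failure entries:

    import re
    from collections import Counter

    # kubelet appends image="..." to its "Failed to pull image" entries.
    IMAGE_RE = re.compile(r'image="([^"]+)"')

    def tally_pull_failures(lines):
        """Count pull-failure log entries per image="..." reference."""
        counts = Counter()
        for line in lines:
            if "Failed to pull image" in line or "ErrImagePull" in line:
                counts.update(IMAGE_RE.findall(line))
        return counts

    # One entry shaped like the kuberuntime_image.go lines above:
    sample = ['kubelet[2710]: E1216 13:17:46.472355 2710 kuberuntime_image.go:55] '
              '"Failed to pull image" err="... not found" '
              'image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"']
    print(tally_pull_failures(sample))

Fed the full journal, every count lands on a ghcr.io/flatcar/calico/*:v3.30.4 reference, which points at a single root cause — the tags missing from the registry — rather than per-pod problems.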
Dec 16 13:17:46.736665 containerd[1552]: time="2025-12-16T13:17:46.736602478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tq7np,Uid:e3ebb4c2-f22e-4173-95bd-50b79113c15a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69\"" Dec 16 13:17:46.737521 kubelet[2710]: E1216 13:17:46.737499 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:46.739417 containerd[1552]: time="2025-12-16T13:17:46.739399153Z" level=info msg="CreateContainer within sandbox \"5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:17:46.749613 containerd[1552]: time="2025-12-16T13:17:46.747845546Z" level=info msg="Container 5c32928ed49c29d4d58e321bf272c78517fc8a69659db73fe9c788934a47a16a: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:17:46.754890 containerd[1552]: time="2025-12-16T13:17:46.754857382Z" level=info msg="CreateContainer within sandbox \"5ef46a41eb5dff6ba4b13ae9c6a4f388fc42de65aa174f7d4ee527d2c2e63f69\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5c32928ed49c29d4d58e321bf272c78517fc8a69659db73fe9c788934a47a16a\"" Dec 16 13:17:46.755557 containerd[1552]: time="2025-12-16T13:17:46.755531501Z" level=info msg="StartContainer for \"5c32928ed49c29d4d58e321bf272c78517fc8a69659db73fe9c788934a47a16a\"" Dec 16 13:17:46.756206 containerd[1552]: time="2025-12-16T13:17:46.756188450Z" level=info msg="connecting to shim 5c32928ed49c29d4d58e321bf272c78517fc8a69659db73fe9c788934a47a16a" address="unix:///run/containerd/s/c083baa0e557466e85f30a903d19b62ad40bc0a96b34dfb35cb55122f012207d" protocol=ttrpc version=3 Dec 16 13:17:46.781682 systemd[1]: Started cri-containerd-5c32928ed49c29d4d58e321bf272c78517fc8a69659db73fe9c788934a47a16a.scope - libcontainer container 5c32928ed49c29d4d58e321bf272c78517fc8a69659db73fe9c788934a47a16a. 
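The recurring dns.go:153 warnings are kubelet truncating the node's resolver list: glibc resolvers honor at most three nameserver entries, so kubelet applies the first three and reports the rest as omitted — here the applied line is the three 172.232.0.x resolvers. A minimal sketch of that truncation, assuming the conventional limit of three (the fourth entry below is a hypothetical stand-in; the log does not say which extra server triggered the warning):

    # glibc reads at most three `nameserver` lines (MAXNS == 3); kubelet
    # warns and truncates when a pod would inherit more than that.
    MAX_NAMESERVERS = 3

    def applied_nameservers(resolv_conf: str):
        """Split a resolv.conf into the servers kubelet applies vs. omits."""
        servers = []
        for line in resolv_conf.splitlines():
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
        return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

    conf = "\n".join([
        "nameserver 172.232.0.15",
        "nameserver 172.232.0.18",
        "nameserver 172.232.0.17",
        "nameserver 203.0.113.53",  # stand-in for the extra entry on this node
    ])
    print(applied_nameservers(conf))
    # (['172.232.0.15', '172.232.0.18', '172.232.0.17'], ['203.0.113.53'])

The warning is cosmetic as long as the first three resolvers work, but it is re-emitted on every pod sandbox sync, which is why it recurs throughout the rest of this log.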
Dec 16 13:17:46.813702 containerd[1552]: time="2025-12-16T13:17:46.813639737Z" level=info msg="StartContainer for \"5c32928ed49c29d4d58e321bf272c78517fc8a69659db73fe9c788934a47a16a\" returns successfully" Dec 16 13:17:47.149810 systemd-networkd[1439]: cali8e063a46a5b: Gained IPv6LL Dec 16 13:17:47.213766 systemd-networkd[1439]: calib52b0c98908: Gained IPv6LL Dec 16 13:17:47.469730 systemd-networkd[1439]: cali629dd8652a7: Gained IPv6LL Dec 16 13:17:47.625763 kubelet[2710]: E1216 13:17:47.625398 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:47.625763 kubelet[2710]: E1216 13:17:47.625404 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:47.627357 kubelet[2710]: E1216 13:17:47.627293 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-76gbh" podUID="ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59" Dec 16 13:17:47.627549 kubelet[2710]: E1216 13:17:47.627441 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4xlch" podUID="ec01f64e-62ff-448c-858d-eb1dc0f9f12f" Dec 16 13:17:47.628006 kubelet[2710]: E1216 13:17:47.627980 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dfzr8" podUID="5fcd65a8-90ec-479e-a0e4-707e3c32e3f8" Dec 16 13:17:47.656757 kubelet[2710]: I1216 13:17:47.656702 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tq7np" podStartSLOduration=35.65668594 podStartE2EDuration="35.65668594s" podCreationTimestamp="2025-12-16 13:17:12 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:17:47.64604748 +0000 UTC m=+42.322335193" watchObservedRunningTime="2025-12-16 13:17:47.65668594 +0000 UTC m=+42.332973653" Dec 16 13:17:48.109788 systemd-networkd[1439]: cali7b01cbeca34: Gained IPv6LL Dec 16 13:17:48.232203 kubelet[2710]: I1216 13:17:48.231614 2710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:17:48.232203 kubelet[2710]: E1216 13:17:48.232077 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:48.627830 kubelet[2710]: E1216 13:17:48.627255 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:48.629190 kubelet[2710]: E1216 13:17:48.629107 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:49.628889 kubelet[2710]: E1216 13:17:49.628860 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:17:51.421763 containerd[1552]: time="2025-12-16T13:17:51.421278431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:17:51.558343 containerd[1552]: time="2025-12-16T13:17:51.558303666Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:17:51.559647 containerd[1552]: time="2025-12-16T13:17:51.559616264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:17:51.561682 containerd[1552]: time="2025-12-16T13:17:51.559637734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:17:51.562027 kubelet[2710]: E1216 13:17:51.561969 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:17:51.562027 kubelet[2710]: E1216 13:17:51.562008 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:17:51.562983 kubelet[2710]: E1216 13:17:51.562651 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:31c573bb53ca4c21a7bf808bed1d26b9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9lblq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76bc8cc6dd-vx8fr_calico-system(1fe93d8b-57a3-4524-abb4-58c7f5835720): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:17:51.564762 containerd[1552]: time="2025-12-16T13:17:51.564589357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:17:51.720911 containerd[1552]: time="2025-12-16T13:17:51.720799164Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:17:51.724654 containerd[1552]: time="2025-12-16T13:17:51.724515219Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:17:51.724654 containerd[1552]: time="2025-12-16T13:17:51.724613689Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:17:51.726044 kubelet[2710]: E1216 13:17:51.725944 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:17:51.726578 kubelet[2710]: E1216 13:17:51.726294 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:17:51.726578 kubelet[2710]: E1216 13:17:51.726392 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lblq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76bc8cc6dd-vx8fr_calico-system(1fe93d8b-57a3-4524-abb4-58c7f5835720): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:17:51.728572 kubelet[2710]: E1216 13:17:51.727667 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76bc8cc6dd-vx8fr" podUID="1fe93d8b-57a3-4524-abb4-58c7f5835720" Dec 16 13:17:56.421826 containerd[1552]: time="2025-12-16T13:17:56.421782373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:17:56.560861 containerd[1552]: time="2025-12-16T13:17:56.560691480Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 
16 13:17:56.561945 containerd[1552]: time="2025-12-16T13:17:56.561914759Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:17:56.562381 containerd[1552]: time="2025-12-16T13:17:56.562046569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:17:56.562670 kubelet[2710]: E1216 13:17:56.562619 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:17:56.563199 kubelet[2710]: E1216 13:17:56.562873 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:17:56.564192 kubelet[2710]: E1216 13:17:56.563932 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-95xc6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod calico-apiserver-6c5f65448b-tfn5z_calico-apiserver(12ede457-05ac-48b3-a0cb-fee957a57d7a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:17:56.565469 kubelet[2710]: E1216 13:17:56.565434 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-tfn5z" podUID="12ede457-05ac-48b3-a0cb-fee957a57d7a" Dec 16 13:17:57.424005 containerd[1552]: time="2025-12-16T13:17:57.423405778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:17:57.563769 containerd[1552]: time="2025-12-16T13:17:57.563637142Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:17:57.565014 containerd[1552]: time="2025-12-16T13:17:57.564963461Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:17:57.565255 containerd[1552]: time="2025-12-16T13:17:57.565132391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 13:17:57.565599 kubelet[2710]: E1216 13:17:57.565540 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:17:57.566193 kubelet[2710]: E1216 13:17:57.565611 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:17:57.566193 kubelet[2710]: E1216 13:17:57.565759 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lvg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5f4767f495-r8f85_calico-system(43dfa291-6618-4cc2-b9da-24c903da3b7c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:17:57.566984 kubelet[2710]: E1216 13:17:57.566944 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f4767f495-r8f85" podUID="43dfa291-6618-4cc2-b9da-24c903da3b7c" Dec 16 13:17:58.419725 containerd[1552]: time="2025-12-16T13:17:58.419450361Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:17:58.559586 containerd[1552]: time="2025-12-16T13:17:58.558600984Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:17:58.559988 containerd[1552]: time="2025-12-16T13:17:58.559616094Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:17:58.559988 containerd[1552]: time="2025-12-16T13:17:58.559687223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:17:58.560231 kubelet[2710]: E1216 13:17:58.560189 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:17:58.560309 kubelet[2710]: E1216 13:17:58.560241 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:17:58.560378 kubelet[2710]: E1216 13:17:58.560348 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2nr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dfzr8_calico-system(5fcd65a8-90ec-479e-a0e4-707e3c32e3f8): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:17:58.562145 containerd[1552]: time="2025-12-16T13:17:58.562121531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:17:58.692682 containerd[1552]: time="2025-12-16T13:17:58.692405603Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:17:58.693996 containerd[1552]: time="2025-12-16T13:17:58.693891002Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:17:58.693996 containerd[1552]: time="2025-12-16T13:17:58.693972822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:17:58.694298 kubelet[2710]: E1216 13:17:58.694246 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:17:58.695419 kubelet[2710]: E1216 13:17:58.694685 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:17:58.695547 kubelet[2710]: E1216 13:17:58.695511 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2nr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dfzr8_calico-system(5fcd65a8-90ec-479e-a0e4-707e3c32e3f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:17:58.697093 kubelet[2710]: E1216 13:17:58.696932 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dfzr8" podUID="5fcd65a8-90ec-479e-a0e4-707e3c32e3f8" Dec 16 13:18:00.421584 containerd[1552]: time="2025-12-16T13:18:00.420414580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:18:00.565327 containerd[1552]: time="2025-12-16T13:18:00.565277964Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:18:00.566445 containerd[1552]: time="2025-12-16T13:18:00.566409584Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:18:00.566512 containerd[1552]: time="2025-12-16T13:18:00.566464884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:18:00.566801 kubelet[2710]: E1216 13:18:00.566762 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:18:00.567128 kubelet[2710]: E1216 13:18:00.566815 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:18:00.567128 kubelet[2710]: E1216 13:18:00.566939 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-klwlj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4xlch_calico-system(ec01f64e-62ff-448c-858d-eb1dc0f9f12f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:18:00.568358 kubelet[2710]: E1216 13:18:00.568329 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4xlch" podUID="ec01f64e-62ff-448c-858d-eb1dc0f9f12f" Dec 16 13:18:01.421789 containerd[1552]: time="2025-12-16T13:18:01.421638172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:18:01.575781 containerd[1552]: time="2025-12-16T13:18:01.575739457Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:18:01.576955 containerd[1552]: time="2025-12-16T13:18:01.576909706Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:18:01.577046 containerd[1552]: time="2025-12-16T13:18:01.576995356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:18:01.577440 kubelet[2710]: E1216 13:18:01.577402 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:18:01.577760 kubelet[2710]: E1216 13:18:01.577474 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:18:01.577760 kubelet[2710]: E1216 13:18:01.577637 2710 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tpp8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c5f65448b-76gbh_calico-apiserver(ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:18:01.578887 kubelet[2710]: E1216 13:18:01.578852 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-76gbh" podUID="ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59" Dec 16 13:18:04.422076 kubelet[2710]: E1216 13:18:04.422029 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76bc8cc6dd-vx8fr" podUID="1fe93d8b-57a3-4524-abb4-58c7f5835720" Dec 16 13:18:08.421956 kubelet[2710]: E1216 13:18:08.421348 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f4767f495-r8f85" podUID="43dfa291-6618-4cc2-b9da-24c903da3b7c" Dec 16 13:18:11.424017 kubelet[2710]: E1216 13:18:11.423937 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-tfn5z" podUID="12ede457-05ac-48b3-a0cb-fee957a57d7a" Dec 16 13:18:11.425640 kubelet[2710]: E1216 13:18:11.425613 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dfzr8" podUID="5fcd65a8-90ec-479e-a0e4-707e3c32e3f8" Dec 16 13:18:13.424366 kubelet[2710]: E1216 13:18:13.424313 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4xlch" podUID="ec01f64e-62ff-448c-858d-eb1dc0f9f12f" Dec 16 13:18:14.420845 kubelet[2710]: E1216 13:18:14.420285 2710 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-76gbh" podUID="ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59" Dec 16 13:18:15.419596 kubelet[2710]: E1216 13:18:15.419438 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:18:15.420441 kubelet[2710]: E1216 13:18:15.420387 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:18:16.421685 containerd[1552]: time="2025-12-16T13:18:16.421595967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:18:16.566017 containerd[1552]: time="2025-12-16T13:18:16.565961042Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:18:16.566892 containerd[1552]: time="2025-12-16T13:18:16.566851465Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:18:16.566935 containerd[1552]: time="2025-12-16T13:18:16.566926844Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:18:16.567090 kubelet[2710]: E1216 13:18:16.567060 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:18:16.567671 kubelet[2710]: E1216 13:18:16.567437 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:18:16.567671 kubelet[2710]: E1216 13:18:16.567543 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:31c573bb53ca4c21a7bf808bed1d26b9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9lblq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76bc8cc6dd-vx8fr_calico-system(1fe93d8b-57a3-4524-abb4-58c7f5835720): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:18:16.569602 containerd[1552]: time="2025-12-16T13:18:16.569513273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:18:16.708515 containerd[1552]: time="2025-12-16T13:18:16.708174660Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:18:16.709201 containerd[1552]: time="2025-12-16T13:18:16.709087322Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:18:16.709201 containerd[1552]: time="2025-12-16T13:18:16.709172521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:18:16.709363 kubelet[2710]: E1216 13:18:16.709326 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:18:16.709418 kubelet[2710]: E1216 13:18:16.709408 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:18:16.709697 kubelet[2710]: E1216 13:18:16.709661 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lblq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76bc8cc6dd-vx8fr_calico-system(1fe93d8b-57a3-4524-abb4-58c7f5835720): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:18:16.711057 kubelet[2710]: E1216 13:18:16.711004 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76bc8cc6dd-vx8fr" podUID="1fe93d8b-57a3-4524-abb4-58c7f5835720" Dec 16 13:18:22.418856 kubelet[2710]: E1216 13:18:22.418814 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:18:23.422174 containerd[1552]: 
time="2025-12-16T13:18:23.421895273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:18:23.563815 containerd[1552]: time="2025-12-16T13:18:23.563767706Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:18:23.565131 containerd[1552]: time="2025-12-16T13:18:23.565055046Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:18:23.565243 containerd[1552]: time="2025-12-16T13:18:23.565108115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 13:18:23.565407 kubelet[2710]: E1216 13:18:23.565355 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:18:23.565842 kubelet[2710]: E1216 13:18:23.565418 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:18:23.565842 kubelet[2710]: E1216 13:18:23.565635 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lvg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5f4767f495-r8f85_calico-system(43dfa291-6618-4cc2-b9da-24c903da3b7c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:18:23.566900 kubelet[2710]: E1216 13:18:23.566848 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f4767f495-r8f85" podUID="43dfa291-6618-4cc2-b9da-24c903da3b7c" Dec 16 13:18:25.421424 containerd[1552]: time="2025-12-16T13:18:25.420487600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:18:25.556556 containerd[1552]: time="2025-12-16T13:18:25.556497283Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:18:25.557794 containerd[1552]: time="2025-12-16T13:18:25.557621316Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:18:25.558479 containerd[1552]: time="2025-12-16T13:18:25.557673665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:18:25.558687 kubelet[2710]: E1216 13:18:25.558638 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:18:25.559246 kubelet[2710]: E1216 13:18:25.558690 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:18:25.559283 
containerd[1552]: time="2025-12-16T13:18:25.559054974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:18:25.559363 kubelet[2710]: E1216 13:18:25.559322 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tpp8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c5f65448b-76gbh_calico-apiserver(ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:18:25.560666 kubelet[2710]: E1216 13:18:25.560628 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-76gbh" podUID="ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59" Dec 16 13:18:25.692007 containerd[1552]: time="2025-12-16T13:18:25.691848766Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:18:25.693093 containerd[1552]: time="2025-12-16T13:18:25.693034948Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:18:25.693242 containerd[1552]: time="2025-12-16T13:18:25.693122176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:18:25.693480 kubelet[2710]: E1216 13:18:25.693431 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:18:25.693536 kubelet[2710]: E1216 13:18:25.693479 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:18:25.694072 kubelet[2710]: E1216 13:18:25.693617 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-95xc6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c5f65448b-tfn5z_calico-apiserver(12ede457-05ac-48b3-a0cb-fee957a57d7a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:18:25.694833 kubelet[2710]: E1216 13:18:25.694785 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-tfn5z" podUID="12ede457-05ac-48b3-a0cb-fee957a57d7a" Dec 16 13:18:26.420169 containerd[1552]: time="2025-12-16T13:18:26.420032256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:18:26.736634 containerd[1552]: time="2025-12-16T13:18:26.736467454Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:18:26.737596 containerd[1552]: time="2025-12-16T13:18:26.737512188Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:18:26.737651 containerd[1552]: time="2025-12-16T13:18:26.737600297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:18:26.737806 kubelet[2710]: E1216 13:18:26.737753 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:18:26.738423 kubelet[2710]: E1216 13:18:26.737815 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:18:26.738461 containerd[1552]: time="2025-12-16T13:18:26.738095669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:18:26.738681 kubelet[2710]: E1216 13:18:26.738475 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2nr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dfzr8_calico-system(5fcd65a8-90ec-479e-a0e4-707e3c32e3f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:18:26.871677 containerd[1552]: time="2025-12-16T13:18:26.871630394Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:18:26.872900 containerd[1552]: time="2025-12-16T13:18:26.872759337Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:18:26.873059 containerd[1552]: time="2025-12-16T13:18:26.872813036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:18:26.873278 kubelet[2710]: E1216 13:18:26.873242 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:18:26.873355 kubelet[2710]: E1216 13:18:26.873294 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:18:26.873678 kubelet[2710]: E1216 13:18:26.873577 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-klwlj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4xlch_calico-system(ec01f64e-62ff-448c-858d-eb1dc0f9f12f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:18:26.874474 containerd[1552]: time="2025-12-16T13:18:26.874451492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:18:26.876856 kubelet[2710]: E1216 13:18:26.875546 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4xlch" podUID="ec01f64e-62ff-448c-858d-eb1dc0f9f12f" Dec 16 13:18:27.011601 containerd[1552]: time="2025-12-16T13:18:27.011438709Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:18:27.012914 containerd[1552]: time="2025-12-16T13:18:27.012863728Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:18:27.013054 containerd[1552]: time="2025-12-16T13:18:27.012971497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:18:27.013192 kubelet[2710]: E1216 13:18:27.013116 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:18:27.013250 kubelet[2710]: E1216 13:18:27.013195 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:18:27.013403 kubelet[2710]: E1216 13:18:27.013346 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2nr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dfzr8_calico-system(5fcd65a8-90ec-479e-a0e4-707e3c32e3f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:18:27.014890 kubelet[2710]: E1216 13:18:27.014849 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dfzr8" podUID="5fcd65a8-90ec-479e-a0e4-707e3c32e3f8" Dec 16 13:18:31.422851 kubelet[2710]: E1216 13:18:31.422775 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76bc8cc6dd-vx8fr" podUID="1fe93d8b-57a3-4524-abb4-58c7f5835720" Dec 16 13:18:36.420527 kubelet[2710]: E1216 13:18:36.420235 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-tfn5z" podUID="12ede457-05ac-48b3-a0cb-fee957a57d7a" Dec 16 13:18:37.421944 kubelet[2710]: E1216 13:18:37.421897 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4xlch" podUID="ec01f64e-62ff-448c-858d-eb1dc0f9f12f" Dec 16 13:18:38.419957 kubelet[2710]: E1216 13:18:38.419854 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f4767f495-r8f85" podUID="43dfa291-6618-4cc2-b9da-24c903da3b7c" Dec 16 13:18:39.421413 kubelet[2710]: E1216 13:18:39.421251 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-76gbh" podUID="ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59" Dec 16 13:18:40.419578 kubelet[2710]: E1216 13:18:40.419519 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:18:42.420685 kubelet[2710]: E1216 13:18:42.420613 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dfzr8" podUID="5fcd65a8-90ec-479e-a0e4-707e3c32e3f8" Dec 16 13:18:45.423216 kubelet[2710]: E1216 13:18:45.423138 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76bc8cc6dd-vx8fr" podUID="1fe93d8b-57a3-4524-abb4-58c7f5835720" Dec 16 13:18:48.420582 kubelet[2710]: E1216 13:18:48.420518 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-tfn5z" podUID="12ede457-05ac-48b3-a0cb-fee957a57d7a" Dec 16 13:18:49.420967 kubelet[2710]: E1216 13:18:49.420843 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4xlch" podUID="ec01f64e-62ff-448c-858d-eb1dc0f9f12f" Dec 16 13:18:50.420018 kubelet[2710]: E1216 13:18:50.419978 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:18:50.422955 kubelet[2710]: E1216 13:18:50.422913 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f4767f495-r8f85" podUID="43dfa291-6618-4cc2-b9da-24c903da3b7c" Dec 16 13:18:50.423701 kubelet[2710]: E1216 13:18:50.423036 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-76gbh" podUID="ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59" Dec 16 13:18:55.421916 kubelet[2710]: E1216 13:18:55.421166 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dfzr8" podUID="5fcd65a8-90ec-479e-a0e4-707e3c32e3f8" Dec 16 13:18:57.420669 containerd[1552]: time="2025-12-16T13:18:57.420613948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:18:57.786017 containerd[1552]: time="2025-12-16T13:18:57.785729919Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:18:57.787657 containerd[1552]: time="2025-12-16T13:18:57.786666983Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:18:57.787657 containerd[1552]: time="2025-12-16T13:18:57.786784472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:18:57.787768 kubelet[2710]: E1216 13:18:57.787190 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 
13:18:57.787768 kubelet[2710]: E1216 13:18:57.787235 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:18:57.787768 kubelet[2710]: E1216 13:18:57.787338 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:31c573bb53ca4c21a7bf808bed1d26b9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9lblq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76bc8cc6dd-vx8fr_calico-system(1fe93d8b-57a3-4524-abb4-58c7f5835720): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:18:57.789858 containerd[1552]: time="2025-12-16T13:18:57.789716912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:18:57.928550 containerd[1552]: time="2025-12-16T13:18:57.928482091Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:18:57.929760 containerd[1552]: time="2025-12-16T13:18:57.929550744Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:18:57.929760 containerd[1552]: time="2025-12-16T13:18:57.929692283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:18:57.930055 kubelet[2710]: E1216 13:18:57.930020 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:18:57.930109 kubelet[2710]: E1216 13:18:57.930087 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:18:57.931248 kubelet[2710]: E1216 13:18:57.931204 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lblq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76bc8cc6dd-vx8fr_calico-system(1fe93d8b-57a3-4524-abb4-58c7f5835720): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:18:57.932394 kubelet[2710]: E1216 13:18:57.932345 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76bc8cc6dd-vx8fr" podUID="1fe93d8b-57a3-4524-abb4-58c7f5835720" Dec 16 13:18:59.420508 kubelet[2710]: E1216 13:18:59.420458 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-tfn5z" podUID="12ede457-05ac-48b3-a0cb-fee957a57d7a" Dec 16 13:19:01.421599 kubelet[2710]: E1216 13:19:01.421307 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-76gbh" podUID="ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59" Dec 16 13:19:01.423141 kubelet[2710]: E1216 13:19:01.421606 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4xlch" podUID="ec01f64e-62ff-448c-858d-eb1dc0f9f12f" Dec 16 13:19:04.420162 containerd[1552]: time="2025-12-16T13:19:04.420099176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:19:04.608455 containerd[1552]: time="2025-12-16T13:19:04.608406108Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:19:04.609502 containerd[1552]: time="2025-12-16T13:19:04.609454952Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:19:04.609595 containerd[1552]: time="2025-12-16T13:19:04.609549432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 13:19:04.609837 kubelet[2710]: E1216 13:19:04.609791 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:19:04.610178 kubelet[2710]: E1216 13:19:04.609855 
2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:19:04.610785 kubelet[2710]: E1216 13:19:04.610717 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lvg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5f4767f495-r8f85_calico-system(43dfa291-6618-4cc2-b9da-24c903da3b7c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:19:04.612456 kubelet[2710]: E1216 13:19:04.612423 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f4767f495-r8f85" podUID="43dfa291-6618-4cc2-b9da-24c903da3b7c" Dec 16 13:19:09.422223 kubelet[2710]: E1216 13:19:09.422188 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:19:10.419598 containerd[1552]: time="2025-12-16T13:19:10.419405781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:19:10.559828 containerd[1552]: time="2025-12-16T13:19:10.559787732Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:19:10.560667 containerd[1552]: time="2025-12-16T13:19:10.560638177Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:19:10.560754 containerd[1552]: time="2025-12-16T13:19:10.560656807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:19:10.560835 kubelet[2710]: E1216 13:19:10.560807 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:19:10.561167 kubelet[2710]: E1216 13:19:10.560844 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:19:10.561167 kubelet[2710]: E1216 13:19:10.560932 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2nr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dfzr8_calico-system(5fcd65a8-90ec-479e-a0e4-707e3c32e3f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:19:10.563124 containerd[1552]: time="2025-12-16T13:19:10.563102064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:19:10.694675 containerd[1552]: time="2025-12-16T13:19:10.694240304Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:19:10.697241 containerd[1552]: time="2025-12-16T13:19:10.697211118Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:19:10.697302 containerd[1552]: time="2025-12-16T13:19:10.697288698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:19:10.698765 kubelet[2710]: E1216 13:19:10.698723 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:19:10.698812 kubelet[2710]: E1216 13:19:10.698778 2710 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:19:10.698929 kubelet[2710]: E1216 13:19:10.698892 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2nr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dfzr8_calico-system(5fcd65a8-90ec-479e-a0e4-707e3c32e3f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:19:10.700237 kubelet[2710]: E1216 13:19:10.700208 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-dfzr8" podUID="5fcd65a8-90ec-479e-a0e4-707e3c32e3f8" Dec 16 13:19:11.420840 kubelet[2710]: E1216 13:19:11.420000 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:19:11.421880 kubelet[2710]: E1216 13:19:11.421782 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76bc8cc6dd-vx8fr" podUID="1fe93d8b-57a3-4524-abb4-58c7f5835720" Dec 16 13:19:13.419942 kubelet[2710]: E1216 13:19:13.419898 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:19:13.421947 containerd[1552]: time="2025-12-16T13:19:13.421922825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:19:13.590772 containerd[1552]: time="2025-12-16T13:19:13.590720122Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:19:13.591682 containerd[1552]: time="2025-12-16T13:19:13.591653438Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:19:13.591760 containerd[1552]: time="2025-12-16T13:19:13.591749297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:19:13.591968 kubelet[2710]: E1216 13:19:13.591924 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:19:13.592058 kubelet[2710]: E1216 13:19:13.591978 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:19:13.592118 kubelet[2710]: E1216 13:19:13.592075 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-95xc6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c5f65448b-tfn5z_calico-apiserver(12ede457-05ac-48b3-a0cb-fee957a57d7a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:19:13.593406 kubelet[2710]: E1216 13:19:13.593380 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-tfn5z" podUID="12ede457-05ac-48b3-a0cb-fee957a57d7a" Dec 16 13:19:15.423819 containerd[1552]: time="2025-12-16T13:19:15.422101128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:19:15.579616 containerd[1552]: time="2025-12-16T13:19:15.579275591Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:19:15.580804 containerd[1552]: time="2025-12-16T13:19:15.580777144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:19:15.581085 containerd[1552]: time="2025-12-16T13:19:15.581046222Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:19:15.581379 kubelet[2710]: E1216 13:19:15.581326 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:19:15.582274 kubelet[2710]: E1216 13:19:15.581393 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:19:15.582274 kubelet[2710]: E1216 13:19:15.582230 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tpp8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c5f65448b-76gbh_calico-apiserver(ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:19:15.582643 containerd[1552]: time="2025-12-16T13:19:15.581722009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:19:15.583547 kubelet[2710]: E1216 13:19:15.583521 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-76gbh" podUID="ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59" Dec 16 13:19:15.712657 containerd[1552]: time="2025-12-16T13:19:15.712203582Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:19:15.713912 containerd[1552]: time="2025-12-16T13:19:15.713680955Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:19:15.713912 containerd[1552]: time="2025-12-16T13:19:15.713756745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:19:15.715586 kubelet[2710]: E1216 13:19:15.713987 2710 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:19:15.715586 kubelet[2710]: E1216 13:19:15.714071 2710 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:19:15.715586 kubelet[2710]: E1216 13:19:15.714313 2710 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-klwlj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4xlch_calico-system(ec01f64e-62ff-448c-858d-eb1dc0f9f12f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:19:15.715743 kubelet[2710]: E1216 13:19:15.715690 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4xlch" podUID="ec01f64e-62ff-448c-858d-eb1dc0f9f12f" Dec 16 13:19:18.115727 systemd[1]: Started 
sshd@7-172.232.20.218:22-139.178.89.65:42498.service - OpenSSH per-connection server daemon (139.178.89.65:42498). Dec 16 13:19:18.419028 kubelet[2710]: E1216 13:19:18.418824 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:19:18.487490 sshd[4843]: Accepted publickey for core from 139.178.89.65 port 42498 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:19:18.489762 sshd-session[4843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:19:18.495966 systemd-logind[1525]: New session 8 of user core. Dec 16 13:19:18.501954 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 13:19:18.871704 sshd[4872]: Connection closed by 139.178.89.65 port 42498 Dec 16 13:19:18.872089 sshd-session[4843]: pam_unix(sshd:session): session closed for user core Dec 16 13:19:18.877961 systemd[1]: sshd@7-172.232.20.218:22-139.178.89.65:42498.service: Deactivated successfully. Dec 16 13:19:18.882332 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 13:19:18.884551 systemd-logind[1525]: Session 8 logged out. Waiting for processes to exit. Dec 16 13:19:18.886869 systemd-logind[1525]: Removed session 8. Dec 16 13:19:19.421276 kubelet[2710]: E1216 13:19:19.421238 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f4767f495-r8f85" podUID="43dfa291-6618-4cc2-b9da-24c903da3b7c" Dec 16 13:19:22.419398 kubelet[2710]: E1216 13:19:22.419343 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" Dec 16 13:19:22.421979 kubelet[2710]: E1216 13:19:22.421939 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dfzr8" podUID="5fcd65a8-90ec-479e-a0e4-707e3c32e3f8" Dec 16 13:19:23.936264 systemd[1]: Started sshd@8-172.232.20.218:22-139.178.89.65:41388.service - OpenSSH per-connection server daemon (139.178.89.65:41388). 
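The recurring dns.go:153 "Nameserver limits exceeded" records are a warning, not a failure: resolver libraries honor at most three nameserver entries, so when the node's /etc/resolv.conf lists more, kubelet passes only the first three to pods and logs the applied line (here 172.232.0.15 172.232.0.18 172.232.0.17). A minimal sketch of that truncation rule, for illustration only; this is not kubelet's actual code:

// Minimal sketch of why kubelet logs "Nameserver limits exceeded":
// resolver libraries honor at most three nameservers, so the first three
// from the node's resolv.conf are kept and the rest are dropped.
// Illustration only; not kubelet's implementation.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc's MAXNS

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("limit exceeded, applied nameserver line: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	} else {
		fmt.Printf("within limit: %s\n", strings.Join(servers, " "))
	}
}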
Dec 16 13:19:24.304072 sshd[4886]: Accepted publickey for core from 139.178.89.65 port 41388 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:19:24.308175 sshd-session[4886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:19:24.320408 systemd-logind[1525]: New session 9 of user core. Dec 16 13:19:24.325728 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 13:19:24.419920 kubelet[2710]: E1216 13:19:24.419665 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-tfn5z" podUID="12ede457-05ac-48b3-a0cb-fee957a57d7a" Dec 16 13:19:24.421610 kubelet[2710]: E1216 13:19:24.421491 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76bc8cc6dd-vx8fr" podUID="1fe93d8b-57a3-4524-abb4-58c7f5835720" Dec 16 13:19:24.628449 sshd[4889]: Connection closed by 139.178.89.65 port 41388 Dec 16 13:19:24.630472 sshd-session[4886]: pam_unix(sshd:session): session closed for user core Dec 16 13:19:24.635462 systemd-logind[1525]: Session 9 logged out. Waiting for processes to exit. Dec 16 13:19:24.636423 systemd[1]: sshd@8-172.232.20.218:22-139.178.89.65:41388.service: Deactivated successfully. Dec 16 13:19:24.638866 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 13:19:24.640492 systemd-logind[1525]: Removed session 9. Dec 16 13:19:26.422154 kubelet[2710]: E1216 13:19:26.422092 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-76gbh" podUID="ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59" Dec 16 13:19:29.694248 systemd[1]: Started sshd@9-172.232.20.218:22-139.178.89.65:41398.service - OpenSSH per-connection server daemon (139.178.89.65:41398). 
Dec 16 13:19:30.054452 sshd[4902]: Accepted publickey for core from 139.178.89.65 port 41398 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:19:30.057354 sshd-session[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:19:30.064366 systemd-logind[1525]: New session 10 of user core. Dec 16 13:19:30.071827 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 13:19:30.386601 sshd[4908]: Connection closed by 139.178.89.65 port 41398 Dec 16 13:19:30.386186 sshd-session[4902]: pam_unix(sshd:session): session closed for user core Dec 16 13:19:30.392546 systemd-logind[1525]: Session 10 logged out. Waiting for processes to exit. Dec 16 13:19:30.394228 systemd[1]: sshd@9-172.232.20.218:22-139.178.89.65:41398.service: Deactivated successfully. Dec 16 13:19:30.399084 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 13:19:30.404637 systemd-logind[1525]: Removed session 10. Dec 16 13:19:30.449753 systemd[1]: Started sshd@10-172.232.20.218:22-139.178.89.65:60942.service - OpenSSH per-connection server daemon (139.178.89.65:60942). Dec 16 13:19:30.806203 sshd[4922]: Accepted publickey for core from 139.178.89.65 port 60942 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:19:30.807537 sshd-session[4922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:19:30.812164 systemd-logind[1525]: New session 11 of user core. Dec 16 13:19:30.820719 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 13:19:31.166667 sshd[4925]: Connection closed by 139.178.89.65 port 60942 Dec 16 13:19:31.167877 sshd-session[4922]: pam_unix(sshd:session): session closed for user core Dec 16 13:19:31.172791 systemd[1]: sshd@10-172.232.20.218:22-139.178.89.65:60942.service: Deactivated successfully. Dec 16 13:19:31.175941 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 13:19:31.177051 systemd-logind[1525]: Session 11 logged out. Waiting for processes to exit. Dec 16 13:19:31.178494 systemd-logind[1525]: Removed session 11. Dec 16 13:19:31.232144 systemd[1]: Started sshd@11-172.232.20.218:22-139.178.89.65:60946.service - OpenSSH per-connection server daemon (139.178.89.65:60946). Dec 16 13:19:31.421758 kubelet[2710]: E1216 13:19:31.421613 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4xlch" podUID="ec01f64e-62ff-448c-858d-eb1dc0f9f12f" Dec 16 13:19:31.595587 sshd[4935]: Accepted publickey for core from 139.178.89.65 port 60946 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:19:31.596504 sshd-session[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:19:31.603522 systemd-logind[1525]: New session 12 of user core. Dec 16 13:19:31.609977 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 16 13:19:31.922068 sshd[4938]: Connection closed by 139.178.89.65 port 60946 Dec 16 13:19:31.924642 sshd-session[4935]: pam_unix(sshd:session): session closed for user core Dec 16 13:19:31.930364 systemd[1]: sshd@11-172.232.20.218:22-139.178.89.65:60946.service: Deactivated successfully. Dec 16 13:19:31.930394 systemd-logind[1525]: Session 12 logged out. Waiting for processes to exit. Dec 16 13:19:31.934476 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 13:19:31.939126 systemd-logind[1525]: Removed session 12. Dec 16 13:19:34.420596 kubelet[2710]: E1216 13:19:34.420523 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f4767f495-r8f85" podUID="43dfa291-6618-4cc2-b9da-24c903da3b7c" Dec 16 13:19:35.425556 kubelet[2710]: E1216 13:19:35.424692 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dfzr8" podUID="5fcd65a8-90ec-479e-a0e4-707e3c32e3f8" Dec 16 13:19:36.984988 systemd[1]: Started sshd@12-172.232.20.218:22-139.178.89.65:60952.service - OpenSSH per-connection server daemon (139.178.89.65:60952). Dec 16 13:19:37.328494 sshd[4950]: Accepted publickey for core from 139.178.89.65 port 60952 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:19:37.330218 sshd-session[4950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:19:37.336762 systemd-logind[1525]: New session 13 of user core. Dec 16 13:19:37.341687 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 16 13:19:37.424206 kubelet[2710]: E1216 13:19:37.424078 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-76gbh" podUID="ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59" Dec 16 13:19:37.426595 kubelet[2710]: E1216 13:19:37.426277 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76bc8cc6dd-vx8fr" podUID="1fe93d8b-57a3-4524-abb4-58c7f5835720" Dec 16 13:19:37.635472 sshd[4953]: Connection closed by 139.178.89.65 port 60952 Dec 16 13:19:37.636341 sshd-session[4950]: pam_unix(sshd:session): session closed for user core Dec 16 13:19:37.641448 systemd[1]: sshd@12-172.232.20.218:22-139.178.89.65:60952.service: Deactivated successfully. Dec 16 13:19:37.643662 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 13:19:37.644466 systemd-logind[1525]: Session 13 logged out. Waiting for processes to exit. Dec 16 13:19:37.646458 systemd-logind[1525]: Removed session 13. Dec 16 13:19:37.699624 systemd[1]: Started sshd@13-172.232.20.218:22-139.178.89.65:60968.service - OpenSSH per-connection server daemon (139.178.89.65:60968). Dec 16 13:19:38.066313 sshd[4965]: Accepted publickey for core from 139.178.89.65 port 60968 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:19:38.068839 sshd-session[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:19:38.075223 systemd-logind[1525]: New session 14 of user core. Dec 16 13:19:38.081916 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 13:19:38.422886 kubelet[2710]: E1216 13:19:38.422850 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-tfn5z" podUID="12ede457-05ac-48b3-a0cb-fee957a57d7a" Dec 16 13:19:38.492969 sshd[4968]: Connection closed by 139.178.89.65 port 60968 Dec 16 13:19:38.493630 sshd-session[4965]: pam_unix(sshd:session): session closed for user core Dec 16 13:19:38.500869 systemd-logind[1525]: Session 14 logged out. Waiting for processes to exit. Dec 16 13:19:38.501807 systemd[1]: sshd@13-172.232.20.218:22-139.178.89.65:60968.service: Deactivated successfully. Dec 16 13:19:38.507268 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 13:19:38.510044 systemd-logind[1525]: Removed session 14. Dec 16 13:19:38.553754 systemd[1]: Started sshd@14-172.232.20.218:22-139.178.89.65:60980.service - OpenSSH per-connection server daemon (139.178.89.65:60980). Dec 16 13:19:38.909243 sshd[4978]: Accepted publickey for core from 139.178.89.65 port 60980 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:19:38.910807 sshd-session[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:19:38.918914 systemd-logind[1525]: New session 15 of user core. Dec 16 13:19:38.929744 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 13:19:39.810672 sshd[4981]: Connection closed by 139.178.89.65 port 60980 Dec 16 13:19:39.811501 sshd-session[4978]: pam_unix(sshd:session): session closed for user core Dec 16 13:19:39.815756 systemd-logind[1525]: Session 15 logged out. Waiting for processes to exit. Dec 16 13:19:39.818008 systemd[1]: sshd@14-172.232.20.218:22-139.178.89.65:60980.service: Deactivated successfully. Dec 16 13:19:39.822173 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 13:19:39.826302 systemd-logind[1525]: Removed session 15. Dec 16 13:19:39.877759 systemd[1]: Started sshd@15-172.232.20.218:22-139.178.89.65:60994.service - OpenSSH per-connection server daemon (139.178.89.65:60994). Dec 16 13:19:40.228538 sshd[4999]: Accepted publickey for core from 139.178.89.65 port 60994 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:19:40.231446 sshd-session[4999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:19:40.238220 systemd-logind[1525]: New session 16 of user core. Dec 16 13:19:40.244679 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 13:19:40.633655 sshd[5002]: Connection closed by 139.178.89.65 port 60994 Dec 16 13:19:40.636729 sshd-session[4999]: pam_unix(sshd:session): session closed for user core Dec 16 13:19:40.642471 systemd[1]: sshd@15-172.232.20.218:22-139.178.89.65:60994.service: Deactivated successfully. Dec 16 13:19:40.642663 systemd-logind[1525]: Session 16 logged out. Waiting for processes to exit. Dec 16 13:19:40.645937 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 13:19:40.649802 systemd-logind[1525]: Removed session 16. 
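Every pull failure in this journal is the same underlying 404: the v3.30.4 tags under ghcr.io/flatcar/calico/* do not resolve, so containerd fails at reference resolution before transferring any layers (hence the tiny "bytes read" counts). A sketch of how the missing tag could be confirmed from outside the cluster against the registry's v2 API, assuming ghcr.io issues anonymous pull tokens through a standard /token endpoint:

// Sketch: reproduce the kubelet's 404 outside the cluster by asking the
// registry's v2 API whether the tag exists. Assumes ghcr.io hands out
// anonymous pull tokens via /token, as standard Docker registries do.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	repo, tag := "flatcar/calico/goldmane", "v3.30.4" // image from the log

	// Step 1: fetch an anonymous bearer token scoped to the repository.
	tokURL := fmt.Sprintf("https://ghcr.io/token?scope=repository:%s:pull", repo)
	resp, err := http.Get(tokURL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// Step 2: HEAD the manifest; 200 means the tag exists, 404 matches the log.
	req, _ := http.NewRequest("HEAD",
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	res.Body.Close()
	fmt.Printf("%s:%s -> HTTP %d\n", repo, tag, res.StatusCode)
}

With the same token, a GET to https://ghcr.io/v2/<repo>/tags/list would show which tags are actually published for the repository.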
Dec 16 13:19:40.699258 systemd[1]: Started sshd@16-172.232.20.218:22-139.178.89.65:35260.service - OpenSSH per-connection server daemon (139.178.89.65:35260). Dec 16 13:19:41.051149 sshd[5012]: Accepted publickey for core from 139.178.89.65 port 35260 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:19:41.054193 sshd-session[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:19:41.065204 systemd-logind[1525]: New session 17 of user core. Dec 16 13:19:41.070712 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 13:19:41.383637 sshd[5015]: Connection closed by 139.178.89.65 port 35260 Dec 16 13:19:41.384010 sshd-session[5012]: pam_unix(sshd:session): session closed for user core Dec 16 13:19:41.390500 systemd[1]: sshd@16-172.232.20.218:22-139.178.89.65:35260.service: Deactivated successfully. Dec 16 13:19:41.395025 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 13:19:41.396921 systemd-logind[1525]: Session 17 logged out. Waiting for processes to exit. Dec 16 13:19:41.399310 systemd-logind[1525]: Removed session 17. Dec 16 13:19:46.421165 kubelet[2710]: E1216 13:19:46.421015 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f4767f495-r8f85" podUID="43dfa291-6618-4cc2-b9da-24c903da3b7c" Dec 16 13:19:46.423934 kubelet[2710]: E1216 13:19:46.422365 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4xlch" podUID="ec01f64e-62ff-448c-858d-eb1dc0f9f12f" Dec 16 13:19:46.444921 systemd[1]: Started sshd@17-172.232.20.218:22-139.178.89.65:35274.service - OpenSSH per-connection server daemon (139.178.89.65:35274). Dec 16 13:19:46.784355 sshd[5034]: Accepted publickey for core from 139.178.89.65 port 35274 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:19:46.786380 sshd-session[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:19:46.793681 systemd-logind[1525]: New session 18 of user core. Dec 16 13:19:46.797903 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 13:19:47.130640 sshd[5037]: Connection closed by 139.178.89.65 port 35274 Dec 16 13:19:47.132797 sshd-session[5034]: pam_unix(sshd:session): session closed for user core Dec 16 13:19:47.137937 systemd[1]: sshd@17-172.232.20.218:22-139.178.89.65:35274.service: Deactivated successfully. Dec 16 13:19:47.140342 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 13:19:47.141616 systemd-logind[1525]: Session 18 logged out. Waiting for processes to exit. Dec 16 13:19:47.144122 systemd-logind[1525]: Removed session 18. 
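The cadence of these records reflects kubelet's image back-off rather than anything specific to the images: after each failed pull the next real attempt is delayed by a doubling interval, so most pod syncs only log ImagePullBackOff while fresh PullImage attempts for a given tag land minutes apart. A worked sketch of the schedule, assuming kubelet's usual defaults of a 10s initial back-off doubling to a 300s ceiling:

// Worked sketch of the ImagePullBackOff cadence, assuming kubelet's usual
// defaults: back-off starts at 10s, doubles per failed pull, and is capped
// at 300s, so steady-state retries land roughly five minutes apart.
package main

import (
	"fmt"
	"time"
)

func main() {
	backoff := 10 * time.Second
	const ceiling = 300 * time.Second
	elapsed := time.Duration(0)
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("pull attempt %d at t=%v\n", attempt, elapsed)
		elapsed += backoff
		backoff *= 2
		if backoff > ceiling {
			backoff = ceiling
		}
	}
}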
Dec 16 13:19:48.420077 kubelet[2710]: E1216 13:19:48.419874 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-76gbh" podUID="ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59"
Dec 16 13:19:48.420828 kubelet[2710]: E1216 13:19:48.420802 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76bc8cc6dd-vx8fr" podUID="1fe93d8b-57a3-4524-abb4-58c7f5835720"
Dec 16 13:19:48.421314 kubelet[2710]: E1216 13:19:48.421259 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dfzr8" podUID="5fcd65a8-90ec-479e-a0e4-707e3c32e3f8"
Dec 16 13:19:50.419432 kubelet[2710]: E1216 13:19:50.419006 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Dec 16 13:19:50.420401 kubelet[2710]: E1216 13:19:50.419554 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-tfn5z" podUID="12ede457-05ac-48b3-a0cb-fee957a57d7a"
Dec 16 13:19:51.419613 kubelet[2710]: E1216 13:19:51.419218 2710 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
Dec 16 13:19:52.194761 systemd[1]: Started sshd@18-172.232.20.218:22-139.178.89.65:60210.service - OpenSSH per-connection server daemon (139.178.89.65:60210).
Dec 16 13:19:52.541698 sshd[5074]: Accepted publickey for core from 139.178.89.65 port 60210 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:19:52.545014 sshd-session[5074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:19:52.551775 systemd-logind[1525]: New session 19 of user core.
Dec 16 13:19:52.557680 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 16 13:19:52.877696 sshd[5077]: Connection closed by 139.178.89.65 port 60210
Dec 16 13:19:52.878262 sshd-session[5074]: pam_unix(sshd:session): session closed for user core
Dec 16 13:19:52.882942 systemd-logind[1525]: Session 19 logged out. Waiting for processes to exit.
Dec 16 13:19:52.884063 systemd[1]: sshd@18-172.232.20.218:22-139.178.89.65:60210.service: Deactivated successfully.
Dec 16 13:19:52.887450 systemd[1]: session-19.scope: Deactivated successfully.
Dec 16 13:19:52.891538 systemd-logind[1525]: Removed session 19.
Dec 16 13:19:57.423683 kubelet[2710]: E1216 13:19:57.423375 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f4767f495-r8f85" podUID="43dfa291-6618-4cc2-b9da-24c903da3b7c"
Dec 16 13:19:57.948989 systemd[1]: Started sshd@19-172.232.20.218:22-139.178.89.65:60214.service - OpenSSH per-connection server daemon (139.178.89.65:60214).
Dec 16 13:19:58.330628 sshd[5089]: Accepted publickey for core from 139.178.89.65 port 60214 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:19:58.332952 sshd-session[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:19:58.338314 systemd-logind[1525]: New session 20 of user core.
Dec 16 13:19:58.343705 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 16 13:19:58.670998 sshd[5092]: Connection closed by 139.178.89.65 port 60214
Dec 16 13:19:58.673659 sshd-session[5089]: pam_unix(sshd:session): session closed for user core
Dec 16 13:19:58.677345 systemd-logind[1525]: Session 20 logged out. Waiting for processes to exit.
Dec 16 13:19:58.679237 systemd[1]: sshd@19-172.232.20.218:22-139.178.89.65:60214.service: Deactivated successfully.
Dec 16 13:19:58.682354 systemd[1]: session-20.scope: Deactivated successfully.
Dec 16 13:19:58.684950 systemd-logind[1525]: Removed session 20.
Dec 16 13:19:59.420233 kubelet[2710]: E1216 13:19:59.420141 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4xlch" podUID="ec01f64e-62ff-448c-858d-eb1dc0f9f12f"
Dec 16 13:19:59.420233 kubelet[2710]: E1216 13:19:59.420208 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c5f65448b-76gbh" podUID="ece17f0c-d3a3-4fcd-aa90-acc8b03f2f59"
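Every ImagePullBackOff entry in this section names a ghcr.io/flatcar/calico/*:v3.30.4 image that the registry reports as not found, so the affected Calico pods cannot start until a tag that actually exists is referenced. A minimal sketch for tallying the failing images from a saved copy of this journal follows; the file name node.log is a hypothetical placeholder, and the regexes simply match the escaped strings in the kubelet lines above.

import re
from collections import Counter

# Matches the escaped image reference inside the kubelet error string,
# e.g. ...pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\"...
IMAGE_RE = re.compile(r'Back-off pulling image \\+"([^"\\]+)\\+"')
POD_RE = re.compile(r'pod="([^"]+)"')

def summarize(path="node.log"):
    """Count ImagePullBackOff occurrences per image and affected pods."""
    images = Counter()
    pods = set()
    with open(path) as f:
        for line in f:
            if "ImagePullBackOff" not in line:
                continue
            images.update(IMAGE_RE.findall(line))
            pods.update(POD_RE.findall(line))
    for image, count in images.most_common():
        print(f"{count:3d}  {image}")
    print(f"{len(pods)} distinct pods affected")

if __name__ == "__main__":
    summarize()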