Dec 12 18:42:20.937316 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025 Dec 12 18:42:20.937341 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 12 18:42:20.937350 kernel: BIOS-provided physical RAM map: Dec 12 18:42:20.937356 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable Dec 12 18:42:20.937362 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved Dec 12 18:42:20.937368 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 12 18:42:20.937376 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Dec 12 18:42:20.937383 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Dec 12 18:42:20.937388 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 12 18:42:20.937394 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 12 18:42:20.937400 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 12 18:42:20.937406 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 12 18:42:20.937412 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Dec 12 18:42:20.937418 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 12 18:42:20.937427 kernel: NX (Execute Disable) protection: active Dec 12 18:42:20.937434 kernel: APIC: Static calls initialized Dec 12 18:42:20.937440 kernel: SMBIOS 2.8 present. 
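
The BIOS-e820 map above is the firmware's account of physical memory: three "usable" ranges (low memory below 640 KiB, roughly 2 GiB below the 32-bit PCI hole, and another roughly 2 GiB remapped above the 4 GiB boundary), with everything else reserved. A quick sanity check is to total the usable ranges; a minimal sketch using the addresses copied from the log (end addresses are inclusive):

    # Sum the "usable" e820 ranges printed above; end addresses are inclusive.
    usable = [
        (0x0000000000000000, 0x000000000009f7ff),   # low memory below 640 KiB
        (0x0000000000100000, 0x000000007ffdcfff),   # ~2 GiB below the PCI hole
        (0x0000000100000000, 0x000000017fffffff),   # ~2 GiB remapped above 4 GiB
    ]
    total = sum(end - start + 1 for start, end in usable)
    print(f"{total} bytes = {total / 2**20:.1f} MiB")
    # -> 4294428672 bytes, about 4095.5 MiB: a 4 GiB instance minus the reserved holes

That figure lines up, to within a few pages of early reservations, with the 4193772K total the kernel reports later in its "Memory:" line.
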
Dec 12 18:42:20.937446 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified Dec 12 18:42:20.937452 kernel: DMI: Memory slots populated: 1/1 Dec 12 18:42:20.937459 kernel: Hypervisor detected: KVM Dec 12 18:42:20.937467 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Dec 12 18:42:20.937473 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 12 18:42:20.937479 kernel: kvm-clock: using sched offset of 7202201659 cycles Dec 12 18:42:20.937486 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 12 18:42:20.937493 kernel: tsc: Detected 2000.002 MHz processor Dec 12 18:42:20.937499 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 12 18:42:20.937506 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 12 18:42:20.937513 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 Dec 12 18:42:20.937519 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 12 18:42:20.937528 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 12 18:42:20.937535 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Dec 12 18:42:20.937541 kernel: Using GB pages for direct mapping Dec 12 18:42:20.937547 kernel: ACPI: Early table checksum verification disabled Dec 12 18:42:20.937554 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS ) Dec 12 18:42:20.937560 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:42:20.937567 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:42:20.937573 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:42:20.937579 kernel: ACPI: FACS 0x000000007FFE0000 000040 Dec 12 18:42:20.937586 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:42:20.937595 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:42:20.937604 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:42:20.937611 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:42:20.937618 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] Dec 12 18:42:20.937625 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] Dec 12 18:42:20.937633 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Dec 12 18:42:20.937640 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] Dec 12 18:42:20.937647 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] Dec 12 18:42:20.937653 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] Dec 12 18:42:20.937660 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] Dec 12 18:42:20.937667 kernel: No NUMA configuration found Dec 12 18:42:20.937673 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Dec 12 18:42:20.937680 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff] Dec 12 18:42:20.937687 kernel: Zone ranges: Dec 12 18:42:20.937696 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 12 18:42:20.937702 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 12 18:42:20.937709 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Dec 12 18:42:20.937924 kernel: Device empty Dec 12 18:42:20.937932 kernel: Movable zone start for each node Dec 12 
18:42:20.937939 kernel: Early memory node ranges Dec 12 18:42:20.937945 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 12 18:42:20.937952 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Dec 12 18:42:20.937959 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Dec 12 18:42:20.937965 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Dec 12 18:42:20.937975 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 12 18:42:20.937982 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 12 18:42:20.937988 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Dec 12 18:42:20.937995 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 12 18:42:20.938002 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 12 18:42:20.938009 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 12 18:42:20.938016 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 12 18:42:20.938022 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 12 18:42:20.938029 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 12 18:42:20.938038 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 12 18:42:20.938045 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 12 18:42:20.938051 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 12 18:42:20.938058 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 12 18:42:20.938064 kernel: TSC deadline timer available Dec 12 18:42:20.938071 kernel: CPU topo: Max. logical packages: 1 Dec 12 18:42:20.938078 kernel: CPU topo: Max. logical dies: 1 Dec 12 18:42:20.938084 kernel: CPU topo: Max. dies per package: 1 Dec 12 18:42:20.938091 kernel: CPU topo: Max. threads per core: 1 Dec 12 18:42:20.938100 kernel: CPU topo: Num. cores per package: 2 Dec 12 18:42:20.938107 kernel: CPU topo: Num. 
threads per package: 2 Dec 12 18:42:20.938113 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Dec 12 18:42:20.938120 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 12 18:42:20.938127 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 12 18:42:20.938133 kernel: kvm-guest: setup PV sched yield Dec 12 18:42:20.938140 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 12 18:42:20.938147 kernel: Booting paravirtualized kernel on KVM Dec 12 18:42:20.938153 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 12 18:42:20.938162 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 12 18:42:20.938169 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Dec 12 18:42:20.938176 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Dec 12 18:42:20.938182 kernel: pcpu-alloc: [0] 0 1 Dec 12 18:42:20.938189 kernel: kvm-guest: PV spinlocks enabled Dec 12 18:42:20.938196 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 12 18:42:20.938203 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 12 18:42:20.938210 kernel: random: crng init done Dec 12 18:42:20.938219 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 12 18:42:20.938226 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 12 18:42:20.938232 kernel: Fallback order for Node 0: 0 Dec 12 18:42:20.938239 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443 Dec 12 18:42:20.938246 kernel: Policy zone: Normal Dec 12 18:42:20.938253 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 12 18:42:20.938259 kernel: software IO TLB: area num 2. Dec 12 18:42:20.938266 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 12 18:42:20.938272 kernel: ftrace: allocating 40103 entries in 157 pages Dec 12 18:42:20.938281 kernel: ftrace: allocated 157 pages with 5 groups Dec 12 18:42:20.938288 kernel: Dynamic Preempt: voluntary Dec 12 18:42:20.938294 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 12 18:42:20.938302 kernel: rcu: RCU event tracing is enabled. Dec 12 18:42:20.938309 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 12 18:42:20.938316 kernel: Trampoline variant of Tasks RCU enabled. Dec 12 18:42:20.938322 kernel: Rude variant of Tasks RCU enabled. Dec 12 18:42:20.938329 kernel: Tracing variant of Tasks RCU enabled. Dec 12 18:42:20.938336 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 12 18:42:20.938344 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 12 18:42:20.938351 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 12 18:42:20.938365 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 12 18:42:20.938374 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
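
Several arguments on that command line (flatcar.first_boot, flatcar.oem.id, verity.usrhash) are consumed by userspace rather than by the kernel; tooling in the initrd reads them back from /proc/cmdline. A rough sketch of that parsing, with the helper name purely illustrative (note the log shows rootflags/mount.usrflags duplicated; in this sketch a later duplicate simply overwrites the earlier one):

    # Split /proc/cmdline into bare flags and key=value arguments.
    def parse_cmdline(text):
        args = {}
        for token in text.split():
            key, sep, value = token.partition("=")
            args[key] = value if sep else True   # bare tokens become booleans
        return args

    with open("/proc/cmdline") as f:
        args = parse_cmdline(f.read())

    print(args.get("flatcar.oem.id"))    # "akamai" on this instance
    print(args.get("verity.usrhash"))    # root hash checked when /usr is mapped
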
Dec 12 18:42:20.938381 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 12 18:42:20.938388 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 12 18:42:20.938395 kernel: Console: colour VGA+ 80x25 Dec 12 18:42:20.938401 kernel: printk: legacy console [tty0] enabled Dec 12 18:42:20.938408 kernel: printk: legacy console [ttyS0] enabled Dec 12 18:42:20.938415 kernel: ACPI: Core revision 20240827 Dec 12 18:42:20.938425 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 12 18:42:20.938431 kernel: APIC: Switch to symmetric I/O mode setup Dec 12 18:42:20.938438 kernel: x2apic enabled Dec 12 18:42:20.938445 kernel: APIC: Switched APIC routing to: physical x2apic Dec 12 18:42:20.938452 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Dec 12 18:42:20.938459 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Dec 12 18:42:20.938466 kernel: kvm-guest: setup PV IPIs Dec 12 18:42:20.938475 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 12 18:42:20.938482 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x1cd42fed8cc, max_idle_ns: 440795202126 ns Dec 12 18:42:20.938489 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000002) Dec 12 18:42:20.938496 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 12 18:42:20.938503 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 12 18:42:20.938510 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 12 18:42:20.938517 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 12 18:42:20.938524 kernel: Spectre V2 : Mitigation: Retpolines Dec 12 18:42:20.938533 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 12 18:42:20.938540 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Dec 12 18:42:20.938547 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 12 18:42:20.938554 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 12 18:42:20.938561 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Dec 12 18:42:20.938569 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Dec 12 18:42:20.938576 kernel: active return thunk: srso_alias_return_thunk Dec 12 18:42:20.938583 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Dec 12 18:42:20.938589 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Dec 12 18:42:20.938598 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Dec 12 18:42:20.938605 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 12 18:42:20.938612 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 12 18:42:20.938619 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 12 18:42:20.938626 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Dec 12 18:42:20.938633 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 12 18:42:20.938640 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Dec 12 18:42:20.938647 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. 
Dec 12 18:42:20.938656 kernel: Freeing SMP alternatives memory: 32K Dec 12 18:42:20.938663 kernel: pid_max: default: 32768 minimum: 301 Dec 12 18:42:20.938670 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 12 18:42:20.938677 kernel: landlock: Up and running. Dec 12 18:42:20.938683 kernel: SELinux: Initializing. Dec 12 18:42:20.938690 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 12 18:42:20.938697 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 12 18:42:20.938704 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Dec 12 18:42:20.938711 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 12 18:42:20.938720 kernel: ... version: 0 Dec 12 18:42:20.938727 kernel: ... bit width: 48 Dec 12 18:42:20.938734 kernel: ... generic registers: 6 Dec 12 18:42:20.938741 kernel: ... value mask: 0000ffffffffffff Dec 12 18:42:20.938748 kernel: ... max period: 00007fffffffffff Dec 12 18:42:20.938755 kernel: ... fixed-purpose events: 0 Dec 12 18:42:20.938762 kernel: ... event mask: 000000000000003f Dec 12 18:42:20.938768 kernel: signal: max sigframe size: 3376 Dec 12 18:42:20.938775 kernel: rcu: Hierarchical SRCU implementation. Dec 12 18:42:20.938782 kernel: rcu: Max phase no-delay instances is 400. Dec 12 18:42:20.938792 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 12 18:42:20.938799 kernel: smp: Bringing up secondary CPUs ... Dec 12 18:42:20.938805 kernel: smpboot: x86: Booting SMP configuration: Dec 12 18:42:20.938812 kernel: .... node #0, CPUs: #1 Dec 12 18:42:20.938819 kernel: smp: Brought up 1 node, 2 CPUs Dec 12 18:42:20.938826 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) Dec 12 18:42:20.938833 kernel: Memory: 3952856K/4193772K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 235488K reserved, 0K cma-reserved) Dec 12 18:42:20.938840 kernel: devtmpfs: initialized Dec 12 18:42:20.938847 kernel: x86/mm: Memory block size: 128MB Dec 12 18:42:20.938856 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 12 18:42:20.938863 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 12 18:42:20.938870 kernel: pinctrl core: initialized pinctrl subsystem Dec 12 18:42:20.938877 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 12 18:42:20.938884 kernel: audit: initializing netlink subsys (disabled) Dec 12 18:42:20.938941 kernel: audit: type=2000 audit(1765564937.904:1): state=initialized audit_enabled=0 res=1 Dec 12 18:42:20.938949 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 12 18:42:20.938957 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 12 18:42:20.938963 kernel: cpuidle: using governor menu Dec 12 18:42:20.938973 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 12 18:42:20.938980 kernel: dca service started, version 1.12.1 Dec 12 18:42:20.938987 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Dec 12 18:42:20.938994 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Dec 12 18:42:20.939001 kernel: PCI: Using configuration type 1 for base access Dec 12 18:42:20.939008 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
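
The per-CPU and total BogoMIPS figures above follow directly from the preset loops_per_jiffy; lpj equals the TSC frequency in kHz, which implies HZ=1000 on this kernel (an inference, not something the log states). A worked check of the numbers:

    # BogoMIPS = lpj * HZ / 500000 (the kernel's delay-loop convention).
    lpj, hz, cpus = 2_000_002, 1000, 2
    print(lpj * hz / 500_000)          # 4000.004 -> "4000.00 BogoMIPS" per CPU
    print(cpus * lpj * hz / 500_000)   # 8000.008 -> "Total of 2 processors activated (8000.00 BogoMIPS)"
    print(lpj * hz / 1_000_000)        # 2000.002 -> "tsc: Detected 2000.002 MHz processor"
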
Dec 12 18:42:20.939015 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 12 18:42:20.939022 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 12 18:42:20.939029 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 12 18:42:20.939038 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 12 18:42:20.939046 kernel: ACPI: Added _OSI(Module Device) Dec 12 18:42:20.939052 kernel: ACPI: Added _OSI(Processor Device) Dec 12 18:42:20.939059 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 12 18:42:20.939066 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 12 18:42:20.939073 kernel: ACPI: Interpreter enabled Dec 12 18:42:20.939080 kernel: ACPI: PM: (supports S0 S3 S5) Dec 12 18:42:20.939087 kernel: ACPI: Using IOAPIC for interrupt routing Dec 12 18:42:20.939094 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 12 18:42:20.939103 kernel: PCI: Using E820 reservations for host bridge windows Dec 12 18:42:20.939110 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 12 18:42:20.939117 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 12 18:42:20.939298 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 12 18:42:20.939436 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 12 18:42:20.939560 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 12 18:42:20.939570 kernel: PCI host bridge to bus 0000:00 Dec 12 18:42:20.939698 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 12 18:42:20.939812 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 12 18:42:20.941970 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 12 18:42:20.942096 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Dec 12 18:42:20.942210 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 12 18:42:20.942321 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] Dec 12 18:42:20.942432 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 12 18:42:20.942578 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Dec 12 18:42:20.942717 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Dec 12 18:42:20.942842 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Dec 12 18:42:20.942982 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Dec 12 18:42:20.943105 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Dec 12 18:42:20.943225 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 12 18:42:20.943359 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Dec 12 18:42:20.943482 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f] Dec 12 18:42:20.943603 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Dec 12 18:42:20.943722 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Dec 12 18:42:20.943850 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 12 18:42:20.943991 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Dec 12 18:42:20.944113 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Dec 12 18:42:20.944239 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Dec 12 18:42:20.944358 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref] Dec 12 18:42:20.944486 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Dec 12 18:42:20.944774 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 12 18:42:20.945955 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Dec 12 18:42:20.946094 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df] Dec 12 18:42:20.946223 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff] Dec 12 18:42:20.946352 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Dec 12 18:42:20.946472 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Dec 12 18:42:20.946483 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 12 18:42:20.946490 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 12 18:42:20.946497 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 12 18:42:20.946504 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 12 18:42:20.946511 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 12 18:42:20.946521 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 12 18:42:20.946528 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 12 18:42:20.946535 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 12 18:42:20.946542 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 12 18:42:20.946549 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 12 18:42:20.946556 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 12 18:42:20.946563 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 12 18:42:20.946570 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 12 18:42:20.946577 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 12 18:42:20.946586 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 12 18:42:20.946593 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 12 18:42:20.946600 kernel: iommu: Default domain type: Translated Dec 12 18:42:20.946607 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 12 18:42:20.946614 kernel: PCI: Using ACPI for IRQ routing Dec 12 18:42:20.946621 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 12 18:42:20.946628 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] Dec 12 18:42:20.946635 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Dec 12 18:42:20.946755 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 12 18:42:20.946881 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 12 18:42:20.947055 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 12 18:42:20.947067 kernel: vgaarb: loaded Dec 12 18:42:20.947075 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 12 18:42:20.947082 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 12 18:42:20.947089 kernel: clocksource: Switched to clocksource kvm-clock Dec 12 18:42:20.947096 kernel: VFS: Disk quotas dquot_6.6.0 Dec 12 18:42:20.947104 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 12 18:42:20.947114 kernel: pnp: PnP ACPI init Dec 12 18:42:20.947281 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 12 18:42:20.947293 kernel: pnp: PnP ACPI: found 5 devices Dec 12 18:42:20.947300 kernel: clocksource: acpi_pm: mask: 0xffffff 
max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 12 18:42:20.947308 kernel: NET: Registered PF_INET protocol family Dec 12 18:42:20.947315 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 12 18:42:20.947322 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 12 18:42:20.947329 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 12 18:42:20.947336 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 12 18:42:20.947347 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 12 18:42:20.947354 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 12 18:42:20.947361 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 12 18:42:20.947368 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 12 18:42:20.947375 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 12 18:42:20.947382 kernel: NET: Registered PF_XDP protocol family Dec 12 18:42:20.947495 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 12 18:42:20.947606 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 12 18:42:20.947721 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 12 18:42:20.947831 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Dec 12 18:42:20.947979 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 12 18:42:20.948093 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] Dec 12 18:42:20.948103 kernel: PCI: CLS 0 bytes, default 64 Dec 12 18:42:20.948110 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 12 18:42:20.948117 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) Dec 12 18:42:20.948124 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x1cd42fed8cc, max_idle_ns: 440795202126 ns Dec 12 18:42:20.948132 kernel: Initialise system trusted keyrings Dec 12 18:42:20.948143 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 12 18:42:20.948150 kernel: Key type asymmetric registered Dec 12 18:42:20.948156 kernel: Asymmetric key parser 'x509' registered Dec 12 18:42:20.948163 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 12 18:42:20.948170 kernel: io scheduler mq-deadline registered Dec 12 18:42:20.948177 kernel: io scheduler kyber registered Dec 12 18:42:20.948184 kernel: io scheduler bfq registered Dec 12 18:42:20.948192 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 12 18:42:20.948199 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 12 18:42:20.948209 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 12 18:42:20.948216 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 12 18:42:20.948223 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 12 18:42:20.948230 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 12 18:42:20.948237 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 12 18:42:20.948244 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 12 18:42:20.948376 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 12 18:42:20.948387 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 12 18:42:20.948506 kernel: rtc_cmos 00:03: registered as rtc0 Dec 12 18:42:20.948621 kernel: rtc_cmos 00:03: setting system clock to 
2025-12-12T18:42:20 UTC (1765564940) Dec 12 18:42:20.948735 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 12 18:42:20.948744 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 12 18:42:20.948751 kernel: NET: Registered PF_INET6 protocol family Dec 12 18:42:20.948758 kernel: Segment Routing with IPv6 Dec 12 18:42:20.948766 kernel: In-situ OAM (IOAM) with IPv6 Dec 12 18:42:20.948773 kernel: NET: Registered PF_PACKET protocol family Dec 12 18:42:20.948780 kernel: Key type dns_resolver registered Dec 12 18:42:20.948790 kernel: IPI shorthand broadcast: enabled Dec 12 18:42:20.948797 kernel: sched_clock: Marking stable (3582003863, 334327512)->(4017770882, -101439507) Dec 12 18:42:20.948804 kernel: registered taskstats version 1 Dec 12 18:42:20.948811 kernel: Loading compiled-in X.509 certificates Dec 12 18:42:20.948818 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d' Dec 12 18:42:20.948825 kernel: Demotion targets for Node 0: null Dec 12 18:42:20.948832 kernel: Key type .fscrypt registered Dec 12 18:42:20.948839 kernel: Key type fscrypt-provisioning registered Dec 12 18:42:20.948846 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 12 18:42:20.948855 kernel: ima: Allocated hash algorithm: sha1 Dec 12 18:42:20.948862 kernel: ima: No architecture policies found Dec 12 18:42:20.948869 kernel: clk: Disabling unused clocks Dec 12 18:42:20.948876 kernel: Warning: unable to open an initial console. Dec 12 18:42:20.948884 kernel: Freeing unused kernel image (initmem) memory: 46188K Dec 12 18:42:20.948891 kernel: Write protecting the kernel read-only data: 40960k Dec 12 18:42:20.948928 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Dec 12 18:42:20.948935 kernel: Run /init as init process Dec 12 18:42:20.948945 kernel: with arguments: Dec 12 18:42:20.948952 kernel: /init Dec 12 18:42:20.948959 kernel: with environment: Dec 12 18:42:20.948980 kernel: HOME=/ Dec 12 18:42:20.948989 kernel: TERM=linux Dec 12 18:42:20.948997 systemd[1]: Successfully made /usr/ read-only. Dec 12 18:42:20.949008 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 18:42:20.949016 systemd[1]: Detected virtualization kvm. Dec 12 18:42:20.949026 systemd[1]: Detected architecture x86-64. Dec 12 18:42:20.949033 systemd[1]: Running in initrd. Dec 12 18:42:20.949041 systemd[1]: No hostname configured, using default hostname. Dec 12 18:42:20.949049 systemd[1]: Hostname set to . Dec 12 18:42:20.949057 systemd[1]: Initializing machine ID from random generator. Dec 12 18:42:20.949064 systemd[1]: Queued start job for default target initrd.target. Dec 12 18:42:20.949074 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:42:20.949082 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:42:20.949093 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 12 18:42:20.949101 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Dec 12 18:42:20.949109 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 12 18:42:20.949117 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 12 18:42:20.949126 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 12 18:42:20.949134 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 12 18:42:20.949142 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:42:20.949152 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:42:20.949160 systemd[1]: Reached target paths.target - Path Units. Dec 12 18:42:20.949168 systemd[1]: Reached target slices.target - Slice Units. Dec 12 18:42:20.949175 systemd[1]: Reached target swap.target - Swaps. Dec 12 18:42:20.949183 systemd[1]: Reached target timers.target - Timer Units. Dec 12 18:42:20.949191 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 18:42:20.949199 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 18:42:20.949207 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 12 18:42:20.949217 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 12 18:42:20.949224 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:42:20.949232 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 18:42:20.949242 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:42:20.949251 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 18:42:20.949258 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 12 18:42:20.949268 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 18:42:20.949276 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 12 18:42:20.949284 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 12 18:42:20.949292 systemd[1]: Starting systemd-fsck-usr.service... Dec 12 18:42:20.949300 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 18:42:20.949308 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 18:42:20.949338 systemd-journald[187]: Collecting audit messages is disabled. Dec 12 18:42:20.949359 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:42:20.949367 systemd-journald[187]: Journal started Dec 12 18:42:20.949386 systemd-journald[187]: Runtime Journal (/run/log/journal/cc95550edefc40438f0d786adc837cfa) is 8M, max 78.2M, 70.2M free. Dec 12 18:42:20.954215 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 12 18:42:20.959070 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 18:42:20.960053 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:42:20.962327 systemd[1]: Finished systemd-fsck-usr.service. Dec 12 18:42:20.963248 systemd-modules-load[188]: Inserted module 'overlay' Dec 12 18:42:20.970044 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
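
The oddly spelled device units above (dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device and friends) are just block-device paths run through systemd's unit-name escaping: path separators become "-" and other punctuation becomes a \xNN byte escape. A simplified re-implementation that ignores corner cases (leading dots, empty components, non-ASCII) the real systemd-escape handles:

    # Equivalent in spirit to:
    #   systemd-escape --path --suffix=device /dev/disk/by-label/EFI-SYSTEM
    def path_to_unit(path, suffix="device"):
        escape = lambda s: "".join(
            c if c.isalnum() or c in "_." else "\\x%02x" % ord(c) for c in s
        )
        parts = path.strip("/").split("/")
        return "-".join(escape(p) for p in parts) + "." + suffix

    print(path_to_unit("/dev/disk/by-label/EFI-SYSTEM"))
    # dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, matching the unit the log waits for
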
Dec 12 18:42:20.981404 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 18:42:21.099584 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 12 18:42:21.099616 kernel: Bridge firewalling registered Dec 12 18:42:21.010660 systemd-modules-load[188]: Inserted module 'br_netfilter' Dec 12 18:42:21.021629 systemd-tmpfiles[199]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 12 18:42:21.101168 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 18:42:21.102230 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:42:21.104014 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 18:42:21.106199 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:42:21.112137 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 12 18:42:21.115003 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 18:42:21.121011 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 18:42:21.136633 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 18:42:21.137708 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:42:21.141788 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 12 18:42:21.146001 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 18:42:21.149365 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:42:21.170248 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 12 18:42:21.201169 systemd-resolved[225]: Positive Trust Anchors: Dec 12 18:42:21.201182 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 18:42:21.201210 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 18:42:21.206066 systemd-resolved[225]: Defaulting to hostname 'linux'. Dec 12 18:42:21.210359 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 18:42:21.211768 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:42:21.282953 kernel: SCSI subsystem initialized Dec 12 18:42:21.292935 kernel: Loading iSCSI transport class v2.0-870. 
Dec 12 18:42:21.306945 kernel: iscsi: registered transport (tcp) Dec 12 18:42:21.329196 kernel: iscsi: registered transport (qla4xxx) Dec 12 18:42:21.329258 kernel: QLogic iSCSI HBA Driver Dec 12 18:42:21.356718 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 18:42:21.372256 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:42:21.376481 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 18:42:21.429338 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 12 18:42:21.431562 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 12 18:42:21.483917 kernel: raid6: avx2x4 gen() 29715 MB/s Dec 12 18:42:21.501926 kernel: raid6: avx2x2 gen() 28630 MB/s Dec 12 18:42:21.520183 kernel: raid6: avx2x1 gen() 21344 MB/s Dec 12 18:42:21.520212 kernel: raid6: using algorithm avx2x4 gen() 29715 MB/s Dec 12 18:42:21.541183 kernel: raid6: .... xor() 4575 MB/s, rmw enabled Dec 12 18:42:21.541219 kernel: raid6: using avx2x2 recovery algorithm Dec 12 18:42:21.563920 kernel: xor: automatically using best checksumming function avx Dec 12 18:42:21.705927 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 12 18:42:21.713533 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 12 18:42:21.716134 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:42:21.745987 systemd-udevd[435]: Using default interface naming scheme 'v255'. Dec 12 18:42:21.752088 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:42:21.757037 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 12 18:42:21.786023 dracut-pre-trigger[442]: rd.md=0: removing MD RAID activation Dec 12 18:42:21.816453 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 18:42:21.819136 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 18:42:21.895980 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:42:21.900152 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 18:42:21.968918 kernel: cryptd: max_cpu_qlen set to 1000 Dec 12 18:42:21.985303 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues Dec 12 18:42:22.000114 kernel: scsi host0: Virtio SCSI HBA Dec 12 18:42:22.004926 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Dec 12 18:42:22.004989 kernel: libata version 3.00 loaded. Dec 12 18:42:22.009955 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:42:22.011079 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:42:22.015911 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:42:22.020455 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:42:22.026306 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
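
The raid6/xor lines above are the kernel benchmarking each SIMD implementation and keeping the fastest (avx2x4 at roughly 29.7 GB/s here). The selection logic amounts to "time every candidate over the same buffer and take the maximum"; a toy sketch of that idea with made-up candidates:

    import time

    def pick_fastest(candidates, buf):
        rates = {}
        for name, fn in candidates.items():
            t0 = time.perf_counter()
            fn(buf)
            rates[name] = len(buf) / (time.perf_counter() - t0) / 2**20  # MiB/s
        best = max(rates, key=rates.get)
        return best, rates[best]

    buf = bytes(1 << 20)   # 1 MiB test buffer of zeros
    candidates = {
        "bytewise-xor": lambda b: bytes(x ^ 0xFF for x in b),
        "bigint-xor":   lambda b: int.from_bytes(b, "little") ^ ((1 << (8 * len(b))) - 1),
    }
    print(pick_fastest(candidates, buf))   # the wider implementation usually wins
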
Dec 12 18:42:22.037516 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 12 18:42:22.214525 kernel: ahci 0000:00:1f.2: version 3.0 Dec 12 18:42:22.214765 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 12 18:42:22.223721 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 12 18:42:22.225023 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 12 18:42:22.225191 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 12 18:42:22.230657 kernel: AES CTR mode by8 optimization enabled Dec 12 18:42:22.230681 kernel: sd 0:0:0:0: Power-on or device reset occurred Dec 12 18:42:22.239944 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Dec 12 18:42:22.250933 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 12 18:42:22.251177 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Dec 12 18:42:22.251705 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 12 18:42:22.252923 kernel: scsi host1: ahci Dec 12 18:42:22.257915 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 12 18:42:22.257938 kernel: GPT:9289727 != 167739391 Dec 12 18:42:22.257950 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 12 18:42:22.257960 kernel: GPT:9289727 != 167739391 Dec 12 18:42:22.257969 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 12 18:42:22.257978 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 12 18:42:22.257988 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 12 18:42:22.259924 kernel: scsi host2: ahci Dec 12 18:42:22.263919 kernel: scsi host3: ahci Dec 12 18:42:22.264114 kernel: scsi host4: ahci Dec 12 18:42:22.285814 kernel: scsi host5: ahci Dec 12 18:42:22.289948 kernel: scsi host6: ahci Dec 12 18:42:22.290128 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 29 lpm-pol 1 Dec 12 18:42:22.290141 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 29 lpm-pol 1 Dec 12 18:42:22.290155 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 29 lpm-pol 1 Dec 12 18:42:22.290165 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 29 lpm-pol 1 Dec 12 18:42:22.290175 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 29 lpm-pol 1 Dec 12 18:42:22.290185 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 29 lpm-pol 1 Dec 12 18:42:22.350936 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Dec 12 18:42:22.471867 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:42:22.486778 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Dec 12 18:42:22.495511 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 12 18:42:22.502797 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Dec 12 18:42:22.503612 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Dec 12 18:42:22.506987 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 12 18:42:22.528416 disk-uuid[592]: Primary Header is updated. Dec 12 18:42:22.528416 disk-uuid[592]: Secondary Entries is updated. Dec 12 18:42:22.528416 disk-uuid[592]: Secondary Header is updated. 
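
The GPT complaints above ("9289727 != 167739391", "Alt. header not at the end of the disk") are consistent with a partition table written for a ~4.4 GiB image (backup header at sector 9289727) that was then copied onto an 80 GiB volume (167739392 sectors, matching the sda capacity line); the backup header is simply no longer at the end of the disk, and the disk-uuid step that follows rewrites it. A sketch of the underlying check, assuming 512-byte sectors and an illustrative device path:

    import os, struct

    def gpt_alt_vs_disk_end(dev="/dev/sda", sector=512):
        with open(dev, "rb") as f:
            size = f.seek(0, os.SEEK_END)        # device size in bytes
            f.seek(1 * sector)                   # primary GPT header lives at LBA 1
            header = f.read(sector)
        assert header[:8] == b"EFI PART", "no GPT signature"
        alt_lba = struct.unpack_from("<Q", header, 32)[0]   # "alternate LBA" field
        last_lba = size // sector - 1
        return alt_lba, last_lba                 # 9289727 vs 167739391 in this log

    print(gpt_alt_vs_disk_end())
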
Dec 12 18:42:22.538206 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 12 18:42:22.552916 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 12 18:42:22.605366 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 12 18:42:22.605404 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 12 18:42:22.606684 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 12 18:42:22.609240 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 12 18:42:22.612063 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 12 18:42:22.616920 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 12 18:42:22.727539 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 12 18:42:22.744179 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 18:42:22.745191 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:42:22.747045 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 18:42:22.751033 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 18:42:22.789446 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 12 18:42:23.550929 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 12 18:42:23.552086 disk-uuid[593]: The operation has completed successfully. Dec 12 18:42:23.615851 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 18:42:23.616048 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 18:42:23.649304 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 12 18:42:23.666750 sh[633]: Success Dec 12 18:42:23.689258 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 12 18:42:23.689303 kernel: device-mapper: uevent: version 1.0.3 Dec 12 18:42:23.690312 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 12 18:42:23.702956 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 12 18:42:23.747337 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 12 18:42:23.750975 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 12 18:42:23.762461 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 12 18:42:23.775926 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (645) Dec 12 18:42:23.780430 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8 Dec 12 18:42:23.780497 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:42:23.793543 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 12 18:42:23.793590 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 12 18:42:23.793604 kernel: BTRFS info (device dm-0): enabling free space tree Dec 12 18:42:23.797690 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 12 18:42:23.799724 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 12 18:42:23.801474 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 12 18:42:23.803001 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
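
verity-setup maps /usr through dm-verity: the partition named by verity.usr= is read-only, and every read is checked against a sha256 hash tree whose root must equal the verity.usrhash= value from the command line. A deliberately simplified illustration of the hash-tree idea (real dm-verity adds a salt, a superblock, and a fixed on-disk layout):

    import hashlib

    def verity_root(blocks):
        # Hash every data block, then hash pairs of hashes level by level
        # until one root remains (assumes a power-of-two block count).
        level = [hashlib.sha256(b).digest() for b in blocks]
        while len(level) > 1:
            level = [hashlib.sha256(a + b).digest()
                     for a, b in zip(level[0::2], level[1::2])]
        return level[0].hex()

    data = bytes(range(256)) * 64                        # toy 16 KiB "partition"
    blocks = [data[i:i + 4096] for i in range(0, len(data), 4096)]
    print(verity_root(blocks))   # compare against the expected root before trusting /usr
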
Dec 12 18:42:23.806015 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 12 18:42:23.839918 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (678) Dec 12 18:42:23.846468 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:42:23.846492 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:42:23.857063 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 12 18:42:23.857095 kernel: BTRFS info (device sda6): turning on async discard Dec 12 18:42:23.857113 kernel: BTRFS info (device sda6): enabling free space tree Dec 12 18:42:23.864916 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:42:23.865579 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 12 18:42:23.869021 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 12 18:42:23.958556 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 18:42:23.982402 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 18:42:24.030413 ignition[751]: Ignition 2.22.0 Dec 12 18:42:24.030426 ignition[751]: Stage: fetch-offline Dec 12 18:42:24.030455 ignition[751]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:42:24.030465 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 12 18:42:24.030540 ignition[751]: parsed url from cmdline: "" Dec 12 18:42:24.030544 ignition[751]: no config URL provided Dec 12 18:42:24.030549 ignition[751]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 18:42:24.036274 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 18:42:24.030557 ignition[751]: no config at "/usr/lib/ignition/user.ign" Dec 12 18:42:24.030562 ignition[751]: failed to fetch config: resource requires networking Dec 12 18:42:24.031844 ignition[751]: Ignition finished successfully Dec 12 18:42:24.055780 systemd-networkd[818]: lo: Link UP Dec 12 18:42:24.055793 systemd-networkd[818]: lo: Gained carrier Dec 12 18:42:24.057399 systemd-networkd[818]: Enumeration completed Dec 12 18:42:24.057787 systemd-networkd[818]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:42:24.057792 systemd-networkd[818]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 18:42:24.057985 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 18:42:24.059922 systemd[1]: Reached target network.target - Network. Dec 12 18:42:24.060662 systemd-networkd[818]: eth0: Link UP Dec 12 18:42:24.060834 systemd-networkd[818]: eth0: Gained carrier Dec 12 18:42:24.060844 systemd-networkd[818]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:42:24.063217 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 12 18:42:24.096626 ignition[823]: Ignition 2.22.0 Dec 12 18:42:24.096637 ignition[823]: Stage: fetch Dec 12 18:42:24.096762 ignition[823]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:42:24.096773 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 12 18:42:24.096852 ignition[823]: parsed url from cmdline: "" Dec 12 18:42:24.096856 ignition[823]: no config URL provided Dec 12 18:42:24.096861 ignition[823]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 18:42:24.096870 ignition[823]: no config at "/usr/lib/ignition/user.ign" Dec 12 18:42:24.097924 ignition[823]: PUT http://169.254.169.254/v1/token: attempt #1 Dec 12 18:42:24.098104 ignition[823]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Dec 12 18:42:24.298794 ignition[823]: PUT http://169.254.169.254/v1/token: attempt #2 Dec 12 18:42:24.298969 ignition[823]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Dec 12 18:42:24.699308 ignition[823]: PUT http://169.254.169.254/v1/token: attempt #3 Dec 12 18:42:24.699480 ignition[823]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Dec 12 18:42:24.929974 systemd-networkd[818]: eth0: DHCPv4 address 172.239.194.183/24, gateway 172.239.194.1 acquired from 23.40.197.6 Dec 12 18:42:25.500244 ignition[823]: PUT http://169.254.169.254/v1/token: attempt #4 Dec 12 18:42:25.598719 ignition[823]: PUT result: OK Dec 12 18:42:25.598776 ignition[823]: GET http://169.254.169.254/v1/user-data: attempt #1 Dec 12 18:42:25.705724 ignition[823]: GET result: OK Dec 12 18:42:25.706363 ignition[823]: parsing config with SHA512: e4406f2c80748d0da0b8cc6f4cdaf85992fee59fd10c6bbc8808bd98671d51353ea35fd92b7656729949945dc9c752569bce160e2e1cb029250694c434f608cd Dec 12 18:42:25.713770 unknown[823]: fetched base config from "system" Dec 12 18:42:25.713782 unknown[823]: fetched base config from "system" Dec 12 18:42:25.714030 ignition[823]: fetch: fetch complete Dec 12 18:42:25.713787 unknown[823]: fetched user config from "akamai" Dec 12 18:42:25.714035 ignition[823]: fetch: fetch passed Dec 12 18:42:25.720187 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 12 18:42:25.714076 ignition[823]: Ignition finished successfully Dec 12 18:42:25.741035 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 12 18:42:25.770869 ignition[830]: Ignition 2.22.0 Dec 12 18:42:25.770887 ignition[830]: Stage: kargs Dec 12 18:42:25.771026 ignition[830]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:42:25.771036 ignition[830]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 12 18:42:25.771831 ignition[830]: kargs: kargs passed Dec 12 18:42:25.771870 ignition[830]: Ignition finished successfully Dec 12 18:42:25.775157 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 18:42:25.777715 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 12 18:42:25.805576 ignition[836]: Ignition 2.22.0 Dec 12 18:42:25.805593 ignition[836]: Stage: disks Dec 12 18:42:25.805711 ignition[836]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:42:25.805722 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 12 18:42:25.809175 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
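
The fetch stage above shows Ignition's pattern against the Akamai metadata service: PUT http://169.254.169.254/v1/token until the link has an address (the first three attempts fail with "network is unreachable", DHCP completes at 18:42:24.9, and attempt #4 succeeds), then GET /v1/user-data with the token. A rough sketch of the same token-then-fetch loop; only the two URLs come from the log, while the retry timings and header name here are assumptions:

    import time, urllib.request

    BASE = "http://169.254.169.254/v1"

    def fetch_user_data(attempts=5, delay=1.0):
        token = None
        for n in range(1, attempts + 1):
            try:
                req = urllib.request.Request(BASE + "/token", method="PUT")
                token = urllib.request.urlopen(req, timeout=5).read().decode()
                break
            except OSError as exc:           # e.g. "network is unreachable"
                print(f"PUT {BASE}/token: attempt #{n} failed: {exc}")
                time.sleep(delay)
                delay *= 2                    # back off, as the spacing in the log suggests
        if token is None:
            raise RuntimeError("metadata service never became reachable")
        req = urllib.request.Request(BASE + "/user-data",
                                     headers={"Metadata-Token": token})
        return urllib.request.urlopen(req, timeout=5).read()
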
Dec 12 18:42:25.806751 ignition[836]: disks: disks passed Dec 12 18:42:25.811161 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 12 18:42:25.806791 ignition[836]: Ignition finished successfully Dec 12 18:42:25.812262 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 18:42:25.813859 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 18:42:25.815572 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 18:42:25.817428 systemd[1]: Reached target basic.target - Basic System. Dec 12 18:42:25.820367 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 12 18:42:25.847973 systemd-fsck[845]: ROOT: clean, 15/553520 files, 52789/553472 blocks Dec 12 18:42:25.850001 systemd-networkd[818]: eth0: Gained IPv6LL Dec 12 18:42:25.853176 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 18:42:25.856269 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 18:42:25.982922 kernel: EXT4-fs (sda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none. Dec 12 18:42:25.983615 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 18:42:25.985156 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 18:42:25.988873 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 18:42:25.992970 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 18:42:25.995130 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 12 18:42:25.995192 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 18:42:25.995222 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 18:42:26.004883 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 12 18:42:26.007581 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 12 18:42:26.016927 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (853) Dec 12 18:42:26.016959 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:42:26.021996 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:42:26.028159 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 12 18:42:26.028182 kernel: BTRFS info (device sda6): turning on async discard Dec 12 18:42:26.032645 kernel: BTRFS info (device sda6): enabling free space tree Dec 12 18:42:26.037321 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 18:42:26.069262 initrd-setup-root[877]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 18:42:26.075120 initrd-setup-root[884]: cut: /sysroot/etc/group: No such file or directory Dec 12 18:42:26.080381 initrd-setup-root[891]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 18:42:26.085472 initrd-setup-root[898]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 18:42:26.191764 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 18:42:26.194191 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 12 18:42:26.198062 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Dec 12 18:42:26.216286 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 12 18:42:26.217390 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:42:26.232393 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 12 18:42:26.248214 ignition[967]: INFO : Ignition 2.22.0 Dec 12 18:42:26.248214 ignition[967]: INFO : Stage: mount Dec 12 18:42:26.249802 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:42:26.249802 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 12 18:42:26.249802 ignition[967]: INFO : mount: mount passed Dec 12 18:42:26.249802 ignition[967]: INFO : Ignition finished successfully Dec 12 18:42:26.251869 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 12 18:42:26.256238 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 18:42:26.985152 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 18:42:27.018981 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (977) Dec 12 18:42:27.026538 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:42:27.026590 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:42:27.034539 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 12 18:42:27.034568 kernel: BTRFS info (device sda6): turning on async discard Dec 12 18:42:27.034582 kernel: BTRFS info (device sda6): enabling free space tree Dec 12 18:42:27.039156 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 18:42:27.076140 ignition[993]: INFO : Ignition 2.22.0 Dec 12 18:42:27.076140 ignition[993]: INFO : Stage: files Dec 12 18:42:27.078242 ignition[993]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:42:27.078242 ignition[993]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 12 18:42:27.078242 ignition[993]: DEBUG : files: compiled without relabeling support, skipping Dec 12 18:42:27.081183 ignition[993]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 18:42:27.081183 ignition[993]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 18:42:27.081183 ignition[993]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 18:42:27.081183 ignition[993]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 18:42:27.085045 ignition[993]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 18:42:27.085045 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 12 18:42:27.085045 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 12 18:42:27.081215 unknown[993]: wrote ssh authorized keys file for user: core Dec 12 18:42:27.418359 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 18:42:27.617500 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 12 18:42:27.617500 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 12 18:42:27.620395 ignition[993]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 18:42:27.620395 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 18:42:27.620395 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 18:42:27.620395 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 18:42:27.620395 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 18:42:27.620395 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 18:42:27.620395 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 18:42:27.620395 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 18:42:27.651337 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 18:42:27.651337 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 12 18:42:27.651337 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 12 18:42:27.651337 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 12 18:42:27.651337 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Dec 12 18:42:28.094022 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 12 18:42:28.337644 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 12 18:42:28.337644 ignition[993]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 12 18:42:28.340253 ignition[993]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 18:42:28.341802 ignition[993]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 18:42:28.344232 ignition[993]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 12 18:42:28.344232 ignition[993]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 12 18:42:28.344232 ignition[993]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 12 18:42:28.344232 ignition[993]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 12 18:42:28.344232 
ignition[993]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 12 18:42:28.344232 ignition[993]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Dec 12 18:42:28.344232 ignition[993]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 18:42:28.344232 ignition[993]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 18:42:28.344232 ignition[993]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 18:42:28.344232 ignition[993]: INFO : files: files passed Dec 12 18:42:28.344232 ignition[993]: INFO : Ignition finished successfully Dec 12 18:42:28.345432 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 18:42:28.350036 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 18:42:28.360020 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 18:42:28.364714 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 18:42:28.374068 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 18:42:28.385035 initrd-setup-root-after-ignition[1023]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:42:28.385035 initrd-setup-root-after-ignition[1023]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:42:28.388484 initrd-setup-root-after-ignition[1027]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:42:28.391042 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 18:42:28.393193 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 18:42:28.395448 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 18:42:28.449074 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 18:42:28.449212 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 18:42:28.451038 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 18:42:28.452637 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 18:42:28.454667 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 18:42:28.455628 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 18:42:28.499546 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 18:42:28.502398 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 18:42:28.533659 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:42:28.534658 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:42:28.536361 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 18:42:28.538369 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 18:42:28.538511 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 18:42:28.540283 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 18:42:28.541365 systemd[1]: Stopped target basic.target - Basic System. 
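Everything the files stage reports writing lands under /sysroot before the pivot. A quick verification sketch over those paths (taken from the log above); it would have to run from the initramfs while /sysroot is still mounted, and it is not something Ignition itself executes.

```python
# Post-hoc check of the artifacts the files stage above reports writing
# under /sysroot. Paths come from the log; verification sketch only.
import os

SYSROOT = "/sysroot"
expected = [
    "opt/helm-v3.17.3-linux-amd64.tar.gz",
    "home/core/install.sh",
    "etc/flatcar/update.conf",
    "etc/systemd/system/prepare-helm.service",
    "etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf",
    "etc/.ignition-result.json",
]

for rel in expected:
    path = os.path.join(SYSROOT, rel)
    print(f"{'ok     ' if os.path.exists(path) else 'MISSING'} {path}")

link = os.path.join(SYSROOT, "etc/extensions/kubernetes.raw")
if os.path.islink(link):
    print("kubernetes.raw ->", os.readlink(link))
```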
Dec 12 18:42:28.543240 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 18:42:28.545055 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 18:42:28.546810 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 18:42:28.548837 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 18:42:28.551028 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 18:42:28.553062 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 18:42:28.555013 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 18:42:28.556705 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 18:42:28.558367 systemd[1]: Stopped target swap.target - Swaps. Dec 12 18:42:28.559947 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 18:42:28.560086 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 18:42:28.562060 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:42:28.563308 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:42:28.565144 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 18:42:28.566227 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:42:28.567217 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 18:42:28.567314 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 18:42:28.569606 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 18:42:28.569718 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 18:42:28.570763 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 18:42:28.570919 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 18:42:28.573976 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 18:42:28.578214 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 18:42:28.578327 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:42:28.586030 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 18:42:28.587588 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 18:42:28.587701 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:42:28.591446 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 18:42:28.591551 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 18:42:28.611096 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 18:42:28.618824 ignition[1047]: INFO : Ignition 2.22.0 Dec 12 18:42:28.618824 ignition[1047]: INFO : Stage: umount Dec 12 18:42:28.618824 ignition[1047]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:42:28.618824 ignition[1047]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 12 18:42:28.618824 ignition[1047]: INFO : umount: umount passed Dec 12 18:42:28.618824 ignition[1047]: INFO : Ignition finished successfully Dec 12 18:42:28.611213 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 18:42:28.622272 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Dec 12 18:42:28.623048 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 18:42:28.628015 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 18:42:28.628074 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 18:42:28.629001 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 18:42:28.629074 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 18:42:28.630291 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 12 18:42:28.630363 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 12 18:42:28.631983 systemd[1]: Stopped target network.target - Network. Dec 12 18:42:28.634452 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 18:42:28.634513 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 18:42:28.635418 systemd[1]: Stopped target paths.target - Path Units. Dec 12 18:42:28.636945 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 18:42:28.641000 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:42:28.642676 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 18:42:28.644331 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 18:42:28.646055 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 18:42:28.646107 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 18:42:28.649041 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 18:42:28.649090 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 18:42:28.650339 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 18:42:28.650393 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 18:42:28.653029 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 18:42:28.653078 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 18:42:28.655227 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 18:42:28.658246 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 18:42:28.662315 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 18:42:28.663486 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 18:42:28.663591 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 18:42:28.665273 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 18:42:28.665379 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 18:42:28.670120 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 12 18:42:28.670352 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 18:42:28.670469 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 18:42:28.673043 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 12 18:42:28.674722 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 18:42:28.676741 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 18:42:28.676987 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:42:28.678622 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 18:42:28.678679 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Dec 12 18:42:28.680838 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 18:42:28.682343 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 18:42:28.682400 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 18:42:28.685448 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 18:42:28.685496 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:42:28.688566 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 18:42:28.688617 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 18:42:28.690093 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 18:42:28.690145 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:42:28.692946 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:42:28.697497 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 12 18:42:28.697582 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 12 18:42:28.713754 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 18:42:28.713877 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 18:42:28.715489 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 18:42:28.715846 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:42:28.718152 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 18:42:28.718225 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 18:42:28.719979 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 18:42:28.720020 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:42:28.721412 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 18:42:28.721463 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 18:42:28.724086 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 18:42:28.724134 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 18:42:28.726181 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 18:42:28.726233 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 18:42:28.729074 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 18:42:28.731222 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 18:42:28.731276 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:42:28.735098 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 18:42:28.735149 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:42:28.737001 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:42:28.737051 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:42:28.744389 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Dec 12 18:42:28.744458 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. 
Dec 12 18:42:28.744508 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 12 18:42:28.756449 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 18:42:28.756566 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 18:42:28.759443 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 18:42:28.761820 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 18:42:28.782352 systemd[1]: Switching root. Dec 12 18:42:28.851811 systemd-journald[187]: Journal stopped Dec 12 18:42:30.257759 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Dec 12 18:42:30.257825 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 18:42:30.257839 kernel: SELinux: policy capability open_perms=1 Dec 12 18:42:30.257850 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 18:42:30.257860 kernel: SELinux: policy capability always_check_network=0 Dec 12 18:42:30.257873 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 18:42:30.257883 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 18:42:30.257907 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 18:42:30.257918 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 18:42:30.257929 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 18:42:30.257940 kernel: audit: type=1403 audit(1765564948.993:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 12 18:42:30.257953 systemd[1]: Successfully loaded SELinux policy in 77.231ms. Dec 12 18:42:30.257973 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.684ms. Dec 12 18:42:30.257993 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 18:42:30.258010 systemd[1]: Detected virtualization kvm. Dec 12 18:42:30.258022 systemd[1]: Detected architecture x86-64. Dec 12 18:42:30.258037 systemd[1]: Detected first boot. Dec 12 18:42:30.258049 systemd[1]: Initializing machine ID from random generator. Dec 12 18:42:30.258060 zram_generator::config[1091]: No configuration found. Dec 12 18:42:30.258072 kernel: Guest personality initialized and is inactive Dec 12 18:42:30.258083 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 12 18:42:30.258093 kernel: Initialized host personality Dec 12 18:42:30.258104 kernel: NET: Registered PF_VSOCK protocol family Dec 12 18:42:30.258114 systemd[1]: Populated /etc with preset unit settings. Dec 12 18:42:30.258130 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 12 18:42:30.258142 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 18:42:30.258153 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 18:42:30.258165 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 18:42:30.258176 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 18:42:30.258187 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 18:42:30.258199 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
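"Initializing machine ID from random generator" on a first boot means systemd synthesizes /etc/machine-id from 16 random bytes, marked as a version-4 UUID the way sd_id128_randomize() does, and rendered as 32 lower-case hex characters. A rough sketch of that derivation, for illustration only:

```python
# Rough sketch of a randomly generated machine ID: 16 random bytes with
# UUIDv4 version/variant bits set, printed as 32 lower-case hex characters.
import os

def random_machine_id() -> str:
    b = bytearray(os.urandom(16))
    b[6] = (b[6] & 0x0F) | 0x40   # version 4
    b[8] = (b[8] & 0x3F) | 0x80   # RFC 4122 variant
    return bytes(b).hex()

print(random_machine_id())
```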
Dec 12 18:42:30.258213 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 18:42:30.258224 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 18:42:30.258235 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 18:42:30.258247 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 18:42:30.258258 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 18:42:30.258270 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:42:30.258282 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:42:30.258296 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 18:42:30.258310 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 18:42:30.258325 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 18:42:30.258337 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 18:42:30.258348 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 12 18:42:30.258360 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:42:30.258372 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:42:30.258383 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 18:42:30.258397 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 18:42:30.258408 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 18:42:30.258420 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 18:42:30.258431 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:42:30.258443 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 18:42:30.258455 systemd[1]: Reached target slices.target - Slice Units. Dec 12 18:42:30.258466 systemd[1]: Reached target swap.target - Swaps. Dec 12 18:42:30.258478 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 18:42:30.258489 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 18:42:30.258503 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 18:42:30.258515 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:42:30.258527 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 18:42:30.258538 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:42:30.258553 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 18:42:30.258564 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 18:42:30.258575 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 18:42:30.258586 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 18:42:30.258597 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:42:30.258608 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
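"Populated /etc with preset unit settings" (a few entries back) applies systemd preset policy: the first matching enable/disable line across the preset files wins, and a unit with no match defaults to enabled. A minimal sketch of that first-match rule, using the prepare-helm.service preset the Ignition files stage set earlier; the preset file name and the trailing "disable *" catch-all are assumptions for illustration.

```python
# First-match-wins evaluation of systemd preset lines (sketch).
from fnmatch import fnmatch

preset_lines = [
    "enable prepare-helm.service",   # e.g. from an Ignition-written preset (assumed file name)
    "disable *",                     # typical catch-all default (assumed)
]

def preset_action(unit: str) -> str:
    for line in preset_lines:
        action, pattern = line.split(None, 1)
        if fnmatch(unit, pattern):
            return action
    return "enable"  # systemd's documented default when nothing matches

print(preset_action("prepare-helm.service"))  # -> enable
print(preset_action("foo.service"))           # -> disable
```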
Dec 12 18:42:30.258619 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 18:42:30.258630 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 18:42:30.258644 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 18:42:30.258656 systemd[1]: Reached target machines.target - Containers. Dec 12 18:42:30.258667 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 18:42:30.258678 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:42:30.258689 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 18:42:30.258700 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 18:42:30.258711 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:42:30.258722 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 18:42:30.258734 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:42:30.266889 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 18:42:30.266947 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:42:30.266962 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 18:42:30.266974 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 18:42:30.266986 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 18:42:30.266999 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 18:42:30.267011 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 18:42:30.267024 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:42:30.267043 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 18:42:30.267055 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 18:42:30.267067 kernel: fuse: init (API version 7.41) Dec 12 18:42:30.267079 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 18:42:30.267091 kernel: loop: module loaded Dec 12 18:42:30.267102 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 18:42:30.267114 kernel: ACPI: bus type drm_connector registered Dec 12 18:42:30.267125 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 18:42:30.267139 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 18:42:30.267151 systemd[1]: verity-setup.service: Deactivated successfully. Dec 12 18:42:30.267194 systemd[1]: Stopped verity-setup.service. Dec 12 18:42:30.267207 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:42:30.267219 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
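The modprobe@<name>.service instances above each load one kernel module (configfs, dm_mod, drm, efi_pstore, fuse, loop; the kernel confirms fuse and loop directly). A tiny check of which of those names show up as loaded modules afterwards, by reading /proc/modules; names built into the kernel will not appear there.

```python
# Check which of the modules targeted by modprobe@ above appear in
# /proc/modules (built-in modules are not listed there).
wanted = {"configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"}

with open("/proc/modules") as fh:
    loaded = {line.split()[0] for line in fh}

for mod in sorted(wanted):
    print(mod, "loaded" if mod in loaded else "not listed (may be built-in)")
```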
Dec 12 18:42:30.267231 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 18:42:30.267243 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 18:42:30.267254 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 18:42:30.267266 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 18:42:30.267281 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 18:42:30.267340 systemd-journald[1182]: Collecting audit messages is disabled. Dec 12 18:42:30.267367 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 18:42:30.267380 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:42:30.267395 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 18:42:30.267407 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 18:42:30.267418 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:42:30.267430 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:42:30.267442 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 18:42:30.267454 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 18:42:30.267466 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:42:30.267477 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:42:30.267492 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 18:42:30.267504 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 18:42:30.267516 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:42:30.267527 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:42:30.267539 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 18:42:30.267556 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 18:42:30.267570 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 18:42:30.267584 systemd-journald[1182]: Journal started Dec 12 18:42:30.267609 systemd-journald[1182]: Runtime Journal (/run/log/journal/dcccb3681139463f93b6b169b5250016) is 8M, max 78.2M, 70.2M free. Dec 12 18:42:30.288016 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 18:42:30.288077 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 18:42:30.288095 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 18:42:29.704020 systemd[1]: Queued start job for default target multi-user.target. Dec 12 18:42:29.724086 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 12 18:42:29.724603 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 18:42:30.298538 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 18:42:30.298572 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 18:42:30.305399 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:42:30.312923 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Dec 12 18:42:30.321249 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:42:30.327927 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 18:42:30.333912 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:42:30.339980 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 18:42:30.350921 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 18:42:30.377090 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 18:42:30.394943 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 18:42:30.394463 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:42:30.396994 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 18:42:30.398650 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 18:42:30.404188 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 18:42:30.414430 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:42:30.446317 kernel: loop0: detected capacity change from 0 to 128560 Dec 12 18:42:30.445604 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 18:42:30.458216 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 18:42:30.460549 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 18:42:30.465207 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 18:42:30.470407 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 18:42:30.472704 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:42:30.478988 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 18:42:30.507931 systemd-journald[1182]: Time spent on flushing to /var/log/journal/dcccb3681139463f93b6b169b5250016 is 59.767ms for 1014 entries. Dec 12 18:42:30.507931 systemd-journald[1182]: System Journal (/var/log/journal/dcccb3681139463f93b6b169b5250016) is 8M, max 195.6M, 187.6M free. Dec 12 18:42:30.579478 systemd-journald[1182]: Received client request to flush runtime journal. Dec 12 18:42:30.579521 kernel: loop1: detected capacity change from 0 to 229808 Dec 12 18:42:30.514537 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 18:42:30.517878 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 18:42:30.524195 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 18:42:30.539143 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 18:42:30.583555 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 18:42:30.593927 kernel: loop2: detected capacity change from 0 to 110984 Dec 12 18:42:30.631378 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Dec 12 18:42:30.631398 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. 
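The journald flush statistics above imply a small per-entry cost when the runtime journal is moved to /var/log/journal. Just arithmetic on the two numbers printed in the log:

```python
# 59.767 ms to flush 1014 entries -> average per-entry cost.
flush_ms, entries = 59.767, 1014
print(f"{flush_ms / entries * 1000:.1f} µs per entry on average")
```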
Dec 12 18:42:30.644769 kernel: loop3: detected capacity change from 0 to 8 Dec 12 18:42:30.643835 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:42:30.672934 kernel: loop4: detected capacity change from 0 to 128560 Dec 12 18:42:30.707015 kernel: loop5: detected capacity change from 0 to 229808 Dec 12 18:42:30.740220 kernel: loop6: detected capacity change from 0 to 110984 Dec 12 18:42:30.770049 kernel: loop7: detected capacity change from 0 to 8 Dec 12 18:42:30.773427 (sd-merge)[1241]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Dec 12 18:42:30.774504 (sd-merge)[1241]: Merged extensions into '/usr'. Dec 12 18:42:30.783401 systemd[1]: Reload requested from client PID 1197 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 18:42:30.783595 systemd[1]: Reloading... Dec 12 18:42:30.956993 zram_generator::config[1269]: No configuration found. Dec 12 18:42:31.052952 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 18:42:31.170273 systemd[1]: Reloading finished in 385 ms. Dec 12 18:42:31.203607 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 18:42:31.205292 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 18:42:31.206788 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 18:42:31.226744 systemd[1]: Starting ensure-sysext.service... Dec 12 18:42:31.231055 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 18:42:31.233693 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:42:31.256011 systemd[1]: Reload requested from client PID 1311 ('systemctl') (unit ensure-sysext.service)... Dec 12 18:42:31.256026 systemd[1]: Reloading... Dec 12 18:42:31.265633 systemd-tmpfiles[1312]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 18:42:31.265910 systemd-tmpfiles[1312]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 18:42:31.266264 systemd-tmpfiles[1312]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 18:42:31.266637 systemd-tmpfiles[1312]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 12 18:42:31.267740 systemd-tmpfiles[1312]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 12 18:42:31.273609 systemd-tmpfiles[1312]: ACLs are not supported, ignoring. Dec 12 18:42:31.273694 systemd-tmpfiles[1312]: ACLs are not supported, ignoring. Dec 12 18:42:31.278466 systemd-udevd[1313]: Using default interface naming scheme 'v255'. Dec 12 18:42:31.286376 systemd-tmpfiles[1312]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 18:42:31.286388 systemd-tmpfiles[1312]: Skipping /boot Dec 12 18:42:31.307670 systemd-tmpfiles[1312]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 18:42:31.309017 systemd-tmpfiles[1312]: Skipping /boot Dec 12 18:42:31.348969 zram_generator::config[1336]: No configuration found. 
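The "(sd-merge)" lines above are systemd-sysext overlaying the extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-akamai) onto /usr, which is what drives the loop0-loop7 devices and the subsequent daemon reload. A small sketch that only enumerates candidate extension images from the usual sysext search directories; the real tool also validates each image's extension-release metadata before merging.

```python
# List candidate sysext images from common search directories
# (systemd-sysext also checks extension-release metadata; this does not).
import os

SEARCH_DIRS = ["/etc/extensions", "/run/extensions",
               "/var/lib/extensions", "/usr/lib/extensions"]

for d in SEARCH_DIRS:
    if not os.path.isdir(d):
        continue
    for entry in sorted(os.listdir(d)):
        path = os.path.join(d, entry)
        if entry.endswith(".raw") or os.path.isdir(path):
            print(path)
```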
Dec 12 18:42:31.640926 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 12 18:42:31.654919 kernel: mousedev: PS/2 mouse device common for all mice Dec 12 18:42:31.662929 kernel: ACPI: button: Power Button [PWRF] Dec 12 18:42:31.696766 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 12 18:42:31.697371 systemd[1]: Reloading finished in 440 ms. Dec 12 18:42:31.708190 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 12 18:42:31.708432 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 12 18:42:31.710742 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:42:31.713407 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:42:31.743859 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:42:31.746967 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 18:42:31.752183 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 18:42:31.784363 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 18:42:31.790160 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 18:42:31.793373 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 18:42:31.803840 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:42:31.804439 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:42:31.813255 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:42:31.825000 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:42:31.837396 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:42:31.838397 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:42:31.838515 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:42:31.838603 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:42:31.845991 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:42:31.846217 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:42:31.846438 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:42:31.846558 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:42:31.854038 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Dec 12 18:42:31.855957 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:42:31.857072 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 18:42:31.858463 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 18:42:31.866245 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:42:31.866490 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:42:31.868766 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:42:31.869058 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:42:31.874311 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:42:31.879050 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 18:42:31.880518 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:42:31.881669 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:42:31.892345 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:42:31.894203 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:42:31.897746 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:42:31.903351 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 18:42:31.911645 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:42:31.918218 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:42:31.921000 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:42:31.921113 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:42:31.921230 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:42:31.936237 systemd[1]: Finished ensure-sysext.service. Dec 12 18:42:31.946576 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 12 18:42:31.948820 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 18:42:31.959949 kernel: EDAC MC: Ver: 3.0.0 Dec 12 18:42:31.966704 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 18:42:31.969465 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 18:42:31.977364 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:42:31.979137 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:42:31.981032 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Dec 12 18:42:31.981706 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 18:42:31.983400 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:42:31.984286 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:42:31.985424 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:42:31.986176 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:42:31.997936 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:42:31.998016 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:42:32.006086 augenrules[1485]: No rules Dec 12 18:42:32.008323 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 18:42:32.008848 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 18:42:32.018052 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:42:32.033174 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 12 18:42:32.038185 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 18:42:32.089739 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 18:42:32.145055 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 18:42:32.272244 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:42:32.311494 systemd-networkd[1435]: lo: Link UP Dec 12 18:42:32.311510 systemd-networkd[1435]: lo: Gained carrier Dec 12 18:42:32.313144 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 12 18:42:32.313318 systemd-networkd[1435]: Enumeration completed Dec 12 18:42:32.313758 systemd-networkd[1435]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:42:32.313763 systemd-networkd[1435]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 18:42:32.314245 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 18:42:32.314453 systemd-networkd[1435]: eth0: Link UP Dec 12 18:42:32.314641 systemd-networkd[1435]: eth0: Gained carrier Dec 12 18:42:32.314655 systemd-networkd[1435]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:42:32.316137 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 18:42:32.319420 systemd-resolved[1436]: Positive Trust Anchors: Dec 12 18:42:32.319665 systemd-resolved[1436]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 18:42:32.319696 systemd-resolved[1436]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 18:42:32.320021 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 18:42:32.321868 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 18:42:32.326554 systemd-resolved[1436]: Defaulting to hostname 'linux'. Dec 12 18:42:32.328771 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 18:42:32.329779 systemd[1]: Reached target network.target - Network. Dec 12 18:42:32.330472 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:42:32.331599 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 18:42:32.332645 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 18:42:32.334118 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 18:42:32.334905 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 12 18:42:32.336031 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 18:42:32.337038 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 18:42:32.337839 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 18:42:32.338620 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 18:42:32.338836 systemd[1]: Reached target paths.target - Path Units. Dec 12 18:42:32.339551 systemd[1]: Reached target timers.target - Timer Units. Dec 12 18:42:32.341921 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 18:42:32.344551 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 18:42:32.347593 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 18:42:32.348539 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 18:42:32.349331 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 18:42:32.353388 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 18:42:32.354733 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 18:42:32.356181 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 18:42:32.357818 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 18:42:32.358523 systemd[1]: Reached target basic.target - Basic System. Dec 12 18:42:32.359266 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
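The positive trust anchor that systemd-resolved logs a few entries back is the root zone's DS record. A small parse of that presentation format, just to name the fields: key tag 20326 is the 2017 root KSK, algorithm 8 is RSASHA256, and digest type 2 is SHA-256 (hence the 32-byte digest).

```python
# Parse the root DS trust anchor logged above into its named fields.
ANCHOR = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, _cls, rtype, key_tag, algorithm, digest_type, digest = ANCHOR.split()
assert rtype == "DS"
print(f"owner={owner} key_tag={key_tag} algorithm={algorithm} "
      f"digest_type={digest_type} digest={len(digest) // 2} bytes")
```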
Dec 12 18:42:32.359291 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 18:42:32.361060 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 18:42:32.364859 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 12 18:42:32.372006 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 18:42:32.376726 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 18:42:32.380016 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 18:42:32.382424 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 18:42:32.386356 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 18:42:32.395756 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 12 18:42:32.398585 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 18:42:32.408012 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 18:42:32.412306 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 18:42:32.417262 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 18:42:32.418792 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Refreshing passwd entry cache Dec 12 18:42:32.419933 oslogin_cache_refresh[1519]: Refreshing passwd entry cache Dec 12 18:42:32.421787 jq[1517]: false Dec 12 18:42:32.423081 oslogin_cache_refresh[1519]: Failure getting users, quitting Dec 12 18:42:32.423999 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Failure getting users, quitting Dec 12 18:42:32.423999 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 18:42:32.423999 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Refreshing group entry cache Dec 12 18:42:32.423999 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Failure getting groups, quitting Dec 12 18:42:32.423999 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 18:42:32.423096 oslogin_cache_refresh[1519]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 18:42:32.423134 oslogin_cache_refresh[1519]: Refreshing group entry cache Dec 12 18:42:32.423570 oslogin_cache_refresh[1519]: Failure getting groups, quitting Dec 12 18:42:32.423579 oslogin_cache_refresh[1519]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 18:42:32.425861 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 18:42:32.429014 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 18:42:32.429451 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 18:42:32.434101 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 18:42:32.440048 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Dec 12 18:42:32.445034 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 18:42:32.459519 coreos-metadata[1514]: Dec 12 18:42:32.459 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Dec 12 18:42:32.461283 update_engine[1528]: I20251212 18:42:32.461217 1528 main.cc:92] Flatcar Update Engine starting Dec 12 18:42:32.464098 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 18:42:32.465339 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 18:42:32.465550 extend-filesystems[1518]: Found /dev/sda6 Dec 12 18:42:32.466115 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 18:42:32.466483 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 12 18:42:32.467934 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 12 18:42:32.471549 extend-filesystems[1518]: Found /dev/sda9 Dec 12 18:42:32.477433 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 18:42:32.479312 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 18:42:32.485315 extend-filesystems[1518]: Checking size of /dev/sda9 Dec 12 18:42:32.489791 jq[1529]: true Dec 12 18:42:32.491164 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 18:42:32.491459 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 18:42:32.510875 jq[1555]: true Dec 12 18:42:32.517155 extend-filesystems[1518]: Resized partition /dev/sda9 Dec 12 18:42:32.519997 tar[1542]: linux-amd64/LICENSE Dec 12 18:42:32.521040 tar[1542]: linux-amd64/helm Dec 12 18:42:32.522663 extend-filesystems[1565]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 18:42:32.525250 (ntainerd)[1559]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 12 18:42:32.559603 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks Dec 12 18:42:32.576493 dbus-daemon[1515]: [system] SELinux support is enabled Dec 12 18:42:32.576688 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 18:42:32.580839 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 18:42:32.580871 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 18:42:32.582527 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 18:42:32.582547 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 18:42:32.607060 systemd[1]: Started update-engine.service - Update Engine. Dec 12 18:42:32.609623 update_engine[1528]: I20251212 18:42:32.608464 1528 update_check_scheduler.cc:74] Next update check in 6m53s Dec 12 18:42:32.614182 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 18:42:32.675852 systemd-logind[1526]: Watching system buttons on /dev/input/event2 (Power Button) Dec 12 18:42:32.676704 systemd-logind[1526]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 12 18:42:32.680557 systemd-logind[1526]: New seat seat0. 
Dec 12 18:42:32.684766 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 18:42:32.706476 bash[1582]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:42:32.707224 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 18:42:32.713178 systemd[1]: Starting sshkeys.service... Dec 12 18:42:32.785960 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 12 18:42:32.789344 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 12 18:42:32.914555 coreos-metadata[1591]: Dec 12 18:42:32.913 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Dec 12 18:42:32.940409 locksmithd[1578]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 18:42:32.948964 sshd_keygen[1554]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 18:42:32.973194 containerd[1559]: time="2025-12-12T18:42:32Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 18:42:32.973194 containerd[1559]: time="2025-12-12T18:42:32.972013218Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 12 18:42:32.982667 containerd[1559]: time="2025-12-12T18:42:32.982208668Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.06µs" Dec 12 18:42:32.982667 containerd[1559]: time="2025-12-12T18:42:32.982251228Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 18:42:32.982667 containerd[1559]: time="2025-12-12T18:42:32.982270888Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 18:42:32.982667 containerd[1559]: time="2025-12-12T18:42:32.982473898Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 18:42:32.982667 containerd[1559]: time="2025-12-12T18:42:32.982489308Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 18:42:32.982667 containerd[1559]: time="2025-12-12T18:42:32.982515038Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:42:32.982667 containerd[1559]: time="2025-12-12T18:42:32.982575938Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:42:32.982667 containerd[1559]: time="2025-12-12T18:42:32.982587988Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:42:32.983087 containerd[1559]: time="2025-12-12T18:42:32.982864937Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:42:32.983087 containerd[1559]: time="2025-12-12T18:42:32.982885787Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:42:32.983087 containerd[1559]: time="2025-12-12T18:42:32.982948477Z" level=info msg="skip 
loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:42:32.983087 containerd[1559]: time="2025-12-12T18:42:32.982958417Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 18:42:32.983087 containerd[1559]: time="2025-12-12T18:42:32.983048617Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 18:42:32.983409 containerd[1559]: time="2025-12-12T18:42:32.983283957Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:42:32.983409 containerd[1559]: time="2025-12-12T18:42:32.983320247Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:42:32.983409 containerd[1559]: time="2025-12-12T18:42:32.983330317Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 18:42:32.983409 containerd[1559]: time="2025-12-12T18:42:32.983373647Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 18:42:32.983763 containerd[1559]: time="2025-12-12T18:42:32.983735596Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 18:42:32.983834 containerd[1559]: time="2025-12-12T18:42:32.983811396Z" level=info msg="metadata content store policy set" policy=shared Dec 12 18:42:32.994229 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 18:42:32.998374 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Dec 12 18:42:33.000677 containerd[1559]: time="2025-12-12T18:42:33.000637540Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 18:42:33.000825 containerd[1559]: time="2025-12-12T18:42:33.000777849Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 18:42:33.000825 containerd[1559]: time="2025-12-12T18:42:33.000809419Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 18:42:33.000825 containerd[1559]: time="2025-12-12T18:42:33.000823849Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 18:42:33.000906 containerd[1559]: time="2025-12-12T18:42:33.000837369Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 18:42:33.000906 containerd[1559]: time="2025-12-12T18:42:33.000847589Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 18:42:33.000906 containerd[1559]: time="2025-12-12T18:42:33.000858369Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 18:42:33.000906 containerd[1559]: time="2025-12-12T18:42:33.000869649Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 18:42:33.000906 containerd[1559]: time="2025-12-12T18:42:33.000882409Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 18:42:33.000987 containerd[1559]: time="2025-12-12T18:42:33.000928019Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 18:42:33.000987 containerd[1559]: time="2025-12-12T18:42:33.000942139Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 18:42:33.000987 containerd[1559]: time="2025-12-12T18:42:33.000954799Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 18:42:33.001356 containerd[1559]: time="2025-12-12T18:42:33.001083399Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 18:42:33.001356 containerd[1559]: time="2025-12-12T18:42:33.001118699Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 18:42:33.001356 containerd[1559]: time="2025-12-12T18:42:33.001135229Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 18:42:33.001356 containerd[1559]: time="2025-12-12T18:42:33.001145719Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 18:42:33.001356 containerd[1559]: time="2025-12-12T18:42:33.001156339Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 18:42:33.001356 containerd[1559]: time="2025-12-12T18:42:33.001166799Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 18:42:33.001356 containerd[1559]: time="2025-12-12T18:42:33.001177709Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 18:42:33.001356 containerd[1559]: time="2025-12-12T18:42:33.001188389Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 
12 18:42:33.001356 containerd[1559]: time="2025-12-12T18:42:33.001199589Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 18:42:33.001356 containerd[1559]: time="2025-12-12T18:42:33.001214809Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 18:42:33.001356 containerd[1559]: time="2025-12-12T18:42:33.001225639Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 18:42:33.001356 containerd[1559]: time="2025-12-12T18:42:33.001272679Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 18:42:33.001356 containerd[1559]: time="2025-12-12T18:42:33.001286029Z" level=info msg="Start snapshots syncer" Dec 12 18:42:33.001356 containerd[1559]: time="2025-12-12T18:42:33.001309929Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 18:42:33.001766 containerd[1559]: time="2025-12-12T18:42:33.001542489Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 18:42:33.001766 containerd[1559]: time="2025-12-12T18:42:33.001636039Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 18:42:33.001880 containerd[1559]: time="2025-12-12T18:42:33.001674409Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 18:42:33.001880 containerd[1559]: time="2025-12-12T18:42:33.001775358Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 18:42:33.001880 containerd[1559]: time="2025-12-12T18:42:33.001795138Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 18:42:33.001880 containerd[1559]: time="2025-12-12T18:42:33.001804898Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 18:42:33.001880 containerd[1559]: time="2025-12-12T18:42:33.001814758Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 18:42:33.001880 containerd[1559]: time="2025-12-12T18:42:33.001826228Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 18:42:33.001880 containerd[1559]: time="2025-12-12T18:42:33.001835768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 18:42:33.001880 containerd[1559]: time="2025-12-12T18:42:33.001844668Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 18:42:33.001880 containerd[1559]: time="2025-12-12T18:42:33.001864358Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 18:42:33.001880 containerd[1559]: time="2025-12-12T18:42:33.001874178Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 18:42:33.001880 containerd[1559]: time="2025-12-12T18:42:33.001884798Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 18:42:33.002081 containerd[1559]: time="2025-12-12T18:42:33.001937838Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:42:33.002081 containerd[1559]: time="2025-12-12T18:42:33.001953088Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:42:33.002081 containerd[1559]: time="2025-12-12T18:42:33.001961028Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:42:33.002081 containerd[1559]: time="2025-12-12T18:42:33.001972258Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:42:33.002081 containerd[1559]: time="2025-12-12T18:42:33.001979668Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 18:42:33.002081 containerd[1559]: time="2025-12-12T18:42:33.001993768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 18:42:33.002081 containerd[1559]: time="2025-12-12T18:42:33.002009438Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 18:42:33.002081 containerd[1559]: time="2025-12-12T18:42:33.002026328Z" level=info msg="runtime interface created" Dec 12 18:42:33.002081 containerd[1559]: time="2025-12-12T18:42:33.002031568Z" level=info msg="created NRI interface" Dec 12 18:42:33.002081 containerd[1559]: time="2025-12-12T18:42:33.002039018Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 18:42:33.002081 containerd[1559]: time="2025-12-12T18:42:33.002049388Z" level=info msg="Connect containerd service" Dec 12 18:42:33.002081 containerd[1559]: time="2025-12-12T18:42:33.002066068Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 
12 18:42:33.005396 containerd[1559]: time="2025-12-12T18:42:33.005283025Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:42:33.023325 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 18:42:33.025268 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 18:42:33.033982 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Dec 12 18:42:33.042122 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 18:42:33.060607 extend-filesystems[1565]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 12 18:42:33.060607 extend-filesystems[1565]: old_desc_blocks = 1, new_desc_blocks = 10 Dec 12 18:42:33.060607 extend-filesystems[1565]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Dec 12 18:42:33.060046 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 18:42:33.069991 extend-filesystems[1518]: Resized filesystem in /dev/sda9 Dec 12 18:42:33.060292 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 18:42:33.065007 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 18:42:33.074993 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 18:42:33.078457 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 12 18:42:33.080272 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 18:42:33.127877 containerd[1559]: time="2025-12-12T18:42:33.127835432Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 18:42:33.128258 containerd[1559]: time="2025-12-12T18:42:33.128150082Z" level=info msg="Start subscribing containerd event" Dec 12 18:42:33.128258 containerd[1559]: time="2025-12-12T18:42:33.128212142Z" level=info msg="Start recovering state" Dec 12 18:42:33.128411 containerd[1559]: time="2025-12-12T18:42:33.128378702Z" level=info msg="Start event monitor" Dec 12 18:42:33.128480 containerd[1559]: time="2025-12-12T18:42:33.128450542Z" level=info msg="Start cni network conf syncer for default" Dec 12 18:42:33.128526 containerd[1559]: time="2025-12-12T18:42:33.128515482Z" level=info msg="Start streaming server" Dec 12 18:42:33.128645 containerd[1559]: time="2025-12-12T18:42:33.128573832Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 18:42:33.128645 containerd[1559]: time="2025-12-12T18:42:33.128583402Z" level=info msg="runtime interface starting up..." Dec 12 18:42:33.128645 containerd[1559]: time="2025-12-12T18:42:33.128589592Z" level=info msg="starting plugins..." Dec 12 18:42:33.128645 containerd[1559]: time="2025-12-12T18:42:33.128605412Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 18:42:33.129397 containerd[1559]: time="2025-12-12T18:42:33.129374331Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 18:42:33.130778 containerd[1559]: time="2025-12-12T18:42:33.130761069Z" level=info msg="containerd successfully booted in 0.159819s" Dec 12 18:42:33.130884 systemd[1]: Started containerd.service - containerd container runtime. 
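A quick sanity check of the extend-filesystems/resize2fs output above: the block counts come straight from the log, and converting them at the logged 4k block size gives the before/after sizes of the root filesystem (the unit conversion below is illustrative, not from the log):

    # Convert the resize2fs block counts logged above into sizes.
    BLOCK_SIZE = 4096            # "(4k) blocks" per the log

    old_blocks = 553_472         # filesystem size before the on-line resize
    new_blocks = 20_360_187      # filesystem size after growing into /dev/sda9

    def to_gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {to_gib(old_blocks):.2f} GiB")   # ~2.11 GiB
    print(f"after:  {to_gib(new_blocks):.2f} GiB")   # ~77.67 GiB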
Dec 12 18:42:33.189966 systemd-networkd[1435]: eth0: DHCPv4 address 172.239.194.183/24, gateway 172.239.194.1 acquired from 23.40.197.6 Dec 12 18:42:33.190046 dbus-daemon[1515]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1435 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 12 18:42:33.192307 systemd-timesyncd[1469]: Network configuration changed, trying to establish connection. Dec 12 18:42:33.196444 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 12 18:42:33.224950 tar[1542]: linux-amd64/README.md Dec 12 18:42:33.246533 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 18:42:33.281532 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 12 18:42:33.281907 dbus-daemon[1515]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 12 18:42:33.282338 dbus-daemon[1515]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1634 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 12 18:42:33.288166 systemd[1]: Starting polkit.service - Authorization Manager... Dec 12 18:42:34.570881 systemd-resolved[1436]: Clock change detected. Flushing caches. Dec 12 18:42:34.571342 systemd-timesyncd[1469]: Contacted time server 52.21.95.127:123 (0.flatcar.pool.ntp.org). Dec 12 18:42:34.571398 systemd-timesyncd[1469]: Initial clock synchronization to Fri 2025-12-12 18:42:34.569445 UTC. Dec 12 18:42:34.601616 polkitd[1638]: Started polkitd version 126 Dec 12 18:42:34.606229 polkitd[1638]: Loading rules from directory /etc/polkit-1/rules.d Dec 12 18:42:34.606479 polkitd[1638]: Loading rules from directory /run/polkit-1/rules.d Dec 12 18:42:34.606527 polkitd[1638]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 18:42:34.606714 polkitd[1638]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 12 18:42:34.606740 polkitd[1638]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 18:42:34.606775 polkitd[1638]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 12 18:42:34.607494 polkitd[1638]: Finished loading, compiling and executing 2 rules Dec 12 18:42:34.607709 systemd[1]: Started polkit.service - Authorization Manager. Dec 12 18:42:34.608435 dbus-daemon[1515]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 12 18:42:34.609729 polkitd[1638]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 12 18:42:34.620080 systemd-hostnamed[1634]: Hostname set to <172-239-194-183> (transient) Dec 12 18:42:34.620100 systemd-resolved[1436]: System hostname changed to '172-239-194-183'. Dec 12 18:42:34.697057 coreos-metadata[1514]: Dec 12 18:42:34.697 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Dec 12 18:42:34.791599 coreos-metadata[1514]: Dec 12 18:42:34.791 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Dec 12 18:42:34.829306 systemd-networkd[1435]: eth0: Gained IPv6LL Dec 12 18:42:34.833056 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 18:42:34.835084 systemd[1]: Reached target network-online.target - Network is Online. 
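The coreos-metadata entries around this point show the agent's two-step flow against the link-local metadata service: PUT /v1/token to obtain a token, then GET the instance, network and ssh-keys documents with it. A minimal sketch of the same flow follows; the endpoints are taken from the log, while the header names are assumptions about the Linode/Akamai metadata API and are not shown in the log.

    # Sketch of the token-then-fetch flow logged by coreos-metadata.
    # Header names are assumptions; only the URLs appear in the log.
    import urllib.request

    BASE = "http://169.254.169.254/v1"

    # Step 1: PUT /v1/token to obtain a short-lived token.
    req = urllib.request.Request(
        f"{BASE}/token",
        method="PUT",
        headers={"Metadata-Token-Expiry-Seconds": "3600"},  # assumed header
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        token = resp.read().decode()

    # Step 2: fetch the documents the agent logs above.
    for path in ("instance", "network", "ssh-keys"):
        req = urllib.request.Request(f"{BASE}/{path}",
                                     headers={"Metadata-Token": token})  # assumed header
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(path, resp.status)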
Dec 12 18:42:34.839425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:42:34.843513 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 18:42:34.884310 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 18:42:34.977851 coreos-metadata[1514]: Dec 12 18:42:34.977 INFO Fetch successful Dec 12 18:42:34.977851 coreos-metadata[1514]: Dec 12 18:42:34.977 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Dec 12 18:42:35.173431 coreos-metadata[1591]: Dec 12 18:42:35.173 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Dec 12 18:42:35.239175 coreos-metadata[1514]: Dec 12 18:42:35.239 INFO Fetch successful Dec 12 18:42:35.267432 coreos-metadata[1591]: Dec 12 18:42:35.267 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Dec 12 18:42:35.403570 coreos-metadata[1591]: Dec 12 18:42:35.403 INFO Fetch successful Dec 12 18:42:35.408682 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 12 18:42:35.410787 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 18:42:35.480143 update-ssh-keys[1682]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:42:35.476670 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 12 18:42:35.484355 systemd[1]: Finished sshkeys.service. Dec 12 18:42:36.362977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:42:36.365991 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 18:42:36.422165 systemd[1]: Startup finished in 3.679s (kernel) + 8.297s (initrd) + 6.268s (userspace) = 18.246s. Dec 12 18:42:36.429584 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:42:36.612041 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 18:42:36.614298 systemd[1]: Started sshd@0-172.239.194.183:22-139.178.68.195:36208.service - OpenSSH per-connection server daemon (139.178.68.195:36208). Dec 12 18:42:36.919422 kubelet[1691]: E1212 18:42:36.919321 1691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:42:36.922530 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:42:36.922713 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:42:36.923273 systemd[1]: kubelet.service: Consumed 1.508s CPU time, 268.3M memory peak. Dec 12 18:42:36.966346 sshd[1701]: Accepted publickey for core from 139.178.68.195 port 36208 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:42:36.968496 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:42:36.975580 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 18:42:36.977853 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 18:42:36.984935 systemd-logind[1526]: New session 1 of user core. Dec 12 18:42:36.995682 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Dec 12 18:42:36.999634 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 18:42:37.011032 (systemd)[1708]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 18:42:37.013726 systemd-logind[1526]: New session c1 of user core. Dec 12 18:42:37.154442 systemd[1708]: Queued start job for default target default.target. Dec 12 18:42:37.162659 systemd[1708]: Created slice app.slice - User Application Slice. Dec 12 18:42:37.162684 systemd[1708]: Reached target paths.target - Paths. Dec 12 18:42:37.162726 systemd[1708]: Reached target timers.target - Timers. Dec 12 18:42:37.164298 systemd[1708]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 18:42:37.175820 systemd[1708]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 18:42:37.175929 systemd[1708]: Reached target sockets.target - Sockets. Dec 12 18:42:37.175967 systemd[1708]: Reached target basic.target - Basic System. Dec 12 18:42:37.176013 systemd[1708]: Reached target default.target - Main User Target. Dec 12 18:42:37.176056 systemd[1708]: Startup finished in 156ms. Dec 12 18:42:37.176573 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 18:42:37.184249 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 18:42:37.467039 systemd[1]: Started sshd@1-172.239.194.183:22-139.178.68.195:36218.service - OpenSSH per-connection server daemon (139.178.68.195:36218). Dec 12 18:42:37.801668 sshd[1719]: Accepted publickey for core from 139.178.68.195 port 36218 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:42:37.803141 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:42:37.807172 systemd-logind[1526]: New session 2 of user core. Dec 12 18:42:37.814221 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 18:42:38.047921 sshd[1722]: Connection closed by 139.178.68.195 port 36218 Dec 12 18:42:38.048729 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Dec 12 18:42:38.052879 systemd[1]: sshd@1-172.239.194.183:22-139.178.68.195:36218.service: Deactivated successfully. Dec 12 18:42:38.056857 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 18:42:38.058789 systemd-logind[1526]: Session 2 logged out. Waiting for processes to exit. Dec 12 18:42:38.059822 systemd-logind[1526]: Removed session 2. Dec 12 18:42:38.110634 systemd[1]: Started sshd@2-172.239.194.183:22-139.178.68.195:36224.service - OpenSSH per-connection server daemon (139.178.68.195:36224). Dec 12 18:42:38.461066 sshd[1728]: Accepted publickey for core from 139.178.68.195 port 36224 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:42:38.462662 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:42:38.468359 systemd-logind[1526]: New session 3 of user core. Dec 12 18:42:38.481950 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 18:42:38.711165 sshd[1731]: Connection closed by 139.178.68.195 port 36224 Dec 12 18:42:38.711888 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Dec 12 18:42:38.715726 systemd-logind[1526]: Session 3 logged out. Waiting for processes to exit. Dec 12 18:42:38.716365 systemd[1]: sshd@2-172.239.194.183:22-139.178.68.195:36224.service: Deactivated successfully. Dec 12 18:42:38.718041 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 18:42:38.720017 systemd-logind[1526]: Removed session 3. 
Dec 12 18:42:38.774275 systemd[1]: Started sshd@3-172.239.194.183:22-139.178.68.195:36228.service - OpenSSH per-connection server daemon (139.178.68.195:36228). Dec 12 18:42:39.113436 sshd[1737]: Accepted publickey for core from 139.178.68.195 port 36228 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:42:39.115572 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:42:39.122171 systemd-logind[1526]: New session 4 of user core. Dec 12 18:42:39.135257 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 18:42:39.365159 sshd[1740]: Connection closed by 139.178.68.195 port 36228 Dec 12 18:42:39.366054 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Dec 12 18:42:39.370787 systemd[1]: sshd@3-172.239.194.183:22-139.178.68.195:36228.service: Deactivated successfully. Dec 12 18:42:39.373529 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 18:42:39.374965 systemd-logind[1526]: Session 4 logged out. Waiting for processes to exit. Dec 12 18:42:39.376753 systemd-logind[1526]: Removed session 4. Dec 12 18:42:39.426444 systemd[1]: Started sshd@4-172.239.194.183:22-139.178.68.195:36240.service - OpenSSH per-connection server daemon (139.178.68.195:36240). Dec 12 18:42:39.769677 sshd[1746]: Accepted publickey for core from 139.178.68.195 port 36240 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:42:39.770971 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:42:39.776983 systemd-logind[1526]: New session 5 of user core. Dec 12 18:42:39.782270 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 18:42:39.972232 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 18:42:39.972671 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:42:39.991372 sudo[1750]: pam_unix(sudo:session): session closed for user root Dec 12 18:42:40.042156 sshd[1749]: Connection closed by 139.178.68.195 port 36240 Dec 12 18:42:40.043092 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Dec 12 18:42:40.047216 systemd-logind[1526]: Session 5 logged out. Waiting for processes to exit. Dec 12 18:42:40.047826 systemd[1]: sshd@4-172.239.194.183:22-139.178.68.195:36240.service: Deactivated successfully. Dec 12 18:42:40.049538 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 18:42:40.051476 systemd-logind[1526]: Removed session 5. Dec 12 18:42:40.103525 systemd[1]: Started sshd@5-172.239.194.183:22-139.178.68.195:36248.service - OpenSSH per-connection server daemon (139.178.68.195:36248). Dec 12 18:42:40.445506 sshd[1756]: Accepted publickey for core from 139.178.68.195 port 36248 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:42:40.447374 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:42:40.452487 systemd-logind[1526]: New session 6 of user core. Dec 12 18:42:40.457256 systemd[1]: Started session-6.scope - Session 6 of User core. 
Dec 12 18:42:40.644154 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 18:42:40.644485 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:42:40.649400 sudo[1761]: pam_unix(sudo:session): session closed for user root Dec 12 18:42:40.658912 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 18:42:40.659244 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:42:40.669369 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:42:40.706357 augenrules[1783]: No rules Dec 12 18:42:40.706970 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 18:42:40.707238 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 18:42:40.708038 sudo[1760]: pam_unix(sudo:session): session closed for user root Dec 12 18:42:40.758516 sshd[1759]: Connection closed by 139.178.68.195 port 36248 Dec 12 18:42:40.758936 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Dec 12 18:42:40.763013 systemd[1]: sshd@5-172.239.194.183:22-139.178.68.195:36248.service: Deactivated successfully. Dec 12 18:42:40.764583 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 18:42:40.765281 systemd-logind[1526]: Session 6 logged out. Waiting for processes to exit. Dec 12 18:42:40.766975 systemd-logind[1526]: Removed session 6. Dec 12 18:42:40.825743 systemd[1]: Started sshd@6-172.239.194.183:22-139.178.68.195:59560.service - OpenSSH per-connection server daemon (139.178.68.195:59560). Dec 12 18:42:41.175617 sshd[1792]: Accepted publickey for core from 139.178.68.195 port 59560 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:42:41.177571 sshd-session[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:42:41.183905 systemd-logind[1526]: New session 7 of user core. Dec 12 18:42:41.189403 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 18:42:41.379569 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 18:42:41.379899 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:42:41.664673 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 18:42:41.681512 (dockerd)[1814]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 18:42:41.898519 dockerd[1814]: time="2025-12-12T18:42:41.898455202Z" level=info msg="Starting up" Dec 12 18:42:41.899267 dockerd[1814]: time="2025-12-12T18:42:41.899245491Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 18:42:41.911966 dockerd[1814]: time="2025-12-12T18:42:41.911887058Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 18:42:41.956772 dockerd[1814]: time="2025-12-12T18:42:41.955775145Z" level=info msg="Loading containers: start." Dec 12 18:42:41.968299 kernel: Initializing XFRM netlink socket Dec 12 18:42:42.249489 systemd-networkd[1435]: docker0: Link UP Dec 12 18:42:42.252947 dockerd[1814]: time="2025-12-12T18:42:42.252906987Z" level=info msg="Loading containers: done." 
Dec 12 18:42:42.269167 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4122192946-merged.mount: Deactivated successfully. Dec 12 18:42:42.270460 dockerd[1814]: time="2025-12-12T18:42:42.270372040Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 18:42:42.270460 dockerd[1814]: time="2025-12-12T18:42:42.270446400Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 18:42:42.270554 dockerd[1814]: time="2025-12-12T18:42:42.270523800Z" level=info msg="Initializing buildkit" Dec 12 18:42:42.291562 dockerd[1814]: time="2025-12-12T18:42:42.291524839Z" level=info msg="Completed buildkit initialization" Dec 12 18:42:42.298624 dockerd[1814]: time="2025-12-12T18:42:42.298582482Z" level=info msg="Daemon has completed initialization" Dec 12 18:42:42.298751 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 18:42:42.299086 dockerd[1814]: time="2025-12-12T18:42:42.299058081Z" level=info msg="API listen on /run/docker.sock" Dec 12 18:42:43.284715 containerd[1559]: time="2025-12-12T18:42:43.284675396Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 12 18:42:44.016303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1188851058.mount: Deactivated successfully. Dec 12 18:42:45.281257 containerd[1559]: time="2025-12-12T18:42:45.281171649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:45.282328 containerd[1559]: time="2025-12-12T18:42:45.282283438Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712" Dec 12 18:42:45.283136 containerd[1559]: time="2025-12-12T18:42:45.282847717Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:45.287125 containerd[1559]: time="2025-12-12T18:42:45.285693795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:45.287125 containerd[1559]: time="2025-12-12T18:42:45.287013603Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.002301707s" Dec 12 18:42:45.287125 containerd[1559]: time="2025-12-12T18:42:45.287047423Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Dec 12 18:42:45.287894 containerd[1559]: time="2025-12-12T18:42:45.287842252Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 12 18:42:46.830650 containerd[1559]: time="2025-12-12T18:42:46.830605730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:46.831804 
containerd[1559]: time="2025-12-12T18:42:46.831540149Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781" Dec 12 18:42:46.832364 containerd[1559]: time="2025-12-12T18:42:46.832337678Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:46.835307 containerd[1559]: time="2025-12-12T18:42:46.835281655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:46.836155 containerd[1559]: time="2025-12-12T18:42:46.836127284Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.548053692s" Dec 12 18:42:46.836214 containerd[1559]: time="2025-12-12T18:42:46.836156554Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Dec 12 18:42:46.836571 containerd[1559]: time="2025-12-12T18:42:46.836546694Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 12 18:42:46.965392 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 18:42:46.967823 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:42:47.140329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:42:47.147693 (kubelet)[2095]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:42:47.194902 kubelet[2095]: E1212 18:42:47.194833 2095 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:42:47.201882 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:42:47.202085 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:42:47.202892 systemd[1]: kubelet.service: Consumed 198ms CPU time, 110.3M memory peak. 
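The two kubelet failures above (and the scheduled restart) are expected at this stage of a first boot: the unit starts before anything has written /var/lib/kubelet/config.yaml, so it exits and systemd keeps restarting it until a provisioner such as kubeadm drops that file in place. A hypothetical illustration of that wait-for-config loop, using only the path from the log:

    # Illustration only: the kubelet exits above because this file is missing,
    # and systemd restarts it until a provisioner (e.g. kubeadm) creates it.
    import os
    import time

    CONFIG = "/var/lib/kubelet/config.yaml"   # path from the kubelet error above

    def wait_for_kubelet_config(poll_seconds: float = 2.0) -> None:
        while not os.path.exists(CONFIG):
            print(f"{CONFIG} not found; kubelet would exit here and be restarted")
            time.sleep(poll_seconds)
        print("config present; kubelet can start")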
Dec 12 18:42:48.157000 containerd[1559]: time="2025-12-12T18:42:48.156909953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:48.157959 containerd[1559]: time="2025-12-12T18:42:48.157930482Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102" Dec 12 18:42:48.158746 containerd[1559]: time="2025-12-12T18:42:48.158708112Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:48.161191 containerd[1559]: time="2025-12-12T18:42:48.161161179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:48.161966 containerd[1559]: time="2025-12-12T18:42:48.161937968Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.325364964s" Dec 12 18:42:48.162004 containerd[1559]: time="2025-12-12T18:42:48.161967518Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Dec 12 18:42:48.162517 containerd[1559]: time="2025-12-12T18:42:48.162502048Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 12 18:42:49.368588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3502593915.mount: Deactivated successfully. 
Dec 12 18:42:49.741476 containerd[1559]: time="2025-12-12T18:42:49.741440109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:49.742314 containerd[1559]: time="2025-12-12T18:42:49.742058668Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Dec 12 18:42:49.742860 containerd[1559]: time="2025-12-12T18:42:49.742829697Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:49.744269 containerd[1559]: time="2025-12-12T18:42:49.744237396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:49.744779 containerd[1559]: time="2025-12-12T18:42:49.744752196Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.582229468s" Dec 12 18:42:49.744849 containerd[1559]: time="2025-12-12T18:42:49.744835045Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 12 18:42:49.745442 containerd[1559]: time="2025-12-12T18:42:49.745413335Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 12 18:42:50.407331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2545275074.mount: Deactivated successfully. 
Dec 12 18:42:51.112912 containerd[1559]: time="2025-12-12T18:42:51.112800117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:51.114007 containerd[1559]: time="2025-12-12T18:42:51.113986016Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Dec 12 18:42:51.114903 containerd[1559]: time="2025-12-12T18:42:51.114862745Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:51.116908 containerd[1559]: time="2025-12-12T18:42:51.116876443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:51.117820 containerd[1559]: time="2025-12-12T18:42:51.117792852Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.372345697s" Dec 12 18:42:51.117862 containerd[1559]: time="2025-12-12T18:42:51.117822742Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Dec 12 18:42:51.118380 containerd[1559]: time="2025-12-12T18:42:51.118289432Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 18:42:51.696615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1010394426.mount: Deactivated successfully. 
Dec 12 18:42:51.699515 containerd[1559]: time="2025-12-12T18:42:51.699487771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:42:51.700067 containerd[1559]: time="2025-12-12T18:42:51.700043940Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 12 18:42:51.701546 containerd[1559]: time="2025-12-12T18:42:51.700571680Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:42:51.702044 containerd[1559]: time="2025-12-12T18:42:51.702019468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:42:51.702835 containerd[1559]: time="2025-12-12T18:42:51.702807747Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 584.490025ms" Dec 12 18:42:51.702920 containerd[1559]: time="2025-12-12T18:42:51.702905887Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 12 18:42:51.703381 containerd[1559]: time="2025-12-12T18:42:51.703359627Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 12 18:42:52.341573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2604274842.mount: Deactivated successfully. 
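Several containerd entries in this stretch report per-image pull times in their "Pulled image ... in <duration>" messages. Below is a small helper to pull those timings back out of a captured log; the regex is an assumption about the exact message format shown here, not a containerd interface:

    # Extract image names and pull durations from containerd log lines like the
    # ones above; a convenience for reading the log, not a containerd API.
    import re

    PULL_RE = re.compile(r'Pulled image \\"(?P<image>[^\\"]+)\\".* in (?P<dur>[0-9.]+m?s)')

    def pull_times(lines):
        for line in lines:
            m = PULL_RE.search(line)
            if m:
                yield m.group("image"), m.group("dur")

    sample = r'msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed...\", size \"320368\" in 584.490025ms"'
    print(list(pull_times([sample])))   # [('registry.k8s.io/pause:3.10', '584.490025ms')]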
Dec 12 18:42:53.903477 containerd[1559]: time="2025-12-12T18:42:53.903414677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:53.905773 containerd[1559]: time="2025-12-12T18:42:53.905472945Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Dec 12 18:42:53.906609 containerd[1559]: time="2025-12-12T18:42:53.906579764Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:53.909796 containerd[1559]: time="2025-12-12T18:42:53.909755140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:42:53.912602 containerd[1559]: time="2025-12-12T18:42:53.912572718Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.209186361s" Dec 12 18:42:53.912633 containerd[1559]: time="2025-12-12T18:42:53.912604048Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Dec 12 18:42:56.371885 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:42:56.372032 systemd[1]: kubelet.service: Consumed 198ms CPU time, 110.3M memory peak. Dec 12 18:42:56.374567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:42:56.404529 systemd[1]: Reload requested from client PID 2253 ('systemctl') (unit session-7.scope)... Dec 12 18:42:56.404614 systemd[1]: Reloading... Dec 12 18:42:56.553295 zram_generator::config[2296]: No configuration found. Dec 12 18:42:56.781688 systemd[1]: Reloading finished in 376 ms. Dec 12 18:42:56.837871 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 18:42:56.838025 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 18:42:56.838484 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:42:56.838542 systemd[1]: kubelet.service: Consumed 151ms CPU time, 98.3M memory peak. Dec 12 18:42:56.840670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:42:57.025707 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:42:57.034902 (kubelet)[2350]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:42:57.077276 kubelet[2350]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:42:57.077276 kubelet[2350]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:42:57.077276 kubelet[2350]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:42:57.077276 kubelet[2350]: I1212 18:42:57.076540 2350 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:42:57.448043 kubelet[2350]: I1212 18:42:57.447925 2350 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 18:42:57.448043 kubelet[2350]: I1212 18:42:57.447952 2350 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:42:57.448221 kubelet[2350]: I1212 18:42:57.448202 2350 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 18:42:57.479133 kubelet[2350]: E1212 18:42:57.478538 2350 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.239.194.183:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.239.194.183:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 18:42:57.479133 kubelet[2350]: I1212 18:42:57.478686 2350 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:42:57.486621 kubelet[2350]: I1212 18:42:57.486594 2350 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:42:57.490966 kubelet[2350]: I1212 18:42:57.490944 2350 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 12 18:42:57.491401 kubelet[2350]: I1212 18:42:57.491366 2350 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:42:57.491537 kubelet[2350]: I1212 18:42:57.491395 2350 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-194-183","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:42:57.491537 kubelet[2350]: I1212 18:42:57.491535 2350 topology_manager.go:138] 
"Creating topology manager with none policy" Dec 12 18:42:57.491675 kubelet[2350]: I1212 18:42:57.491543 2350 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 18:42:57.492331 kubelet[2350]: I1212 18:42:57.492308 2350 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:42:57.494998 kubelet[2350]: I1212 18:42:57.494973 2350 kubelet.go:480] "Attempting to sync node with API server" Dec 12 18:42:57.494998 kubelet[2350]: I1212 18:42:57.494998 2350 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:42:57.495064 kubelet[2350]: I1212 18:42:57.495024 2350 kubelet.go:386] "Adding apiserver pod source" Dec 12 18:42:57.495064 kubelet[2350]: I1212 18:42:57.495038 2350 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:42:57.502993 kubelet[2350]: E1212 18:42:57.502971 2350 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.239.194.183:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-194-183&limit=500&resourceVersion=0\": dial tcp 172.239.194.183:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 18:42:57.503250 kubelet[2350]: I1212 18:42:57.503235 2350 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 18:42:57.503758 kubelet[2350]: I1212 18:42:57.503744 2350 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 18:42:57.504832 kubelet[2350]: W1212 18:42:57.504819 2350 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 12 18:42:57.506542 kubelet[2350]: E1212 18:42:57.506516 2350 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.239.194.183:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.239.194.183:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 18:42:57.508016 kubelet[2350]: I1212 18:42:57.507991 2350 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:42:57.508054 kubelet[2350]: I1212 18:42:57.508041 2350 server.go:1289] "Started kubelet" Dec 12 18:42:57.509179 kubelet[2350]: I1212 18:42:57.509145 2350 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:42:57.510119 kubelet[2350]: I1212 18:42:57.510069 2350 server.go:317] "Adding debug handlers to kubelet server" Dec 12 18:42:57.511447 kubelet[2350]: I1212 18:42:57.511399 2350 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:42:57.511984 kubelet[2350]: I1212 18:42:57.511970 2350 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:42:57.513042 kubelet[2350]: E1212 18:42:57.512128 2350 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.239.194.183:6443/api/v1/namespaces/default/events\": dial tcp 172.239.194.183:6443: connect: connection refused" event="&Event{ObjectMeta:{172-239-194-183.18808bfe203c500a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-239-194-183,UID:172-239-194-183,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-239-194-183,},FirstTimestamp:2025-12-12 18:42:57.508012042 +0000 UTC m=+0.468705182,LastTimestamp:2025-12-12 18:42:57.508012042 +0000 UTC m=+0.468705182,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-239-194-183,}" Dec 12 18:42:57.515520 kubelet[2350]: I1212 18:42:57.515506 2350 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:42:57.515897 kubelet[2350]: I1212 18:42:57.515883 2350 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:42:57.518155 kubelet[2350]: E1212 18:42:57.518096 2350 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:42:57.519296 kubelet[2350]: E1212 18:42:57.519281 2350 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-239-194-183\" not found" Dec 12 18:42:57.519485 kubelet[2350]: I1212 18:42:57.519473 2350 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:42:57.519848 kubelet[2350]: I1212 18:42:57.519834 2350 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:42:57.519954 kubelet[2350]: I1212 18:42:57.519944 2350 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:42:57.520841 kubelet[2350]: E1212 18:42:57.520546 2350 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.239.194.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.239.194.183:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 18:42:57.522015 kubelet[2350]: E1212 18:42:57.521993 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.194.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-194-183?timeout=10s\": dial tcp 172.239.194.183:6443: connect: connection refused" interval="200ms" Dec 12 18:42:57.523454 kubelet[2350]: I1212 18:42:57.523439 2350 factory.go:223] Registration of the containerd container factory successfully Dec 12 18:42:57.523517 kubelet[2350]: I1212 18:42:57.523508 2350 factory.go:223] Registration of the systemd container factory successfully Dec 12 18:42:57.523626 kubelet[2350]: I1212 18:42:57.523610 2350 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:42:57.543573 kubelet[2350]: I1212 18:42:57.543543 2350 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:42:57.543573 kubelet[2350]: I1212 18:42:57.543560 2350 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:42:57.543573 kubelet[2350]: I1212 18:42:57.543574 2350 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:42:57.546927 kubelet[2350]: I1212 18:42:57.546175 2350 policy_none.go:49] "None policy: Start" Dec 12 18:42:57.546927 kubelet[2350]: I1212 18:42:57.546191 2350 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:42:57.546927 kubelet[2350]: I1212 18:42:57.546201 2350 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:42:57.548983 kubelet[2350]: I1212 18:42:57.548953 2350 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 18:42:57.550768 kubelet[2350]: I1212 18:42:57.550753 2350 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 18:42:57.550826 kubelet[2350]: I1212 18:42:57.550817 2350 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 18:42:57.550877 kubelet[2350]: I1212 18:42:57.550867 2350 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 18:42:57.550915 kubelet[2350]: I1212 18:42:57.550907 2350 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 18:42:57.551004 kubelet[2350]: E1212 18:42:57.550989 2350 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:42:57.560587 kubelet[2350]: E1212 18:42:57.558523 2350 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.239.194.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.239.194.183:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 18:42:57.563875 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 18:42:57.576884 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 18:42:57.580135 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 18:42:57.588352 kubelet[2350]: E1212 18:42:57.588332 2350 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 18:42:57.588960 kubelet[2350]: I1212 18:42:57.588945 2350 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:42:57.589064 kubelet[2350]: I1212 18:42:57.589026 2350 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:42:57.589588 kubelet[2350]: I1212 18:42:57.589511 2350 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:42:57.590743 kubelet[2350]: E1212 18:42:57.590624 2350 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 18:42:57.590947 kubelet[2350]: E1212 18:42:57.590932 2350 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-239-194-183\" not found" Dec 12 18:42:57.663191 systemd[1]: Created slice kubepods-burstable-pod8d8217be1a537db018041eb8bf43c044.slice - libcontainer container kubepods-burstable-pod8d8217be1a537db018041eb8bf43c044.slice. Dec 12 18:42:57.670864 kubelet[2350]: E1212 18:42:57.670829 2350 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-194-183\" not found" node="172-239-194-183" Dec 12 18:42:57.677640 systemd[1]: Created slice kubepods-burstable-pod4654f41f88186e5774cc8a30f3c0f223.slice - libcontainer container kubepods-burstable-pod4654f41f88186e5774cc8a30f3c0f223.slice. Dec 12 18:42:57.685486 kubelet[2350]: E1212 18:42:57.685245 2350 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-194-183\" not found" node="172-239-194-183" Dec 12 18:42:57.688420 systemd[1]: Created slice kubepods-burstable-podd78da07913a5a31a96819377f6c44b4e.slice - libcontainer container kubepods-burstable-podd78da07913a5a31a96819377f6c44b4e.slice. 
Dec 12 18:42:57.690155 kubelet[2350]: E1212 18:42:57.689980 2350 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-194-183\" not found" node="172-239-194-183" Dec 12 18:42:57.690890 kubelet[2350]: I1212 18:42:57.690876 2350 kubelet_node_status.go:75] "Attempting to register node" node="172-239-194-183" Dec 12 18:42:57.691179 kubelet[2350]: E1212 18:42:57.691161 2350 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.194.183:6443/api/v1/nodes\": dial tcp 172.239.194.183:6443: connect: connection refused" node="172-239-194-183" Dec 12 18:42:57.721615 kubelet[2350]: I1212 18:42:57.721592 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654f41f88186e5774cc8a30f3c0f223-ca-certs\") pod \"kube-controller-manager-172-239-194-183\" (UID: \"4654f41f88186e5774cc8a30f3c0f223\") " pod="kube-system/kube-controller-manager-172-239-194-183" Dec 12 18:42:57.721868 kubelet[2350]: I1212 18:42:57.721623 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654f41f88186e5774cc8a30f3c0f223-k8s-certs\") pod \"kube-controller-manager-172-239-194-183\" (UID: \"4654f41f88186e5774cc8a30f3c0f223\") " pod="kube-system/kube-controller-manager-172-239-194-183" Dec 12 18:42:57.721868 kubelet[2350]: I1212 18:42:57.721645 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654f41f88186e5774cc8a30f3c0f223-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-194-183\" (UID: \"4654f41f88186e5774cc8a30f3c0f223\") " pod="kube-system/kube-controller-manager-172-239-194-183" Dec 12 18:42:57.721868 kubelet[2350]: I1212 18:42:57.721686 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d8217be1a537db018041eb8bf43c044-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-194-183\" (UID: \"8d8217be1a537db018041eb8bf43c044\") " pod="kube-system/kube-apiserver-172-239-194-183" Dec 12 18:42:57.721868 kubelet[2350]: I1212 18:42:57.721712 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654f41f88186e5774cc8a30f3c0f223-flexvolume-dir\") pod \"kube-controller-manager-172-239-194-183\" (UID: \"4654f41f88186e5774cc8a30f3c0f223\") " pod="kube-system/kube-controller-manager-172-239-194-183" Dec 12 18:42:57.721868 kubelet[2350]: I1212 18:42:57.721736 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654f41f88186e5774cc8a30f3c0f223-kubeconfig\") pod \"kube-controller-manager-172-239-194-183\" (UID: \"4654f41f88186e5774cc8a30f3c0f223\") " pod="kube-system/kube-controller-manager-172-239-194-183" Dec 12 18:42:57.721991 kubelet[2350]: I1212 18:42:57.721756 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d78da07913a5a31a96819377f6c44b4e-kubeconfig\") pod \"kube-scheduler-172-239-194-183\" (UID: \"d78da07913a5a31a96819377f6c44b4e\") " 
pod="kube-system/kube-scheduler-172-239-194-183" Dec 12 18:42:57.721991 kubelet[2350]: I1212 18:42:57.721774 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d8217be1a537db018041eb8bf43c044-ca-certs\") pod \"kube-apiserver-172-239-194-183\" (UID: \"8d8217be1a537db018041eb8bf43c044\") " pod="kube-system/kube-apiserver-172-239-194-183" Dec 12 18:42:57.721991 kubelet[2350]: I1212 18:42:57.721806 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d8217be1a537db018041eb8bf43c044-k8s-certs\") pod \"kube-apiserver-172-239-194-183\" (UID: \"8d8217be1a537db018041eb8bf43c044\") " pod="kube-system/kube-apiserver-172-239-194-183" Dec 12 18:42:57.722842 kubelet[2350]: E1212 18:42:57.722808 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.194.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-194-183?timeout=10s\": dial tcp 172.239.194.183:6443: connect: connection refused" interval="400ms" Dec 12 18:42:57.893153 kubelet[2350]: I1212 18:42:57.893129 2350 kubelet_node_status.go:75] "Attempting to register node" node="172-239-194-183" Dec 12 18:42:57.893637 kubelet[2350]: E1212 18:42:57.893603 2350 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.194.183:6443/api/v1/nodes\": dial tcp 172.239.194.183:6443: connect: connection refused" node="172-239-194-183" Dec 12 18:42:57.971777 kubelet[2350]: E1212 18:42:57.971695 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:42:57.972644 containerd[1559]: time="2025-12-12T18:42:57.972186108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-194-183,Uid:8d8217be1a537db018041eb8bf43c044,Namespace:kube-system,Attempt:0,}" Dec 12 18:42:57.986253 kubelet[2350]: E1212 18:42:57.986232 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:42:57.986908 containerd[1559]: time="2025-12-12T18:42:57.986868553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-194-183,Uid:4654f41f88186e5774cc8a30f3c0f223,Namespace:kube-system,Attempt:0,}" Dec 12 18:42:57.990724 containerd[1559]: time="2025-12-12T18:42:57.990233950Z" level=info msg="connecting to shim 9e4cafeeb2caf0c0851023c4c67f542df58eee90294b51202285b9e717fa51fb" address="unix:///run/containerd/s/96d1b9417831f7a2275674acb2fd43fa116e609dbb8a23922b602ddf6a029f96" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:42:57.991065 kubelet[2350]: E1212 18:42:57.990913 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:42:57.991402 containerd[1559]: time="2025-12-12T18:42:57.991370589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-194-183,Uid:d78da07913a5a31a96819377f6c44b4e,Namespace:kube-system,Attempt:0,}" Dec 12 18:42:58.015193 containerd[1559]: time="2025-12-12T18:42:58.015162505Z" level=info msg="connecting to shim 
97eeedff2519e4f1862f0a01ae16dd4c158162f9ce92d1d385d3e54d130fc79a" address="unix:///run/containerd/s/c19c89bd11ad936c9be969970b2aa10650254d2f7b7cafd235d7432351fed2d7" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:42:58.023087 containerd[1559]: time="2025-12-12T18:42:58.023009707Z" level=info msg="connecting to shim 18aa78f4a2b5c1dfbe7dd6e0354e52bcdb83e7798525b7704d60667e6ca4ed44" address="unix:///run/containerd/s/665ccc7af4592743a9b18deed14248fa578354c26b3acd1458fc73b373c34e0d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:42:58.031245 systemd[1]: Started cri-containerd-9e4cafeeb2caf0c0851023c4c67f542df58eee90294b51202285b9e717fa51fb.scope - libcontainer container 9e4cafeeb2caf0c0851023c4c67f542df58eee90294b51202285b9e717fa51fb. Dec 12 18:42:58.071658 systemd[1]: Started cri-containerd-18aa78f4a2b5c1dfbe7dd6e0354e52bcdb83e7798525b7704d60667e6ca4ed44.scope - libcontainer container 18aa78f4a2b5c1dfbe7dd6e0354e52bcdb83e7798525b7704d60667e6ca4ed44. Dec 12 18:42:58.081682 systemd[1]: Started cri-containerd-97eeedff2519e4f1862f0a01ae16dd4c158162f9ce92d1d385d3e54d130fc79a.scope - libcontainer container 97eeedff2519e4f1862f0a01ae16dd4c158162f9ce92d1d385d3e54d130fc79a. Dec 12 18:42:58.120212 containerd[1559]: time="2025-12-12T18:42:58.120052670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-194-183,Uid:8d8217be1a537db018041eb8bf43c044,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e4cafeeb2caf0c0851023c4c67f542df58eee90294b51202285b9e717fa51fb\"" Dec 12 18:42:58.122581 kubelet[2350]: E1212 18:42:58.122562 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:42:58.124743 kubelet[2350]: E1212 18:42:58.124634 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.194.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-194-183?timeout=10s\": dial tcp 172.239.194.183:6443: connect: connection refused" interval="800ms" Dec 12 18:42:58.138864 containerd[1559]: time="2025-12-12T18:42:58.138168052Z" level=info msg="CreateContainer within sandbox \"9e4cafeeb2caf0c0851023c4c67f542df58eee90294b51202285b9e717fa51fb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 18:42:58.175229 containerd[1559]: time="2025-12-12T18:42:58.175195405Z" level=info msg="Container a1a6a4b919dc3ad95d83f05c0226dcbeb60297cec78cc70bd2e7d2fdbf8806c1: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:42:58.182791 containerd[1559]: time="2025-12-12T18:42:58.182763037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-194-183,Uid:4654f41f88186e5774cc8a30f3c0f223,Namespace:kube-system,Attempt:0,} returns sandbox id \"97eeedff2519e4f1862f0a01ae16dd4c158162f9ce92d1d385d3e54d130fc79a\"" Dec 12 18:42:58.183940 kubelet[2350]: E1212 18:42:58.183920 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:42:58.208409 containerd[1559]: time="2025-12-12T18:42:58.208378562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-194-183,Uid:d78da07913a5a31a96819377f6c44b4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"18aa78f4a2b5c1dfbe7dd6e0354e52bcdb83e7798525b7704d60667e6ca4ed44\"" Dec 12 18:42:58.210446 kubelet[2350]: E1212 
18:42:58.210427 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:42:58.213527 containerd[1559]: time="2025-12-12T18:42:58.213498677Z" level=info msg="CreateContainer within sandbox \"97eeedff2519e4f1862f0a01ae16dd4c158162f9ce92d1d385d3e54d130fc79a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 18:42:58.216728 containerd[1559]: time="2025-12-12T18:42:58.216702844Z" level=info msg="CreateContainer within sandbox \"9e4cafeeb2caf0c0851023c4c67f542df58eee90294b51202285b9e717fa51fb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a1a6a4b919dc3ad95d83f05c0226dcbeb60297cec78cc70bd2e7d2fdbf8806c1\"" Dec 12 18:42:58.223504 containerd[1559]: time="2025-12-12T18:42:58.223334337Z" level=info msg="StartContainer for \"a1a6a4b919dc3ad95d83f05c0226dcbeb60297cec78cc70bd2e7d2fdbf8806c1\"" Dec 12 18:42:58.227674 containerd[1559]: time="2025-12-12T18:42:58.224965835Z" level=info msg="connecting to shim a1a6a4b919dc3ad95d83f05c0226dcbeb60297cec78cc70bd2e7d2fdbf8806c1" address="unix:///run/containerd/s/96d1b9417831f7a2275674acb2fd43fa116e609dbb8a23922b602ddf6a029f96" protocol=ttrpc version=3 Dec 12 18:42:58.239036 containerd[1559]: time="2025-12-12T18:42:58.236871353Z" level=info msg="CreateContainer within sandbox \"18aa78f4a2b5c1dfbe7dd6e0354e52bcdb83e7798525b7704d60667e6ca4ed44\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 18:42:58.248159 containerd[1559]: time="2025-12-12T18:42:58.248123902Z" level=info msg="Container f35b10033beabe0c99a8d0819f54dd75bcd55882c3030cdd4a51f21ebac8593c: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:42:58.253350 containerd[1559]: time="2025-12-12T18:42:58.253316257Z" level=info msg="Container 30702bc9798d685f288286d4c8c6f4aa7fedbc970ac5cd19401cac66b09c9ad4: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:42:58.258479 containerd[1559]: time="2025-12-12T18:42:58.258181922Z" level=info msg="CreateContainer within sandbox \"97eeedff2519e4f1862f0a01ae16dd4c158162f9ce92d1d385d3e54d130fc79a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f35b10033beabe0c99a8d0819f54dd75bcd55882c3030cdd4a51f21ebac8593c\"" Dec 12 18:42:58.259794 containerd[1559]: time="2025-12-12T18:42:58.259742730Z" level=info msg="StartContainer for \"f35b10033beabe0c99a8d0819f54dd75bcd55882c3030cdd4a51f21ebac8593c\"" Dec 12 18:42:58.262981 containerd[1559]: time="2025-12-12T18:42:58.262820917Z" level=info msg="CreateContainer within sandbox \"18aa78f4a2b5c1dfbe7dd6e0354e52bcdb83e7798525b7704d60667e6ca4ed44\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"30702bc9798d685f288286d4c8c6f4aa7fedbc970ac5cd19401cac66b09c9ad4\"" Dec 12 18:42:58.263155 containerd[1559]: time="2025-12-12T18:42:58.263032297Z" level=info msg="StartContainer for \"30702bc9798d685f288286d4c8c6f4aa7fedbc970ac5cd19401cac66b09c9ad4\"" Dec 12 18:42:58.263155 containerd[1559]: time="2025-12-12T18:42:58.263071347Z" level=info msg="connecting to shim f35b10033beabe0c99a8d0819f54dd75bcd55882c3030cdd4a51f21ebac8593c" address="unix:///run/containerd/s/c19c89bd11ad936c9be969970b2aa10650254d2f7b7cafd235d7432351fed2d7" protocol=ttrpc version=3 Dec 12 18:42:58.264121 containerd[1559]: time="2025-12-12T18:42:58.263807506Z" level=info msg="connecting to shim 30702bc9798d685f288286d4c8c6f4aa7fedbc970ac5cd19401cac66b09c9ad4" 
address="unix:///run/containerd/s/665ccc7af4592743a9b18deed14248fa578354c26b3acd1458fc73b373c34e0d" protocol=ttrpc version=3 Dec 12 18:42:58.265209 systemd[1]: Started cri-containerd-a1a6a4b919dc3ad95d83f05c0226dcbeb60297cec78cc70bd2e7d2fdbf8806c1.scope - libcontainer container a1a6a4b919dc3ad95d83f05c0226dcbeb60297cec78cc70bd2e7d2fdbf8806c1. Dec 12 18:42:58.296381 kubelet[2350]: I1212 18:42:58.296360 2350 kubelet_node_status.go:75] "Attempting to register node" node="172-239-194-183" Dec 12 18:42:58.296865 kubelet[2350]: E1212 18:42:58.296847 2350 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.194.183:6443/api/v1/nodes\": dial tcp 172.239.194.183:6443: connect: connection refused" node="172-239-194-183" Dec 12 18:42:58.297233 systemd[1]: Started cri-containerd-f35b10033beabe0c99a8d0819f54dd75bcd55882c3030cdd4a51f21ebac8593c.scope - libcontainer container f35b10033beabe0c99a8d0819f54dd75bcd55882c3030cdd4a51f21ebac8593c. Dec 12 18:42:58.302068 systemd[1]: Started cri-containerd-30702bc9798d685f288286d4c8c6f4aa7fedbc970ac5cd19401cac66b09c9ad4.scope - libcontainer container 30702bc9798d685f288286d4c8c6f4aa7fedbc970ac5cd19401cac66b09c9ad4. Dec 12 18:42:58.362330 containerd[1559]: time="2025-12-12T18:42:58.362230608Z" level=info msg="StartContainer for \"a1a6a4b919dc3ad95d83f05c0226dcbeb60297cec78cc70bd2e7d2fdbf8806c1\" returns successfully" Dec 12 18:42:58.367380 containerd[1559]: time="2025-12-12T18:42:58.367350203Z" level=info msg="StartContainer for \"30702bc9798d685f288286d4c8c6f4aa7fedbc970ac5cd19401cac66b09c9ad4\" returns successfully" Dec 12 18:42:58.406524 containerd[1559]: time="2025-12-12T18:42:58.406448974Z" level=info msg="StartContainer for \"f35b10033beabe0c99a8d0819f54dd75bcd55882c3030cdd4a51f21ebac8593c\" returns successfully" Dec 12 18:42:58.567236 kubelet[2350]: E1212 18:42:58.567152 2350 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-194-183\" not found" node="172-239-194-183" Dec 12 18:42:58.567321 kubelet[2350]: E1212 18:42:58.567262 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:42:58.569155 kubelet[2350]: E1212 18:42:58.569055 2350 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-194-183\" not found" node="172-239-194-183" Dec 12 18:42:58.569777 kubelet[2350]: E1212 18:42:58.569756 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:42:58.574256 kubelet[2350]: E1212 18:42:58.574232 2350 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-194-183\" not found" node="172-239-194-183" Dec 12 18:42:58.574360 kubelet[2350]: E1212 18:42:58.574342 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:42:59.101066 kubelet[2350]: I1212 18:42:59.100949 2350 kubelet_node_status.go:75] "Attempting to register node" node="172-239-194-183" Dec 12 18:42:59.575840 kubelet[2350]: E1212 18:42:59.575602 2350 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"172-239-194-183\" not found" node="172-239-194-183" Dec 12 18:42:59.576533 kubelet[2350]: E1212 18:42:59.575805 2350 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-194-183\" not found" node="172-239-194-183" Dec 12 18:42:59.576533 kubelet[2350]: E1212 18:42:59.576485 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:42:59.576681 kubelet[2350]: E1212 18:42:59.576646 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:42:59.742772 kubelet[2350]: E1212 18:42:59.742736 2350 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-239-194-183\" not found" node="172-239-194-183" Dec 12 18:42:59.921503 kubelet[2350]: I1212 18:42:59.920833 2350 kubelet_node_status.go:78] "Successfully registered node" node="172-239-194-183" Dec 12 18:42:59.921503 kubelet[2350]: E1212 18:42:59.921417 2350 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-239-194-183\": node \"172-239-194-183\" not found" Dec 12 18:43:00.021654 kubelet[2350]: I1212 18:43:00.021585 2350 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-194-183" Dec 12 18:43:00.030028 kubelet[2350]: E1212 18:43:00.029994 2350 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-194-183\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-239-194-183" Dec 12 18:43:00.030028 kubelet[2350]: I1212 18:43:00.030017 2350 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-194-183" Dec 12 18:43:00.031845 kubelet[2350]: E1212 18:43:00.031824 2350 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-239-194-183\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-239-194-183" Dec 12 18:43:00.031845 kubelet[2350]: I1212 18:43:00.031841 2350 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-194-183" Dec 12 18:43:00.033501 kubelet[2350]: E1212 18:43:00.033479 2350 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-194-183\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-239-194-183" Dec 12 18:43:00.508424 kubelet[2350]: I1212 18:43:00.508365 2350 apiserver.go:52] "Watching apiserver" Dec 12 18:43:00.520935 kubelet[2350]: I1212 18:43:00.520887 2350 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:43:00.575842 kubelet[2350]: I1212 18:43:00.575813 2350 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-194-183" Dec 12 18:43:00.578166 kubelet[2350]: E1212 18:43:00.578100 2350 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-194-183\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-239-194-183" Dec 12 18:43:00.578809 kubelet[2350]: E1212 18:43:00.578470 2350 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:01.993716 systemd[1]: Reload requested from client PID 2629 ('systemctl') (unit session-7.scope)... Dec 12 18:43:01.993739 systemd[1]: Reloading... Dec 12 18:43:02.131574 zram_generator::config[2671]: No configuration found. Dec 12 18:43:02.449774 systemd[1]: Reloading finished in 455 ms. Dec 12 18:43:02.481260 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:43:02.502011 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 18:43:02.503548 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:43:02.503702 systemd[1]: kubelet.service: Consumed 867ms CPU time, 131.5M memory peak. Dec 12 18:43:02.511252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:43:02.757651 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:43:02.773734 (kubelet)[2724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:43:02.825299 kubelet[2724]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:43:02.825299 kubelet[2724]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:43:02.825299 kubelet[2724]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:43:02.825890 kubelet[2724]: I1212 18:43:02.825361 2724 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:43:02.832139 kubelet[2724]: I1212 18:43:02.831493 2724 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 18:43:02.832139 kubelet[2724]: I1212 18:43:02.831513 2724 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:43:02.832139 kubelet[2724]: I1212 18:43:02.831699 2724 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 18:43:02.834396 kubelet[2724]: I1212 18:43:02.834346 2724 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 18:43:02.837301 kubelet[2724]: I1212 18:43:02.837268 2724 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:43:02.843155 kubelet[2724]: I1212 18:43:02.842445 2724 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:43:02.848906 kubelet[2724]: I1212 18:43:02.848866 2724 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 18:43:02.851216 kubelet[2724]: I1212 18:43:02.851152 2724 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:43:02.851578 kubelet[2724]: I1212 18:43:02.851212 2724 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-194-183","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:43:02.851578 kubelet[2724]: I1212 18:43:02.851518 2724 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 18:43:02.851578 kubelet[2724]: I1212 18:43:02.851531 2724 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 18:43:02.851779 kubelet[2724]: I1212 18:43:02.851593 2724 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:43:02.853238 kubelet[2724]: I1212 18:43:02.851833 2724 kubelet.go:480] "Attempting to sync node with API server" Dec 12 18:43:02.853238 kubelet[2724]: I1212 18:43:02.851847 2724 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:43:02.853238 kubelet[2724]: I1212 18:43:02.851887 2724 kubelet.go:386] "Adding apiserver pod source" Dec 12 18:43:02.853238 kubelet[2724]: I1212 18:43:02.851902 2724 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:43:02.856464 kubelet[2724]: I1212 18:43:02.856429 2724 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 18:43:02.856895 kubelet[2724]: I1212 18:43:02.856868 2724 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 18:43:02.864186 kubelet[2724]: I1212 18:43:02.864157 2724 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:43:02.864340 kubelet[2724]: I1212 18:43:02.864204 2724 server.go:1289] "Started kubelet" Dec 12 18:43:02.868136 kubelet[2724]: I1212 18:43:02.867458 2724 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 
18:43:02.868136 kubelet[2724]: I1212 18:43:02.867817 2724 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:43:02.868136 kubelet[2724]: I1212 18:43:02.867871 2724 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:43:02.870583 kubelet[2724]: I1212 18:43:02.870554 2724 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:43:02.872594 kubelet[2724]: I1212 18:43:02.872569 2724 server.go:317] "Adding debug handlers to kubelet server" Dec 12 18:43:02.883181 kubelet[2724]: I1212 18:43:02.882392 2724 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:43:02.887909 kubelet[2724]: I1212 18:43:02.887867 2724 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:43:02.889363 kubelet[2724]: E1212 18:43:02.889331 2724 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-239-194-183\" not found" Dec 12 18:43:02.893300 kubelet[2724]: I1212 18:43:02.893266 2724 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:43:02.894161 kubelet[2724]: I1212 18:43:02.893425 2724 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:43:02.897984 kubelet[2724]: I1212 18:43:02.897937 2724 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 18:43:02.901183 kubelet[2724]: I1212 18:43:02.901149 2724 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 18:43:02.901183 kubelet[2724]: I1212 18:43:02.901186 2724 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 18:43:02.901351 kubelet[2724]: I1212 18:43:02.901210 2724 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 18:43:02.901351 kubelet[2724]: I1212 18:43:02.901219 2724 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 18:43:02.901351 kubelet[2724]: E1212 18:43:02.901271 2724 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:43:02.911150 kubelet[2724]: I1212 18:43:02.910761 2724 factory.go:223] Registration of the systemd container factory successfully Dec 12 18:43:02.911150 kubelet[2724]: I1212 18:43:02.910864 2724 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:43:02.918283 kubelet[2724]: I1212 18:43:02.918247 2724 factory.go:223] Registration of the containerd container factory successfully Dec 12 18:43:02.920144 kubelet[2724]: E1212 18:43:02.920075 2724 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:43:02.980896 kubelet[2724]: I1212 18:43:02.979733 2724 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:43:02.980896 kubelet[2724]: I1212 18:43:02.979750 2724 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:43:02.980896 kubelet[2724]: I1212 18:43:02.979766 2724 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:43:02.980896 kubelet[2724]: I1212 18:43:02.979889 2724 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 18:43:02.980896 kubelet[2724]: I1212 18:43:02.979900 2724 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 18:43:02.980896 kubelet[2724]: I1212 18:43:02.979914 2724 policy_none.go:49] "None policy: Start" Dec 12 18:43:02.980896 kubelet[2724]: I1212 18:43:02.979923 2724 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:43:02.980896 kubelet[2724]: I1212 18:43:02.979934 2724 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:43:02.980896 kubelet[2724]: I1212 18:43:02.980009 2724 state_mem.go:75] "Updated machine memory state" Dec 12 18:43:02.985273 kubelet[2724]: E1212 18:43:02.985255 2724 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 18:43:02.987936 kubelet[2724]: I1212 18:43:02.987921 2724 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:43:02.988197 kubelet[2724]: I1212 18:43:02.988165 2724 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:43:02.990547 kubelet[2724]: I1212 18:43:02.990521 2724 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:43:02.996280 kubelet[2724]: E1212 18:43:02.996247 2724 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 18:43:03.004030 kubelet[2724]: I1212 18:43:03.003550 2724 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-194-183" Dec 12 18:43:03.004627 kubelet[2724]: I1212 18:43:03.004597 2724 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-194-183" Dec 12 18:43:03.004993 kubelet[2724]: I1212 18:43:03.004908 2724 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-194-183" Dec 12 18:43:03.091641 kubelet[2724]: I1212 18:43:03.091525 2724 kubelet_node_status.go:75] "Attempting to register node" node="172-239-194-183" Dec 12 18:43:03.096980 kubelet[2724]: I1212 18:43:03.096901 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d8217be1a537db018041eb8bf43c044-k8s-certs\") pod \"kube-apiserver-172-239-194-183\" (UID: \"8d8217be1a537db018041eb8bf43c044\") " pod="kube-system/kube-apiserver-172-239-194-183" Dec 12 18:43:03.097711 kubelet[2724]: I1212 18:43:03.097429 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d8217be1a537db018041eb8bf43c044-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-194-183\" (UID: \"8d8217be1a537db018041eb8bf43c044\") " pod="kube-system/kube-apiserver-172-239-194-183" Dec 12 18:43:03.097711 kubelet[2724]: I1212 18:43:03.097594 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654f41f88186e5774cc8a30f3c0f223-flexvolume-dir\") pod \"kube-controller-manager-172-239-194-183\" (UID: \"4654f41f88186e5774cc8a30f3c0f223\") " pod="kube-system/kube-controller-manager-172-239-194-183" Dec 12 18:43:03.098452 kubelet[2724]: I1212 18:43:03.098385 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d78da07913a5a31a96819377f6c44b4e-kubeconfig\") pod \"kube-scheduler-172-239-194-183\" (UID: \"d78da07913a5a31a96819377f6c44b4e\") " pod="kube-system/kube-scheduler-172-239-194-183" Dec 12 18:43:03.098452 kubelet[2724]: I1212 18:43:03.098409 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d8217be1a537db018041eb8bf43c044-ca-certs\") pod \"kube-apiserver-172-239-194-183\" (UID: \"8d8217be1a537db018041eb8bf43c044\") " pod="kube-system/kube-apiserver-172-239-194-183" Dec 12 18:43:03.098452 kubelet[2724]: I1212 18:43:03.098423 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654f41f88186e5774cc8a30f3c0f223-ca-certs\") pod \"kube-controller-manager-172-239-194-183\" (UID: \"4654f41f88186e5774cc8a30f3c0f223\") " pod="kube-system/kube-controller-manager-172-239-194-183" Dec 12 18:43:03.098746 kubelet[2724]: I1212 18:43:03.098436 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654f41f88186e5774cc8a30f3c0f223-k8s-certs\") pod \"kube-controller-manager-172-239-194-183\" (UID: \"4654f41f88186e5774cc8a30f3c0f223\") " pod="kube-system/kube-controller-manager-172-239-194-183" 
Dec 12 18:43:03.098746 kubelet[2724]: I1212 18:43:03.098630 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654f41f88186e5774cc8a30f3c0f223-kubeconfig\") pod \"kube-controller-manager-172-239-194-183\" (UID: \"4654f41f88186e5774cc8a30f3c0f223\") " pod="kube-system/kube-controller-manager-172-239-194-183" Dec 12 18:43:03.099051 kubelet[2724]: I1212 18:43:03.098902 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654f41f88186e5774cc8a30f3c0f223-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-194-183\" (UID: \"4654f41f88186e5774cc8a30f3c0f223\") " pod="kube-system/kube-controller-manager-172-239-194-183" Dec 12 18:43:03.106831 kubelet[2724]: I1212 18:43:03.106732 2724 kubelet_node_status.go:124] "Node was previously registered" node="172-239-194-183" Dec 12 18:43:03.107513 kubelet[2724]: I1212 18:43:03.107033 2724 kubelet_node_status.go:78] "Successfully registered node" node="172-239-194-183" Dec 12 18:43:03.315724 kubelet[2724]: E1212 18:43:03.315661 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:03.317012 kubelet[2724]: E1212 18:43:03.316995 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:03.320402 kubelet[2724]: E1212 18:43:03.320340 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:03.867318 kubelet[2724]: I1212 18:43:03.857929 2724 apiserver.go:52] "Watching apiserver" Dec 12 18:43:03.893910 kubelet[2724]: I1212 18:43:03.893406 2724 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:43:03.953799 kubelet[2724]: E1212 18:43:03.953728 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:03.954073 kubelet[2724]: I1212 18:43:03.953950 2724 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-194-183" Dec 12 18:43:03.954525 kubelet[2724]: E1212 18:43:03.954491 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:03.964212 kubelet[2724]: E1212 18:43:03.963639 2724 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-194-183\" already exists" pod="kube-system/kube-scheduler-172-239-194-183" Dec 12 18:43:03.964212 kubelet[2724]: E1212 18:43:03.963874 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:03.991859 kubelet[2724]: I1212 18:43:03.991771 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-239-194-183" podStartSLOduration=0.991754698 
podStartE2EDuration="991.754698ms" podCreationTimestamp="2025-12-12 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:43:03.991448579 +0000 UTC m=+1.207775404" watchObservedRunningTime="2025-12-12 18:43:03.991754698 +0000 UTC m=+1.208081523" Dec 12 18:43:04.004002 kubelet[2724]: I1212 18:43:04.003460 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-239-194-183" podStartSLOduration=1.003443417 podStartE2EDuration="1.003443417s" podCreationTimestamp="2025-12-12 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:43:04.001348139 +0000 UTC m=+1.217674954" watchObservedRunningTime="2025-12-12 18:43:04.003443417 +0000 UTC m=+1.219770212" Dec 12 18:43:04.010379 kubelet[2724]: I1212 18:43:04.010260 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-239-194-183" podStartSLOduration=1.01024499 podStartE2EDuration="1.01024499s" podCreationTimestamp="2025-12-12 18:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:43:04.01017896 +0000 UTC m=+1.226505755" watchObservedRunningTime="2025-12-12 18:43:04.01024499 +0000 UTC m=+1.226571805" Dec 12 18:43:04.636802 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 12 18:43:04.955154 kubelet[2724]: E1212 18:43:04.955018 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:04.955625 kubelet[2724]: E1212 18:43:04.955456 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:05.959675 kubelet[2724]: E1212 18:43:05.959644 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:07.271415 kubelet[2724]: I1212 18:43:07.271370 2724 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 18:43:07.271873 containerd[1559]: time="2025-12-12T18:43:07.271821065Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 18:43:07.272146 kubelet[2724]: I1212 18:43:07.272073 2724 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 18:43:08.230410 systemd[1]: Created slice kubepods-besteffort-podefc80d34_e70d_42e5_9136_b14f217c51bf.slice - libcontainer container kubepods-besteffort-podefc80d34_e70d_42e5_9136_b14f217c51bf.slice. 
Dec 12 18:43:08.328924 kubelet[2724]: I1212 18:43:08.328854 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/efc80d34-e70d-42e5-9136-b14f217c51bf-kube-proxy\") pod \"kube-proxy-vf4kf\" (UID: \"efc80d34-e70d-42e5-9136-b14f217c51bf\") " pod="kube-system/kube-proxy-vf4kf" Dec 12 18:43:08.328924 kubelet[2724]: I1212 18:43:08.328913 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efc80d34-e70d-42e5-9136-b14f217c51bf-xtables-lock\") pod \"kube-proxy-vf4kf\" (UID: \"efc80d34-e70d-42e5-9136-b14f217c51bf\") " pod="kube-system/kube-proxy-vf4kf" Dec 12 18:43:08.328924 kubelet[2724]: I1212 18:43:08.328940 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdxgm\" (UniqueName: \"kubernetes.io/projected/efc80d34-e70d-42e5-9136-b14f217c51bf-kube-api-access-gdxgm\") pod \"kube-proxy-vf4kf\" (UID: \"efc80d34-e70d-42e5-9136-b14f217c51bf\") " pod="kube-system/kube-proxy-vf4kf" Dec 12 18:43:08.328924 kubelet[2724]: I1212 18:43:08.328966 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efc80d34-e70d-42e5-9136-b14f217c51bf-lib-modules\") pod \"kube-proxy-vf4kf\" (UID: \"efc80d34-e70d-42e5-9136-b14f217c51bf\") " pod="kube-system/kube-proxy-vf4kf" Dec 12 18:43:08.517757 systemd[1]: Created slice kubepods-besteffort-podfb3fef33_ecbd_4f53_94fe_231f0a5f8685.slice - libcontainer container kubepods-besteffort-podfb3fef33_ecbd_4f53_94fe_231f0a5f8685.slice. Dec 12 18:43:08.529902 kubelet[2724]: I1212 18:43:08.529862 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fb3fef33-ecbd-4f53-94fe-231f0a5f8685-var-lib-calico\") pod \"tigera-operator-7dcd859c48-gtbp4\" (UID: \"fb3fef33-ecbd-4f53-94fe-231f0a5f8685\") " pod="tigera-operator/tigera-operator-7dcd859c48-gtbp4" Dec 12 18:43:08.529902 kubelet[2724]: I1212 18:43:08.529904 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjt2f\" (UniqueName: \"kubernetes.io/projected/fb3fef33-ecbd-4f53-94fe-231f0a5f8685-kube-api-access-sjt2f\") pod \"tigera-operator-7dcd859c48-gtbp4\" (UID: \"fb3fef33-ecbd-4f53-94fe-231f0a5f8685\") " pod="tigera-operator/tigera-operator-7dcd859c48-gtbp4" Dec 12 18:43:08.540093 kubelet[2724]: E1212 18:43:08.540059 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:08.540776 containerd[1559]: time="2025-12-12T18:43:08.540747246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vf4kf,Uid:efc80d34-e70d-42e5-9136-b14f217c51bf,Namespace:kube-system,Attempt:0,}" Dec 12 18:43:08.563814 containerd[1559]: time="2025-12-12T18:43:08.563767946Z" level=info msg="connecting to shim 6ebc5c32ee0c78d7f182aa90ecc7fa32e77051e39e88078efbb2c85b45062160" address="unix:///run/containerd/s/e1e6070af6c16fc6da38569280f3d11cfc713428caebf5e0aa335f1141a4543f" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:43:08.590248 systemd[1]: Started cri-containerd-6ebc5c32ee0c78d7f182aa90ecc7fa32e77051e39e88078efbb2c85b45062160.scope - libcontainer container 
6ebc5c32ee0c78d7f182aa90ecc7fa32e77051e39e88078efbb2c85b45062160. Dec 12 18:43:08.620271 containerd[1559]: time="2025-12-12T18:43:08.620230898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vf4kf,Uid:efc80d34-e70d-42e5-9136-b14f217c51bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ebc5c32ee0c78d7f182aa90ecc7fa32e77051e39e88078efbb2c85b45062160\"" Dec 12 18:43:08.621659 kubelet[2724]: E1212 18:43:08.621155 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:08.625665 containerd[1559]: time="2025-12-12T18:43:08.625639892Z" level=info msg="CreateContainer within sandbox \"6ebc5c32ee0c78d7f182aa90ecc7fa32e77051e39e88078efbb2c85b45062160\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 18:43:08.638610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3619466225.mount: Deactivated successfully. Dec 12 18:43:08.638876 containerd[1559]: time="2025-12-12T18:43:08.638713292Z" level=info msg="Container e263b6080b9e3d8396f9df307152cca5c5288a48c75e1d3d938854cd6c4b2b44: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:43:08.649135 containerd[1559]: time="2025-12-12T18:43:08.649066107Z" level=info msg="CreateContainer within sandbox \"6ebc5c32ee0c78d7f182aa90ecc7fa32e77051e39e88078efbb2c85b45062160\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e263b6080b9e3d8396f9df307152cca5c5288a48c75e1d3d938854cd6c4b2b44\"" Dec 12 18:43:08.650028 containerd[1559]: time="2025-12-12T18:43:08.649967237Z" level=info msg="StartContainer for \"e263b6080b9e3d8396f9df307152cca5c5288a48c75e1d3d938854cd6c4b2b44\"" Dec 12 18:43:08.652124 containerd[1559]: time="2025-12-12T18:43:08.652057500Z" level=info msg="connecting to shim e263b6080b9e3d8396f9df307152cca5c5288a48c75e1d3d938854cd6c4b2b44" address="unix:///run/containerd/s/e1e6070af6c16fc6da38569280f3d11cfc713428caebf5e0aa335f1141a4543f" protocol=ttrpc version=3 Dec 12 18:43:08.674300 systemd[1]: Started cri-containerd-e263b6080b9e3d8396f9df307152cca5c5288a48c75e1d3d938854cd6c4b2b44.scope - libcontainer container e263b6080b9e3d8396f9df307152cca5c5288a48c75e1d3d938854cd6c4b2b44. Dec 12 18:43:08.743605 containerd[1559]: time="2025-12-12T18:43:08.743509820Z" level=info msg="StartContainer for \"e263b6080b9e3d8396f9df307152cca5c5288a48c75e1d3d938854cd6c4b2b44\" returns successfully" Dec 12 18:43:08.821212 kubelet[2724]: E1212 18:43:08.820970 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:08.824711 containerd[1559]: time="2025-12-12T18:43:08.824157381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-gtbp4,Uid:fb3fef33-ecbd-4f53-94fe-231f0a5f8685,Namespace:tigera-operator,Attempt:0,}" Dec 12 18:43:08.850865 containerd[1559]: time="2025-12-12T18:43:08.850313851Z" level=info msg="connecting to shim 06e32550a1f6bda60c5fd4109030657238c2a30edd980097136c897a6cc57557" address="unix:///run/containerd/s/52e3df67f9b03370c9f63b73bad39a422f731576dccf05af2bc304c3041b0eb3" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:43:08.885542 systemd[1]: Started cri-containerd-06e32550a1f6bda60c5fd4109030657238c2a30edd980097136c897a6cc57557.scope - libcontainer container 06e32550a1f6bda60c5fd4109030657238c2a30edd980097136c897a6cc57557. 
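The RunPodSandbox / CreateContainer / StartContainer lines above are containerd answering the kubelet's CRI calls; the "connecting to shim ... protocol=ttrpc" entries are the per-sandbox shim endpoints it uses internally. A minimal sketch of a CRI client talking to the same runtime service, assuming the default containerd CRI socket path (listing sandboxes rather than creating one, so it is safe to run; error handling is minimal):

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Assumed socket path for containerd's CRI endpoint on this host.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	// Each sandbox carries the same PodSandboxMetadata{Name, Uid,
    	// Namespace, Attempt} fields printed in the RunPodSandbox lines above.
    	resp, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, sb := range resp.Items {
    		fmt.Printf("sandbox %s: %s/%s attempt=%d\n",
    			sb.Id, sb.Metadata.Namespace, sb.Metadata.Name, sb.Metadata.Attempt)
    	}
    }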
Dec 12 18:43:08.964065 containerd[1559]: time="2025-12-12T18:43:08.963994821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-gtbp4,Uid:fb3fef33-ecbd-4f53-94fe-231f0a5f8685,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"06e32550a1f6bda60c5fd4109030657238c2a30edd980097136c897a6cc57557\"" Dec 12 18:43:08.966572 containerd[1559]: time="2025-12-12T18:43:08.966546256Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 18:43:08.970155 kubelet[2724]: E1212 18:43:08.970068 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:08.970443 kubelet[2724]: E1212 18:43:08.970384 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:08.986960 kubelet[2724]: I1212 18:43:08.986901 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vf4kf" podStartSLOduration=0.986890206 podStartE2EDuration="986.890206ms" podCreationTimestamp="2025-12-12 18:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:43:08.979160814 +0000 UTC m=+6.195487619" watchObservedRunningTime="2025-12-12 18:43:08.986890206 +0000 UTC m=+6.203217011" Dec 12 18:43:09.397601 kubelet[2724]: E1212 18:43:09.397454 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:09.970976 kubelet[2724]: E1212 18:43:09.970910 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:10.023580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2958454934.mount: Deactivated successfully. 
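The pod_startup_latency_tracker entries report two durations: podStartE2EDuration is the watch-observed running time minus podCreationTimestamp, and podStartSLOduration additionally excludes image pull time (for kube-proxy the pull timestamps are the zero value because the image was already present, so the two agree). A worked check of the kube-proxy-vf4kf numbers above, as a minimal sketch:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    	// Timestamps copied from the kube-proxy-vf4kf startup entry above.
    	created, _ := time.Parse(layout, "2025-12-12 18:43:08 +0000 UTC")
    	running, _ := time.Parse(layout, "2025-12-12 18:43:08.986890206 +0000 UTC")

    	// End-to-end startup: pod creation to the pod being observed running.
    	e2e := running.Sub(created)

    	// The SLO duration would further subtract (lastFinishedPulling -
    	// firstStartedPulling); both are zero here, so SLO == E2E.
    	fmt.Println(e2e) // 986.890206ms, matching podStartE2EDuration above
    }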
Dec 12 18:43:10.796995 containerd[1559]: time="2025-12-12T18:43:10.796924714Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:43:10.797933 containerd[1559]: time="2025-12-12T18:43:10.797815231Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Dec 12 18:43:10.798464 containerd[1559]: time="2025-12-12T18:43:10.798439122Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:43:10.799958 containerd[1559]: time="2025-12-12T18:43:10.799932669Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:43:10.800641 containerd[1559]: time="2025-12-12T18:43:10.800616756Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.834042336s" Dec 12 18:43:10.800686 containerd[1559]: time="2025-12-12T18:43:10.800643928Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 12 18:43:10.803726 containerd[1559]: time="2025-12-12T18:43:10.803688847Z" level=info msg="CreateContainer within sandbox \"06e32550a1f6bda60c5fd4109030657238c2a30edd980097136c897a6cc57557\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 18:43:10.812141 containerd[1559]: time="2025-12-12T18:43:10.811483540Z" level=info msg="Container c1adddf7c9ef0c2d3a042755fc5ab4338eb01e97c133208b3cc60fbecda2169b: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:43:10.814913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3690847509.mount: Deactivated successfully. Dec 12 18:43:10.828722 containerd[1559]: time="2025-12-12T18:43:10.828687467Z" level=info msg="CreateContainer within sandbox \"06e32550a1f6bda60c5fd4109030657238c2a30edd980097136c897a6cc57557\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c1adddf7c9ef0c2d3a042755fc5ab4338eb01e97c133208b3cc60fbecda2169b\"" Dec 12 18:43:10.829204 containerd[1559]: time="2025-12-12T18:43:10.829167923Z" level=info msg="StartContainer for \"c1adddf7c9ef0c2d3a042755fc5ab4338eb01e97c133208b3cc60fbecda2169b\"" Dec 12 18:43:10.829789 containerd[1559]: time="2025-12-12T18:43:10.829769993Z" level=info msg="connecting to shim c1adddf7c9ef0c2d3a042755fc5ab4338eb01e97c133208b3cc60fbecda2169b" address="unix:///run/containerd/s/52e3df67f9b03370c9f63b73bad39a422f731576dccf05af2bc304c3041b0eb3" protocol=ttrpc version=3 Dec 12 18:43:10.853270 systemd[1]: Started cri-containerd-c1adddf7c9ef0c2d3a042755fc5ab4338eb01e97c133208b3cc60fbecda2169b.scope - libcontainer container c1adddf7c9ef0c2d3a042755fc5ab4338eb01e97c133208b3cc60fbecda2169b. 
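The tigera-operator pull above completes "in 1.834042336s", which is the wall-clock time between the PullImage request and the final ImageCreate event for that reference. A minimal sketch of issuing and timing the same pull through the CRI image service, again assuming the default containerd socket path; running it would actually pull the image:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Assumed containerd CRI socket; the same endpoint the kubelet uses.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	img := runtimeapi.NewImageServiceClient(conn)

    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()

    	start := time.Now()
    	// containerd emits PullImage / ImageCreate events like the ones above
    	// while this call is in flight.
    	resp, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
    		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.38.7"},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("pulled %s in %s\n", resp.ImageRef, time.Since(start))
    }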
Dec 12 18:43:10.882038 containerd[1559]: time="2025-12-12T18:43:10.881967818Z" level=info msg="StartContainer for \"c1adddf7c9ef0c2d3a042755fc5ab4338eb01e97c133208b3cc60fbecda2169b\" returns successfully" Dec 12 18:43:10.975912 kubelet[2724]: E1212 18:43:10.975884 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:10.983078 kubelet[2724]: I1212 18:43:10.983039 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-gtbp4" podStartSLOduration=1.147750744 podStartE2EDuration="2.983028291s" podCreationTimestamp="2025-12-12 18:43:08 +0000 UTC" firstStartedPulling="2025-12-12 18:43:08.965906944 +0000 UTC m=+6.182233739" lastFinishedPulling="2025-12-12 18:43:10.801184491 +0000 UTC m=+8.017511286" observedRunningTime="2025-12-12 18:43:10.982831601 +0000 UTC m=+8.199158396" watchObservedRunningTime="2025-12-12 18:43:10.983028291 +0000 UTC m=+8.199355086" Dec 12 18:43:15.920189 kubelet[2724]: E1212 18:43:15.920096 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:16.557626 sudo[1796]: pam_unix(sudo:session): session closed for user root Dec 12 18:43:16.610609 sshd[1795]: Connection closed by 139.178.68.195 port 59560 Dec 12 18:43:16.610913 sshd-session[1792]: pam_unix(sshd:session): session closed for user core Dec 12 18:43:16.616166 systemd[1]: sshd@6-172.239.194.183:22-139.178.68.195:59560.service: Deactivated successfully. Dec 12 18:43:16.619380 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 18:43:16.620943 systemd[1]: session-7.scope: Consumed 4.563s CPU time, 232.8M memory peak. Dec 12 18:43:16.624445 systemd-logind[1526]: Session 7 logged out. Waiting for processes to exit. Dec 12 18:43:16.626554 systemd-logind[1526]: Removed session 7. Dec 12 18:43:19.178986 update_engine[1528]: I20251212 18:43:19.178151 1528 update_attempter.cc:509] Updating boot flags... Dec 12 18:43:21.038589 systemd[1]: Created slice kubepods-besteffort-podc8032bf1_070a_4394_8300_c070f5d20ed4.slice - libcontainer container kubepods-besteffort-podc8032bf1_070a_4394_8300_c070f5d20ed4.slice. 
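The "Created slice kubepods-besteffort-pod….slice" lines show the naming the kubelet uses for per-pod cgroups under the systemd driver: the QoS class plus the pod UID with dashes replaced by underscores, since "-" is a path separator in systemd slice names. A small sketch of that mapping, checked against the calico-typha pod UID below; this illustrates the convention, not the kubelet's actual code:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // podSliceName builds the systemd slice name for a pod in the given QoS
    // class; dashes in the UID become underscores in the unit name.
    func podSliceName(qos, uid string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
    	// UID taken from the calico-typha-564b6c54c6-2jp52 entries below.
    	fmt.Println(podSliceName("besteffort", "c8032bf1-070a-4394-8300-c070f5d20ed4"))
    	// kubepods-besteffort-podc8032bf1_070a_4394_8300_c070f5d20ed4.slice
    }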
Dec 12 18:43:21.119129 kubelet[2724]: I1212 18:43:21.119055 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8032bf1-070a-4394-8300-c070f5d20ed4-tigera-ca-bundle\") pod \"calico-typha-564b6c54c6-2jp52\" (UID: \"c8032bf1-070a-4394-8300-c070f5d20ed4\") " pod="calico-system/calico-typha-564b6c54c6-2jp52" Dec 12 18:43:21.119960 kubelet[2724]: I1212 18:43:21.119757 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p57bt\" (UniqueName: \"kubernetes.io/projected/c8032bf1-070a-4394-8300-c070f5d20ed4-kube-api-access-p57bt\") pod \"calico-typha-564b6c54c6-2jp52\" (UID: \"c8032bf1-070a-4394-8300-c070f5d20ed4\") " pod="calico-system/calico-typha-564b6c54c6-2jp52" Dec 12 18:43:21.119960 kubelet[2724]: I1212 18:43:21.119858 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c8032bf1-070a-4394-8300-c070f5d20ed4-typha-certs\") pod \"calico-typha-564b6c54c6-2jp52\" (UID: \"c8032bf1-070a-4394-8300-c070f5d20ed4\") " pod="calico-system/calico-typha-564b6c54c6-2jp52" Dec 12 18:43:21.263174 systemd[1]: Created slice kubepods-besteffort-pod309bb533_8f96_4dc0_89a1_12323e44fd45.slice - libcontainer container kubepods-besteffort-pod309bb533_8f96_4dc0_89a1_12323e44fd45.slice. Dec 12 18:43:21.323852 kubelet[2724]: I1212 18:43:21.323333 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/309bb533-8f96-4dc0-89a1-12323e44fd45-flexvol-driver-host\") pod \"calico-node-5mwl6\" (UID: \"309bb533-8f96-4dc0-89a1-12323e44fd45\") " pod="calico-system/calico-node-5mwl6" Dec 12 18:43:21.323852 kubelet[2724]: I1212 18:43:21.323401 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/309bb533-8f96-4dc0-89a1-12323e44fd45-policysync\") pod \"calico-node-5mwl6\" (UID: \"309bb533-8f96-4dc0-89a1-12323e44fd45\") " pod="calico-system/calico-node-5mwl6" Dec 12 18:43:21.323852 kubelet[2724]: I1212 18:43:21.323423 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/309bb533-8f96-4dc0-89a1-12323e44fd45-cni-bin-dir\") pod \"calico-node-5mwl6\" (UID: \"309bb533-8f96-4dc0-89a1-12323e44fd45\") " pod="calico-system/calico-node-5mwl6" Dec 12 18:43:21.323852 kubelet[2724]: I1212 18:43:21.323441 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/309bb533-8f96-4dc0-89a1-12323e44fd45-tigera-ca-bundle\") pod \"calico-node-5mwl6\" (UID: \"309bb533-8f96-4dc0-89a1-12323e44fd45\") " pod="calico-system/calico-node-5mwl6" Dec 12 18:43:21.323852 kubelet[2724]: I1212 18:43:21.323462 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/309bb533-8f96-4dc0-89a1-12323e44fd45-node-certs\") pod \"calico-node-5mwl6\" (UID: \"309bb533-8f96-4dc0-89a1-12323e44fd45\") " pod="calico-system/calico-node-5mwl6" Dec 12 18:43:21.324184 kubelet[2724]: I1212 18:43:21.323480 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/309bb533-8f96-4dc0-89a1-12323e44fd45-xtables-lock\") pod \"calico-node-5mwl6\" (UID: \"309bb533-8f96-4dc0-89a1-12323e44fd45\") " pod="calico-system/calico-node-5mwl6" Dec 12 18:43:21.324184 kubelet[2724]: I1212 18:43:21.323497 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnfrn\" (UniqueName: \"kubernetes.io/projected/309bb533-8f96-4dc0-89a1-12323e44fd45-kube-api-access-cnfrn\") pod \"calico-node-5mwl6\" (UID: \"309bb533-8f96-4dc0-89a1-12323e44fd45\") " pod="calico-system/calico-node-5mwl6" Dec 12 18:43:21.324184 kubelet[2724]: I1212 18:43:21.323517 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/309bb533-8f96-4dc0-89a1-12323e44fd45-cni-log-dir\") pod \"calico-node-5mwl6\" (UID: \"309bb533-8f96-4dc0-89a1-12323e44fd45\") " pod="calico-system/calico-node-5mwl6" Dec 12 18:43:21.324184 kubelet[2724]: I1212 18:43:21.323551 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/309bb533-8f96-4dc0-89a1-12323e44fd45-var-run-calico\") pod \"calico-node-5mwl6\" (UID: \"309bb533-8f96-4dc0-89a1-12323e44fd45\") " pod="calico-system/calico-node-5mwl6" Dec 12 18:43:21.324184 kubelet[2724]: I1212 18:43:21.323568 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/309bb533-8f96-4dc0-89a1-12323e44fd45-var-lib-calico\") pod \"calico-node-5mwl6\" (UID: \"309bb533-8f96-4dc0-89a1-12323e44fd45\") " pod="calico-system/calico-node-5mwl6" Dec 12 18:43:21.324325 kubelet[2724]: I1212 18:43:21.323588 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/309bb533-8f96-4dc0-89a1-12323e44fd45-cni-net-dir\") pod \"calico-node-5mwl6\" (UID: \"309bb533-8f96-4dc0-89a1-12323e44fd45\") " pod="calico-system/calico-node-5mwl6" Dec 12 18:43:21.324325 kubelet[2724]: I1212 18:43:21.323605 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/309bb533-8f96-4dc0-89a1-12323e44fd45-lib-modules\") pod \"calico-node-5mwl6\" (UID: \"309bb533-8f96-4dc0-89a1-12323e44fd45\") " pod="calico-system/calico-node-5mwl6" Dec 12 18:43:21.350969 kubelet[2724]: E1212 18:43:21.350922 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:21.353776 containerd[1559]: time="2025-12-12T18:43:21.353636785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-564b6c54c6-2jp52,Uid:c8032bf1-070a-4394-8300-c070f5d20ed4,Namespace:calico-system,Attempt:0,}" Dec 12 18:43:21.375144 containerd[1559]: time="2025-12-12T18:43:21.375002373Z" level=info msg="connecting to shim 2984150a400df9265116575d2e2be1f2c6eed5c4bc8f941d8ac1d22cfe17ffb5" address="unix:///run/containerd/s/9fc431ce040cc6c010c2654982a2e5836d93cc331c33df277077ca6013e3c110" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:43:21.407590 systemd[1]: Started cri-containerd-2984150a400df9265116575d2e2be1f2c6eed5c4bc8f941d8ac1d22cfe17ffb5.scope - libcontainer container 
2984150a400df9265116575d2e2be1f2c6eed5c4bc8f941d8ac1d22cfe17ffb5. Dec 12 18:43:21.427895 kubelet[2724]: E1212 18:43:21.427825 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.427895 kubelet[2724]: W1212 18:43:21.427877 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.427895 kubelet[2724]: E1212 18:43:21.427906 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.443638 kubelet[2724]: E1212 18:43:21.443502 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.443638 kubelet[2724]: W1212 18:43:21.443534 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.443917 kubelet[2724]: E1212 18:43:21.443793 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.459080 kubelet[2724]: E1212 18:43:21.458918 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.459080 kubelet[2724]: W1212 18:43:21.458948 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.459080 kubelet[2724]: E1212 18:43:21.458973 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.477456 kubelet[2724]: E1212 18:43:21.477341 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n46fl" podUID="af20f1b0-b34b-412e-a0a1-b4c0cada074e" Dec 12 18:43:21.506051 kubelet[2724]: E1212 18:43:21.505977 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.506051 kubelet[2724]: W1212 18:43:21.506005 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.506420 kubelet[2724]: E1212 18:43:21.506215 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:43:21.507084 kubelet[2724]: E1212 18:43:21.507011 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.507084 kubelet[2724]: W1212 18:43:21.507025 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.507084 kubelet[2724]: E1212 18:43:21.507039 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.508570 kubelet[2724]: E1212 18:43:21.508556 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.508823 kubelet[2724]: W1212 18:43:21.508619 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.508823 kubelet[2724]: E1212 18:43:21.508637 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.509577 kubelet[2724]: E1212 18:43:21.509474 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.509577 kubelet[2724]: W1212 18:43:21.509502 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.509577 kubelet[2724]: E1212 18:43:21.509515 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.510942 kubelet[2724]: E1212 18:43:21.510858 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.511082 kubelet[2724]: W1212 18:43:21.511066 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.511217 kubelet[2724]: E1212 18:43:21.511150 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.513003 kubelet[2724]: E1212 18:43:21.512876 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.513003 kubelet[2724]: W1212 18:43:21.512891 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.513003 kubelet[2724]: E1212 18:43:21.512930 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:43:21.514000 kubelet[2724]: E1212 18:43:21.513657 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.514000 kubelet[2724]: W1212 18:43:21.513721 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.514000 kubelet[2724]: E1212 18:43:21.513734 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.514381 kubelet[2724]: E1212 18:43:21.514263 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.514381 kubelet[2724]: W1212 18:43:21.514297 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.514381 kubelet[2724]: E1212 18:43:21.514308 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.515097 kubelet[2724]: E1212 18:43:21.514985 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.515428 kubelet[2724]: W1212 18:43:21.515401 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.515684 kubelet[2724]: E1212 18:43:21.515501 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.516826 kubelet[2724]: E1212 18:43:21.516780 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.517146 kubelet[2724]: W1212 18:43:21.516894 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.517146 kubelet[2724]: E1212 18:43:21.516910 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.518005 kubelet[2724]: E1212 18:43:21.517934 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.518005 kubelet[2724]: W1212 18:43:21.517946 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.518005 kubelet[2724]: E1212 18:43:21.517957 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:43:21.518840 kubelet[2724]: E1212 18:43:21.518828 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.518925 kubelet[2724]: W1212 18:43:21.518914 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.519003 kubelet[2724]: E1212 18:43:21.518992 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.519707 kubelet[2724]: E1212 18:43:21.519695 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.519864 kubelet[2724]: W1212 18:43:21.519851 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.519950 kubelet[2724]: E1212 18:43:21.519938 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.520546 kubelet[2724]: E1212 18:43:21.520534 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.520835 kubelet[2724]: W1212 18:43:21.520756 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.520835 kubelet[2724]: E1212 18:43:21.520771 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.521491 kubelet[2724]: E1212 18:43:21.521479 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.521597 kubelet[2724]: W1212 18:43:21.521556 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.521798 kubelet[2724]: E1212 18:43:21.521692 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.522421 kubelet[2724]: E1212 18:43:21.522409 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.522647 kubelet[2724]: W1212 18:43:21.522542 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.522647 kubelet[2724]: E1212 18:43:21.522555 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:43:21.523863 kubelet[2724]: E1212 18:43:21.523814 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.524245 kubelet[2724]: W1212 18:43:21.523830 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.524245 kubelet[2724]: E1212 18:43:21.524058 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.525566 kubelet[2724]: E1212 18:43:21.525403 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.525566 kubelet[2724]: W1212 18:43:21.525416 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.525566 kubelet[2724]: E1212 18:43:21.525426 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.526535 kubelet[2724]: E1212 18:43:21.526522 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.527014 kubelet[2724]: W1212 18:43:21.526999 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.527227 kubelet[2724]: E1212 18:43:21.527151 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.528211 kubelet[2724]: E1212 18:43:21.528074 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.529231 kubelet[2724]: W1212 18:43:21.528396 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.529231 kubelet[2724]: E1212 18:43:21.528414 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.529896 kubelet[2724]: E1212 18:43:21.529883 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.530285 kubelet[2724]: W1212 18:43:21.530272 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.530377 kubelet[2724]: E1212 18:43:21.530365 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:43:21.531665 kubelet[2724]: I1212 18:43:21.531429 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af20f1b0-b34b-412e-a0a1-b4c0cada074e-kubelet-dir\") pod \"csi-node-driver-n46fl\" (UID: \"af20f1b0-b34b-412e-a0a1-b4c0cada074e\") " pod="calico-system/csi-node-driver-n46fl" Dec 12 18:43:21.534879 kubelet[2724]: E1212 18:43:21.534818 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.534879 kubelet[2724]: W1212 18:43:21.534874 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.535180 kubelet[2724]: E1212 18:43:21.534910 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.535180 kubelet[2724]: I1212 18:43:21.535037 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctjpm\" (UniqueName: \"kubernetes.io/projected/af20f1b0-b34b-412e-a0a1-b4c0cada074e-kube-api-access-ctjpm\") pod \"csi-node-driver-n46fl\" (UID: \"af20f1b0-b34b-412e-a0a1-b4c0cada074e\") " pod="calico-system/csi-node-driver-n46fl" Dec 12 18:43:21.535567 kubelet[2724]: E1212 18:43:21.535524 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.535642 kubelet[2724]: W1212 18:43:21.535615 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.535714 kubelet[2724]: E1212 18:43:21.535701 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.536170 kubelet[2724]: E1212 18:43:21.536074 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.536170 kubelet[2724]: W1212 18:43:21.536086 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.536170 kubelet[2724]: E1212 18:43:21.536096 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.536802 kubelet[2724]: E1212 18:43:21.536688 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.536802 kubelet[2724]: W1212 18:43:21.536725 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.536802 kubelet[2724]: E1212 18:43:21.536737 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:43:21.536802 kubelet[2724]: I1212 18:43:21.536770 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/af20f1b0-b34b-412e-a0a1-b4c0cada074e-socket-dir\") pod \"csi-node-driver-n46fl\" (UID: \"af20f1b0-b34b-412e-a0a1-b4c0cada074e\") " pod="calico-system/csi-node-driver-n46fl" Dec 12 18:43:21.537375 kubelet[2724]: E1212 18:43:21.537363 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.537471 kubelet[2724]: W1212 18:43:21.537442 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.537471 kubelet[2724]: E1212 18:43:21.537457 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.537897 kubelet[2724]: E1212 18:43:21.537865 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.537988 kubelet[2724]: W1212 18:43:21.537942 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.537988 kubelet[2724]: E1212 18:43:21.537955 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.538740 kubelet[2724]: E1212 18:43:21.538713 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.538918 kubelet[2724]: W1212 18:43:21.538807 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.538918 kubelet[2724]: E1212 18:43:21.538821 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.538918 kubelet[2724]: I1212 18:43:21.538845 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/af20f1b0-b34b-412e-a0a1-b4c0cada074e-varrun\") pod \"csi-node-driver-n46fl\" (UID: \"af20f1b0-b34b-412e-a0a1-b4c0cada074e\") " pod="calico-system/csi-node-driver-n46fl" Dec 12 18:43:21.539376 kubelet[2724]: E1212 18:43:21.539342 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.539376 kubelet[2724]: W1212 18:43:21.539355 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.539376 kubelet[2724]: E1212 18:43:21.539364 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:43:21.539675 kubelet[2724]: I1212 18:43:21.539573 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/af20f1b0-b34b-412e-a0a1-b4c0cada074e-registration-dir\") pod \"csi-node-driver-n46fl\" (UID: \"af20f1b0-b34b-412e-a0a1-b4c0cada074e\") " pod="calico-system/csi-node-driver-n46fl" Dec 12 18:43:21.540023 kubelet[2724]: E1212 18:43:21.539999 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.540190 kubelet[2724]: W1212 18:43:21.540055 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.540190 kubelet[2724]: E1212 18:43:21.540065 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.540530 kubelet[2724]: E1212 18:43:21.540514 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.540530 kubelet[2724]: W1212 18:43:21.540557 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.540530 kubelet[2724]: E1212 18:43:21.540567 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.541012 kubelet[2724]: E1212 18:43:21.540999 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.541165 kubelet[2724]: W1212 18:43:21.541084 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.541244 kubelet[2724]: E1212 18:43:21.541100 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.541671 kubelet[2724]: E1212 18:43:21.541625 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.541798 kubelet[2724]: W1212 18:43:21.541637 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.541798 kubelet[2724]: E1212 18:43:21.541767 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:43:21.542335 kubelet[2724]: E1212 18:43:21.542323 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.542461 kubelet[2724]: W1212 18:43:21.542424 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.542547 kubelet[2724]: E1212 18:43:21.542509 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.542896 kubelet[2724]: E1212 18:43:21.542858 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.542996 kubelet[2724]: W1212 18:43:21.542957 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.542996 kubelet[2724]: E1212 18:43:21.542970 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.552194 containerd[1559]: time="2025-12-12T18:43:21.552076495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-564b6c54c6-2jp52,Uid:c8032bf1-070a-4394-8300-c070f5d20ed4,Namespace:calico-system,Attempt:0,} returns sandbox id \"2984150a400df9265116575d2e2be1f2c6eed5c4bc8f941d8ac1d22cfe17ffb5\"" Dec 12 18:43:21.553830 kubelet[2724]: E1212 18:43:21.553773 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:21.555143 containerd[1559]: time="2025-12-12T18:43:21.555051767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 18:43:21.568778 kubelet[2724]: E1212 18:43:21.568715 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:21.570141 containerd[1559]: time="2025-12-12T18:43:21.569474975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5mwl6,Uid:309bb533-8f96-4dc0-89a1-12323e44fd45,Namespace:calico-system,Attempt:0,}" Dec 12 18:43:21.588236 containerd[1559]: time="2025-12-12T18:43:21.587817929Z" level=info msg="connecting to shim e2a71334247e01574d0d11c50394f39c8df8dcce328df20c83f194e981264f22" address="unix:///run/containerd/s/e3ba5a767d989017f35d8504ae605c0777e63a151b1d26807ff62f51cc61b116" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:43:21.637448 systemd[1]: Started cri-containerd-e2a71334247e01574d0d11c50394f39c8df8dcce328df20c83f194e981264f22.scope - libcontainer container e2a71334247e01574d0d11c50394f39c8df8dcce328df20c83f194e981264f22. 
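The long run of FlexVolume errors above and below comes from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for drivers: it executes nodeagent~uds/uds with the argument "init" and expects a JSON status object on stdout, but the binary is not installed on this node, so the output is empty and decoding it fails with "unexpected end of JSON input". A minimal sketch of that call-and-parse pattern, assuming a simplified status struct (the kubelet's driver-call code adds timeouts and more fields):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // driverStatus is a simplified form of the JSON object a FlexVolume
    // driver is expected to print for the "init" call.
    type driverStatus struct {
    	Status  string `json:"status"`
    	Message string `json:"message,omitempty"`
    }

    func callDriver(path string, args ...string) (*driverStatus, error) {
    	out, err := exec.Command(path, args...).CombinedOutput()
    	if err != nil {
    		// When the driver binary is missing, the exec itself fails and
    		// the captured output stays empty, as in the W lines above.
    		fmt.Printf("driver call failed: executable: %s, args: %v, error: %v, output: %q\n",
    			path, args, err, string(out))
    	}
    	var st driverStatus
    	// Unmarshalling empty output fails with "unexpected end of JSON input",
    	// the E driver-call.go error repeated throughout this log.
    	if jerr := json.Unmarshal(out, &st); jerr != nil {
    		return nil, jerr
    	}
    	return &st, nil
    }

    func main() {
    	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
    	fmt.Println(err) // unexpected end of JSON input (driver binary absent)
    }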
Dec 12 18:43:21.641922 kubelet[2724]: E1212 18:43:21.641422 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.641922 kubelet[2724]: W1212 18:43:21.641877 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.642565 kubelet[2724]: E1212 18:43:21.642273 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.643686 kubelet[2724]: E1212 18:43:21.643617 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.643686 kubelet[2724]: W1212 18:43:21.643631 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.650558 kubelet[2724]: E1212 18:43:21.643647 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.654782 kubelet[2724]: E1212 18:43:21.650658 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.654782 kubelet[2724]: W1212 18:43:21.650669 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.654782 kubelet[2724]: E1212 18:43:21.650685 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.654782 kubelet[2724]: E1212 18:43:21.651301 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.654782 kubelet[2724]: W1212 18:43:21.651311 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.654782 kubelet[2724]: E1212 18:43:21.651323 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.654782 kubelet[2724]: E1212 18:43:21.652323 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.654782 kubelet[2724]: W1212 18:43:21.652335 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.654782 kubelet[2724]: E1212 18:43:21.652345 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:43:21.654782 kubelet[2724]: E1212 18:43:21.652573 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.654995 kubelet[2724]: W1212 18:43:21.652583 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.654995 kubelet[2724]: E1212 18:43:21.652595 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.654995 kubelet[2724]: E1212 18:43:21.652828 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.654995 kubelet[2724]: W1212 18:43:21.652838 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.654995 kubelet[2724]: E1212 18:43:21.652848 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.654995 kubelet[2724]: E1212 18:43:21.653270 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.654995 kubelet[2724]: W1212 18:43:21.653279 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.654995 kubelet[2724]: E1212 18:43:21.653291 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.654995 kubelet[2724]: E1212 18:43:21.653949 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.654995 kubelet[2724]: W1212 18:43:21.653958 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.657032 kubelet[2724]: E1212 18:43:21.653967 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.657032 kubelet[2724]: E1212 18:43:21.654551 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.657032 kubelet[2724]: W1212 18:43:21.654561 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.657032 kubelet[2724]: E1212 18:43:21.654571 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:43:21.657032 kubelet[2724]: E1212 18:43:21.655927 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.657032 kubelet[2724]: W1212 18:43:21.655939 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.657032 kubelet[2724]: E1212 18:43:21.655950 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.657430 kubelet[2724]: E1212 18:43:21.657206 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.657430 kubelet[2724]: W1212 18:43:21.657216 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.657430 kubelet[2724]: E1212 18:43:21.657252 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.657694 kubelet[2724]: E1212 18:43:21.657680 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.657844 kubelet[2724]: W1212 18:43:21.657753 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.657844 kubelet[2724]: E1212 18:43:21.657767 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.658861 kubelet[2724]: E1212 18:43:21.658826 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.658861 kubelet[2724]: W1212 18:43:21.658837 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.658861 kubelet[2724]: E1212 18:43:21.658848 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.659913 kubelet[2724]: E1212 18:43:21.659543 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.659913 kubelet[2724]: W1212 18:43:21.659555 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.659913 kubelet[2724]: E1212 18:43:21.659565 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:43:21.660507 kubelet[2724]: E1212 18:43:21.660394 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.660641 kubelet[2724]: W1212 18:43:21.660569 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.660812 kubelet[2724]: E1212 18:43:21.660717 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.661507 kubelet[2724]: E1212 18:43:21.661348 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.661507 kubelet[2724]: W1212 18:43:21.661360 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.661507 kubelet[2724]: E1212 18:43:21.661369 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.662345 kubelet[2724]: E1212 18:43:21.662249 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.662345 kubelet[2724]: W1212 18:43:21.662274 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.662345 kubelet[2724]: E1212 18:43:21.662284 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.662985 kubelet[2724]: E1212 18:43:21.662947 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.662985 kubelet[2724]: W1212 18:43:21.662959 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.662985 kubelet[2724]: E1212 18:43:21.662969 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.663615 kubelet[2724]: E1212 18:43:21.663572 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.663615 kubelet[2724]: W1212 18:43:21.663585 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.663615 kubelet[2724]: E1212 18:43:21.663594 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:43:21.664843 kubelet[2724]: E1212 18:43:21.664774 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.664843 kubelet[2724]: W1212 18:43:21.664786 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.664843 kubelet[2724]: E1212 18:43:21.664797 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.665448 kubelet[2724]: E1212 18:43:21.665378 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.665448 kubelet[2724]: W1212 18:43:21.665390 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.665448 kubelet[2724]: E1212 18:43:21.665400 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.666352 kubelet[2724]: E1212 18:43:21.666309 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.666352 kubelet[2724]: W1212 18:43:21.666322 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.666352 kubelet[2724]: E1212 18:43:21.666334 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.667197 kubelet[2724]: E1212 18:43:21.667165 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.667197 kubelet[2724]: W1212 18:43:21.667176 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.667197 kubelet[2724]: E1212 18:43:21.667185 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.667918 kubelet[2724]: E1212 18:43:21.667617 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.667918 kubelet[2724]: W1212 18:43:21.667629 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.667918 kubelet[2724]: E1212 18:43:21.667686 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:43:21.683297 kubelet[2724]: E1212 18:43:21.683197 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:21.683297 kubelet[2724]: W1212 18:43:21.683224 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:21.683297 kubelet[2724]: E1212 18:43:21.683250 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:21.727151 containerd[1559]: time="2025-12-12T18:43:21.726539893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5mwl6,Uid:309bb533-8f96-4dc0-89a1-12323e44fd45,Namespace:calico-system,Attempt:0,} returns sandbox id \"e2a71334247e01574d0d11c50394f39c8df8dcce328df20c83f194e981264f22\"" Dec 12 18:43:21.727943 kubelet[2724]: E1212 18:43:21.727909 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:22.710484 containerd[1559]: time="2025-12-12T18:43:22.710445131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:43:22.711417 containerd[1559]: time="2025-12-12T18:43:22.711275418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Dec 12 18:43:22.711903 containerd[1559]: time="2025-12-12T18:43:22.711880165Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:43:22.713369 containerd[1559]: time="2025-12-12T18:43:22.713345501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:43:22.714065 containerd[1559]: time="2025-12-12T18:43:22.714043932Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.158959633s" Dec 12 18:43:22.714155 containerd[1559]: time="2025-12-12T18:43:22.714140146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 12 18:43:22.715173 containerd[1559]: time="2025-12-12T18:43:22.715089538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 18:43:22.731080 containerd[1559]: time="2025-12-12T18:43:22.730963667Z" level=info msg="CreateContainer within sandbox \"2984150a400df9265116575d2e2be1f2c6eed5c4bc8f941d8ac1d22cfe17ffb5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 12 18:43:22.736826 containerd[1559]: time="2025-12-12T18:43:22.736371418Z" level=info msg="Container 86f2bf170ec35a4bc47cd2bbbfcc147dcd7cf3360f3a7e82c1ee374318d0ce49: CDI devices from CRI 
Config.CDIDevices: []" Dec 12 18:43:22.746374 containerd[1559]: time="2025-12-12T18:43:22.746339963Z" level=info msg="CreateContainer within sandbox \"2984150a400df9265116575d2e2be1f2c6eed5c4bc8f941d8ac1d22cfe17ffb5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"86f2bf170ec35a4bc47cd2bbbfcc147dcd7cf3360f3a7e82c1ee374318d0ce49\"" Dec 12 18:43:22.746868 containerd[1559]: time="2025-12-12T18:43:22.746827324Z" level=info msg="StartContainer for \"86f2bf170ec35a4bc47cd2bbbfcc147dcd7cf3360f3a7e82c1ee374318d0ce49\"" Dec 12 18:43:22.748707 containerd[1559]: time="2025-12-12T18:43:22.748658947Z" level=info msg="connecting to shim 86f2bf170ec35a4bc47cd2bbbfcc147dcd7cf3360f3a7e82c1ee374318d0ce49" address="unix:///run/containerd/s/9fc431ce040cc6c010c2654982a2e5836d93cc331c33df277077ca6013e3c110" protocol=ttrpc version=3 Dec 12 18:43:22.768448 systemd[1]: Started cri-containerd-86f2bf170ec35a4bc47cd2bbbfcc147dcd7cf3360f3a7e82c1ee374318d0ce49.scope - libcontainer container 86f2bf170ec35a4bc47cd2bbbfcc147dcd7cf3360f3a7e82c1ee374318d0ce49. Dec 12 18:43:22.823209 containerd[1559]: time="2025-12-12T18:43:22.823100389Z" level=info msg="StartContainer for \"86f2bf170ec35a4bc47cd2bbbfcc147dcd7cf3360f3a7e82c1ee374318d0ce49\" returns successfully" Dec 12 18:43:23.006858 kubelet[2724]: E1212 18:43:23.006829 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:23.025657 kubelet[2724]: I1212 18:43:23.024516 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-564b6c54c6-2jp52" podStartSLOduration=1.864398556 podStartE2EDuration="3.024500951s" podCreationTimestamp="2025-12-12 18:43:20 +0000 UTC" firstStartedPulling="2025-12-12 18:43:21.554627567 +0000 UTC m=+18.770954362" lastFinishedPulling="2025-12-12 18:43:22.714729962 +0000 UTC m=+19.931056757" observedRunningTime="2025-12-12 18:43:23.024353165 +0000 UTC m=+20.240679960" watchObservedRunningTime="2025-12-12 18:43:23.024500951 +0000 UTC m=+20.240827746" Dec 12 18:43:23.040148 kubelet[2724]: E1212 18:43:23.040060 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.041266 kubelet[2724]: W1212 18:43:23.041142 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.041266 kubelet[2724]: E1212 18:43:23.041170 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.041606 kubelet[2724]: E1212 18:43:23.041572 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.041606 kubelet[2724]: W1212 18:43:23.041585 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.041759 kubelet[2724]: E1212 18:43:23.041710 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:43:23.042186 kubelet[2724]: E1212 18:43:23.042139 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.042366 kubelet[2724]: W1212 18:43:23.042353 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.042513 kubelet[2724]: E1212 18:43:23.042501 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.043618 kubelet[2724]: E1212 18:43:23.043526 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.043618 kubelet[2724]: W1212 18:43:23.043539 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.043618 kubelet[2724]: E1212 18:43:23.043550 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.045021 kubelet[2724]: E1212 18:43:23.044920 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.045021 kubelet[2724]: W1212 18:43:23.044953 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.045021 kubelet[2724]: E1212 18:43:23.044964 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.045606 kubelet[2724]: E1212 18:43:23.045538 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.046866 kubelet[2724]: W1212 18:43:23.046755 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.046866 kubelet[2724]: E1212 18:43:23.046774 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.047028 kubelet[2724]: E1212 18:43:23.046994 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.047028 kubelet[2724]: W1212 18:43:23.047011 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.047028 kubelet[2724]: E1212 18:43:23.047023 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:43:23.048149 kubelet[2724]: E1212 18:43:23.048132 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.048149 kubelet[2724]: W1212 18:43:23.048147 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.048218 kubelet[2724]: E1212 18:43:23.048156 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.048369 kubelet[2724]: E1212 18:43:23.048343 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.048369 kubelet[2724]: W1212 18:43:23.048356 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.048369 kubelet[2724]: E1212 18:43:23.048366 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.048770 kubelet[2724]: E1212 18:43:23.048535 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.048770 kubelet[2724]: W1212 18:43:23.048736 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.048770 kubelet[2724]: E1212 18:43:23.048744 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.049150 kubelet[2724]: E1212 18:43:23.048908 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.049150 kubelet[2724]: W1212 18:43:23.048918 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.049150 kubelet[2724]: E1212 18:43:23.048925 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.049150 kubelet[2724]: E1212 18:43:23.049123 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.049150 kubelet[2724]: W1212 18:43:23.049131 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.049150 kubelet[2724]: E1212 18:43:23.049139 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:43:23.050178 kubelet[2724]: E1212 18:43:23.049313 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.050178 kubelet[2724]: W1212 18:43:23.049323 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.050178 kubelet[2724]: E1212 18:43:23.049330 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.050178 kubelet[2724]: E1212 18:43:23.049512 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.050178 kubelet[2724]: W1212 18:43:23.049519 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.050178 kubelet[2724]: E1212 18:43:23.049527 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.051168 kubelet[2724]: E1212 18:43:23.051150 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.051168 kubelet[2724]: W1212 18:43:23.051165 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.051239 kubelet[2724]: E1212 18:43:23.051173 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.070685 kubelet[2724]: E1212 18:43:23.070656 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.070685 kubelet[2724]: W1212 18:43:23.070679 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.070685 kubelet[2724]: E1212 18:43:23.070700 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.073701 kubelet[2724]: E1212 18:43:23.073173 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.073701 kubelet[2724]: W1212 18:43:23.073191 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.073701 kubelet[2724]: E1212 18:43:23.073207 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:43:23.073988 kubelet[2724]: E1212 18:43:23.073862 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.073988 kubelet[2724]: W1212 18:43:23.073875 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.073988 kubelet[2724]: E1212 18:43:23.073885 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.074368 kubelet[2724]: E1212 18:43:23.074329 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.074368 kubelet[2724]: W1212 18:43:23.074340 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.074368 kubelet[2724]: E1212 18:43:23.074349 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.074971 kubelet[2724]: E1212 18:43:23.074627 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.074971 kubelet[2724]: W1212 18:43:23.074636 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.074971 kubelet[2724]: E1212 18:43:23.074643 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.076858 kubelet[2724]: E1212 18:43:23.076195 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.076858 kubelet[2724]: W1212 18:43:23.076210 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.076954 kubelet[2724]: E1212 18:43:23.076935 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.078225 kubelet[2724]: E1212 18:43:23.078165 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.081148 kubelet[2724]: W1212 18:43:23.081130 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.081216 kubelet[2724]: E1212 18:43:23.081204 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:43:23.081531 kubelet[2724]: E1212 18:43:23.081519 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.081610 kubelet[2724]: W1212 18:43:23.081599 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.081669 kubelet[2724]: E1212 18:43:23.081647 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.083240 kubelet[2724]: E1212 18:43:23.083226 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.083312 kubelet[2724]: W1212 18:43:23.083299 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.083376 kubelet[2724]: E1212 18:43:23.083365 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.083970 kubelet[2724]: E1212 18:43:23.083952 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.084198 kubelet[2724]: W1212 18:43:23.084185 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.084266 kubelet[2724]: E1212 18:43:23.084256 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.085631 kubelet[2724]: E1212 18:43:23.085618 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.088141 kubelet[2724]: W1212 18:43:23.087063 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.088141 kubelet[2724]: E1212 18:43:23.087081 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.091405 kubelet[2724]: E1212 18:43:23.091366 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.091405 kubelet[2724]: W1212 18:43:23.091379 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.091554 kubelet[2724]: E1212 18:43:23.091482 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:43:23.092032 kubelet[2724]: E1212 18:43:23.091988 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.092032 kubelet[2724]: W1212 18:43:23.091999 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.092032 kubelet[2724]: E1212 18:43:23.092009 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.092473 kubelet[2724]: E1212 18:43:23.092424 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.092473 kubelet[2724]: W1212 18:43:23.092434 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.092473 kubelet[2724]: E1212 18:43:23.092444 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.093557 kubelet[2724]: E1212 18:43:23.093515 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.093557 kubelet[2724]: W1212 18:43:23.093526 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.093557 kubelet[2724]: E1212 18:43:23.093535 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.095131 kubelet[2724]: E1212 18:43:23.095050 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.095131 kubelet[2724]: W1212 18:43:23.095061 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.095131 kubelet[2724]: E1212 18:43:23.095070 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.096099 kubelet[2724]: E1212 18:43:23.096066 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.096099 kubelet[2724]: W1212 18:43:23.096078 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.096421 kubelet[2724]: E1212 18:43:23.096392 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:43:23.098180 kubelet[2724]: E1212 18:43:23.098159 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:43:23.098358 kubelet[2724]: W1212 18:43:23.098347 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:43:23.098430 kubelet[2724]: E1212 18:43:23.098419 2724 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:43:23.327358 containerd[1559]: time="2025-12-12T18:43:23.327228179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:43:23.328797 containerd[1559]: time="2025-12-12T18:43:23.328762823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Dec 12 18:43:23.330769 containerd[1559]: time="2025-12-12T18:43:23.329961893Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:43:23.332360 containerd[1559]: time="2025-12-12T18:43:23.332323792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:43:23.332972 containerd[1559]: time="2025-12-12T18:43:23.332938257Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 617.759335ms" Dec 12 18:43:23.333057 containerd[1559]: time="2025-12-12T18:43:23.333039842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 12 18:43:23.339250 containerd[1559]: time="2025-12-12T18:43:23.339218770Z" level=info msg="CreateContainer within sandbox \"e2a71334247e01574d0d11c50394f39c8df8dcce328df20c83f194e981264f22\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 18:43:23.351423 containerd[1559]: time="2025-12-12T18:43:23.351381178Z" level=info msg="Container 26ee321e5846412990502c44d17784abadcdae52d41f0a2961f1cf29afc21c1f: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:43:23.356346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1766692595.mount: Deactivated successfully. 
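
The repeated driver-call.go:262 / driver-call.go:149 / plugins.go:703 messages above come from the kubelet's FlexVolume plugin probe: it executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument "init", the binary is not present yet, so the call produces empty output and the JSON unmarshal fails. A FlexVolume driver is expected to answer "init" with a JSON status object on stdout. The sketch below is a hypothetical stand-in for such a driver, not the actual uds binary (which the flexvol-driver container pulled above installs later); it only illustrates the response format the kubelet is waiting for.

    #!/bin/sh
    # Hypothetical FlexVolume driver stub (illustration of the call protocol only).
    # The kubelet invokes the driver as: <driver> init
    # and expects a JSON status object on stdout.
    case "$1" in
      init)
        # "attach": false tells the kubelet this driver needs no separate attach/detach phase.
        echo '{"status": "Success", "capabilities": {"attach": false}}'
        ;;
      *)
        # Any call the driver does not implement should report "Not supported".
        echo '{"status": "Not supported"}'
        exit 1
        ;;
    esac
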
Dec 12 18:43:23.365444 containerd[1559]: time="2025-12-12T18:43:23.365396444Z" level=info msg="CreateContainer within sandbox \"e2a71334247e01574d0d11c50394f39c8df8dcce328df20c83f194e981264f22\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"26ee321e5846412990502c44d17784abadcdae52d41f0a2961f1cf29afc21c1f\"" Dec 12 18:43:23.366170 containerd[1559]: time="2025-12-12T18:43:23.366142955Z" level=info msg="StartContainer for \"26ee321e5846412990502c44d17784abadcdae52d41f0a2961f1cf29afc21c1f\"" Dec 12 18:43:23.367395 containerd[1559]: time="2025-12-12T18:43:23.367357246Z" level=info msg="connecting to shim 26ee321e5846412990502c44d17784abadcdae52d41f0a2961f1cf29afc21c1f" address="unix:///run/containerd/s/e3ba5a767d989017f35d8504ae605c0777e63a151b1d26807ff62f51cc61b116" protocol=ttrpc version=3 Dec 12 18:43:23.401263 systemd[1]: Started cri-containerd-26ee321e5846412990502c44d17784abadcdae52d41f0a2961f1cf29afc21c1f.scope - libcontainer container 26ee321e5846412990502c44d17784abadcdae52d41f0a2961f1cf29afc21c1f. Dec 12 18:43:23.484017 containerd[1559]: time="2025-12-12T18:43:23.483958257Z" level=info msg="StartContainer for \"26ee321e5846412990502c44d17784abadcdae52d41f0a2961f1cf29afc21c1f\" returns successfully" Dec 12 18:43:23.511045 systemd[1]: cri-containerd-26ee321e5846412990502c44d17784abadcdae52d41f0a2961f1cf29afc21c1f.scope: Deactivated successfully. Dec 12 18:43:23.514667 containerd[1559]: time="2025-12-12T18:43:23.514625309Z" level=info msg="received container exit event container_id:\"26ee321e5846412990502c44d17784abadcdae52d41f0a2961f1cf29afc21c1f\" id:\"26ee321e5846412990502c44d17784abadcdae52d41f0a2961f1cf29afc21c1f\" pid:3422 exited_at:{seconds:1765565003 nanos:514066145}" Dec 12 18:43:23.547035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26ee321e5846412990502c44d17784abadcdae52d41f0a2961f1cf29afc21c1f-rootfs.mount: Deactivated successfully. 
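
The recurring dns.go:153 "Nameserver limits exceeded" warnings mean the node's /etc/resolv.conf lists more nameservers than the kubelet will copy into pod resolv.conf files; the kubelet keeps at most three, which is why the applied line contains exactly 172.232.0.13, 172.232.0.22 and 172.232.0.9. Below is an illustrative node resolv.conf that would produce this warning; only the first three addresses appear in the log, the fourth entry is hypothetical.

    # /etc/resolv.conf on the node (illustrative sketch)
    nameserver 172.232.0.13
    nameserver 172.232.0.22
    nameserver 172.232.0.9
    nameserver 192.0.2.53   # hypothetical extra resolver; the kubelet drops entries beyond the third and logs the warning above
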
Dec 12 18:43:23.902329 kubelet[2724]: E1212 18:43:23.902271 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n46fl" podUID="af20f1b0-b34b-412e-a0a1-b4c0cada074e" Dec 12 18:43:24.013388 kubelet[2724]: I1212 18:43:24.013307 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:43:24.017386 kubelet[2724]: E1212 18:43:24.013703 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:24.017386 kubelet[2724]: E1212 18:43:24.014200 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:24.018991 containerd[1559]: time="2025-12-12T18:43:24.018942571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 18:43:25.814554 containerd[1559]: time="2025-12-12T18:43:25.814502370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:43:25.815676 containerd[1559]: time="2025-12-12T18:43:25.815635552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Dec 12 18:43:25.816401 containerd[1559]: time="2025-12-12T18:43:25.816211922Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:43:25.818149 containerd[1559]: time="2025-12-12T18:43:25.818094501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:43:25.818749 containerd[1559]: time="2025-12-12T18:43:25.818727065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 1.799733262s" Dec 12 18:43:25.818827 containerd[1559]: time="2025-12-12T18:43:25.818813268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 12 18:43:25.822797 containerd[1559]: time="2025-12-12T18:43:25.822771263Z" level=info msg="CreateContainer within sandbox \"e2a71334247e01574d0d11c50394f39c8df8dcce328df20c83f194e981264f22\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 18:43:25.833263 containerd[1559]: time="2025-12-12T18:43:25.833234595Z" level=info msg="Container 7c8c48592da281b5ad97fc49735ce5299c5bb4fd1473cd54597cad1c8b2c0c29: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:43:25.839861 containerd[1559]: time="2025-12-12T18:43:25.839829946Z" level=info msg="CreateContainer within sandbox \"e2a71334247e01574d0d11c50394f39c8df8dcce328df20c83f194e981264f22\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"7c8c48592da281b5ad97fc49735ce5299c5bb4fd1473cd54597cad1c8b2c0c29\"" Dec 12 18:43:25.840566 containerd[1559]: time="2025-12-12T18:43:25.840356996Z" level=info msg="StartContainer for \"7c8c48592da281b5ad97fc49735ce5299c5bb4fd1473cd54597cad1c8b2c0c29\"" Dec 12 18:43:25.841629 containerd[1559]: time="2025-12-12T18:43:25.841608592Z" level=info msg="connecting to shim 7c8c48592da281b5ad97fc49735ce5299c5bb4fd1473cd54597cad1c8b2c0c29" address="unix:///run/containerd/s/e3ba5a767d989017f35d8504ae605c0777e63a151b1d26807ff62f51cc61b116" protocol=ttrpc version=3 Dec 12 18:43:25.871239 systemd[1]: Started cri-containerd-7c8c48592da281b5ad97fc49735ce5299c5bb4fd1473cd54597cad1c8b2c0c29.scope - libcontainer container 7c8c48592da281b5ad97fc49735ce5299c5bb4fd1473cd54597cad1c8b2c0c29. Dec 12 18:43:25.902315 kubelet[2724]: E1212 18:43:25.902179 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n46fl" podUID="af20f1b0-b34b-412e-a0a1-b4c0cada074e" Dec 12 18:43:25.955300 containerd[1559]: time="2025-12-12T18:43:25.955256841Z" level=info msg="StartContainer for \"7c8c48592da281b5ad97fc49735ce5299c5bb4fd1473cd54597cad1c8b2c0c29\" returns successfully" Dec 12 18:43:26.024496 kubelet[2724]: E1212 18:43:26.024458 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:26.531356 containerd[1559]: time="2025-12-12T18:43:26.531291688Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:43:26.534997 systemd[1]: cri-containerd-7c8c48592da281b5ad97fc49735ce5299c5bb4fd1473cd54597cad1c8b2c0c29.scope: Deactivated successfully. Dec 12 18:43:26.535425 systemd[1]: cri-containerd-7c8c48592da281b5ad97fc49735ce5299c5bb4fd1473cd54597cad1c8b2c0c29.scope: Consumed 599ms CPU time, 194.9M memory peak, 171.3M written to disk. Dec 12 18:43:26.539372 containerd[1559]: time="2025-12-12T18:43:26.538926729Z" level=info msg="received container exit event container_id:\"7c8c48592da281b5ad97fc49735ce5299c5bb4fd1473cd54597cad1c8b2c0c29\" id:\"7c8c48592da281b5ad97fc49735ce5299c5bb4fd1473cd54597cad1c8b2c0c29\" pid:3481 exited_at:{seconds:1765565006 nanos:537930515}" Dec 12 18:43:26.557509 kubelet[2724]: I1212 18:43:26.557061 2724 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 18:43:26.576300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c8c48592da281b5ad97fc49735ce5299c5bb4fd1473cd54597cad1c8b2c0c29-rootfs.mount: Deactivated successfully. Dec 12 18:43:26.616669 systemd[1]: Created slice kubepods-besteffort-pod0dd6b358_9691_4ff7_9c07_9faa3b6a5832.slice - libcontainer container kubepods-besteffort-pod0dd6b358_9691_4ff7_9c07_9faa3b6a5832.slice. Dec 12 18:43:26.644160 systemd[1]: Created slice kubepods-besteffort-pod307920b5_5337_43c8_8c09_6d8750b41212.slice - libcontainer container kubepods-besteffort-pod307920b5_5337_43c8_8c09_6d8750b41212.slice. 
Dec 12 18:43:26.677157 systemd[1]: Created slice kubepods-besteffort-pod0ba47eaa_f04d_4e71_87de_91abc04e7d96.slice - libcontainer container kubepods-besteffort-pod0ba47eaa_f04d_4e71_87de_91abc04e7d96.slice. Dec 12 18:43:26.687749 systemd[1]: Created slice kubepods-besteffort-pod885b0c3a_54d9_486d_a986_7d34c5be0f3c.slice - libcontainer container kubepods-besteffort-pod885b0c3a_54d9_486d_a986_7d34c5be0f3c.slice. Dec 12 18:43:26.698680 kubelet[2724]: I1212 18:43:26.698653 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/307920b5-5337-43c8-8c09-6d8750b41212-calico-apiserver-certs\") pod \"calico-apiserver-6db8fdd69c-9p2sj\" (UID: \"307920b5-5337-43c8-8c09-6d8750b41212\") " pod="calico-apiserver/calico-apiserver-6db8fdd69c-9p2sj" Dec 12 18:43:26.699177 kubelet[2724]: I1212 18:43:26.698997 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c6mw\" (UniqueName: \"kubernetes.io/projected/307920b5-5337-43c8-8c09-6d8750b41212-kube-api-access-2c6mw\") pod \"calico-apiserver-6db8fdd69c-9p2sj\" (UID: \"307920b5-5337-43c8-8c09-6d8750b41212\") " pod="calico-apiserver/calico-apiserver-6db8fdd69c-9p2sj" Dec 12 18:43:26.699177 kubelet[2724]: I1212 18:43:26.699023 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/885b0c3a-54d9-486d-a986-7d34c5be0f3c-whisker-ca-bundle\") pod \"whisker-cdf974658-wt6d7\" (UID: \"885b0c3a-54d9-486d-a986-7d34c5be0f3c\") " pod="calico-system/whisker-cdf974658-wt6d7" Dec 12 18:43:26.699177 kubelet[2724]: I1212 18:43:26.699040 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-275fw\" (UniqueName: \"kubernetes.io/projected/885b0c3a-54d9-486d-a986-7d34c5be0f3c-kube-api-access-275fw\") pod \"whisker-cdf974658-wt6d7\" (UID: \"885b0c3a-54d9-486d-a986-7d34c5be0f3c\") " pod="calico-system/whisker-cdf974658-wt6d7" Dec 12 18:43:26.699177 kubelet[2724]: I1212 18:43:26.699060 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxf6x\" (UniqueName: \"kubernetes.io/projected/83ec1740-f7bd-4f51-a6be-a16783749dd3-kube-api-access-zxf6x\") pod \"calico-apiserver-6db8fdd69c-npwn2\" (UID: \"83ec1740-f7bd-4f51-a6be-a16783749dd3\") " pod="calico-apiserver/calico-apiserver-6db8fdd69c-npwn2" Dec 12 18:43:26.699177 kubelet[2724]: I1212 18:43:26.699078 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m98f\" (UniqueName: \"kubernetes.io/projected/838e6b4b-522c-48a0-a381-d65e2006fa97-kube-api-access-4m98f\") pod \"coredns-674b8bbfcf-jczfs\" (UID: \"838e6b4b-522c-48a0-a381-d65e2006fa97\") " pod="kube-system/coredns-674b8bbfcf-jczfs" Dec 12 18:43:26.700934 systemd[1]: Created slice kubepods-besteffort-pod83ec1740_f7bd_4f51_a6be_a16783749dd3.slice - libcontainer container kubepods-besteffort-pod83ec1740_f7bd_4f51_a6be_a16783749dd3.slice. 
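
Each reconciler_common.go:251 line in this stretch records the kubelet's volume manager verifying one declared pod volume before it is mounted: Secret-backed volumes such as calico-apiserver-certs, ConfigMap-backed ones such as whisker-ca-bundle, and the projected kube-api-access-* service-account token volumes. In a pod manifest the first two kinds correspond to entries like the following illustrative sketch (volume names taken from the log; the referenced Secret/ConfigMap names are assumptions):

    volumes:
      - name: calico-apiserver-certs
        secret:
          secretName: calico-apiserver-certs   # assumed Secret name
      - name: whisker-ca-bundle
        configMap:
          name: whisker-ca-bundle              # assumed ConfigMap name
    # kube-api-access-* volumes are projected service-account token volumes
    # injected automatically; they are normally not written by hand.
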
Dec 12 18:43:26.702867 kubelet[2724]: I1212 18:43:26.699095 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62cd2b62-6bbc-467f-ae85-488760759795-config-volume\") pod \"coredns-674b8bbfcf-jtmwl\" (UID: \"62cd2b62-6bbc-467f-ae85-488760759795\") " pod="kube-system/coredns-674b8bbfcf-jtmwl" Dec 12 18:43:26.703146 kubelet[2724]: I1212 18:43:26.702965 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0dd6b358-9691-4ff7-9c07-9faa3b6a5832-tigera-ca-bundle\") pod \"calico-kube-controllers-8d85bd9b7-nqzx2\" (UID: \"0dd6b358-9691-4ff7-9c07-9faa3b6a5832\") " pod="calico-system/calico-kube-controllers-8d85bd9b7-nqzx2" Dec 12 18:43:26.703239 kubelet[2724]: I1212 18:43:26.703224 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8hmz\" (UniqueName: \"kubernetes.io/projected/0ba47eaa-f04d-4e71-87de-91abc04e7d96-kube-api-access-x8hmz\") pod \"goldmane-666569f655-rmclv\" (UID: \"0ba47eaa-f04d-4e71-87de-91abc04e7d96\") " pod="calico-system/goldmane-666569f655-rmclv" Dec 12 18:43:26.703309 kubelet[2724]: I1212 18:43:26.703298 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/885b0c3a-54d9-486d-a986-7d34c5be0f3c-whisker-backend-key-pair\") pod \"whisker-cdf974658-wt6d7\" (UID: \"885b0c3a-54d9-486d-a986-7d34c5be0f3c\") " pod="calico-system/whisker-cdf974658-wt6d7" Dec 12 18:43:26.703383 kubelet[2724]: I1212 18:43:26.703363 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ba47eaa-f04d-4e71-87de-91abc04e7d96-goldmane-ca-bundle\") pod \"goldmane-666569f655-rmclv\" (UID: \"0ba47eaa-f04d-4e71-87de-91abc04e7d96\") " pod="calico-system/goldmane-666569f655-rmclv" Dec 12 18:43:26.703450 kubelet[2724]: I1212 18:43:26.703438 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/83ec1740-f7bd-4f51-a6be-a16783749dd3-calico-apiserver-certs\") pod \"calico-apiserver-6db8fdd69c-npwn2\" (UID: \"83ec1740-f7bd-4f51-a6be-a16783749dd3\") " pod="calico-apiserver/calico-apiserver-6db8fdd69c-npwn2" Dec 12 18:43:26.703518 kubelet[2724]: I1212 18:43:26.703505 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/838e6b4b-522c-48a0-a381-d65e2006fa97-config-volume\") pod \"coredns-674b8bbfcf-jczfs\" (UID: \"838e6b4b-522c-48a0-a381-d65e2006fa97\") " pod="kube-system/coredns-674b8bbfcf-jczfs" Dec 12 18:43:26.704061 kubelet[2724]: I1212 18:43:26.704037 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhqtl\" (UniqueName: \"kubernetes.io/projected/0dd6b358-9691-4ff7-9c07-9faa3b6a5832-kube-api-access-zhqtl\") pod \"calico-kube-controllers-8d85bd9b7-nqzx2\" (UID: \"0dd6b358-9691-4ff7-9c07-9faa3b6a5832\") " pod="calico-system/calico-kube-controllers-8d85bd9b7-nqzx2" Dec 12 18:43:26.704176 kubelet[2724]: I1212 18:43:26.704162 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0ba47eaa-f04d-4e71-87de-91abc04e7d96-config\") pod \"goldmane-666569f655-rmclv\" (UID: \"0ba47eaa-f04d-4e71-87de-91abc04e7d96\") " pod="calico-system/goldmane-666569f655-rmclv" Dec 12 18:43:26.704245 kubelet[2724]: I1212 18:43:26.704233 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0ba47eaa-f04d-4e71-87de-91abc04e7d96-goldmane-key-pair\") pod \"goldmane-666569f655-rmclv\" (UID: \"0ba47eaa-f04d-4e71-87de-91abc04e7d96\") " pod="calico-system/goldmane-666569f655-rmclv" Dec 12 18:43:26.704329 kubelet[2724]: I1212 18:43:26.704317 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk4tp\" (UniqueName: \"kubernetes.io/projected/62cd2b62-6bbc-467f-ae85-488760759795-kube-api-access-dk4tp\") pod \"coredns-674b8bbfcf-jtmwl\" (UID: \"62cd2b62-6bbc-467f-ae85-488760759795\") " pod="kube-system/coredns-674b8bbfcf-jtmwl" Dec 12 18:43:26.711126 systemd[1]: Created slice kubepods-burstable-pod62cd2b62_6bbc_467f_ae85_488760759795.slice - libcontainer container kubepods-burstable-pod62cd2b62_6bbc_467f_ae85_488760759795.slice. Dec 12 18:43:26.722609 systemd[1]: Created slice kubepods-burstable-pod838e6b4b_522c_48a0_a381_d65e2006fa97.slice - libcontainer container kubepods-burstable-pod838e6b4b_522c_48a0_a381_d65e2006fa97.slice. Dec 12 18:43:26.932067 containerd[1559]: time="2025-12-12T18:43:26.930878883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8d85bd9b7-nqzx2,Uid:0dd6b358-9691-4ff7-9c07-9faa3b6a5832,Namespace:calico-system,Attempt:0,}" Dec 12 18:43:26.965878 containerd[1559]: time="2025-12-12T18:43:26.965362735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db8fdd69c-9p2sj,Uid:307920b5-5337-43c8-8c09-6d8750b41212,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:43:26.982018 containerd[1559]: time="2025-12-12T18:43:26.981961233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rmclv,Uid:0ba47eaa-f04d-4e71-87de-91abc04e7d96,Namespace:calico-system,Attempt:0,}" Dec 12 18:43:26.998540 containerd[1559]: time="2025-12-12T18:43:26.998488300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cdf974658-wt6d7,Uid:885b0c3a-54d9-486d-a986-7d34c5be0f3c,Namespace:calico-system,Attempt:0,}" Dec 12 18:43:27.011411 containerd[1559]: time="2025-12-12T18:43:27.011340777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db8fdd69c-npwn2,Uid:83ec1740-f7bd-4f51-a6be-a16783749dd3,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:43:27.017059 kubelet[2724]: E1212 18:43:27.017017 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:27.018823 containerd[1559]: time="2025-12-12T18:43:27.018786516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jtmwl,Uid:62cd2b62-6bbc-467f-ae85-488760759795,Namespace:kube-system,Attempt:0,}" Dec 12 18:43:27.027405 kubelet[2724]: E1212 18:43:27.027365 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:27.029935 containerd[1559]: time="2025-12-12T18:43:27.029730307Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-jczfs,Uid:838e6b4b-522c-48a0-a381-d65e2006fa97,Namespace:kube-system,Attempt:0,}" Dec 12 18:43:27.046352 containerd[1559]: time="2025-12-12T18:43:27.046316438Z" level=error msg="Failed to destroy network for sandbox \"b748d904773f0e17fcc0f281c030d9658394e83b2a8b49fb0dbb708bc4166cd7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:43:27.050135 kubelet[2724]: E1212 18:43:27.049346 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:27.057085 containerd[1559]: time="2025-12-12T18:43:27.057058082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 18:43:27.059415 containerd[1559]: time="2025-12-12T18:43:27.059383717Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8d85bd9b7-nqzx2,Uid:0dd6b358-9691-4ff7-9c07-9faa3b6a5832,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b748d904773f0e17fcc0f281c030d9658394e83b2a8b49fb0dbb708bc4166cd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:43:27.060274 kubelet[2724]: E1212 18:43:27.060089 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b748d904773f0e17fcc0f281c030d9658394e83b2a8b49fb0dbb708bc4166cd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:43:27.060836 kubelet[2724]: E1212 18:43:27.060672 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b748d904773f0e17fcc0f281c030d9658394e83b2a8b49fb0dbb708bc4166cd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8d85bd9b7-nqzx2" Dec 12 18:43:27.060836 kubelet[2724]: E1212 18:43:27.060832 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b748d904773f0e17fcc0f281c030d9658394e83b2a8b49fb0dbb708bc4166cd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8d85bd9b7-nqzx2" Dec 12 18:43:27.061402 kubelet[2724]: E1212 18:43:27.061360 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8d85bd9b7-nqzx2_calico-system(0dd6b358-9691-4ff7-9c07-9faa3b6a5832)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8d85bd9b7-nqzx2_calico-system(0dd6b358-9691-4ff7-9c07-9faa3b6a5832)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b748d904773f0e17fcc0f281c030d9658394e83b2a8b49fb0dbb708bc4166cd7\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8d85bd9b7-nqzx2" podUID="0dd6b358-9691-4ff7-9c07-9faa3b6a5832" Dec 12 18:43:27.214867 containerd[1559]: time="2025-12-12T18:43:27.213921679Z" level=error msg="Failed to destroy network for sandbox \"5b917804ffddb3b89eff885678216434d351f752cf900e1b97912d4e5a561971\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:43:27.215722 containerd[1559]: time="2025-12-12T18:43:27.215405377Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cdf974658-wt6d7,Uid:885b0c3a-54d9-486d-a986-7d34c5be0f3c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b917804ffddb3b89eff885678216434d351f752cf900e1b97912d4e5a561971\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:43:27.215823 kubelet[2724]: E1212 18:43:27.215650 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b917804ffddb3b89eff885678216434d351f752cf900e1b97912d4e5a561971\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:43:27.215823 kubelet[2724]: E1212 18:43:27.215700 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b917804ffddb3b89eff885678216434d351f752cf900e1b97912d4e5a561971\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-cdf974658-wt6d7" Dec 12 18:43:27.215823 kubelet[2724]: E1212 18:43:27.215718 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b917804ffddb3b89eff885678216434d351f752cf900e1b97912d4e5a561971\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-cdf974658-wt6d7" Dec 12 18:43:27.215936 kubelet[2724]: E1212 18:43:27.215761 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-cdf974658-wt6d7_calico-system(885b0c3a-54d9-486d-a986-7d34c5be0f3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-cdf974658-wt6d7_calico-system(885b0c3a-54d9-486d-a986-7d34c5be0f3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b917804ffddb3b89eff885678216434d351f752cf900e1b97912d4e5a561971\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-cdf974658-wt6d7" podUID="885b0c3a-54d9-486d-a986-7d34c5be0f3c" Dec 12 18:43:27.219084 containerd[1559]: time="2025-12-12T18:43:27.219026473Z" level=error 
msg="Failed to destroy network for sandbox \"601095deffe82c13101d619d59290d9519c71f20cf8acac3b804895c677e8d95\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:43:27.223580 containerd[1559]: time="2025-12-12T18:43:27.223544868Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db8fdd69c-9p2sj,Uid:307920b5-5337-43c8-8c09-6d8750b41212,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"601095deffe82c13101d619d59290d9519c71f20cf8acac3b804895c677e8d95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:43:27.223888 kubelet[2724]: E1212 18:43:27.223852 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"601095deffe82c13101d619d59290d9519c71f20cf8acac3b804895c677e8d95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:43:27.223936 kubelet[2724]: E1212 18:43:27.223894 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"601095deffe82c13101d619d59290d9519c71f20cf8acac3b804895c677e8d95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6db8fdd69c-9p2sj" Dec 12 18:43:27.223936 kubelet[2724]: E1212 18:43:27.223911 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"601095deffe82c13101d619d59290d9519c71f20cf8acac3b804895c677e8d95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6db8fdd69c-9p2sj" Dec 12 18:43:27.224027 kubelet[2724]: E1212 18:43:27.223977 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6db8fdd69c-9p2sj_calico-apiserver(307920b5-5337-43c8-8c09-6d8750b41212)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6db8fdd69c-9p2sj_calico-apiserver(307920b5-5337-43c8-8c09-6d8750b41212)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"601095deffe82c13101d619d59290d9519c71f20cf8acac3b804895c677e8d95\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-9p2sj" podUID="307920b5-5337-43c8-8c09-6d8750b41212" Dec 12 18:43:27.226658 containerd[1559]: time="2025-12-12T18:43:27.226561985Z" level=error msg="Failed to destroy network for sandbox \"dd0fddc39e6041c1642c25c229c599fd49cde1db7a6b66ea2438550cefd2a0d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 12 18:43:27.226886 containerd[1559]: time="2025-12-12T18:43:27.226841814Z" level=error msg="Failed to destroy network for sandbox \"a5ae1a80388c7ff276cde4821413ec4615dc1c088fe00fc9b68d677c43cb608f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:43:27.227752 containerd[1559]: time="2025-12-12T18:43:27.227521225Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jtmwl,Uid:62cd2b62-6bbc-467f-ae85-488760759795,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd0fddc39e6041c1642c25c229c599fd49cde1db7a6b66ea2438550cefd2a0d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:43:27.228202 containerd[1559]: time="2025-12-12T18:43:27.228156085Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db8fdd69c-npwn2,Uid:83ec1740-f7bd-4f51-a6be-a16783749dd3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5ae1a80388c7ff276cde4821413ec4615dc1c088fe00fc9b68d677c43cb608f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:43:27.228855 kubelet[2724]: E1212 18:43:27.228564 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd0fddc39e6041c1642c25c229c599fd49cde1db7a6b66ea2438550cefd2a0d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:43:27.228855 kubelet[2724]: E1212 18:43:27.228602 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd0fddc39e6041c1642c25c229c599fd49cde1db7a6b66ea2438550cefd2a0d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jtmwl" Dec 12 18:43:27.228855 kubelet[2724]: E1212 18:43:27.228618 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd0fddc39e6041c1642c25c229c599fd49cde1db7a6b66ea2438550cefd2a0d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jtmwl" Dec 12 18:43:27.228963 kubelet[2724]: E1212 18:43:27.228647 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jtmwl_kube-system(62cd2b62-6bbc-467f-ae85-488760759795)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jtmwl_kube-system(62cd2b62-6bbc-467f-ae85-488760759795)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd0fddc39e6041c1642c25c229c599fd49cde1db7a6b66ea2438550cefd2a0d1\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jtmwl" podUID="62cd2b62-6bbc-467f-ae85-488760759795" Dec 12 18:43:27.229169 kubelet[2724]: E1212 18:43:27.229136 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5ae1a80388c7ff276cde4821413ec4615dc1c088fe00fc9b68d677c43cb608f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:43:27.229229 kubelet[2724]: E1212 18:43:27.229172 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5ae1a80388c7ff276cde4821413ec4615dc1c088fe00fc9b68d677c43cb608f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6db8fdd69c-npwn2" Dec 12 18:43:27.229229 kubelet[2724]: E1212 18:43:27.229187 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5ae1a80388c7ff276cde4821413ec4615dc1c088fe00fc9b68d677c43cb608f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6db8fdd69c-npwn2" Dec 12 18:43:27.229229 kubelet[2724]: E1212 18:43:27.229214 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6db8fdd69c-npwn2_calico-apiserver(83ec1740-f7bd-4f51-a6be-a16783749dd3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6db8fdd69c-npwn2_calico-apiserver(83ec1740-f7bd-4f51-a6be-a16783749dd3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5ae1a80388c7ff276cde4821413ec4615dc1c088fe00fc9b68d677c43cb608f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-npwn2" podUID="83ec1740-f7bd-4f51-a6be-a16783749dd3" Dec 12 18:43:27.232082 containerd[1559]: time="2025-12-12T18:43:27.232047680Z" level=error msg="Failed to destroy network for sandbox \"52ac44efccdaacd24c637f0162607969d700e2a81716492f4a70f448e132c29c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:43:27.232949 containerd[1559]: time="2025-12-12T18:43:27.232874206Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rmclv,Uid:0ba47eaa-f04d-4e71-87de-91abc04e7d96,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"52ac44efccdaacd24c637f0162607969d700e2a81716492f4a70f448e132c29c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:43:27.233900 kubelet[2724]: E1212 
18:43:27.233864 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52ac44efccdaacd24c637f0162607969d700e2a81716492f4a70f448e132c29c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:43:27.233988 kubelet[2724]: E1212 18:43:27.233928 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52ac44efccdaacd24c637f0162607969d700e2a81716492f4a70f448e132c29c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-rmclv" Dec 12 18:43:27.234050 kubelet[2724]: E1212 18:43:27.233969 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52ac44efccdaacd24c637f0162607969d700e2a81716492f4a70f448e132c29c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-rmclv" Dec 12 18:43:27.234241 kubelet[2724]: E1212 18:43:27.234202 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-rmclv_calico-system(0ba47eaa-f04d-4e71-87de-91abc04e7d96)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-rmclv_calico-system(0ba47eaa-f04d-4e71-87de-91abc04e7d96)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52ac44efccdaacd24c637f0162607969d700e2a81716492f4a70f448e132c29c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-rmclv" podUID="0ba47eaa-f04d-4e71-87de-91abc04e7d96" Dec 12 18:43:27.256884 containerd[1559]: time="2025-12-12T18:43:27.256820765Z" level=error msg="Failed to destroy network for sandbox \"50a57f5a3602fe6e51f192bfa6f2a1cd61769def400761eb1d6a38738fbbbf32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:43:27.258520 containerd[1559]: time="2025-12-12T18:43:27.258472667Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jczfs,Uid:838e6b4b-522c-48a0-a381-d65e2006fa97,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"50a57f5a3602fe6e51f192bfa6f2a1cd61769def400761eb1d6a38738fbbbf32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:43:27.258833 kubelet[2724]: E1212 18:43:27.258783 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50a57f5a3602fe6e51f192bfa6f2a1cd61769def400761eb1d6a38738fbbbf32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Dec 12 18:43:27.259075 kubelet[2724]: E1212 18:43:27.258848 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50a57f5a3602fe6e51f192bfa6f2a1cd61769def400761eb1d6a38738fbbbf32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jczfs" Dec 12 18:43:27.259075 kubelet[2724]: E1212 18:43:27.258870 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50a57f5a3602fe6e51f192bfa6f2a1cd61769def400761eb1d6a38738fbbbf32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jczfs" Dec 12 18:43:27.259197 kubelet[2724]: E1212 18:43:27.259162 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jczfs_kube-system(838e6b4b-522c-48a0-a381-d65e2006fa97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jczfs_kube-system(838e6b4b-522c-48a0-a381-d65e2006fa97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50a57f5a3602fe6e51f192bfa6f2a1cd61769def400761eb1d6a38738fbbbf32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jczfs" podUID="838e6b4b-522c-48a0-a381-d65e2006fa97" Dec 12 18:43:27.908692 systemd[1]: Created slice kubepods-besteffort-podaf20f1b0_b34b_412e_a0a1_b4c0cada074e.slice - libcontainer container kubepods-besteffort-podaf20f1b0_b34b_412e_a0a1_b4c0cada074e.slice. Dec 12 18:43:27.913297 containerd[1559]: time="2025-12-12T18:43:27.913096316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n46fl,Uid:af20f1b0-b34b-412e-a0a1-b4c0cada074e,Namespace:calico-system,Attempt:0,}" Dec 12 18:43:28.071427 containerd[1559]: time="2025-12-12T18:43:28.071328611Z" level=error msg="Failed to destroy network for sandbox \"e8a09bc55b17c40e75f69d417d8a2c23f210519a25c870f3c949938b235915a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:43:28.076241 systemd[1]: run-netns-cni\x2d5a4c4843\x2deb53\x2dd459\x2d28fb\x2db2fed699f307.mount: Deactivated successfully. 
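Every sandbox failure in this stretch is the same readiness gate: before it will add or tear down a pod network, the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico-node only writes once it is running and has /var/lib/calico/ mounted from the host. A minimal sketch of that gate, using only the path and wording that appear in the log (the function name is illustrative, not Calico's code):

```go
package main

import (
	"fmt"
	"os"
)

// nodenameReady mirrors the check behind the repeated CNI errors above:
// pod networking is refused until calico-node has written the nodename file.
func nodenameReady(path string) error {
	if _, err := os.Stat(path); err != nil {
		// os.Stat already yields "stat <path>: no such file or directory",
		// so the wrapped error reads like the plugin message in the log.
		return fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return nil
}

func main() {
	if err := nodenameReady("/var/lib/calico/nodename"); err != nil {
		fmt.Println("CNI not ready:", err)
		return
	}
	fmt.Println("CNI ready")
}
```

Once calico-node actually starts (the StartContainer entry at 18:43:31 below), later sandbox creations such as the whisker pod at 18:43:32 go through.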
Dec 12 18:43:28.077544 containerd[1559]: time="2025-12-12T18:43:28.077432794Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n46fl,Uid:af20f1b0-b34b-412e-a0a1-b4c0cada074e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8a09bc55b17c40e75f69d417d8a2c23f210519a25c870f3c949938b235915a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:43:28.077809 kubelet[2724]: E1212 18:43:28.077694 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8a09bc55b17c40e75f69d417d8a2c23f210519a25c870f3c949938b235915a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:43:28.077809 kubelet[2724]: E1212 18:43:28.077772 2724 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8a09bc55b17c40e75f69d417d8a2c23f210519a25c870f3c949938b235915a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n46fl" Dec 12 18:43:28.077809 kubelet[2724]: E1212 18:43:28.077798 2724 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8a09bc55b17c40e75f69d417d8a2c23f210519a25c870f3c949938b235915a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n46fl" Dec 12 18:43:28.079242 kubelet[2724]: E1212 18:43:28.077860 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n46fl_calico-system(af20f1b0-b34b-412e-a0a1-b4c0cada074e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n46fl_calico-system(af20f1b0-b34b-412e-a0a1-b4c0cada074e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8a09bc55b17c40e75f69d417d8a2c23f210519a25c870f3c949938b235915a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n46fl" podUID="af20f1b0-b34b-412e-a0a1-b4c0cada074e" Dec 12 18:43:31.148999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2214355995.mount: Deactivated successfully. 
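The mount units systemd reports cleaning up here (run-netns-cni\x2d… and var-lib-containerd-tmpmounts-containerd\x2dmount2214355995.mount) are filesystem paths run through systemd's unit-name escaping: the leading slash is dropped, the remaining slashes become dashes, and every other byte outside A–Z, a–z, 0–9, ':', '_' and '.' — including literal dashes — is written as \xNN. A rough, illustrative reimplementation; systemd-escape --path is the authoritative tool, and edge cases such as a leading dot are ignored here:

```go
package main

import (
	"fmt"
	"strings"
)

// escapePath approximates `systemd-escape --path`: strip the leading "/",
// turn the remaining "/" separators into "-", and hex-escape every byte
// that is not alphanumeric, ":", "_" or ".".
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Reproduces the netns mount unit named in the journal above.
	fmt.Println(escapePath("/run/netns/cni-5a4c4843-eb53-d459-28fb-b2fed699f307") + ".mount")
}
```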
Dec 12 18:43:31.176620 containerd[1559]: time="2025-12-12T18:43:31.176576856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:43:31.177383 containerd[1559]: time="2025-12-12T18:43:31.177288154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Dec 12 18:43:31.177912 containerd[1559]: time="2025-12-12T18:43:31.177884389Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:43:31.179591 containerd[1559]: time="2025-12-12T18:43:31.179566150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:43:31.180430 containerd[1559]: time="2025-12-12T18:43:31.180386900Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.123023057s" Dec 12 18:43:31.180641 containerd[1559]: time="2025-12-12T18:43:31.180535763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 12 18:43:31.200410 containerd[1559]: time="2025-12-12T18:43:31.200370430Z" level=info msg="CreateContainer within sandbox \"e2a71334247e01574d0d11c50394f39c8df8dcce328df20c83f194e981264f22\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 18:43:31.211446 containerd[1559]: time="2025-12-12T18:43:31.210368065Z" level=info msg="Container 1b5f1bd159fbec8016f8637e13ec5d6dfb5565941fc24ff349c1b52e40d412c7: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:43:31.215269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3708681034.mount: Deactivated successfully. Dec 12 18:43:31.220173 containerd[1559]: time="2025-12-12T18:43:31.220140085Z" level=info msg="CreateContainer within sandbox \"e2a71334247e01574d0d11c50394f39c8df8dcce328df20c83f194e981264f22\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1b5f1bd159fbec8016f8637e13ec5d6dfb5565941fc24ff349c1b52e40d412c7\"" Dec 12 18:43:31.220644 containerd[1559]: time="2025-12-12T18:43:31.220615417Z" level=info msg="StartContainer for \"1b5f1bd159fbec8016f8637e13ec5d6dfb5565941fc24ff349c1b52e40d412c7\"" Dec 12 18:43:31.222448 containerd[1559]: time="2025-12-12T18:43:31.222408911Z" level=info msg="connecting to shim 1b5f1bd159fbec8016f8637e13ec5d6dfb5565941fc24ff349c1b52e40d412c7" address="unix:///run/containerd/s/e3ba5a767d989017f35d8504ae605c0777e63a151b1d26807ff62f51cc61b116" protocol=ttrpc version=3 Dec 12 18:43:31.268301 systemd[1]: Started cri-containerd-1b5f1bd159fbec8016f8637e13ec5d6dfb5565941fc24ff349c1b52e40d412c7.scope - libcontainer container 1b5f1bd159fbec8016f8637e13ec5d6dfb5565941fc24ff349c1b52e40d412c7. Dec 12 18:43:31.351718 containerd[1559]: time="2025-12-12T18:43:31.351657251Z" level=info msg="StartContainer for \"1b5f1bd159fbec8016f8637e13ec5d6dfb5565941fc24ff349c1b52e40d412c7\" returns successfully" Dec 12 18:43:31.435273 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Dec 12 18:43:31.435363 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 12 18:43:31.639473 kubelet[2724]: I1212 18:43:31.639398 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/885b0c3a-54d9-486d-a986-7d34c5be0f3c-whisker-ca-bundle\") pod \"885b0c3a-54d9-486d-a986-7d34c5be0f3c\" (UID: \"885b0c3a-54d9-486d-a986-7d34c5be0f3c\") " Dec 12 18:43:31.639473 kubelet[2724]: I1212 18:43:31.639451 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-275fw\" (UniqueName: \"kubernetes.io/projected/885b0c3a-54d9-486d-a986-7d34c5be0f3c-kube-api-access-275fw\") pod \"885b0c3a-54d9-486d-a986-7d34c5be0f3c\" (UID: \"885b0c3a-54d9-486d-a986-7d34c5be0f3c\") " Dec 12 18:43:31.639473 kubelet[2724]: I1212 18:43:31.639475 2724 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/885b0c3a-54d9-486d-a986-7d34c5be0f3c-whisker-backend-key-pair\") pod \"885b0c3a-54d9-486d-a986-7d34c5be0f3c\" (UID: \"885b0c3a-54d9-486d-a986-7d34c5be0f3c\") " Dec 12 18:43:31.642214 kubelet[2724]: I1212 18:43:31.642074 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/885b0c3a-54d9-486d-a986-7d34c5be0f3c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "885b0c3a-54d9-486d-a986-7d34c5be0f3c" (UID: "885b0c3a-54d9-486d-a986-7d34c5be0f3c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 18:43:31.648889 kubelet[2724]: I1212 18:43:31.648830 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/885b0c3a-54d9-486d-a986-7d34c5be0f3c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "885b0c3a-54d9-486d-a986-7d34c5be0f3c" (UID: "885b0c3a-54d9-486d-a986-7d34c5be0f3c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 18:43:31.650420 kubelet[2724]: I1212 18:43:31.650392 2724 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/885b0c3a-54d9-486d-a986-7d34c5be0f3c-kube-api-access-275fw" (OuterVolumeSpecName: "kube-api-access-275fw") pod "885b0c3a-54d9-486d-a986-7d34c5be0f3c" (UID: "885b0c3a-54d9-486d-a986-7d34c5be0f3c"). InnerVolumeSpecName "kube-api-access-275fw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 18:43:31.742276 kubelet[2724]: I1212 18:43:31.740695 2724 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/885b0c3a-54d9-486d-a986-7d34c5be0f3c-whisker-ca-bundle\") on node \"172-239-194-183\" DevicePath \"\"" Dec 12 18:43:31.742276 kubelet[2724]: I1212 18:43:31.740727 2724 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-275fw\" (UniqueName: \"kubernetes.io/projected/885b0c3a-54d9-486d-a986-7d34c5be0f3c-kube-api-access-275fw\") on node \"172-239-194-183\" DevicePath \"\"" Dec 12 18:43:31.742276 kubelet[2724]: I1212 18:43:31.740737 2724 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/885b0c3a-54d9-486d-a986-7d34c5be0f3c-whisker-backend-key-pair\") on node \"172-239-194-183\" DevicePath \"\"" Dec 12 18:43:32.073299 kubelet[2724]: E1212 18:43:32.073040 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:32.081577 systemd[1]: Removed slice kubepods-besteffort-pod885b0c3a_54d9_486d_a986_7d34c5be0f3c.slice - libcontainer container kubepods-besteffort-pod885b0c3a_54d9_486d_a986_7d34c5be0f3c.slice. Dec 12 18:43:32.092389 kubelet[2724]: I1212 18:43:32.092156 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5mwl6" podStartSLOduration=1.639658708 podStartE2EDuration="11.092142388s" podCreationTimestamp="2025-12-12 18:43:21 +0000 UTC" firstStartedPulling="2025-12-12 18:43:21.728888474 +0000 UTC m=+18.945215269" lastFinishedPulling="2025-12-12 18:43:31.181372154 +0000 UTC m=+28.397698949" observedRunningTime="2025-12-12 18:43:32.090723706 +0000 UTC m=+29.307050511" watchObservedRunningTime="2025-12-12 18:43:32.092142388 +0000 UTC m=+29.308469183" Dec 12 18:43:32.153664 systemd[1]: var-lib-kubelet-pods-885b0c3a\x2d54d9\x2d486d\x2da986\x2d7d34c5be0f3c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d275fw.mount: Deactivated successfully. Dec 12 18:43:32.154064 systemd[1]: var-lib-kubelet-pods-885b0c3a\x2d54d9\x2d486d\x2da986\x2d7d34c5be0f3c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 18:43:32.165467 systemd[1]: Created slice kubepods-besteffort-pod387f967a_d27c_485c_aeed_91421d359fb6.slice - libcontainer container kubepods-besteffort-pod387f967a_d27c_485c_aeed_91421d359fb6.slice. 
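The startup-latency record above is internally consistent: the image-pull window is lastFinishedPulling − firstStartedPulling = 31.181372154 − 21.728888474 = 9.45248368 s, and podStartE2EDuration (11.092142388 s) minus that window is exactly the reported podStartSLOduration of 1.639658708 s, i.e. the SLO figure excludes the time spent pulling the calico/node image. A quick check of that arithmetic; the timestamps are copied from the entry, and the relationship is inferred from the numbers rather than quoted from kubelet source:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout matches the "+0000 UTC" form the kubelet log lines use.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	firstStartedPulling := mustParse("2025-12-12 18:43:21.728888474 +0000 UTC")
	lastFinishedPulling := mustParse("2025-12-12 18:43:31.181372154 +0000 UTC")

	e2e := 11092142388 * time.Nanosecond // podStartE2EDuration from the log
	pull := lastFinishedPulling.Sub(firstStartedPulling)

	fmt.Println("pull window:", pull)    // 9.45248368s
	fmt.Println("E2E - pull:", e2e-pull) // 1.639658708s == podStartSLOduration
}
```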
Dec 12 18:43:32.244434 kubelet[2724]: I1212 18:43:32.244088 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg97q\" (UniqueName: \"kubernetes.io/projected/387f967a-d27c-485c-aeed-91421d359fb6-kube-api-access-kg97q\") pod \"whisker-675b66b756-wh4s2\" (UID: \"387f967a-d27c-485c-aeed-91421d359fb6\") " pod="calico-system/whisker-675b66b756-wh4s2" Dec 12 18:43:32.244569 kubelet[2724]: I1212 18:43:32.244469 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/387f967a-d27c-485c-aeed-91421d359fb6-whisker-ca-bundle\") pod \"whisker-675b66b756-wh4s2\" (UID: \"387f967a-d27c-485c-aeed-91421d359fb6\") " pod="calico-system/whisker-675b66b756-wh4s2" Dec 12 18:43:32.244601 kubelet[2724]: I1212 18:43:32.244561 2724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/387f967a-d27c-485c-aeed-91421d359fb6-whisker-backend-key-pair\") pod \"whisker-675b66b756-wh4s2\" (UID: \"387f967a-d27c-485c-aeed-91421d359fb6\") " pod="calico-system/whisker-675b66b756-wh4s2" Dec 12 18:43:32.470911 containerd[1559]: time="2025-12-12T18:43:32.470874514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-675b66b756-wh4s2,Uid:387f967a-d27c-485c-aeed-91421d359fb6,Namespace:calico-system,Attempt:0,}" Dec 12 18:43:32.613613 systemd-networkd[1435]: calie97893e08a0: Link UP Dec 12 18:43:32.615526 systemd-networkd[1435]: calie97893e08a0: Gained carrier Dec 12 18:43:32.640362 containerd[1559]: 2025-12-12 18:43:32.494 [INFO][3803] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:43:32.640362 containerd[1559]: 2025-12-12 18:43:32.523 [INFO][3803] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--194--183-k8s-whisker--675b66b756--wh4s2-eth0 whisker-675b66b756- calico-system 387f967a-d27c-485c-aeed-91421d359fb6 884 0 2025-12-12 18:43:32 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:675b66b756 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-239-194-183 whisker-675b66b756-wh4s2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie97893e08a0 [] [] }} ContainerID="c70301276f9c20b53cdc97d9711952e822cbccfd87825f244c606e8af5c5190d" Namespace="calico-system" Pod="whisker-675b66b756-wh4s2" WorkloadEndpoint="172--239--194--183-k8s-whisker--675b66b756--wh4s2-" Dec 12 18:43:32.640362 containerd[1559]: 2025-12-12 18:43:32.523 [INFO][3803] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c70301276f9c20b53cdc97d9711952e822cbccfd87825f244c606e8af5c5190d" Namespace="calico-system" Pod="whisker-675b66b756-wh4s2" WorkloadEndpoint="172--239--194--183-k8s-whisker--675b66b756--wh4s2-eth0" Dec 12 18:43:32.640362 containerd[1559]: 2025-12-12 18:43:32.552 [INFO][3815] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c70301276f9c20b53cdc97d9711952e822cbccfd87825f244c606e8af5c5190d" HandleID="k8s-pod-network.c70301276f9c20b53cdc97d9711952e822cbccfd87825f244c606e8af5c5190d" Workload="172--239--194--183-k8s-whisker--675b66b756--wh4s2-eth0" Dec 12 18:43:32.640680 containerd[1559]: 2025-12-12 18:43:32.552 [INFO][3815] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="c70301276f9c20b53cdc97d9711952e822cbccfd87825f244c606e8af5c5190d" HandleID="k8s-pod-network.c70301276f9c20b53cdc97d9711952e822cbccfd87825f244c606e8af5c5190d" Workload="172--239--194--183-k8s-whisker--675b66b756--wh4s2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f060), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-194-183", "pod":"whisker-675b66b756-wh4s2", "timestamp":"2025-12-12 18:43:32.552559577 +0000 UTC"}, Hostname:"172-239-194-183", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:43:32.640680 containerd[1559]: 2025-12-12 18:43:32.552 [INFO][3815] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:43:32.640680 containerd[1559]: 2025-12-12 18:43:32.552 [INFO][3815] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:43:32.640680 containerd[1559]: 2025-12-12 18:43:32.552 [INFO][3815] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-194-183' Dec 12 18:43:32.640680 containerd[1559]: 2025-12-12 18:43:32.560 [INFO][3815] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c70301276f9c20b53cdc97d9711952e822cbccfd87825f244c606e8af5c5190d" host="172-239-194-183" Dec 12 18:43:32.640680 containerd[1559]: 2025-12-12 18:43:32.565 [INFO][3815] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-194-183" Dec 12 18:43:32.640680 containerd[1559]: 2025-12-12 18:43:32.570 [INFO][3815] ipam/ipam.go 511: Trying affinity for 192.168.19.64/26 host="172-239-194-183" Dec 12 18:43:32.640680 containerd[1559]: 2025-12-12 18:43:32.571 [INFO][3815] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.64/26 host="172-239-194-183" Dec 12 18:43:32.640680 containerd[1559]: 2025-12-12 18:43:32.573 [INFO][3815] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.64/26 host="172-239-194-183" Dec 12 18:43:32.640680 containerd[1559]: 2025-12-12 18:43:32.574 [INFO][3815] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.64/26 handle="k8s-pod-network.c70301276f9c20b53cdc97d9711952e822cbccfd87825f244c606e8af5c5190d" host="172-239-194-183" Dec 12 18:43:32.640906 containerd[1559]: 2025-12-12 18:43:32.579 [INFO][3815] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c70301276f9c20b53cdc97d9711952e822cbccfd87825f244c606e8af5c5190d Dec 12 18:43:32.640906 containerd[1559]: 2025-12-12 18:43:32.585 [INFO][3815] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.64/26 handle="k8s-pod-network.c70301276f9c20b53cdc97d9711952e822cbccfd87825f244c606e8af5c5190d" host="172-239-194-183" Dec 12 18:43:32.640906 containerd[1559]: 2025-12-12 18:43:32.589 [INFO][3815] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.65/26] block=192.168.19.64/26 handle="k8s-pod-network.c70301276f9c20b53cdc97d9711952e822cbccfd87825f244c606e8af5c5190d" host="172-239-194-183" Dec 12 18:43:32.640906 containerd[1559]: 2025-12-12 18:43:32.589 [INFO][3815] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.65/26] handle="k8s-pod-network.c70301276f9c20b53cdc97d9711952e822cbccfd87825f244c606e8af5c5190d" host="172-239-194-183" Dec 12 18:43:32.640906 containerd[1559]: 2025-12-12 18:43:32.589 [INFO][3815] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
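The IPAM trace reads as the usual Calico flow: the node already holds an affinity for block 192.168.19.64/26, so the allocator takes the host-wide lock, loads the block, claims a free ordinal, and releases the lock, which is how the whisker pod ends up with 192.168.19.65. The sketch below only illustrates the block arithmetic — it is not Calico's allocator, and the bookkeeping of which ordinals are already taken (the block start was evidently in use) is elided:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Allocation block named in the IPAM entries above.
	block := netip.MustParsePrefix("192.168.19.64/26")

	// A /26 block holds 2^(32-26) = 64 addresses.
	fmt.Println("addresses in block:", 1<<(32-block.Bits()))

	// Walk the start of the block; per the trace, the whisker pod was
	// handed 192.168.19.65, the first ordinal still free at that point.
	addr := block.Addr()
	for i := 0; i < 3; i++ {
		fmt.Println(addr)
		addr = addr.Next()
	}
}
```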
Dec 12 18:43:32.640906 containerd[1559]: 2025-12-12 18:43:32.589 [INFO][3815] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.65/26] IPv6=[] ContainerID="c70301276f9c20b53cdc97d9711952e822cbccfd87825f244c606e8af5c5190d" HandleID="k8s-pod-network.c70301276f9c20b53cdc97d9711952e822cbccfd87825f244c606e8af5c5190d" Workload="172--239--194--183-k8s-whisker--675b66b756--wh4s2-eth0" Dec 12 18:43:32.641027 containerd[1559]: 2025-12-12 18:43:32.595 [INFO][3803] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c70301276f9c20b53cdc97d9711952e822cbccfd87825f244c606e8af5c5190d" Namespace="calico-system" Pod="whisker-675b66b756-wh4s2" WorkloadEndpoint="172--239--194--183-k8s-whisker--675b66b756--wh4s2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--194--183-k8s-whisker--675b66b756--wh4s2-eth0", GenerateName:"whisker-675b66b756-", Namespace:"calico-system", SelfLink:"", UID:"387f967a-d27c-485c-aeed-91421d359fb6", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 43, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"675b66b756", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-194-183", ContainerID:"", Pod:"whisker-675b66b756-wh4s2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.19.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie97893e08a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:43:32.641027 containerd[1559]: 2025-12-12 18:43:32.596 [INFO][3803] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.65/32] ContainerID="c70301276f9c20b53cdc97d9711952e822cbccfd87825f244c606e8af5c5190d" Namespace="calico-system" Pod="whisker-675b66b756-wh4s2" WorkloadEndpoint="172--239--194--183-k8s-whisker--675b66b756--wh4s2-eth0" Dec 12 18:43:32.641098 containerd[1559]: 2025-12-12 18:43:32.596 [INFO][3803] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie97893e08a0 ContainerID="c70301276f9c20b53cdc97d9711952e822cbccfd87825f244c606e8af5c5190d" Namespace="calico-system" Pod="whisker-675b66b756-wh4s2" WorkloadEndpoint="172--239--194--183-k8s-whisker--675b66b756--wh4s2-eth0" Dec 12 18:43:32.641098 containerd[1559]: 2025-12-12 18:43:32.616 [INFO][3803] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c70301276f9c20b53cdc97d9711952e822cbccfd87825f244c606e8af5c5190d" Namespace="calico-system" Pod="whisker-675b66b756-wh4s2" WorkloadEndpoint="172--239--194--183-k8s-whisker--675b66b756--wh4s2-eth0" Dec 12 18:43:32.642193 containerd[1559]: 2025-12-12 18:43:32.617 [INFO][3803] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c70301276f9c20b53cdc97d9711952e822cbccfd87825f244c606e8af5c5190d" Namespace="calico-system" Pod="whisker-675b66b756-wh4s2" 
WorkloadEndpoint="172--239--194--183-k8s-whisker--675b66b756--wh4s2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--194--183-k8s-whisker--675b66b756--wh4s2-eth0", GenerateName:"whisker-675b66b756-", Namespace:"calico-system", SelfLink:"", UID:"387f967a-d27c-485c-aeed-91421d359fb6", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 43, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"675b66b756", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-194-183", ContainerID:"c70301276f9c20b53cdc97d9711952e822cbccfd87825f244c606e8af5c5190d", Pod:"whisker-675b66b756-wh4s2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.19.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie97893e08a0", MAC:"c2:1d:7a:96:98:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:43:32.642250 containerd[1559]: 2025-12-12 18:43:32.628 [INFO][3803] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c70301276f9c20b53cdc97d9711952e822cbccfd87825f244c606e8af5c5190d" Namespace="calico-system" Pod="whisker-675b66b756-wh4s2" WorkloadEndpoint="172--239--194--183-k8s-whisker--675b66b756--wh4s2-eth0" Dec 12 18:43:32.683806 containerd[1559]: time="2025-12-12T18:43:32.683703115Z" level=info msg="connecting to shim c70301276f9c20b53cdc97d9711952e822cbccfd87825f244c606e8af5c5190d" address="unix:///run/containerd/s/a5d695989a240ad21469b18aa62470214a25ed23042de98ccc1a6ceadf359ebe" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:43:32.717253 systemd[1]: Started cri-containerd-c70301276f9c20b53cdc97d9711952e822cbccfd87825f244c606e8af5c5190d.scope - libcontainer container c70301276f9c20b53cdc97d9711952e822cbccfd87825f244c606e8af5c5190d. 
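The endpoint name Calico writes to the datastore, 172--239--194--183-k8s-whisker--675b66b756--wh4s2-eth0, is the node name, orchestrator, pod name and interface joined with single dashes after doubling every dash inside a component, so the separators stay unambiguous even though the node and pod names contain dashes themselves. A sketch of that encoding, derived from the names visible in this trace rather than from Calico source (the function name is made up):

```go
package main

import (
	"fmt"
	"strings"
)

// workloadEndpointName joins the components with "-" after doubling any
// literal dash inside a component, matching the Workload/HandleID strings
// seen in the CNI and IPAM entries above.
func workloadEndpointName(node, orchestrator, pod, endpoint string) string {
	parts := []string{node, orchestrator, pod, endpoint}
	for i, p := range parts {
		parts[i] = strings.ReplaceAll(p, "-", "--")
	}
	return strings.Join(parts, "-")
}

func main() {
	fmt.Println(workloadEndpointName("172-239-194-183", "k8s", "whisker-675b66b756-wh4s2", "eth0"))
	// Output: 172--239--194--183-k8s-whisker--675b66b756--wh4s2-eth0
}
```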
Dec 12 18:43:32.797217 containerd[1559]: time="2025-12-12T18:43:32.797141076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-675b66b756-wh4s2,Uid:387f967a-d27c-485c-aeed-91421d359fb6,Namespace:calico-system,Attempt:0,} returns sandbox id \"c70301276f9c20b53cdc97d9711952e822cbccfd87825f244c606e8af5c5190d\"" Dec 12 18:43:32.800003 containerd[1559]: time="2025-12-12T18:43:32.799983431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:43:32.912506 kubelet[2724]: I1212 18:43:32.911883 2724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="885b0c3a-54d9-486d-a986-7d34c5be0f3c" path="/var/lib/kubelet/pods/885b0c3a-54d9-486d-a986-7d34c5be0f3c/volumes" Dec 12 18:43:32.922776 containerd[1559]: time="2025-12-12T18:43:32.922620194Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:43:32.928285 containerd[1559]: time="2025-12-12T18:43:32.928202691Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:43:32.928285 containerd[1559]: time="2025-12-12T18:43:32.928262513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:43:32.928801 kubelet[2724]: E1212 18:43:32.928754 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:43:32.928926 kubelet[2724]: E1212 18:43:32.928907 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:43:32.931405 kubelet[2724]: E1212 18:43:32.931283 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:616f14a12ce949ddb0ea243c4dc4501f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kg97q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-675b66b756-wh4s2_calico-system(387f967a-d27c-485c-aeed-91421d359fb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:43:32.935300 containerd[1559]: time="2025-12-12T18:43:32.935197822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:43:33.066443 containerd[1559]: time="2025-12-12T18:43:33.066278259Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:43:33.067135 containerd[1559]: time="2025-12-12T18:43:33.067083586Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:43:33.067190 containerd[1559]: time="2025-12-12T18:43:33.067173698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:43:33.067635 kubelet[2724]: E1212 18:43:33.067588 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:43:33.067776 kubelet[2724]: E1212 18:43:33.067743 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:43:33.068332 kubelet[2724]: E1212 18:43:33.067979 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kg97q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-675b66b756-wh4s2_calico-system(387f967a-d27c-485c-aeed-91421d359fb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:43:33.069896 kubelet[2724]: E1212 18:43:33.069721 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675b66b756-wh4s2" podUID="387f967a-d27c-485c-aeed-91421d359fb6" Dec 12 18:43:33.077260 kubelet[2724]: I1212 18:43:33.077211 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:43:33.079145 kubelet[2724]: E1212 18:43:33.079090 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:33.081139 kubelet[2724]: E1212 18:43:33.080985 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675b66b756-wh4s2" podUID="387f967a-d27c-485c-aeed-91421d359fb6" Dec 12 18:43:33.244243 kubelet[2724]: I1212 18:43:33.243053 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:43:33.244243 kubelet[2724]: E1212 18:43:33.243567 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:34.079795 kubelet[2724]: E1212 18:43:34.079737 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:34.083778 kubelet[2724]: E1212 18:43:34.083718 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675b66b756-wh4s2" podUID="387f967a-d27c-485c-aeed-91421d359fb6" Dec 12 18:43:34.415532 systemd-networkd[1435]: calie97893e08a0: Gained IPv6LL Dec 12 18:43:34.734912 systemd-networkd[1435]: vxlan.calico: Link UP Dec 12 18:43:34.734919 systemd-networkd[1435]: vxlan.calico: Gained carrier Dec 12 18:43:36.077498 systemd-networkd[1435]: vxlan.calico: Gained IPv6LL Dec 12 18:43:38.510237 kubelet[2724]: I1212 18:43:38.509900 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:43:38.510904 kubelet[2724]: E1212 18:43:38.510765 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:38.903242 kubelet[2724]: 
E1212 18:43:38.902694 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:38.903526 containerd[1559]: time="2025-12-12T18:43:38.903265164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db8fdd69c-npwn2,Uid:83ec1740-f7bd-4f51-a6be-a16783749dd3,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:43:38.905211 containerd[1559]: time="2025-12-12T18:43:38.904781677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jtmwl,Uid:62cd2b62-6bbc-467f-ae85-488760759795,Namespace:kube-system,Attempt:0,}" Dec 12 18:43:39.056067 systemd-networkd[1435]: cali37896e7d3eb: Link UP Dec 12 18:43:39.056731 systemd-networkd[1435]: cali37896e7d3eb: Gained carrier Dec 12 18:43:39.079928 containerd[1559]: 2025-12-12 18:43:38.964 [INFO][4154] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--194--183-k8s-coredns--674b8bbfcf--jtmwl-eth0 coredns-674b8bbfcf- kube-system 62cd2b62-6bbc-467f-ae85-488760759795 819 0 2025-12-12 18:43:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-194-183 coredns-674b8bbfcf-jtmwl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali37896e7d3eb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271" Namespace="kube-system" Pod="coredns-674b8bbfcf-jtmwl" WorkloadEndpoint="172--239--194--183-k8s-coredns--674b8bbfcf--jtmwl-" Dec 12 18:43:39.079928 containerd[1559]: 2025-12-12 18:43:38.964 [INFO][4154] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271" Namespace="kube-system" Pod="coredns-674b8bbfcf-jtmwl" WorkloadEndpoint="172--239--194--183-k8s-coredns--674b8bbfcf--jtmwl-eth0" Dec 12 18:43:39.079928 containerd[1559]: 2025-12-12 18:43:39.011 [INFO][4180] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271" HandleID="k8s-pod-network.addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271" Workload="172--239--194--183-k8s-coredns--674b8bbfcf--jtmwl-eth0" Dec 12 18:43:39.080219 containerd[1559]: 2025-12-12 18:43:39.011 [INFO][4180] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271" HandleID="k8s-pod-network.addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271" Workload="172--239--194--183-k8s-coredns--674b8bbfcf--jtmwl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58f0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-194-183", "pod":"coredns-674b8bbfcf-jtmwl", "timestamp":"2025-12-12 18:43:39.011200979 +0000 UTC"}, Hostname:"172-239-194-183", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:43:39.080219 containerd[1559]: 2025-12-12 18:43:39.011 [INFO][4180] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 12 18:43:39.080219 containerd[1559]: 2025-12-12 18:43:39.011 [INFO][4180] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:43:39.080219 containerd[1559]: 2025-12-12 18:43:39.011 [INFO][4180] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-194-183' Dec 12 18:43:39.080219 containerd[1559]: 2025-12-12 18:43:39.019 [INFO][4180] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271" host="172-239-194-183" Dec 12 18:43:39.080219 containerd[1559]: 2025-12-12 18:43:39.023 [INFO][4180] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-194-183" Dec 12 18:43:39.080219 containerd[1559]: 2025-12-12 18:43:39.027 [INFO][4180] ipam/ipam.go 511: Trying affinity for 192.168.19.64/26 host="172-239-194-183" Dec 12 18:43:39.080219 containerd[1559]: 2025-12-12 18:43:39.029 [INFO][4180] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.64/26 host="172-239-194-183" Dec 12 18:43:39.080219 containerd[1559]: 2025-12-12 18:43:39.032 [INFO][4180] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.64/26 host="172-239-194-183" Dec 12 18:43:39.080219 containerd[1559]: 2025-12-12 18:43:39.032 [INFO][4180] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.64/26 handle="k8s-pod-network.addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271" host="172-239-194-183" Dec 12 18:43:39.080518 containerd[1559]: 2025-12-12 18:43:39.033 [INFO][4180] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271 Dec 12 18:43:39.080518 containerd[1559]: 2025-12-12 18:43:39.038 [INFO][4180] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.64/26 handle="k8s-pod-network.addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271" host="172-239-194-183" Dec 12 18:43:39.080518 containerd[1559]: 2025-12-12 18:43:39.045 [INFO][4180] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.66/26] block=192.168.19.64/26 handle="k8s-pod-network.addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271" host="172-239-194-183" Dec 12 18:43:39.080518 containerd[1559]: 2025-12-12 18:43:39.045 [INFO][4180] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.66/26] handle="k8s-pod-network.addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271" host="172-239-194-183" Dec 12 18:43:39.080518 containerd[1559]: 2025-12-12 18:43:39.045 [INFO][4180] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:43:39.080518 containerd[1559]: 2025-12-12 18:43:39.045 [INFO][4180] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.66/26] IPv6=[] ContainerID="addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271" HandleID="k8s-pod-network.addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271" Workload="172--239--194--183-k8s-coredns--674b8bbfcf--jtmwl-eth0" Dec 12 18:43:39.080645 containerd[1559]: 2025-12-12 18:43:39.050 [INFO][4154] cni-plugin/k8s.go 418: Populated endpoint ContainerID="addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271" Namespace="kube-system" Pod="coredns-674b8bbfcf-jtmwl" WorkloadEndpoint="172--239--194--183-k8s-coredns--674b8bbfcf--jtmwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--194--183-k8s-coredns--674b8bbfcf--jtmwl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"62cd2b62-6bbc-467f-ae85-488760759795", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 43, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-194-183", ContainerID:"", Pod:"coredns-674b8bbfcf-jtmwl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali37896e7d3eb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:43:39.080645 containerd[1559]: 2025-12-12 18:43:39.050 [INFO][4154] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.66/32] ContainerID="addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271" Namespace="kube-system" Pod="coredns-674b8bbfcf-jtmwl" WorkloadEndpoint="172--239--194--183-k8s-coredns--674b8bbfcf--jtmwl-eth0" Dec 12 18:43:39.080645 containerd[1559]: 2025-12-12 18:43:39.050 [INFO][4154] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37896e7d3eb ContainerID="addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271" Namespace="kube-system" Pod="coredns-674b8bbfcf-jtmwl" WorkloadEndpoint="172--239--194--183-k8s-coredns--674b8bbfcf--jtmwl-eth0" Dec 12 18:43:39.080645 containerd[1559]: 2025-12-12 18:43:39.058 [INFO][4154] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271" Namespace="kube-system" Pod="coredns-674b8bbfcf-jtmwl" 
WorkloadEndpoint="172--239--194--183-k8s-coredns--674b8bbfcf--jtmwl-eth0" Dec 12 18:43:39.080645 containerd[1559]: 2025-12-12 18:43:39.058 [INFO][4154] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271" Namespace="kube-system" Pod="coredns-674b8bbfcf-jtmwl" WorkloadEndpoint="172--239--194--183-k8s-coredns--674b8bbfcf--jtmwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--194--183-k8s-coredns--674b8bbfcf--jtmwl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"62cd2b62-6bbc-467f-ae85-488760759795", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 43, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-194-183", ContainerID:"addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271", Pod:"coredns-674b8bbfcf-jtmwl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali37896e7d3eb", MAC:"66:b9:2c:40:fc:07", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:43:39.080645 containerd[1559]: 2025-12-12 18:43:39.068 [INFO][4154] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271" Namespace="kube-system" Pod="coredns-674b8bbfcf-jtmwl" WorkloadEndpoint="172--239--194--183-k8s-coredns--674b8bbfcf--jtmwl-eth0" Dec 12 18:43:39.089825 kubelet[2724]: E1212 18:43:39.089777 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:39.114166 containerd[1559]: time="2025-12-12T18:43:39.113442464Z" level=info msg="connecting to shim addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271" address="unix:///run/containerd/s/2c9699c16909abc1c47b45f76986eb1c036a7ebb23e5ea500d727052311b18b3" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:43:39.158628 systemd[1]: Started cri-containerd-addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271.scope - libcontainer container addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271. 
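The ipam/ipam.go entries above trace the whole allocation path for coredns-674b8bbfcf-jtmwl: acquire the host-wide IPAM lock, confirm this node's affinity for block 192.168.19.64/26, claim the next free address (192.168.19.66/26), write the block back to the datastore, and release the lock. The sketch below is a toy version of the next-free-address step only; the real allocator persists block state and handles in the datastore, so the mutex and map here are stand-ins, and the two pre-claimed addresses in main are hypothetical placeholders for whatever already occupied .64 and .65 on this node.

package main

import (
	"fmt"
	"net"
	"sync"
)

// ipamBlock is a toy stand-in for a Calico /26 allocation block.
type ipamBlock struct {
	mu   sync.Mutex        // plays the role of the host-wide IPAM lock
	cidr *net.IPNet
	used map[string]string // ip -> allocation handle
}

func newBlock(cidr string) *ipamBlock {
	_, n, err := net.ParseCIDR(cidr)
	if err != nil {
		panic(err)
	}
	return &ipamBlock{cidr: n, used: map[string]string{}}
}

// nextIP returns ip+1, treating the address as a big-endian integer.
func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

// assign claims the first unused address in the block for the given handle.
func (b *ipamBlock) assign(handle string) (net.IP, bool) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = nextIP(ip) {
		if _, taken := b.used[ip.String()]; !taken {
			b.used[ip.String()] = handle
			return ip, true
		}
	}
	return nil, false
}

func main() {
	b := newBlock("192.168.19.64/26")
	// Pretend two addresses are already taken, so the next claims land on .66
	// and .67, which is where the coredns and calico-apiserver pods end up in
	// the log above and below.
	b.assign("placeholder-earlier-endpoint-1")
	b.assign("placeholder-earlier-endpoint-2")
	coredns, _ := b.assign("k8s-pod-network.addca1c2da09...")   // handle truncated, illustrative
	apiserver, _ := b.assign("k8s-pod-network.ce64e8761b26...") // handle truncated, illustrative
	fmt.Println("coredns pod IP:", coredns, "| apiserver pod IP:", apiserver)
}

The same acquire/affinity/claim/release sequence repeats below for calico-apiserver (.67), calico-kube-controllers (.68), and goldmane (.69), each time against the same 192.168.19.64/26 block.
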
Dec 12 18:43:39.186346 systemd-networkd[1435]: cali68d9c44831d: Link UP Dec 12 18:43:39.186727 systemd-networkd[1435]: cali68d9c44831d: Gained carrier Dec 12 18:43:39.209456 containerd[1559]: 2025-12-12 18:43:38.967 [INFO][4159] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--194--183-k8s-calico--apiserver--6db8fdd69c--npwn2-eth0 calico-apiserver-6db8fdd69c- calico-apiserver 83ec1740-f7bd-4f51-a6be-a16783749dd3 818 0 2025-12-12 18:43:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6db8fdd69c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-194-183 calico-apiserver-6db8fdd69c-npwn2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali68d9c44831d [] [] }} ContainerID="ce64e8761b26d779b261d5d6ac571d81add46854e20a76c9c926a57ec669a5e4" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fdd69c-npwn2" WorkloadEndpoint="172--239--194--183-k8s-calico--apiserver--6db8fdd69c--npwn2-" Dec 12 18:43:39.209456 containerd[1559]: 2025-12-12 18:43:38.967 [INFO][4159] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce64e8761b26d779b261d5d6ac571d81add46854e20a76c9c926a57ec669a5e4" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fdd69c-npwn2" WorkloadEndpoint="172--239--194--183-k8s-calico--apiserver--6db8fdd69c--npwn2-eth0" Dec 12 18:43:39.209456 containerd[1559]: 2025-12-12 18:43:39.016 [INFO][4178] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce64e8761b26d779b261d5d6ac571d81add46854e20a76c9c926a57ec669a5e4" HandleID="k8s-pod-network.ce64e8761b26d779b261d5d6ac571d81add46854e20a76c9c926a57ec669a5e4" Workload="172--239--194--183-k8s-calico--apiserver--6db8fdd69c--npwn2-eth0" Dec 12 18:43:39.209456 containerd[1559]: 2025-12-12 18:43:39.016 [INFO][4178] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ce64e8761b26d779b261d5d6ac571d81add46854e20a76c9c926a57ec669a5e4" HandleID="k8s-pod-network.ce64e8761b26d779b261d5d6ac571d81add46854e20a76c9c926a57ec669a5e4" Workload="172--239--194--183-k8s-calico--apiserver--6db8fdd69c--npwn2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003320b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-239-194-183", "pod":"calico-apiserver-6db8fdd69c-npwn2", "timestamp":"2025-12-12 18:43:39.016073508 +0000 UTC"}, Hostname:"172-239-194-183", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:43:39.209456 containerd[1559]: 2025-12-12 18:43:39.016 [INFO][4178] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:43:39.209456 containerd[1559]: 2025-12-12 18:43:39.045 [INFO][4178] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:43:39.209456 containerd[1559]: 2025-12-12 18:43:39.045 [INFO][4178] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-194-183' Dec 12 18:43:39.209456 containerd[1559]: 2025-12-12 18:43:39.121 [INFO][4178] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ce64e8761b26d779b261d5d6ac571d81add46854e20a76c9c926a57ec669a5e4" host="172-239-194-183" Dec 12 18:43:39.209456 containerd[1559]: 2025-12-12 18:43:39.133 [INFO][4178] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-194-183" Dec 12 18:43:39.209456 containerd[1559]: 2025-12-12 18:43:39.140 [INFO][4178] ipam/ipam.go 511: Trying affinity for 192.168.19.64/26 host="172-239-194-183" Dec 12 18:43:39.209456 containerd[1559]: 2025-12-12 18:43:39.143 [INFO][4178] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.64/26 host="172-239-194-183" Dec 12 18:43:39.209456 containerd[1559]: 2025-12-12 18:43:39.148 [INFO][4178] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.64/26 host="172-239-194-183" Dec 12 18:43:39.209456 containerd[1559]: 2025-12-12 18:43:39.148 [INFO][4178] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.64/26 handle="k8s-pod-network.ce64e8761b26d779b261d5d6ac571d81add46854e20a76c9c926a57ec669a5e4" host="172-239-194-183" Dec 12 18:43:39.209456 containerd[1559]: 2025-12-12 18:43:39.150 [INFO][4178] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ce64e8761b26d779b261d5d6ac571d81add46854e20a76c9c926a57ec669a5e4 Dec 12 18:43:39.209456 containerd[1559]: 2025-12-12 18:43:39.156 [INFO][4178] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.64/26 handle="k8s-pod-network.ce64e8761b26d779b261d5d6ac571d81add46854e20a76c9c926a57ec669a5e4" host="172-239-194-183" Dec 12 18:43:39.209456 containerd[1559]: 2025-12-12 18:43:39.176 [INFO][4178] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.67/26] block=192.168.19.64/26 handle="k8s-pod-network.ce64e8761b26d779b261d5d6ac571d81add46854e20a76c9c926a57ec669a5e4" host="172-239-194-183" Dec 12 18:43:39.209456 containerd[1559]: 2025-12-12 18:43:39.177 [INFO][4178] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.67/26] handle="k8s-pod-network.ce64e8761b26d779b261d5d6ac571d81add46854e20a76c9c926a57ec669a5e4" host="172-239-194-183" Dec 12 18:43:39.209456 containerd[1559]: 2025-12-12 18:43:39.177 [INFO][4178] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:43:39.209456 containerd[1559]: 2025-12-12 18:43:39.177 [INFO][4178] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.67/26] IPv6=[] ContainerID="ce64e8761b26d779b261d5d6ac571d81add46854e20a76c9c926a57ec669a5e4" HandleID="k8s-pod-network.ce64e8761b26d779b261d5d6ac571d81add46854e20a76c9c926a57ec669a5e4" Workload="172--239--194--183-k8s-calico--apiserver--6db8fdd69c--npwn2-eth0" Dec 12 18:43:39.210408 containerd[1559]: 2025-12-12 18:43:39.181 [INFO][4159] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce64e8761b26d779b261d5d6ac571d81add46854e20a76c9c926a57ec669a5e4" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fdd69c-npwn2" WorkloadEndpoint="172--239--194--183-k8s-calico--apiserver--6db8fdd69c--npwn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--194--183-k8s-calico--apiserver--6db8fdd69c--npwn2-eth0", GenerateName:"calico-apiserver-6db8fdd69c-", Namespace:"calico-apiserver", SelfLink:"", UID:"83ec1740-f7bd-4f51-a6be-a16783749dd3", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 43, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6db8fdd69c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-194-183", ContainerID:"", Pod:"calico-apiserver-6db8fdd69c-npwn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68d9c44831d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:43:39.210408 containerd[1559]: 2025-12-12 18:43:39.181 [INFO][4159] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.67/32] ContainerID="ce64e8761b26d779b261d5d6ac571d81add46854e20a76c9c926a57ec669a5e4" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fdd69c-npwn2" WorkloadEndpoint="172--239--194--183-k8s-calico--apiserver--6db8fdd69c--npwn2-eth0" Dec 12 18:43:39.210408 containerd[1559]: 2025-12-12 18:43:39.181 [INFO][4159] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68d9c44831d ContainerID="ce64e8761b26d779b261d5d6ac571d81add46854e20a76c9c926a57ec669a5e4" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fdd69c-npwn2" WorkloadEndpoint="172--239--194--183-k8s-calico--apiserver--6db8fdd69c--npwn2-eth0" Dec 12 18:43:39.210408 containerd[1559]: 2025-12-12 18:43:39.186 [INFO][4159] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce64e8761b26d779b261d5d6ac571d81add46854e20a76c9c926a57ec669a5e4" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fdd69c-npwn2" WorkloadEndpoint="172--239--194--183-k8s-calico--apiserver--6db8fdd69c--npwn2-eth0" Dec 12 18:43:39.210408 containerd[1559]: 2025-12-12 18:43:39.187 [INFO][4159] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="ce64e8761b26d779b261d5d6ac571d81add46854e20a76c9c926a57ec669a5e4" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fdd69c-npwn2" WorkloadEndpoint="172--239--194--183-k8s-calico--apiserver--6db8fdd69c--npwn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--194--183-k8s-calico--apiserver--6db8fdd69c--npwn2-eth0", GenerateName:"calico-apiserver-6db8fdd69c-", Namespace:"calico-apiserver", SelfLink:"", UID:"83ec1740-f7bd-4f51-a6be-a16783749dd3", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 43, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6db8fdd69c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-194-183", ContainerID:"ce64e8761b26d779b261d5d6ac571d81add46854e20a76c9c926a57ec669a5e4", Pod:"calico-apiserver-6db8fdd69c-npwn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68d9c44831d", MAC:"fe:3c:d5:0d:50:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:43:39.210408 containerd[1559]: 2025-12-12 18:43:39.201 [INFO][4159] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce64e8761b26d779b261d5d6ac571d81add46854e20a76c9c926a57ec669a5e4" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fdd69c-npwn2" WorkloadEndpoint="172--239--194--183-k8s-calico--apiserver--6db8fdd69c--npwn2-eth0" Dec 12 18:43:39.243445 containerd[1559]: time="2025-12-12T18:43:39.242435700Z" level=info msg="connecting to shim ce64e8761b26d779b261d5d6ac571d81add46854e20a76c9c926a57ec669a5e4" address="unix:///run/containerd/s/bd309f272f6391175f1b29a954028e37d14038f29e761e88dc52cab9b3e7c451" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:43:39.281467 containerd[1559]: time="2025-12-12T18:43:39.281413845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jtmwl,Uid:62cd2b62-6bbc-467f-ae85-488760759795,Namespace:kube-system,Attempt:0,} returns sandbox id \"addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271\"" Dec 12 18:43:39.282657 kubelet[2724]: E1212 18:43:39.282619 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:39.288443 containerd[1559]: time="2025-12-12T18:43:39.288392604Z" level=info msg="CreateContainer within sandbox \"addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:43:39.290273 systemd[1]: Started cri-containerd-ce64e8761b26d779b261d5d6ac571d81add46854e20a76c9c926a57ec669a5e4.scope - libcontainer container 
ce64e8761b26d779b261d5d6ac571d81add46854e20a76c9c926a57ec669a5e4. Dec 12 18:43:39.300853 containerd[1559]: time="2025-12-12T18:43:39.300815862Z" level=info msg="Container d9f5ae280b7b89f426ab1422506b4d7bac135e3b487a7684b529db3828ea3b11: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:43:39.319008 containerd[1559]: time="2025-12-12T18:43:39.318946680Z" level=info msg="CreateContainer within sandbox \"addca1c2da0936790413449f3539133bdd07ba18c062289287cfafa5d19a2271\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d9f5ae280b7b89f426ab1422506b4d7bac135e3b487a7684b529db3828ea3b11\"" Dec 12 18:43:39.321376 containerd[1559]: time="2025-12-12T18:43:39.320522592Z" level=info msg="StartContainer for \"d9f5ae280b7b89f426ab1422506b4d7bac135e3b487a7684b529db3828ea3b11\"" Dec 12 18:43:39.321444 containerd[1559]: time="2025-12-12T18:43:39.321383774Z" level=info msg="connecting to shim d9f5ae280b7b89f426ab1422506b4d7bac135e3b487a7684b529db3828ea3b11" address="unix:///run/containerd/s/2c9699c16909abc1c47b45f76986eb1c036a7ebb23e5ea500d727052311b18b3" protocol=ttrpc version=3 Dec 12 18:43:39.357432 systemd[1]: Started cri-containerd-d9f5ae280b7b89f426ab1422506b4d7bac135e3b487a7684b529db3828ea3b11.scope - libcontainer container d9f5ae280b7b89f426ab1422506b4d7bac135e3b487a7684b529db3828ea3b11. Dec 12 18:43:39.398696 containerd[1559]: time="2025-12-12T18:43:39.398664394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db8fdd69c-npwn2,Uid:83ec1740-f7bd-4f51-a6be-a16783749dd3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ce64e8761b26d779b261d5d6ac571d81add46854e20a76c9c926a57ec669a5e4\"" Dec 12 18:43:39.402332 containerd[1559]: time="2025-12-12T18:43:39.401958621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:43:39.422518 containerd[1559]: time="2025-12-12T18:43:39.422020447Z" level=info msg="StartContainer for \"d9f5ae280b7b89f426ab1422506b4d7bac135e3b487a7684b529db3828ea3b11\" returns successfully" Dec 12 18:43:39.541078 containerd[1559]: time="2025-12-12T18:43:39.541017671Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:43:39.542306 containerd[1559]: time="2025-12-12T18:43:39.542209317Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:43:39.542306 containerd[1559]: time="2025-12-12T18:43:39.542248648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:43:39.542525 kubelet[2724]: E1212 18:43:39.542495 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:43:39.542909 kubelet[2724]: E1212 18:43:39.542861 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:43:39.543223 kubelet[2724]: E1212 18:43:39.543000 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxf6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6db8fdd69c-npwn2_calico-apiserver(83ec1740-f7bd-4f51-a6be-a16783749dd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:43:39.544417 kubelet[2724]: E1212 18:43:39.544351 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-npwn2" podUID="83ec1740-f7bd-4f51-a6be-a16783749dd3" Dec 12 18:43:39.903162 kubelet[2724]: E1212 18:43:39.902813 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:39.903613 containerd[1559]: time="2025-12-12T18:43:39.903570211Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-jczfs,Uid:838e6b4b-522c-48a0-a381-d65e2006fa97,Namespace:kube-system,Attempt:0,}" Dec 12 18:43:39.904688 containerd[1559]: time="2025-12-12T18:43:39.904638486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rmclv,Uid:0ba47eaa-f04d-4e71-87de-91abc04e7d96,Namespace:calico-system,Attempt:0,}" Dec 12 18:43:39.905339 containerd[1559]: time="2025-12-12T18:43:39.904797178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8d85bd9b7-nqzx2,Uid:0dd6b358-9691-4ff7-9c07-9faa3b6a5832,Namespace:calico-system,Attempt:0,}" Dec 12 18:43:40.064040 systemd-networkd[1435]: cali711d4f1c256: Link UP Dec 12 18:43:40.067304 systemd-networkd[1435]: cali711d4f1c256: Gained carrier Dec 12 18:43:40.083864 containerd[1559]: 2025-12-12 18:43:39.961 [INFO][4343] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--194--183-k8s-calico--kube--controllers--8d85bd9b7--nqzx2-eth0 calico-kube-controllers-8d85bd9b7- calico-system 0dd6b358-9691-4ff7-9c07-9faa3b6a5832 810 0 2025-12-12 18:43:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8d85bd9b7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-239-194-183 calico-kube-controllers-8d85bd9b7-nqzx2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali711d4f1c256 [] [] }} ContainerID="ab2e9f7985e1037464d3ea8edd4bee472a9d265f6310d34aaf1b02464dfcc1a9" Namespace="calico-system" Pod="calico-kube-controllers-8d85bd9b7-nqzx2" WorkloadEndpoint="172--239--194--183-k8s-calico--kube--controllers--8d85bd9b7--nqzx2-" Dec 12 18:43:40.083864 containerd[1559]: 2025-12-12 18:43:39.961 [INFO][4343] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ab2e9f7985e1037464d3ea8edd4bee472a9d265f6310d34aaf1b02464dfcc1a9" Namespace="calico-system" Pod="calico-kube-controllers-8d85bd9b7-nqzx2" WorkloadEndpoint="172--239--194--183-k8s-calico--kube--controllers--8d85bd9b7--nqzx2-eth0" Dec 12 18:43:40.083864 containerd[1559]: 2025-12-12 18:43:40.019 [INFO][4378] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab2e9f7985e1037464d3ea8edd4bee472a9d265f6310d34aaf1b02464dfcc1a9" HandleID="k8s-pod-network.ab2e9f7985e1037464d3ea8edd4bee472a9d265f6310d34aaf1b02464dfcc1a9" Workload="172--239--194--183-k8s-calico--kube--controllers--8d85bd9b7--nqzx2-eth0" Dec 12 18:43:40.083864 containerd[1559]: 2025-12-12 18:43:40.019 [INFO][4378] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ab2e9f7985e1037464d3ea8edd4bee472a9d265f6310d34aaf1b02464dfcc1a9" HandleID="k8s-pod-network.ab2e9f7985e1037464d3ea8edd4bee472a9d265f6310d34aaf1b02464dfcc1a9" Workload="172--239--194--183-k8s-calico--kube--controllers--8d85bd9b7--nqzx2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5df0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-194-183", "pod":"calico-kube-controllers-8d85bd9b7-nqzx2", "timestamp":"2025-12-12 18:43:40.019040697 +0000 UTC"}, Hostname:"172-239-194-183", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:43:40.083864 containerd[1559]: 2025-12-12 18:43:40.019 
[INFO][4378] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:43:40.083864 containerd[1559]: 2025-12-12 18:43:40.019 [INFO][4378] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:43:40.083864 containerd[1559]: 2025-12-12 18:43:40.019 [INFO][4378] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-194-183' Dec 12 18:43:40.083864 containerd[1559]: 2025-12-12 18:43:40.025 [INFO][4378] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ab2e9f7985e1037464d3ea8edd4bee472a9d265f6310d34aaf1b02464dfcc1a9" host="172-239-194-183" Dec 12 18:43:40.083864 containerd[1559]: 2025-12-12 18:43:40.031 [INFO][4378] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-194-183" Dec 12 18:43:40.083864 containerd[1559]: 2025-12-12 18:43:40.039 [INFO][4378] ipam/ipam.go 511: Trying affinity for 192.168.19.64/26 host="172-239-194-183" Dec 12 18:43:40.083864 containerd[1559]: 2025-12-12 18:43:40.041 [INFO][4378] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.64/26 host="172-239-194-183" Dec 12 18:43:40.083864 containerd[1559]: 2025-12-12 18:43:40.043 [INFO][4378] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.64/26 host="172-239-194-183" Dec 12 18:43:40.083864 containerd[1559]: 2025-12-12 18:43:40.043 [INFO][4378] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.64/26 handle="k8s-pod-network.ab2e9f7985e1037464d3ea8edd4bee472a9d265f6310d34aaf1b02464dfcc1a9" host="172-239-194-183" Dec 12 18:43:40.083864 containerd[1559]: 2025-12-12 18:43:40.045 [INFO][4378] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ab2e9f7985e1037464d3ea8edd4bee472a9d265f6310d34aaf1b02464dfcc1a9 Dec 12 18:43:40.083864 containerd[1559]: 2025-12-12 18:43:40.050 [INFO][4378] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.64/26 handle="k8s-pod-network.ab2e9f7985e1037464d3ea8edd4bee472a9d265f6310d34aaf1b02464dfcc1a9" host="172-239-194-183" Dec 12 18:43:40.083864 containerd[1559]: 2025-12-12 18:43:40.055 [INFO][4378] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.68/26] block=192.168.19.64/26 handle="k8s-pod-network.ab2e9f7985e1037464d3ea8edd4bee472a9d265f6310d34aaf1b02464dfcc1a9" host="172-239-194-183" Dec 12 18:43:40.083864 containerd[1559]: 2025-12-12 18:43:40.056 [INFO][4378] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.68/26] handle="k8s-pod-network.ab2e9f7985e1037464d3ea8edd4bee472a9d265f6310d34aaf1b02464dfcc1a9" host="172-239-194-183" Dec 12 18:43:40.083864 containerd[1559]: 2025-12-12 18:43:40.056 [INFO][4378] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:43:40.083864 containerd[1559]: 2025-12-12 18:43:40.056 [INFO][4378] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.68/26] IPv6=[] ContainerID="ab2e9f7985e1037464d3ea8edd4bee472a9d265f6310d34aaf1b02464dfcc1a9" HandleID="k8s-pod-network.ab2e9f7985e1037464d3ea8edd4bee472a9d265f6310d34aaf1b02464dfcc1a9" Workload="172--239--194--183-k8s-calico--kube--controllers--8d85bd9b7--nqzx2-eth0" Dec 12 18:43:40.084458 containerd[1559]: 2025-12-12 18:43:40.059 [INFO][4343] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ab2e9f7985e1037464d3ea8edd4bee472a9d265f6310d34aaf1b02464dfcc1a9" Namespace="calico-system" Pod="calico-kube-controllers-8d85bd9b7-nqzx2" WorkloadEndpoint="172--239--194--183-k8s-calico--kube--controllers--8d85bd9b7--nqzx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--194--183-k8s-calico--kube--controllers--8d85bd9b7--nqzx2-eth0", GenerateName:"calico-kube-controllers-8d85bd9b7-", Namespace:"calico-system", SelfLink:"", UID:"0dd6b358-9691-4ff7-9c07-9faa3b6a5832", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 43, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8d85bd9b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-194-183", ContainerID:"", Pod:"calico-kube-controllers-8d85bd9b7-nqzx2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali711d4f1c256", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:43:40.084458 containerd[1559]: 2025-12-12 18:43:40.059 [INFO][4343] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.68/32] ContainerID="ab2e9f7985e1037464d3ea8edd4bee472a9d265f6310d34aaf1b02464dfcc1a9" Namespace="calico-system" Pod="calico-kube-controllers-8d85bd9b7-nqzx2" WorkloadEndpoint="172--239--194--183-k8s-calico--kube--controllers--8d85bd9b7--nqzx2-eth0" Dec 12 18:43:40.084458 containerd[1559]: 2025-12-12 18:43:40.059 [INFO][4343] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali711d4f1c256 ContainerID="ab2e9f7985e1037464d3ea8edd4bee472a9d265f6310d34aaf1b02464dfcc1a9" Namespace="calico-system" Pod="calico-kube-controllers-8d85bd9b7-nqzx2" WorkloadEndpoint="172--239--194--183-k8s-calico--kube--controllers--8d85bd9b7--nqzx2-eth0" Dec 12 18:43:40.084458 containerd[1559]: 2025-12-12 18:43:40.065 [INFO][4343] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab2e9f7985e1037464d3ea8edd4bee472a9d265f6310d34aaf1b02464dfcc1a9" Namespace="calico-system" Pod="calico-kube-controllers-8d85bd9b7-nqzx2" WorkloadEndpoint="172--239--194--183-k8s-calico--kube--controllers--8d85bd9b7--nqzx2-eth0" Dec 12 18:43:40.084458 containerd[1559]: 2025-12-12 18:43:40.066 
[INFO][4343] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ab2e9f7985e1037464d3ea8edd4bee472a9d265f6310d34aaf1b02464dfcc1a9" Namespace="calico-system" Pod="calico-kube-controllers-8d85bd9b7-nqzx2" WorkloadEndpoint="172--239--194--183-k8s-calico--kube--controllers--8d85bd9b7--nqzx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--194--183-k8s-calico--kube--controllers--8d85bd9b7--nqzx2-eth0", GenerateName:"calico-kube-controllers-8d85bd9b7-", Namespace:"calico-system", SelfLink:"", UID:"0dd6b358-9691-4ff7-9c07-9faa3b6a5832", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 43, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8d85bd9b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-194-183", ContainerID:"ab2e9f7985e1037464d3ea8edd4bee472a9d265f6310d34aaf1b02464dfcc1a9", Pod:"calico-kube-controllers-8d85bd9b7-nqzx2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali711d4f1c256", MAC:"3e:f6:69:7a:2f:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:43:40.084458 containerd[1559]: 2025-12-12 18:43:40.080 [INFO][4343] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ab2e9f7985e1037464d3ea8edd4bee472a9d265f6310d34aaf1b02464dfcc1a9" Namespace="calico-system" Pod="calico-kube-controllers-8d85bd9b7-nqzx2" WorkloadEndpoint="172--239--194--183-k8s-calico--kube--controllers--8d85bd9b7--nqzx2-eth0" Dec 12 18:43:40.094353 kubelet[2724]: E1212 18:43:40.093902 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:40.102088 kubelet[2724]: E1212 18:43:40.101996 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-npwn2" podUID="83ec1740-f7bd-4f51-a6be-a16783749dd3" Dec 12 18:43:40.120465 containerd[1559]: time="2025-12-12T18:43:40.120342512Z" level=info msg="connecting to shim ab2e9f7985e1037464d3ea8edd4bee472a9d265f6310d34aaf1b02464dfcc1a9" address="unix:///run/containerd/s/e225044a9431d09846e45adca3e073481f7616d07dfc32ac5a15b9f539985dd7" namespace=k8s.io protocol=ttrpc version=3 Dec 12 
18:43:40.138619 kubelet[2724]: I1212 18:43:40.138407 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jtmwl" podStartSLOduration=32.138362262 podStartE2EDuration="32.138362262s" podCreationTimestamp="2025-12-12 18:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:43:40.116172547 +0000 UTC m=+37.332499342" watchObservedRunningTime="2025-12-12 18:43:40.138362262 +0000 UTC m=+37.354689057" Dec 12 18:43:40.168550 systemd[1]: Started cri-containerd-ab2e9f7985e1037464d3ea8edd4bee472a9d265f6310d34aaf1b02464dfcc1a9.scope - libcontainer container ab2e9f7985e1037464d3ea8edd4bee472a9d265f6310d34aaf1b02464dfcc1a9. Dec 12 18:43:40.209258 systemd-networkd[1435]: cali67e9e0b9467: Link UP Dec 12 18:43:40.210282 systemd-networkd[1435]: cali67e9e0b9467: Gained carrier Dec 12 18:43:40.224349 containerd[1559]: 2025-12-12 18:43:39.988 [INFO][4349] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--194--183-k8s-goldmane--666569f655--rmclv-eth0 goldmane-666569f655- calico-system 0ba47eaa-f04d-4e71-87de-91abc04e7d96 820 0 2025-12-12 18:43:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-239-194-183 goldmane-666569f655-rmclv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali67e9e0b9467 [] [] }} ContainerID="aad9de4775cb4030f889eedfad0fc4f21edc6f7a0e1b475c592867067b5e9098" Namespace="calico-system" Pod="goldmane-666569f655-rmclv" WorkloadEndpoint="172--239--194--183-k8s-goldmane--666569f655--rmclv-" Dec 12 18:43:40.224349 containerd[1559]: 2025-12-12 18:43:39.988 [INFO][4349] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aad9de4775cb4030f889eedfad0fc4f21edc6f7a0e1b475c592867067b5e9098" Namespace="calico-system" Pod="goldmane-666569f655-rmclv" WorkloadEndpoint="172--239--194--183-k8s-goldmane--666569f655--rmclv-eth0" Dec 12 18:43:40.224349 containerd[1559]: 2025-12-12 18:43:40.045 [INFO][4385] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aad9de4775cb4030f889eedfad0fc4f21edc6f7a0e1b475c592867067b5e9098" HandleID="k8s-pod-network.aad9de4775cb4030f889eedfad0fc4f21edc6f7a0e1b475c592867067b5e9098" Workload="172--239--194--183-k8s-goldmane--666569f655--rmclv-eth0" Dec 12 18:43:40.224349 containerd[1559]: 2025-12-12 18:43:40.047 [INFO][4385] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="aad9de4775cb4030f889eedfad0fc4f21edc6f7a0e1b475c592867067b5e9098" HandleID="k8s-pod-network.aad9de4775cb4030f889eedfad0fc4f21edc6f7a0e1b475c592867067b5e9098" Workload="172--239--194--183-k8s-goldmane--666569f655--rmclv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000370930), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-194-183", "pod":"goldmane-666569f655-rmclv", "timestamp":"2025-12-12 18:43:40.04562803 +0000 UTC"}, Hostname:"172-239-194-183", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:43:40.224349 containerd[1559]: 2025-12-12 18:43:40.047 [INFO][4385] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 12 18:43:40.224349 containerd[1559]: 2025-12-12 18:43:40.056 [INFO][4385] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:43:40.224349 containerd[1559]: 2025-12-12 18:43:40.056 [INFO][4385] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-194-183' Dec 12 18:43:40.224349 containerd[1559]: 2025-12-12 18:43:40.131 [INFO][4385] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aad9de4775cb4030f889eedfad0fc4f21edc6f7a0e1b475c592867067b5e9098" host="172-239-194-183" Dec 12 18:43:40.224349 containerd[1559]: 2025-12-12 18:43:40.152 [INFO][4385] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-194-183" Dec 12 18:43:40.224349 containerd[1559]: 2025-12-12 18:43:40.175 [INFO][4385] ipam/ipam.go 511: Trying affinity for 192.168.19.64/26 host="172-239-194-183" Dec 12 18:43:40.224349 containerd[1559]: 2025-12-12 18:43:40.179 [INFO][4385] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.64/26 host="172-239-194-183" Dec 12 18:43:40.224349 containerd[1559]: 2025-12-12 18:43:40.183 [INFO][4385] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.64/26 host="172-239-194-183" Dec 12 18:43:40.224349 containerd[1559]: 2025-12-12 18:43:40.183 [INFO][4385] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.64/26 handle="k8s-pod-network.aad9de4775cb4030f889eedfad0fc4f21edc6f7a0e1b475c592867067b5e9098" host="172-239-194-183" Dec 12 18:43:40.224349 containerd[1559]: 2025-12-12 18:43:40.186 [INFO][4385] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aad9de4775cb4030f889eedfad0fc4f21edc6f7a0e1b475c592867067b5e9098 Dec 12 18:43:40.224349 containerd[1559]: 2025-12-12 18:43:40.193 [INFO][4385] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.64/26 handle="k8s-pod-network.aad9de4775cb4030f889eedfad0fc4f21edc6f7a0e1b475c592867067b5e9098" host="172-239-194-183" Dec 12 18:43:40.224349 containerd[1559]: 2025-12-12 18:43:40.198 [INFO][4385] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.69/26] block=192.168.19.64/26 handle="k8s-pod-network.aad9de4775cb4030f889eedfad0fc4f21edc6f7a0e1b475c592867067b5e9098" host="172-239-194-183" Dec 12 18:43:40.224349 containerd[1559]: 2025-12-12 18:43:40.198 [INFO][4385] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.69/26] handle="k8s-pod-network.aad9de4775cb4030f889eedfad0fc4f21edc6f7a0e1b475c592867067b5e9098" host="172-239-194-183" Dec 12 18:43:40.224349 containerd[1559]: 2025-12-12 18:43:40.198 [INFO][4385] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:43:40.224349 containerd[1559]: 2025-12-12 18:43:40.198 [INFO][4385] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.69/26] IPv6=[] ContainerID="aad9de4775cb4030f889eedfad0fc4f21edc6f7a0e1b475c592867067b5e9098" HandleID="k8s-pod-network.aad9de4775cb4030f889eedfad0fc4f21edc6f7a0e1b475c592867067b5e9098" Workload="172--239--194--183-k8s-goldmane--666569f655--rmclv-eth0" Dec 12 18:43:40.224908 containerd[1559]: 2025-12-12 18:43:40.205 [INFO][4349] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aad9de4775cb4030f889eedfad0fc4f21edc6f7a0e1b475c592867067b5e9098" Namespace="calico-system" Pod="goldmane-666569f655-rmclv" WorkloadEndpoint="172--239--194--183-k8s-goldmane--666569f655--rmclv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--194--183-k8s-goldmane--666569f655--rmclv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0ba47eaa-f04d-4e71-87de-91abc04e7d96", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 43, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-194-183", ContainerID:"", Pod:"goldmane-666569f655-rmclv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.19.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali67e9e0b9467", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:43:40.224908 containerd[1559]: 2025-12-12 18:43:40.205 [INFO][4349] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.69/32] ContainerID="aad9de4775cb4030f889eedfad0fc4f21edc6f7a0e1b475c592867067b5e9098" Namespace="calico-system" Pod="goldmane-666569f655-rmclv" WorkloadEndpoint="172--239--194--183-k8s-goldmane--666569f655--rmclv-eth0" Dec 12 18:43:40.224908 containerd[1559]: 2025-12-12 18:43:40.205 [INFO][4349] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali67e9e0b9467 ContainerID="aad9de4775cb4030f889eedfad0fc4f21edc6f7a0e1b475c592867067b5e9098" Namespace="calico-system" Pod="goldmane-666569f655-rmclv" WorkloadEndpoint="172--239--194--183-k8s-goldmane--666569f655--rmclv-eth0" Dec 12 18:43:40.224908 containerd[1559]: 2025-12-12 18:43:40.210 [INFO][4349] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aad9de4775cb4030f889eedfad0fc4f21edc6f7a0e1b475c592867067b5e9098" Namespace="calico-system" Pod="goldmane-666569f655-rmclv" WorkloadEndpoint="172--239--194--183-k8s-goldmane--666569f655--rmclv-eth0" Dec 12 18:43:40.224908 containerd[1559]: 2025-12-12 18:43:40.211 [INFO][4349] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aad9de4775cb4030f889eedfad0fc4f21edc6f7a0e1b475c592867067b5e9098" Namespace="calico-system" Pod="goldmane-666569f655-rmclv" 
WorkloadEndpoint="172--239--194--183-k8s-goldmane--666569f655--rmclv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--194--183-k8s-goldmane--666569f655--rmclv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0ba47eaa-f04d-4e71-87de-91abc04e7d96", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 43, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-194-183", ContainerID:"aad9de4775cb4030f889eedfad0fc4f21edc6f7a0e1b475c592867067b5e9098", Pod:"goldmane-666569f655-rmclv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.19.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali67e9e0b9467", MAC:"3a:dd:74:86:75:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:43:40.224908 containerd[1559]: 2025-12-12 18:43:40.220 [INFO][4349] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aad9de4775cb4030f889eedfad0fc4f21edc6f7a0e1b475c592867067b5e9098" Namespace="calico-system" Pod="goldmane-666569f655-rmclv" WorkloadEndpoint="172--239--194--183-k8s-goldmane--666569f655--rmclv-eth0" Dec 12 18:43:40.254371 containerd[1559]: time="2025-12-12T18:43:40.254329732Z" level=info msg="connecting to shim aad9de4775cb4030f889eedfad0fc4f21edc6f7a0e1b475c592867067b5e9098" address="unix:///run/containerd/s/163d7c06ceeae15788ac03d3261c30b0614109aae8ecc3068ee84d95635f9bff" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:43:40.298369 systemd[1]: Started cri-containerd-aad9de4775cb4030f889eedfad0fc4f21edc6f7a0e1b475c592867067b5e9098.scope - libcontainer container aad9de4775cb4030f889eedfad0fc4f21edc6f7a0e1b475c592867067b5e9098. 
Dec 12 18:43:40.307455 systemd-networkd[1435]: cali3462f65aa43: Link UP Dec 12 18:43:40.310723 systemd-networkd[1435]: cali3462f65aa43: Gained carrier Dec 12 18:43:40.315982 containerd[1559]: time="2025-12-12T18:43:40.315939980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8d85bd9b7-nqzx2,Uid:0dd6b358-9691-4ff7-9c07-9faa3b6a5832,Namespace:calico-system,Attempt:0,} returns sandbox id \"ab2e9f7985e1037464d3ea8edd4bee472a9d265f6310d34aaf1b02464dfcc1a9\"" Dec 12 18:43:40.325714 containerd[1559]: time="2025-12-12T18:43:40.325285004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:43:40.334990 containerd[1559]: 2025-12-12 18:43:40.004 [INFO][4344] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--194--183-k8s-coredns--674b8bbfcf--jczfs-eth0 coredns-674b8bbfcf- kube-system 838e6b4b-522c-48a0-a381-d65e2006fa97 822 0 2025-12-12 18:43:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-194-183 coredns-674b8bbfcf-jczfs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3462f65aa43 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309" Namespace="kube-system" Pod="coredns-674b8bbfcf-jczfs" WorkloadEndpoint="172--239--194--183-k8s-coredns--674b8bbfcf--jczfs-" Dec 12 18:43:40.334990 containerd[1559]: 2025-12-12 18:43:40.005 [INFO][4344] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309" Namespace="kube-system" Pod="coredns-674b8bbfcf-jczfs" WorkloadEndpoint="172--239--194--183-k8s-coredns--674b8bbfcf--jczfs-eth0" Dec 12 18:43:40.334990 containerd[1559]: 2025-12-12 18:43:40.057 [INFO][4390] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309" HandleID="k8s-pod-network.dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309" Workload="172--239--194--183-k8s-coredns--674b8bbfcf--jczfs-eth0" Dec 12 18:43:40.334990 containerd[1559]: 2025-12-12 18:43:40.058 [INFO][4390] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309" HandleID="k8s-pod-network.dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309" Workload="172--239--194--183-k8s-coredns--674b8bbfcf--jczfs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5850), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-194-183", "pod":"coredns-674b8bbfcf-jczfs", "timestamp":"2025-12-12 18:43:40.057401996 +0000 UTC"}, Hostname:"172-239-194-183", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:43:40.334990 containerd[1559]: 2025-12-12 18:43:40.059 [INFO][4390] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:43:40.334990 containerd[1559]: 2025-12-12 18:43:40.198 [INFO][4390] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:43:40.334990 containerd[1559]: 2025-12-12 18:43:40.198 [INFO][4390] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-194-183' Dec 12 18:43:40.334990 containerd[1559]: 2025-12-12 18:43:40.229 [INFO][4390] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309" host="172-239-194-183" Dec 12 18:43:40.334990 containerd[1559]: 2025-12-12 18:43:40.252 [INFO][4390] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-194-183" Dec 12 18:43:40.334990 containerd[1559]: 2025-12-12 18:43:40.266 [INFO][4390] ipam/ipam.go 511: Trying affinity for 192.168.19.64/26 host="172-239-194-183" Dec 12 18:43:40.334990 containerd[1559]: 2025-12-12 18:43:40.268 [INFO][4390] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.64/26 host="172-239-194-183" Dec 12 18:43:40.334990 containerd[1559]: 2025-12-12 18:43:40.271 [INFO][4390] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.64/26 host="172-239-194-183" Dec 12 18:43:40.334990 containerd[1559]: 2025-12-12 18:43:40.271 [INFO][4390] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.64/26 handle="k8s-pod-network.dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309" host="172-239-194-183" Dec 12 18:43:40.334990 containerd[1559]: 2025-12-12 18:43:40.273 [INFO][4390] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309 Dec 12 18:43:40.334990 containerd[1559]: 2025-12-12 18:43:40.279 [INFO][4390] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.64/26 handle="k8s-pod-network.dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309" host="172-239-194-183" Dec 12 18:43:40.334990 containerd[1559]: 2025-12-12 18:43:40.295 [INFO][4390] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.70/26] block=192.168.19.64/26 handle="k8s-pod-network.dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309" host="172-239-194-183" Dec 12 18:43:40.334990 containerd[1559]: 2025-12-12 18:43:40.298 [INFO][4390] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.70/26] handle="k8s-pod-network.dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309" host="172-239-194-183" Dec 12 18:43:40.334990 containerd[1559]: 2025-12-12 18:43:40.298 [INFO][4390] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:43:40.334990 containerd[1559]: 2025-12-12 18:43:40.298 [INFO][4390] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.70/26] IPv6=[] ContainerID="dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309" HandleID="k8s-pod-network.dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309" Workload="172--239--194--183-k8s-coredns--674b8bbfcf--jczfs-eth0" Dec 12 18:43:40.335808 containerd[1559]: 2025-12-12 18:43:40.303 [INFO][4344] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309" Namespace="kube-system" Pod="coredns-674b8bbfcf-jczfs" WorkloadEndpoint="172--239--194--183-k8s-coredns--674b8bbfcf--jczfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--194--183-k8s-coredns--674b8bbfcf--jczfs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"838e6b4b-522c-48a0-a381-d65e2006fa97", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 43, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-194-183", ContainerID:"", Pod:"coredns-674b8bbfcf-jczfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3462f65aa43", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:43:40.335808 containerd[1559]: 2025-12-12 18:43:40.303 [INFO][4344] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.70/32] ContainerID="dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309" Namespace="kube-system" Pod="coredns-674b8bbfcf-jczfs" WorkloadEndpoint="172--239--194--183-k8s-coredns--674b8bbfcf--jczfs-eth0" Dec 12 18:43:40.335808 containerd[1559]: 2025-12-12 18:43:40.303 [INFO][4344] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3462f65aa43 ContainerID="dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309" Namespace="kube-system" Pod="coredns-674b8bbfcf-jczfs" WorkloadEndpoint="172--239--194--183-k8s-coredns--674b8bbfcf--jczfs-eth0" Dec 12 18:43:40.335808 containerd[1559]: 2025-12-12 18:43:40.311 [INFO][4344] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309" Namespace="kube-system" Pod="coredns-674b8bbfcf-jczfs" 
WorkloadEndpoint="172--239--194--183-k8s-coredns--674b8bbfcf--jczfs-eth0" Dec 12 18:43:40.335808 containerd[1559]: 2025-12-12 18:43:40.311 [INFO][4344] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309" Namespace="kube-system" Pod="coredns-674b8bbfcf-jczfs" WorkloadEndpoint="172--239--194--183-k8s-coredns--674b8bbfcf--jczfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--194--183-k8s-coredns--674b8bbfcf--jczfs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"838e6b4b-522c-48a0-a381-d65e2006fa97", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 43, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-194-183", ContainerID:"dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309", Pod:"coredns-674b8bbfcf-jczfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3462f65aa43", MAC:"a6:a9:28:4d:80:1e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:43:40.335808 containerd[1559]: 2025-12-12 18:43:40.328 [INFO][4344] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309" Namespace="kube-system" Pod="coredns-674b8bbfcf-jczfs" WorkloadEndpoint="172--239--194--183-k8s-coredns--674b8bbfcf--jczfs-eth0" Dec 12 18:43:40.371859 containerd[1559]: time="2025-12-12T18:43:40.371782722Z" level=info msg="connecting to shim dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309" address="unix:///run/containerd/s/eb139393b6faf72080eb33dcbbe6afc711c51889b6573f4db46297138003bd79" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:43:40.402490 systemd[1]: Started cri-containerd-dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309.scope - libcontainer container dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309. 
Dec 12 18:43:40.405947 containerd[1559]: time="2025-12-12T18:43:40.405394028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rmclv,Uid:0ba47eaa-f04d-4e71-87de-91abc04e7d96,Namespace:calico-system,Attempt:0,} returns sandbox id \"aad9de4775cb4030f889eedfad0fc4f21edc6f7a0e1b475c592867067b5e9098\"" Dec 12 18:43:40.456260 containerd[1559]: time="2025-12-12T18:43:40.456227173Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:43:40.457231 containerd[1559]: time="2025-12-12T18:43:40.457203586Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:43:40.457301 containerd[1559]: time="2025-12-12T18:43:40.457281267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:43:40.457671 kubelet[2724]: E1212 18:43:40.457632 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:43:40.457732 kubelet[2724]: E1212 18:43:40.457690 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:43:40.460049 containerd[1559]: time="2025-12-12T18:43:40.460026924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:43:40.460443 kubelet[2724]: E1212 18:43:40.460032 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhqtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8d85bd9b7-nqzx2_calico-system(0dd6b358-9691-4ff7-9c07-9faa3b6a5832): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:43:40.462979 kubelet[2724]: E1212 18:43:40.462932 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8d85bd9b7-nqzx2" podUID="0dd6b358-9691-4ff7-9c07-9faa3b6a5832" Dec 12 18:43:40.468145 containerd[1559]: time="2025-12-12T18:43:40.466840984Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jczfs,Uid:838e6b4b-522c-48a0-a381-d65e2006fa97,Namespace:kube-system,Attempt:0,} returns sandbox id \"dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309\"" Dec 12 18:43:40.469001 kubelet[2724]: E1212 18:43:40.468970 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:40.472807 containerd[1559]: time="2025-12-12T18:43:40.472784073Z" level=info msg="CreateContainer within sandbox \"dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:43:40.483543 containerd[1559]: time="2025-12-12T18:43:40.483509685Z" level=info msg="Container 3729b554f93447ddeb379ec0c46e9e0e4b350ae5486325f6a8bcd093bf35b6f6: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:43:40.489443 containerd[1559]: time="2025-12-12T18:43:40.489421464Z" level=info msg="CreateContainer within sandbox \"dff07e7dabddc3d500140fb77673c8bdf442dba68019943911054622b1b75309\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3729b554f93447ddeb379ec0c46e9e0e4b350ae5486325f6a8bcd093bf35b6f6\"" Dec 12 18:43:40.490955 containerd[1559]: time="2025-12-12T18:43:40.490933575Z" level=info msg="StartContainer for \"3729b554f93447ddeb379ec0c46e9e0e4b350ae5486325f6a8bcd093bf35b6f6\"" Dec 12 18:43:40.492525 containerd[1559]: time="2025-12-12T18:43:40.492469404Z" level=info msg="connecting to shim 3729b554f93447ddeb379ec0c46e9e0e4b350ae5486325f6a8bcd093bf35b6f6" address="unix:///run/containerd/s/eb139393b6faf72080eb33dcbbe6afc711c51889b6573f4db46297138003bd79" protocol=ttrpc version=3 Dec 12 18:43:40.493442 systemd-networkd[1435]: cali68d9c44831d: Gained IPv6LL Dec 12 18:43:40.514221 systemd[1]: Started cri-containerd-3729b554f93447ddeb379ec0c46e9e0e4b350ae5486325f6a8bcd093bf35b6f6.scope - libcontainer container 3729b554f93447ddeb379ec0c46e9e0e4b350ae5486325f6a8bcd093bf35b6f6. 
Dec 12 18:43:40.549300 containerd[1559]: time="2025-12-12T18:43:40.549246928Z" level=info msg="StartContainer for \"3729b554f93447ddeb379ec0c46e9e0e4b350ae5486325f6a8bcd093bf35b6f6\" returns successfully" Dec 12 18:43:40.599764 containerd[1559]: time="2025-12-12T18:43:40.599660028Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:43:40.600667 containerd[1559]: time="2025-12-12T18:43:40.600641741Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:43:40.600827 containerd[1559]: time="2025-12-12T18:43:40.600704542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:43:40.601029 kubelet[2724]: E1212 18:43:40.600966 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:43:40.601029 kubelet[2724]: E1212 18:43:40.601009 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:43:40.601429 kubelet[2724]: E1212 18:43:40.601318 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8hmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rmclv_calico-system(0ba47eaa-f04d-4e71-87de-91abc04e7d96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:43:40.602793 kubelet[2724]: E1212 18:43:40.602764 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rmclv" podUID="0ba47eaa-f04d-4e71-87de-91abc04e7d96" Dec 12 18:43:40.877512 systemd-networkd[1435]: cali37896e7d3eb: Gained IPv6LL Dec 12 18:43:41.101538 kubelet[2724]: E1212 18:43:41.101503 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8d85bd9b7-nqzx2" podUID="0dd6b358-9691-4ff7-9c07-9faa3b6a5832" Dec 12 18:43:41.104667 kubelet[2724]: E1212 18:43:41.104636 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:41.107249 kubelet[2724]: E1212 18:43:41.107225 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:41.108005 kubelet[2724]: E1212 18:43:41.107977 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rmclv" podUID="0ba47eaa-f04d-4e71-87de-91abc04e7d96" Dec 12 18:43:41.108091 kubelet[2724]: E1212 18:43:41.108064 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-npwn2" podUID="83ec1740-f7bd-4f51-a6be-a16783749dd3" Dec 12 18:43:41.127039 kubelet[2724]: I1212 18:43:41.126994 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jczfs" podStartSLOduration=33.126982729 podStartE2EDuration="33.126982729s" podCreationTimestamp="2025-12-12 18:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:43:41.126275789 +0000 UTC m=+38.342602584" watchObservedRunningTime="2025-12-12 18:43:41.126982729 +0000 UTC m=+38.343309524" Dec 12 18:43:41.453247 systemd-networkd[1435]: cali3462f65aa43: Gained IPv6LL Dec 12 18:43:41.773342 systemd-networkd[1435]: cali67e9e0b9467: Gained IPv6LL Dec 12 18:43:41.901949 systemd-networkd[1435]: cali711d4f1c256: Gained IPv6LL Dec 12 18:43:42.110245 kubelet[2724]: E1212 18:43:42.110014 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:42.112146 kubelet[2724]: E1212 18:43:42.111850 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:42.112146 kubelet[2724]: E1212 18:43:42.112028 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rmclv" podUID="0ba47eaa-f04d-4e71-87de-91abc04e7d96" Dec 12 18:43:42.112146 kubelet[2724]: E1212 18:43:42.112090 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8d85bd9b7-nqzx2" podUID="0dd6b358-9691-4ff7-9c07-9faa3b6a5832" Dec 12 
18:43:42.905057 containerd[1559]: time="2025-12-12T18:43:42.904364962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db8fdd69c-9p2sj,Uid:307920b5-5337-43c8-8c09-6d8750b41212,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:43:42.905057 containerd[1559]: time="2025-12-12T18:43:42.904414763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n46fl,Uid:af20f1b0-b34b-412e-a0a1-b4c0cada074e,Namespace:calico-system,Attempt:0,}" Dec 12 18:43:43.079162 systemd-networkd[1435]: cali943555d10f0: Link UP Dec 12 18:43:43.081634 systemd-networkd[1435]: cali943555d10f0: Gained carrier Dec 12 18:43:43.097706 containerd[1559]: 2025-12-12 18:43:42.975 [INFO][4611] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--194--183-k8s-csi--node--driver--n46fl-eth0 csi-node-driver- calico-system af20f1b0-b34b-412e-a0a1-b4c0cada074e 718 0 2025-12-12 18:43:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-239-194-183 csi-node-driver-n46fl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali943555d10f0 [] [] }} ContainerID="b8a6ed14283d19455d5b9d737c4b27ea314123253da16500ea2639be36c92f82" Namespace="calico-system" Pod="csi-node-driver-n46fl" WorkloadEndpoint="172--239--194--183-k8s-csi--node--driver--n46fl-" Dec 12 18:43:43.097706 containerd[1559]: 2025-12-12 18:43:42.977 [INFO][4611] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b8a6ed14283d19455d5b9d737c4b27ea314123253da16500ea2639be36c92f82" Namespace="calico-system" Pod="csi-node-driver-n46fl" WorkloadEndpoint="172--239--194--183-k8s-csi--node--driver--n46fl-eth0" Dec 12 18:43:43.097706 containerd[1559]: 2025-12-12 18:43:43.031 [INFO][4634] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b8a6ed14283d19455d5b9d737c4b27ea314123253da16500ea2639be36c92f82" HandleID="k8s-pod-network.b8a6ed14283d19455d5b9d737c4b27ea314123253da16500ea2639be36c92f82" Workload="172--239--194--183-k8s-csi--node--driver--n46fl-eth0" Dec 12 18:43:43.097706 containerd[1559]: 2025-12-12 18:43:43.031 [INFO][4634] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b8a6ed14283d19455d5b9d737c4b27ea314123253da16500ea2639be36c92f82" HandleID="k8s-pod-network.b8a6ed14283d19455d5b9d737c4b27ea314123253da16500ea2639be36c92f82" Workload="172--239--194--183-k8s-csi--node--driver--n46fl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fd20), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-194-183", "pod":"csi-node-driver-n46fl", "timestamp":"2025-12-12 18:43:43.031691479 +0000 UTC"}, Hostname:"172-239-194-183", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:43:43.097706 containerd[1559]: 2025-12-12 18:43:43.032 [INFO][4634] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:43:43.097706 containerd[1559]: 2025-12-12 18:43:43.032 [INFO][4634] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:43:43.097706 containerd[1559]: 2025-12-12 18:43:43.032 [INFO][4634] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-194-183' Dec 12 18:43:43.097706 containerd[1559]: 2025-12-12 18:43:43.038 [INFO][4634] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b8a6ed14283d19455d5b9d737c4b27ea314123253da16500ea2639be36c92f82" host="172-239-194-183" Dec 12 18:43:43.097706 containerd[1559]: 2025-12-12 18:43:43.042 [INFO][4634] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-194-183" Dec 12 18:43:43.097706 containerd[1559]: 2025-12-12 18:43:43.046 [INFO][4634] ipam/ipam.go 511: Trying affinity for 192.168.19.64/26 host="172-239-194-183" Dec 12 18:43:43.097706 containerd[1559]: 2025-12-12 18:43:43.049 [INFO][4634] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.64/26 host="172-239-194-183" Dec 12 18:43:43.097706 containerd[1559]: 2025-12-12 18:43:43.053 [INFO][4634] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.64/26 host="172-239-194-183" Dec 12 18:43:43.097706 containerd[1559]: 2025-12-12 18:43:43.053 [INFO][4634] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.64/26 handle="k8s-pod-network.b8a6ed14283d19455d5b9d737c4b27ea314123253da16500ea2639be36c92f82" host="172-239-194-183" Dec 12 18:43:43.097706 containerd[1559]: 2025-12-12 18:43:43.054 [INFO][4634] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b8a6ed14283d19455d5b9d737c4b27ea314123253da16500ea2639be36c92f82 Dec 12 18:43:43.097706 containerd[1559]: 2025-12-12 18:43:43.057 [INFO][4634] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.64/26 handle="k8s-pod-network.b8a6ed14283d19455d5b9d737c4b27ea314123253da16500ea2639be36c92f82" host="172-239-194-183" Dec 12 18:43:43.097706 containerd[1559]: 2025-12-12 18:43:43.062 [INFO][4634] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.71/26] block=192.168.19.64/26 handle="k8s-pod-network.b8a6ed14283d19455d5b9d737c4b27ea314123253da16500ea2639be36c92f82" host="172-239-194-183" Dec 12 18:43:43.097706 containerd[1559]: 2025-12-12 18:43:43.062 [INFO][4634] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.71/26] handle="k8s-pod-network.b8a6ed14283d19455d5b9d737c4b27ea314123253da16500ea2639be36c92f82" host="172-239-194-183" Dec 12 18:43:43.097706 containerd[1559]: 2025-12-12 18:43:43.062 [INFO][4634] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:43:43.097706 containerd[1559]: 2025-12-12 18:43:43.063 [INFO][4634] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.71/26] IPv6=[] ContainerID="b8a6ed14283d19455d5b9d737c4b27ea314123253da16500ea2639be36c92f82" HandleID="k8s-pod-network.b8a6ed14283d19455d5b9d737c4b27ea314123253da16500ea2639be36c92f82" Workload="172--239--194--183-k8s-csi--node--driver--n46fl-eth0" Dec 12 18:43:43.099916 containerd[1559]: 2025-12-12 18:43:43.068 [INFO][4611] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b8a6ed14283d19455d5b9d737c4b27ea314123253da16500ea2639be36c92f82" Namespace="calico-system" Pod="csi-node-driver-n46fl" WorkloadEndpoint="172--239--194--183-k8s-csi--node--driver--n46fl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--194--183-k8s-csi--node--driver--n46fl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"af20f1b0-b34b-412e-a0a1-b4c0cada074e", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 43, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-194-183", ContainerID:"", Pod:"csi-node-driver-n46fl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali943555d10f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:43:43.099916 containerd[1559]: 2025-12-12 18:43:43.069 [INFO][4611] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.71/32] ContainerID="b8a6ed14283d19455d5b9d737c4b27ea314123253da16500ea2639be36c92f82" Namespace="calico-system" Pod="csi-node-driver-n46fl" WorkloadEndpoint="172--239--194--183-k8s-csi--node--driver--n46fl-eth0" Dec 12 18:43:43.099916 containerd[1559]: 2025-12-12 18:43:43.069 [INFO][4611] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali943555d10f0 ContainerID="b8a6ed14283d19455d5b9d737c4b27ea314123253da16500ea2639be36c92f82" Namespace="calico-system" Pod="csi-node-driver-n46fl" WorkloadEndpoint="172--239--194--183-k8s-csi--node--driver--n46fl-eth0" Dec 12 18:43:43.099916 containerd[1559]: 2025-12-12 18:43:43.084 [INFO][4611] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b8a6ed14283d19455d5b9d737c4b27ea314123253da16500ea2639be36c92f82" Namespace="calico-system" Pod="csi-node-driver-n46fl" WorkloadEndpoint="172--239--194--183-k8s-csi--node--driver--n46fl-eth0" Dec 12 18:43:43.099916 containerd[1559]: 2025-12-12 18:43:43.086 [INFO][4611] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b8a6ed14283d19455d5b9d737c4b27ea314123253da16500ea2639be36c92f82" 
Namespace="calico-system" Pod="csi-node-driver-n46fl" WorkloadEndpoint="172--239--194--183-k8s-csi--node--driver--n46fl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--194--183-k8s-csi--node--driver--n46fl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"af20f1b0-b34b-412e-a0a1-b4c0cada074e", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 43, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-194-183", ContainerID:"b8a6ed14283d19455d5b9d737c4b27ea314123253da16500ea2639be36c92f82", Pod:"csi-node-driver-n46fl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali943555d10f0", MAC:"de:08:de:a6:c7:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:43:43.099916 containerd[1559]: 2025-12-12 18:43:43.096 [INFO][4611] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b8a6ed14283d19455d5b9d737c4b27ea314123253da16500ea2639be36c92f82" Namespace="calico-system" Pod="csi-node-driver-n46fl" WorkloadEndpoint="172--239--194--183-k8s-csi--node--driver--n46fl-eth0" Dec 12 18:43:43.111591 kubelet[2724]: E1212 18:43:43.111543 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:43:43.126537 containerd[1559]: time="2025-12-12T18:43:43.126498550Z" level=info msg="connecting to shim b8a6ed14283d19455d5b9d737c4b27ea314123253da16500ea2639be36c92f82" address="unix:///run/containerd/s/cff5174bce190d0f0297d06a853ad982332e40bc998b09210af87abf7462b8bb" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:43:43.164291 systemd[1]: Started cri-containerd-b8a6ed14283d19455d5b9d737c4b27ea314123253da16500ea2639be36c92f82.scope - libcontainer container b8a6ed14283d19455d5b9d737c4b27ea314123253da16500ea2639be36c92f82. 
Dec 12 18:43:43.208662 systemd-networkd[1435]: cali31ae1ad4477: Link UP Dec 12 18:43:43.209852 systemd-networkd[1435]: cali31ae1ad4477: Gained carrier Dec 12 18:43:43.240717 containerd[1559]: 2025-12-12 18:43:42.980 [INFO][4617] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--194--183-k8s-calico--apiserver--6db8fdd69c--9p2sj-eth0 calico-apiserver-6db8fdd69c- calico-apiserver 307920b5-5337-43c8-8c09-6d8750b41212 816 0 2025-12-12 18:43:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6db8fdd69c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-194-183 calico-apiserver-6db8fdd69c-9p2sj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali31ae1ad4477 [] [] }} ContainerID="e418f420a60bc045245822e992c86606b062b17f8ab81c93714ca5da373733f2" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fdd69c-9p2sj" WorkloadEndpoint="172--239--194--183-k8s-calico--apiserver--6db8fdd69c--9p2sj-" Dec 12 18:43:43.240717 containerd[1559]: 2025-12-12 18:43:42.980 [INFO][4617] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e418f420a60bc045245822e992c86606b062b17f8ab81c93714ca5da373733f2" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fdd69c-9p2sj" WorkloadEndpoint="172--239--194--183-k8s-calico--apiserver--6db8fdd69c--9p2sj-eth0" Dec 12 18:43:43.240717 containerd[1559]: 2025-12-12 18:43:43.032 [INFO][4636] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e418f420a60bc045245822e992c86606b062b17f8ab81c93714ca5da373733f2" HandleID="k8s-pod-network.e418f420a60bc045245822e992c86606b062b17f8ab81c93714ca5da373733f2" Workload="172--239--194--183-k8s-calico--apiserver--6db8fdd69c--9p2sj-eth0" Dec 12 18:43:43.240717 containerd[1559]: 2025-12-12 18:43:43.032 [INFO][4636] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e418f420a60bc045245822e992c86606b062b17f8ab81c93714ca5da373733f2" HandleID="k8s-pod-network.e418f420a60bc045245822e992c86606b062b17f8ab81c93714ca5da373733f2" Workload="172--239--194--183-k8s-calico--apiserver--6db8fdd69c--9p2sj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-239-194-183", "pod":"calico-apiserver-6db8fdd69c-9p2sj", "timestamp":"2025-12-12 18:43:43.032599609 +0000 UTC"}, Hostname:"172-239-194-183", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:43:43.240717 containerd[1559]: 2025-12-12 18:43:43.032 [INFO][4636] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:43:43.240717 containerd[1559]: 2025-12-12 18:43:43.062 [INFO][4636] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:43:43.240717 containerd[1559]: 2025-12-12 18:43:43.062 [INFO][4636] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-194-183' Dec 12 18:43:43.240717 containerd[1559]: 2025-12-12 18:43:43.145 [INFO][4636] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e418f420a60bc045245822e992c86606b062b17f8ab81c93714ca5da373733f2" host="172-239-194-183" Dec 12 18:43:43.240717 containerd[1559]: 2025-12-12 18:43:43.156 [INFO][4636] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-194-183" Dec 12 18:43:43.240717 containerd[1559]: 2025-12-12 18:43:43.168 [INFO][4636] ipam/ipam.go 511: Trying affinity for 192.168.19.64/26 host="172-239-194-183" Dec 12 18:43:43.240717 containerd[1559]: 2025-12-12 18:43:43.172 [INFO][4636] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.64/26 host="172-239-194-183" Dec 12 18:43:43.240717 containerd[1559]: 2025-12-12 18:43:43.175 [INFO][4636] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.64/26 host="172-239-194-183" Dec 12 18:43:43.240717 containerd[1559]: 2025-12-12 18:43:43.176 [INFO][4636] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.64/26 handle="k8s-pod-network.e418f420a60bc045245822e992c86606b062b17f8ab81c93714ca5da373733f2" host="172-239-194-183" Dec 12 18:43:43.240717 containerd[1559]: 2025-12-12 18:43:43.179 [INFO][4636] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e418f420a60bc045245822e992c86606b062b17f8ab81c93714ca5da373733f2 Dec 12 18:43:43.240717 containerd[1559]: 2025-12-12 18:43:43.189 [INFO][4636] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.64/26 handle="k8s-pod-network.e418f420a60bc045245822e992c86606b062b17f8ab81c93714ca5da373733f2" host="172-239-194-183" Dec 12 18:43:43.240717 containerd[1559]: 2025-12-12 18:43:43.197 [INFO][4636] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.72/26] block=192.168.19.64/26 handle="k8s-pod-network.e418f420a60bc045245822e992c86606b062b17f8ab81c93714ca5da373733f2" host="172-239-194-183" Dec 12 18:43:43.240717 containerd[1559]: 2025-12-12 18:43:43.197 [INFO][4636] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.72/26] handle="k8s-pod-network.e418f420a60bc045245822e992c86606b062b17f8ab81c93714ca5da373733f2" host="172-239-194-183" Dec 12 18:43:43.240717 containerd[1559]: 2025-12-12 18:43:43.198 [INFO][4636] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:43:43.240717 containerd[1559]: 2025-12-12 18:43:43.198 [INFO][4636] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.72/26] IPv6=[] ContainerID="e418f420a60bc045245822e992c86606b062b17f8ab81c93714ca5da373733f2" HandleID="k8s-pod-network.e418f420a60bc045245822e992c86606b062b17f8ab81c93714ca5da373733f2" Workload="172--239--194--183-k8s-calico--apiserver--6db8fdd69c--9p2sj-eth0" Dec 12 18:43:43.241794 containerd[1559]: 2025-12-12 18:43:43.204 [INFO][4617] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e418f420a60bc045245822e992c86606b062b17f8ab81c93714ca5da373733f2" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fdd69c-9p2sj" WorkloadEndpoint="172--239--194--183-k8s-calico--apiserver--6db8fdd69c--9p2sj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--194--183-k8s-calico--apiserver--6db8fdd69c--9p2sj-eth0", GenerateName:"calico-apiserver-6db8fdd69c-", Namespace:"calico-apiserver", SelfLink:"", UID:"307920b5-5337-43c8-8c09-6d8750b41212", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 43, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6db8fdd69c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-194-183", ContainerID:"", Pod:"calico-apiserver-6db8fdd69c-9p2sj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali31ae1ad4477", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:43:43.241794 containerd[1559]: 2025-12-12 18:43:43.205 [INFO][4617] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.72/32] ContainerID="e418f420a60bc045245822e992c86606b062b17f8ab81c93714ca5da373733f2" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fdd69c-9p2sj" WorkloadEndpoint="172--239--194--183-k8s-calico--apiserver--6db8fdd69c--9p2sj-eth0" Dec 12 18:43:43.241794 containerd[1559]: 2025-12-12 18:43:43.205 [INFO][4617] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31ae1ad4477 ContainerID="e418f420a60bc045245822e992c86606b062b17f8ab81c93714ca5da373733f2" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fdd69c-9p2sj" WorkloadEndpoint="172--239--194--183-k8s-calico--apiserver--6db8fdd69c--9p2sj-eth0" Dec 12 18:43:43.241794 containerd[1559]: 2025-12-12 18:43:43.210 [INFO][4617] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e418f420a60bc045245822e992c86606b062b17f8ab81c93714ca5da373733f2" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fdd69c-9p2sj" WorkloadEndpoint="172--239--194--183-k8s-calico--apiserver--6db8fdd69c--9p2sj-eth0" Dec 12 18:43:43.241794 containerd[1559]: 2025-12-12 18:43:43.212 [INFO][4617] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="e418f420a60bc045245822e992c86606b062b17f8ab81c93714ca5da373733f2" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fdd69c-9p2sj" WorkloadEndpoint="172--239--194--183-k8s-calico--apiserver--6db8fdd69c--9p2sj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--194--183-k8s-calico--apiserver--6db8fdd69c--9p2sj-eth0", GenerateName:"calico-apiserver-6db8fdd69c-", Namespace:"calico-apiserver", SelfLink:"", UID:"307920b5-5337-43c8-8c09-6d8750b41212", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 43, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6db8fdd69c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-194-183", ContainerID:"e418f420a60bc045245822e992c86606b062b17f8ab81c93714ca5da373733f2", Pod:"calico-apiserver-6db8fdd69c-9p2sj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali31ae1ad4477", MAC:"86:6e:bc:71:b8:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:43:43.241794 containerd[1559]: 2025-12-12 18:43:43.229 [INFO][4617] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e418f420a60bc045245822e992c86606b062b17f8ab81c93714ca5da373733f2" Namespace="calico-apiserver" Pod="calico-apiserver-6db8fdd69c-9p2sj" WorkloadEndpoint="172--239--194--183-k8s-calico--apiserver--6db8fdd69c--9p2sj-eth0" Dec 12 18:43:43.274123 containerd[1559]: time="2025-12-12T18:43:43.273455022Z" level=info msg="connecting to shim e418f420a60bc045245822e992c86606b062b17f8ab81c93714ca5da373733f2" address="unix:///run/containerd/s/379485e315a4f5b63cdfd370c36c7f2daf9bb04f5bdb1a2d5c46943cef1ca02a" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:43:43.296520 containerd[1559]: time="2025-12-12T18:43:43.296482040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n46fl,Uid:af20f1b0-b34b-412e-a0a1-b4c0cada074e,Namespace:calico-system,Attempt:0,} returns sandbox id \"b8a6ed14283d19455d5b9d737c4b27ea314123253da16500ea2639be36c92f82\"" Dec 12 18:43:43.300935 containerd[1559]: time="2025-12-12T18:43:43.300663795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:43:43.343225 systemd[1]: Started cri-containerd-e418f420a60bc045245822e992c86606b062b17f8ab81c93714ca5da373733f2.scope - libcontainer container e418f420a60bc045245822e992c86606b062b17f8ab81c93714ca5da373733f2. 
Dec 12 18:43:43.467999 containerd[1559]: time="2025-12-12T18:43:43.467969307Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:43:43.469124 containerd[1559]: time="2025-12-12T18:43:43.469087388Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:43:43.469557 containerd[1559]: time="2025-12-12T18:43:43.469208159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:43:43.471269 kubelet[2724]: E1212 18:43:43.471228 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:43:43.471331 kubelet[2724]: E1212 18:43:43.471279 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:43:43.471615 kubelet[2724]: E1212 18:43:43.471392 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ctjpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n46fl_calico-system(af20f1b0-b34b-412e-a0a1-b4c0cada074e): ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:43:43.473509 containerd[1559]: time="2025-12-12T18:43:43.473492156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:43:43.503099 containerd[1559]: time="2025-12-12T18:43:43.503040185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db8fdd69c-9p2sj,Uid:307920b5-5337-43c8-8c09-6d8750b41212,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e418f420a60bc045245822e992c86606b062b17f8ab81c93714ca5da373733f2\"" Dec 12 18:43:43.628324 containerd[1559]: time="2025-12-12T18:43:43.628265572Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:43:43.629355 containerd[1559]: time="2025-12-12T18:43:43.629325034Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:43:43.629417 containerd[1559]: time="2025-12-12T18:43:43.629397505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:43:43.629625 kubelet[2724]: E1212 18:43:43.629592 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:43:43.629686 kubelet[2724]: E1212 18:43:43.629633 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:43:43.629968 containerd[1559]: time="2025-12-12T18:43:43.629944991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:43:43.630374 kubelet[2724]: E1212 18:43:43.630174 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ctjpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n46fl_calico-system(af20f1b0-b34b-412e-a0a1-b4c0cada074e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:43:43.631638 kubelet[2724]: E1212 18:43:43.631491 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n46fl" podUID="af20f1b0-b34b-412e-a0a1-b4c0cada074e" Dec 12 18:43:43.781669 containerd[1559]: time="2025-12-12T18:43:43.781449972Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:43:43.782856 containerd[1559]: time="2025-12-12T18:43:43.782825557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:43:43.782992 containerd[1559]: time="2025-12-12T18:43:43.782909808Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:43:43.784126 kubelet[2724]: E1212 18:43:43.783276 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:43:43.784126 kubelet[2724]: E1212 18:43:43.783327 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:43:43.784126 kubelet[2724]: E1212 18:43:43.783446 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2c6mw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6db8fdd69c-9p2sj_calico-apiserver(307920b5-5337-43c8-8c09-6d8750b41212): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:43:43.784789 
kubelet[2724]: E1212 18:43:43.784752 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-9p2sj" podUID="307920b5-5337-43c8-8c09-6d8750b41212" Dec 12 18:43:44.115249 kubelet[2724]: E1212 18:43:44.113904 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-9p2sj" podUID="307920b5-5337-43c8-8c09-6d8750b41212" Dec 12 18:43:44.117813 kubelet[2724]: E1212 18:43:44.117726 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n46fl" podUID="af20f1b0-b34b-412e-a0a1-b4c0cada074e" Dec 12 18:43:44.653256 systemd-networkd[1435]: cali943555d10f0: Gained IPv6LL Dec 12 18:43:44.717248 systemd-networkd[1435]: cali31ae1ad4477: Gained IPv6LL Dec 12 18:43:45.121972 kubelet[2724]: E1212 18:43:45.121923 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n46fl" podUID="af20f1b0-b34b-412e-a0a1-b4c0cada074e" Dec 12 18:43:45.123125 kubelet[2724]: E1212 18:43:45.122446 2724 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-9p2sj" podUID="307920b5-5337-43c8-8c09-6d8750b41212" Dec 12 18:43:46.906133 containerd[1559]: time="2025-12-12T18:43:46.905390853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:43:47.033719 containerd[1559]: time="2025-12-12T18:43:47.033671480Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:43:47.034871 containerd[1559]: time="2025-12-12T18:43:47.034829619Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:43:47.034972 containerd[1559]: time="2025-12-12T18:43:47.034910760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:43:47.035378 kubelet[2724]: E1212 18:43:47.035257 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:43:47.035378 kubelet[2724]: E1212 18:43:47.035345 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:43:47.036382 kubelet[2724]: E1212 18:43:47.036222 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:616f14a12ce949ddb0ea243c4dc4501f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kg97q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-675b66b756-wh4s2_calico-system(387f967a-d27c-485c-aeed-91421d359fb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:43:47.038305 containerd[1559]: time="2025-12-12T18:43:47.038193436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:43:47.170312 containerd[1559]: time="2025-12-12T18:43:47.170133233Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:43:47.171334 containerd[1559]: time="2025-12-12T18:43:47.171254072Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:43:47.171518 containerd[1559]: time="2025-12-12T18:43:47.171303203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:43:47.171709 kubelet[2724]: E1212 18:43:47.171650 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:43:47.171990 kubelet[2724]: E1212 18:43:47.171722 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:43:47.172219 kubelet[2724]: E1212 18:43:47.172083 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kg97q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-675b66b756-wh4s2_calico-system(387f967a-d27c-485c-aeed-91421d359fb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:43:47.173622 kubelet[2724]: E1212 18:43:47.173539 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675b66b756-wh4s2" podUID="387f967a-d27c-485c-aeed-91421d359fb6" Dec 12 18:43:53.904254 containerd[1559]: time="2025-12-12T18:43:53.903945721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:43:54.130131 containerd[1559]: time="2025-12-12T18:43:54.129934360Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 
18:43:54.132290 containerd[1559]: time="2025-12-12T18:43:54.132170089Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:43:54.132607 kubelet[2724]: E1212 18:43:54.132546 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:43:54.134525 kubelet[2724]: E1212 18:43:54.132616 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:43:54.134525 kubelet[2724]: E1212 18:43:54.132750 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8hmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rmclv_calico-system(0ba47eaa-f04d-4e71-87de-91abc04e7d96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:43:54.134525 kubelet[2724]: E1212 18:43:54.134309 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rmclv" podUID="0ba47eaa-f04d-4e71-87de-91abc04e7d96" Dec 12 18:43:54.135076 containerd[1559]: time="2025-12-12T18:43:54.132253160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:43:54.906210 containerd[1559]: time="2025-12-12T18:43:54.906136773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:43:55.031398 containerd[1559]: time="2025-12-12T18:43:55.031045441Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:43:55.032547 containerd[1559]: time="2025-12-12T18:43:55.032495157Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:43:55.032711 containerd[1559]: time="2025-12-12T18:43:55.032644538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:43:55.032967 kubelet[2724]: E1212 18:43:55.032896 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:43:55.032967 kubelet[2724]: E1212 18:43:55.032939 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:43:55.033534 kubelet[2724]: E1212 18:43:55.033476 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxf6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6db8fdd69c-npwn2_calico-apiserver(83ec1740-f7bd-4f51-a6be-a16783749dd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:43:55.034788 kubelet[2724]: E1212 18:43:55.034753 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-npwn2" podUID="83ec1740-f7bd-4f51-a6be-a16783749dd3" Dec 12 18:43:55.904970 containerd[1559]: time="2025-12-12T18:43:55.904856146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:43:56.031806 containerd[1559]: time="2025-12-12T18:43:56.031746917Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:43:56.033944 containerd[1559]: 
time="2025-12-12T18:43:56.033859826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:43:56.034821 containerd[1559]: time="2025-12-12T18:43:56.033877666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:43:56.035753 kubelet[2724]: E1212 18:43:56.035562 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:43:56.037798 kubelet[2724]: E1212 18:43:56.035848 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:43:56.038436 kubelet[2724]: E1212 18:43:56.038229 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhqtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8d85bd9b7-nqzx2_calico-system(0dd6b358-9691-4ff7-9c07-9faa3b6a5832): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:43:56.040018 kubelet[2724]: E1212 18:43:56.039977 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8d85bd9b7-nqzx2" podUID="0dd6b358-9691-4ff7-9c07-9faa3b6a5832" Dec 12 18:43:56.043483 containerd[1559]: time="2025-12-12T18:43:56.043279504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:43:56.179143 containerd[1559]: time="2025-12-12T18:43:56.178554887Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:43:56.180387 containerd[1559]: time="2025-12-12T18:43:56.180261523Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:43:56.180642 containerd[1559]: time="2025-12-12T18:43:56.180482535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:43:56.181140 kubelet[2724]: E1212 18:43:56.180925 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:43:56.181280 kubelet[2724]: E1212 18:43:56.181253 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:43:56.181869 kubelet[2724]: E1212 18:43:56.181703 2724 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ctjpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n46fl_calico-system(af20f1b0-b34b-412e-a0a1-b4c0cada074e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:43:56.184322 containerd[1559]: time="2025-12-12T18:43:56.184295240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:43:56.316848 containerd[1559]: time="2025-12-12T18:43:56.316727321Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:43:56.318038 containerd[1559]: time="2025-12-12T18:43:56.317923896Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:43:56.318038 containerd[1559]: time="2025-12-12T18:43:56.317979177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:43:56.318463 kubelet[2724]: E1212 18:43:56.318374 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:43:56.318635 kubelet[2724]: E1212 18:43:56.318596 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:43:56.318994 kubelet[2724]: E1212 18:43:56.318850 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ctjpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n46fl_calico-system(af20f1b0-b34b-412e-a0a1-b4c0cada074e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:43:56.320368 kubelet[2724]: E1212 18:43:56.320320 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n46fl" podUID="af20f1b0-b34b-412e-a0a1-b4c0cada074e" Dec 12 18:43:58.904629 kubelet[2724]: E1212 18:43:58.904470 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675b66b756-wh4s2" podUID="387f967a-d27c-485c-aeed-91421d359fb6" Dec 12 18:43:59.903673 containerd[1559]: time="2025-12-12T18:43:59.903603253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:44:00.038451 containerd[1559]: time="2025-12-12T18:44:00.038372982Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:44:00.040133 containerd[1559]: time="2025-12-12T18:44:00.039939248Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:44:00.040133 containerd[1559]: time="2025-12-12T18:44:00.040012438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:44:00.040291 kubelet[2724]: E1212 18:44:00.040243 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:44:00.041179 kubelet[2724]: E1212 18:44:00.040316 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:44:00.041179 kubelet[2724]: E1212 18:44:00.040505 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2c6mw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6db8fdd69c-9p2sj_calico-apiserver(307920b5-5337-43c8-8c09-6d8750b41212): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:44:00.041725 kubelet[2724]: E1212 18:44:00.041652 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-9p2sj" podUID="307920b5-5337-43c8-8c09-6d8750b41212" Dec 12 18:44:06.906333 kubelet[2724]: E1212 18:44:06.906229 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-npwn2" podUID="83ec1740-f7bd-4f51-a6be-a16783749dd3" Dec 12 18:44:07.903787 kubelet[2724]: E1212 18:44:07.903667 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rmclv" podUID="0ba47eaa-f04d-4e71-87de-91abc04e7d96" Dec 12 18:44:08.907377 kubelet[2724]: E1212 18:44:08.907270 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n46fl" podUID="af20f1b0-b34b-412e-a0a1-b4c0cada074e" Dec 12 18:44:09.904290 kubelet[2724]: E1212 18:44:09.904236 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8d85bd9b7-nqzx2" podUID="0dd6b358-9691-4ff7-9c07-9faa3b6a5832" Dec 12 18:44:12.907031 containerd[1559]: time="2025-12-12T18:44:12.906839388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:44:13.038059 containerd[1559]: time="2025-12-12T18:44:13.037952464Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:44:13.039170 containerd[1559]: time="2025-12-12T18:44:13.039072702Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:44:13.039229 containerd[1559]: time="2025-12-12T18:44:13.039208824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:44:13.039576 kubelet[2724]: E1212 18:44:13.039522 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:44:13.039921 kubelet[2724]: 
E1212 18:44:13.039585 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:44:13.039921 kubelet[2724]: E1212 18:44:13.039772 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:616f14a12ce949ddb0ea243c4dc4501f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kg97q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-675b66b756-wh4s2_calico-system(387f967a-d27c-485c-aeed-91421d359fb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:44:13.042215 containerd[1559]: time="2025-12-12T18:44:13.042179093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:44:13.181936 containerd[1559]: time="2025-12-12T18:44:13.181774162Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:44:13.183142 containerd[1559]: time="2025-12-12T18:44:13.182989201Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:44:13.183142 containerd[1559]: time="2025-12-12T18:44:13.183056792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:44:13.183387 kubelet[2724]: E1212 18:44:13.183353 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:44:13.183521 kubelet[2724]: E1212 18:44:13.183484 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:44:13.184270 kubelet[2724]: E1212 18:44:13.183623 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kg97q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-675b66b756-wh4s2_calico-system(387f967a-d27c-485c-aeed-91421d359fb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:44:13.185750 kubelet[2724]: E1212 18:44:13.185664 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675b66b756-wh4s2" podUID="387f967a-d27c-485c-aeed-91421d359fb6" Dec 12 18:44:14.904132 kubelet[2724]: E1212 18:44:14.903685 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-9p2sj" podUID="307920b5-5337-43c8-8c09-6d8750b41212" Dec 12 18:44:19.905884 containerd[1559]: time="2025-12-12T18:44:19.905648476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:44:20.047755 containerd[1559]: time="2025-12-12T18:44:20.047672399Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:44:20.049131 containerd[1559]: time="2025-12-12T18:44:20.049015327Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:44:20.049236 containerd[1559]: time="2025-12-12T18:44:20.049100918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:44:20.049417 kubelet[2724]: E1212 18:44:20.049362 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:44:20.050414 kubelet[2724]: E1212 18:44:20.049435 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:44:20.050414 kubelet[2724]: E1212 18:44:20.049769 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxf6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6db8fdd69c-npwn2_calico-apiserver(83ec1740-f7bd-4f51-a6be-a16783749dd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:44:20.050530 containerd[1559]: time="2025-12-12T18:44:20.050218113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:44:20.051201 kubelet[2724]: E1212 18:44:20.050961 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-npwn2" podUID="83ec1740-f7bd-4f51-a6be-a16783749dd3" Dec 12 18:44:20.178443 containerd[1559]: time="2025-12-12T18:44:20.178270712Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:44:20.179495 containerd[1559]: time="2025-12-12T18:44:20.179405088Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:44:20.179495 containerd[1559]: time="2025-12-12T18:44:20.179444628Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:44:20.180451 kubelet[2724]: E1212 18:44:20.180402 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:44:20.180507 kubelet[2724]: E1212 18:44:20.180482 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:44:20.180789 kubelet[2724]: E1212 18:44:20.180706 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8hmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rmclv_calico-system(0ba47eaa-f04d-4e71-87de-91abc04e7d96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:44:20.181942 kubelet[2724]: E1212 18:44:20.181918 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rmclv" podUID="0ba47eaa-f04d-4e71-87de-91abc04e7d96" Dec 12 18:44:20.907741 containerd[1559]: time="2025-12-12T18:44:20.907492393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:44:21.047375 containerd[1559]: time="2025-12-12T18:44:21.047328141Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:44:21.048574 containerd[1559]: time="2025-12-12T18:44:21.048527887Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:44:21.048706 containerd[1559]: time="2025-12-12T18:44:21.048549987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:44:21.048885 kubelet[2724]: E1212 18:44:21.048823 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:44:21.048885 kubelet[2724]: E1212 18:44:21.048865 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:44:21.049273 kubelet[2724]: E1212 18:44:21.048968 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ctjpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n46fl_calico-system(af20f1b0-b34b-412e-a0a1-b4c0cada074e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:44:21.052172 containerd[1559]: time="2025-12-12T18:44:21.052077252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:44:21.196376 containerd[1559]: time="2025-12-12T18:44:21.195081956Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:44:21.198976 containerd[1559]: time="2025-12-12T18:44:21.198469230Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:44:21.198976 containerd[1559]: time="2025-12-12T18:44:21.198622462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:44:21.199374 kubelet[2724]: E1212 18:44:21.199009 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:44:21.200792 kubelet[2724]: E1212 18:44:21.199391 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:44:21.200792 kubelet[2724]: E1212 18:44:21.200532 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ctjpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n46fl_calico-system(af20f1b0-b34b-412e-a0a1-b4c0cada074e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:44:21.201809 kubelet[2724]: E1212 18:44:21.201768 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n46fl" podUID="af20f1b0-b34b-412e-a0a1-b4c0cada074e" Dec 12 18:44:24.905734 containerd[1559]: time="2025-12-12T18:44:24.905478217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:44:25.042511 containerd[1559]: time="2025-12-12T18:44:25.042453832Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:44:25.043415 containerd[1559]: time="2025-12-12T18:44:25.043381692Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:44:25.043486 containerd[1559]: time="2025-12-12T18:44:25.043459653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:44:25.043686 kubelet[2724]: E1212 18:44:25.043624 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:44:25.044220 kubelet[2724]: E1212 18:44:25.043696 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:44:25.044220 kubelet[2724]: E1212 18:44:25.043812 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhqtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8d85bd9b7-nqzx2_calico-system(0dd6b358-9691-4ff7-9c07-9faa3b6a5832): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:44:25.045394 kubelet[2724]: E1212 18:44:25.045362 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8d85bd9b7-nqzx2" podUID="0dd6b358-9691-4ff7-9c07-9faa3b6a5832" Dec 12 18:44:25.901924 kubelet[2724]: E1212 18:44:25.901831 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:44:27.904726 kubelet[2724]: E1212 18:44:27.904397 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675b66b756-wh4s2" podUID="387f967a-d27c-485c-aeed-91421d359fb6" Dec 12 18:44:29.904529 containerd[1559]: time="2025-12-12T18:44:29.904435607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:44:30.030737 
containerd[1559]: time="2025-12-12T18:44:30.030672660Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:44:30.031701 containerd[1559]: time="2025-12-12T18:44:30.031644560Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:44:30.031739 containerd[1559]: time="2025-12-12T18:44:30.031722991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:44:30.032281 kubelet[2724]: E1212 18:44:30.031943 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:44:30.033409 kubelet[2724]: E1212 18:44:30.032631 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:44:30.034338 kubelet[2724]: E1212 18:44:30.034299 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2c6mw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6db8fdd69c-9p2sj_calico-apiserver(307920b5-5337-43c8-8c09-6d8750b41212): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:44:30.035509 kubelet[2724]: E1212 18:44:30.035475 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-9p2sj" podUID="307920b5-5337-43c8-8c09-6d8750b41212" Dec 12 18:44:33.903338 kubelet[2724]: E1212 18:44:33.902982 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:44:34.908801 kubelet[2724]: E1212 18:44:34.908757 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rmclv" podUID="0ba47eaa-f04d-4e71-87de-91abc04e7d96" Dec 12 18:44:34.910120 kubelet[2724]: E1212 18:44:34.910067 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n46fl" podUID="af20f1b0-b34b-412e-a0a1-b4c0cada074e" Dec 12 18:44:34.910400 kubelet[2724]: E1212 18:44:34.910375 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-npwn2" podUID="83ec1740-f7bd-4f51-a6be-a16783749dd3" Dec 12 18:44:37.902645 kubelet[2724]: E1212 18:44:37.902593 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:44:38.904142 kubelet[2724]: E1212 18:44:38.903925 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:44:38.907077 kubelet[2724]: E1212 18:44:38.906862 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8d85bd9b7-nqzx2" podUID="0dd6b358-9691-4ff7-9c07-9faa3b6a5832" Dec 12 18:44:41.904985 kubelet[2724]: E1212 18:44:41.903780 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:44:41.908147 kubelet[2724]: E1212 18:44:41.908060 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675b66b756-wh4s2" podUID="387f967a-d27c-485c-aeed-91421d359fb6" Dec 12 18:44:42.906610 kubelet[2724]: E1212 18:44:42.906448 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-9p2sj" podUID="307920b5-5337-43c8-8c09-6d8750b41212" Dec 12 18:44:47.906038 kubelet[2724]: E1212 18:44:47.905525 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rmclv" podUID="0ba47eaa-f04d-4e71-87de-91abc04e7d96" Dec 12 18:44:47.906845 kubelet[2724]: E1212 18:44:47.906782 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n46fl" podUID="af20f1b0-b34b-412e-a0a1-b4c0cada074e" Dec 12 18:44:49.906139 kubelet[2724]: E1212 18:44:49.904407 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-npwn2" podUID="83ec1740-f7bd-4f51-a6be-a16783749dd3" Dec 12 18:44:51.904427 kubelet[2724]: E1212 18:44:51.904349 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8d85bd9b7-nqzx2" podUID="0dd6b358-9691-4ff7-9c07-9faa3b6a5832" Dec 12 18:44:55.903909 kubelet[2724]: E1212 18:44:55.903692 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:44:55.906210 containerd[1559]: time="2025-12-12T18:44:55.906167808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:44:56.234073 containerd[1559]: time="2025-12-12T18:44:56.233995000Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:44:56.235289 containerd[1559]: time="2025-12-12T18:44:56.235177415Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:44:56.235398 containerd[1559]: time="2025-12-12T18:44:56.235260246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:44:56.235484 kubelet[2724]: E1212 18:44:56.235450 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:44:56.235566 kubelet[2724]: E1212 18:44:56.235503 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:44:56.235653 kubelet[2724]: E1212 18:44:56.235617 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:616f14a12ce949ddb0ea243c4dc4501f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kg97q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-675b66b756-wh4s2_calico-system(387f967a-d27c-485c-aeed-91421d359fb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:44:56.238250 containerd[1559]: time="2025-12-12T18:44:56.238177170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:44:56.367124 containerd[1559]: time="2025-12-12T18:44:56.366957977Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:44:56.368404 containerd[1559]: time="2025-12-12T18:44:56.368237582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:44:56.368404 containerd[1559]: time="2025-12-12T18:44:56.368254352Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:44:56.368828 kubelet[2724]: E1212 18:44:56.368751 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:44:56.368828 kubelet[2724]: E1212 18:44:56.368826 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:44:56.369131 kubelet[2724]: E1212 18:44:56.369048 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kg97q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-675b66b756-wh4s2_calico-system(387f967a-d27c-485c-aeed-91421d359fb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:44:56.370314 kubelet[2724]: E1212 18:44:56.370274 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675b66b756-wh4s2" podUID="387f967a-d27c-485c-aeed-91421d359fb6" Dec 12 18:44:56.904748 kubelet[2724]: E1212 18:44:56.904655 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:44:56.906761 kubelet[2724]: E1212 18:44:56.905253 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-9p2sj" podUID="307920b5-5337-43c8-8c09-6d8750b41212" Dec 12 18:44:58.905146 kubelet[2724]: E1212 18:44:58.904179 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n46fl" podUID="af20f1b0-b34b-412e-a0a1-b4c0cada074e" Dec 12 18:44:59.903529 kubelet[2724]: E1212 18:44:59.903385 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rmclv" podUID="0ba47eaa-f04d-4e71-87de-91abc04e7d96" Dec 12 18:45:03.903038 kubelet[2724]: E1212 18:45:03.902945 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:45:05.904438 containerd[1559]: time="2025-12-12T18:45:05.904381025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:45:06.032335 containerd[1559]: time="2025-12-12T18:45:06.032260370Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:45:06.033379 containerd[1559]: time="2025-12-12T18:45:06.033280884Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:45:06.033379 containerd[1559]: time="2025-12-12T18:45:06.033317354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:45:06.033568 kubelet[2724]: E1212 18:45:06.033505 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:45:06.033568 kubelet[2724]: E1212 18:45:06.033562 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:45:06.034419 kubelet[2724]: E1212 18:45:06.033937 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxf6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6db8fdd69c-npwn2_calico-apiserver(83ec1740-f7bd-4f51-a6be-a16783749dd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:45:06.034518 containerd[1559]: time="2025-12-12T18:45:06.034063247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:45:06.035250 kubelet[2724]: E1212 18:45:06.035221 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-npwn2" podUID="83ec1740-f7bd-4f51-a6be-a16783749dd3" Dec 12 18:45:06.184784 containerd[1559]: time="2025-12-12T18:45:06.184607031Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:45:06.185727 containerd[1559]: time="2025-12-12T18:45:06.185691865Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:45:06.185826 containerd[1559]: time="2025-12-12T18:45:06.185760465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:45:06.186020 kubelet[2724]: E1212 18:45:06.185968 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:45:06.186083 kubelet[2724]: E1212 18:45:06.186035 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:45:06.186258 kubelet[2724]: E1212 18:45:06.186180 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhqtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8d85bd9b7-nqzx2_calico-system(0dd6b358-9691-4ff7-9c07-9faa3b6a5832): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:45:06.187633 kubelet[2724]: E1212 18:45:06.187586 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8d85bd9b7-nqzx2" podUID="0dd6b358-9691-4ff7-9c07-9faa3b6a5832" Dec 12 18:45:07.905228 kubelet[2724]: E1212 18:45:07.905165 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-9p2sj" podUID="307920b5-5337-43c8-8c09-6d8750b41212" Dec 12 18:45:08.906879 kubelet[2724]: E1212 18:45:08.906825 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675b66b756-wh4s2" podUID="387f967a-d27c-485c-aeed-91421d359fb6" Dec 12 18:45:13.904305 containerd[1559]: time="2025-12-12T18:45:13.904180838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:45:14.035646 containerd[1559]: time="2025-12-12T18:45:14.035560621Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:45:14.036809 containerd[1559]: time="2025-12-12T18:45:14.036686485Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:45:14.036809 containerd[1559]: time="2025-12-12T18:45:14.036736625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:45:14.037175 kubelet[2724]: E1212 18:45:14.037099 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:45:14.037175 kubelet[2724]: E1212 18:45:14.037170 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:45:14.037898 kubelet[2724]: E1212 18:45:14.037321 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ctjpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n46fl_calico-system(af20f1b0-b34b-412e-a0a1-b4c0cada074e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:45:14.040347 containerd[1559]: time="2025-12-12T18:45:14.040071225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:45:14.175943 containerd[1559]: time="2025-12-12T18:45:14.175383944Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:45:14.176703 containerd[1559]: time="2025-12-12T18:45:14.176632088Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:45:14.176831 containerd[1559]: time="2025-12-12T18:45:14.176689719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:45:14.177129 kubelet[2724]: E1212 18:45:14.177043 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:45:14.177329 kubelet[2724]: E1212 18:45:14.177097 2724 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:45:14.177329 kubelet[2724]: E1212 18:45:14.177276 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ctjpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n46fl_calico-system(af20f1b0-b34b-412e-a0a1-b4c0cada074e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:45:14.178579 kubelet[2724]: E1212 18:45:14.178541 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-n46fl" podUID="af20f1b0-b34b-412e-a0a1-b4c0cada074e" Dec 12 18:45:14.907185 containerd[1559]: time="2025-12-12T18:45:14.906683984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:45:15.039549 containerd[1559]: time="2025-12-12T18:45:15.039487711Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:45:15.040892 containerd[1559]: time="2025-12-12T18:45:15.040778476Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:45:15.040892 containerd[1559]: time="2025-12-12T18:45:15.040854356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:45:15.041123 kubelet[2724]: E1212 18:45:15.041067 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:45:15.042167 kubelet[2724]: E1212 18:45:15.041155 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:45:15.042167 kubelet[2724]: E1212 18:45:15.041353 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8hmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rmclv_calico-system(0ba47eaa-f04d-4e71-87de-91abc04e7d96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:45:15.042640 kubelet[2724]: E1212 18:45:15.042573 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rmclv" podUID="0ba47eaa-f04d-4e71-87de-91abc04e7d96" Dec 12 18:45:17.904723 kubelet[2724]: E1212 18:45:17.904447 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-npwn2" podUID="83ec1740-f7bd-4f51-a6be-a16783749dd3" Dec 12 18:45:18.907129 kubelet[2724]: E1212 18:45:18.906520 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8d85bd9b7-nqzx2" podUID="0dd6b358-9691-4ff7-9c07-9faa3b6a5832" Dec 12 18:45:19.903839 kubelet[2724]: E1212 18:45:19.903779 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675b66b756-wh4s2" podUID="387f967a-d27c-485c-aeed-91421d359fb6" Dec 12 18:45:20.906819 containerd[1559]: time="2025-12-12T18:45:20.906718825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:45:21.049982 containerd[1559]: time="2025-12-12T18:45:21.049897181Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:45:21.051174 containerd[1559]: time="2025-12-12T18:45:21.051123104Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:45:21.051263 containerd[1559]: time="2025-12-12T18:45:21.051202054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:45:21.051673 kubelet[2724]: E1212 18:45:21.051600 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:45:21.052815 kubelet[2724]: E1212 18:45:21.052172 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:45:21.053003 kubelet[2724]: E1212 18:45:21.052951 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2c6mw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6db8fdd69c-9p2sj_calico-apiserver(307920b5-5337-43c8-8c09-6d8750b41212): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:45:21.054339 kubelet[2724]: E1212 18:45:21.054254 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-9p2sj" podUID="307920b5-5337-43c8-8c09-6d8750b41212" Dec 12 18:45:24.907280 kubelet[2724]: E1212 18:45:24.907157 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n46fl" podUID="af20f1b0-b34b-412e-a0a1-b4c0cada074e" Dec 12 18:45:26.906302 kubelet[2724]: E1212 18:45:26.906163 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rmclv" podUID="0ba47eaa-f04d-4e71-87de-91abc04e7d96" Dec 12 18:45:28.905505 kubelet[2724]: E1212 18:45:28.904326 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-npwn2" podUID="83ec1740-f7bd-4f51-a6be-a16783749dd3" Dec 12 18:45:32.882424 systemd[1]: Started sshd@7-172.239.194.183:22-139.178.68.195:43888.service - OpenSSH per-connection server daemon (139.178.68.195:43888). Dec 12 18:45:32.912015 kubelet[2724]: E1212 18:45:32.911931 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8d85bd9b7-nqzx2" podUID="0dd6b358-9691-4ff7-9c07-9faa3b6a5832" Dec 12 18:45:33.246079 sshd[4909]: Accepted publickey for core from 139.178.68.195 port 43888 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:45:33.250090 sshd-session[4909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:33.258525 systemd-logind[1526]: New session 8 of user core. Dec 12 18:45:33.263400 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 18:45:33.589358 sshd[4912]: Connection closed by 139.178.68.195 port 43888 Dec 12 18:45:33.590544 sshd-session[4909]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:33.596216 systemd-logind[1526]: Session 8 logged out. Waiting for processes to exit. Dec 12 18:45:33.596558 systemd[1]: sshd@7-172.239.194.183:22-139.178.68.195:43888.service: Deactivated successfully. Dec 12 18:45:33.600031 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 18:45:33.603623 systemd-logind[1526]: Removed session 8. 
Dec 12 18:45:33.905048 kubelet[2724]: E1212 18:45:33.904750 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-9p2sj" podUID="307920b5-5337-43c8-8c09-6d8750b41212" Dec 12 18:45:33.912126 kubelet[2724]: E1212 18:45:33.911811 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675b66b756-wh4s2" podUID="387f967a-d27c-485c-aeed-91421d359fb6" Dec 12 18:45:38.653621 systemd[1]: Started sshd@8-172.239.194.183:22-139.178.68.195:43900.service - OpenSSH per-connection server daemon (139.178.68.195:43900). Dec 12 18:45:38.914931 kubelet[2724]: E1212 18:45:38.914359 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n46fl" podUID="af20f1b0-b34b-412e-a0a1-b4c0cada074e" Dec 12 18:45:39.023913 sshd[4951]: Accepted publickey for core from 139.178.68.195 port 43900 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:45:39.027048 sshd-session[4951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:39.034919 systemd-logind[1526]: New session 9 of user core. Dec 12 18:45:39.040268 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 12 18:45:39.386942 sshd[4958]: Connection closed by 139.178.68.195 port 43900 Dec 12 18:45:39.388136 sshd-session[4951]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:39.395194 systemd[1]: sshd@8-172.239.194.183:22-139.178.68.195:43900.service: Deactivated successfully. Dec 12 18:45:39.398088 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 18:45:39.401580 systemd-logind[1526]: Session 9 logged out. Waiting for processes to exit. Dec 12 18:45:39.403720 systemd-logind[1526]: Removed session 9. Dec 12 18:45:39.903931 kubelet[2724]: E1212 18:45:39.903591 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rmclv" podUID="0ba47eaa-f04d-4e71-87de-91abc04e7d96" Dec 12 18:45:39.903931 kubelet[2724]: E1212 18:45:39.903875 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-npwn2" podUID="83ec1740-f7bd-4f51-a6be-a16783749dd3" Dec 12 18:45:40.904159 kubelet[2724]: E1212 18:45:40.903828 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:45:41.903015 kubelet[2724]: E1212 18:45:41.902743 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:45:44.450435 systemd[1]: Started sshd@9-172.239.194.183:22-139.178.68.195:56454.service - OpenSSH per-connection server daemon (139.178.68.195:56454). Dec 12 18:45:44.787301 sshd[4971]: Accepted publickey for core from 139.178.68.195 port 56454 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:45:44.789734 sshd-session[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:44.798700 systemd-logind[1526]: New session 10 of user core. Dec 12 18:45:44.806501 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 12 18:45:44.908671 kubelet[2724]: E1212 18:45:44.908645 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:45:44.910764 kubelet[2724]: E1212 18:45:44.910703 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675b66b756-wh4s2" podUID="387f967a-d27c-485c-aeed-91421d359fb6" Dec 12 18:45:45.148205 sshd[4974]: Connection closed by 139.178.68.195 port 56454 Dec 12 18:45:45.149899 sshd-session[4971]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:45.154487 systemd-logind[1526]: Session 10 logged out. Waiting for processes to exit. Dec 12 18:45:45.155215 systemd[1]: sshd@9-172.239.194.183:22-139.178.68.195:56454.service: Deactivated successfully. Dec 12 18:45:45.158145 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 18:45:45.161981 systemd-logind[1526]: Removed session 10. Dec 12 18:45:45.208326 systemd[1]: Started sshd@10-172.239.194.183:22-139.178.68.195:56470.service - OpenSSH per-connection server daemon (139.178.68.195:56470). Dec 12 18:45:45.543301 sshd[4987]: Accepted publickey for core from 139.178.68.195 port 56470 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:45:45.544936 sshd-session[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:45.552296 systemd-logind[1526]: New session 11 of user core. Dec 12 18:45:45.554393 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 12 18:45:45.894550 sshd[4990]: Connection closed by 139.178.68.195 port 56470 Dec 12 18:45:45.893875 sshd-session[4987]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:45.898442 systemd-logind[1526]: Session 11 logged out. Waiting for processes to exit. Dec 12 18:45:45.899368 systemd[1]: sshd@10-172.239.194.183:22-139.178.68.195:56470.service: Deactivated successfully. 
Dec 12 18:45:45.903459 kubelet[2724]: E1212 18:45:45.903225 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8d85bd9b7-nqzx2" podUID="0dd6b358-9691-4ff7-9c07-9faa3b6a5832" Dec 12 18:45:45.903462 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 18:45:45.906644 systemd-logind[1526]: Removed session 11. Dec 12 18:45:45.956432 systemd[1]: Started sshd@11-172.239.194.183:22-139.178.68.195:56480.service - OpenSSH per-connection server daemon (139.178.68.195:56480). Dec 12 18:45:46.310837 sshd[5000]: Accepted publickey for core from 139.178.68.195 port 56480 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:45:46.312650 sshd-session[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:46.322024 systemd-logind[1526]: New session 12 of user core. Dec 12 18:45:46.327257 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 18:45:46.666483 sshd[5003]: Connection closed by 139.178.68.195 port 56480 Dec 12 18:45:46.670705 sshd-session[5000]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:46.677311 systemd-logind[1526]: Session 12 logged out. Waiting for processes to exit. Dec 12 18:45:46.678338 systemd[1]: sshd@11-172.239.194.183:22-139.178.68.195:56480.service: Deactivated successfully. Dec 12 18:45:46.683519 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 18:45:46.689163 systemd-logind[1526]: Removed session 12. 
Dec 12 18:45:46.906250 kubelet[2724]: E1212 18:45:46.905517 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-9p2sj" podUID="307920b5-5337-43c8-8c09-6d8750b41212" Dec 12 18:45:49.903372 kubelet[2724]: E1212 18:45:49.902725 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:45:50.913916 kubelet[2724]: E1212 18:45:50.913761 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rmclv" podUID="0ba47eaa-f04d-4e71-87de-91abc04e7d96" Dec 12 18:45:51.735059 systemd[1]: Started sshd@12-172.239.194.183:22-139.178.68.195:44470.service - OpenSSH per-connection server daemon (139.178.68.195:44470). Dec 12 18:45:51.904291 kubelet[2724]: E1212 18:45:51.903926 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-npwn2" podUID="83ec1740-f7bd-4f51-a6be-a16783749dd3" Dec 12 18:45:52.094936 sshd[5016]: Accepted publickey for core from 139.178.68.195 port 44470 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:45:52.099315 sshd-session[5016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:52.107168 systemd-logind[1526]: New session 13 of user core. Dec 12 18:45:52.112340 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 18:45:52.418358 sshd[5019]: Connection closed by 139.178.68.195 port 44470 Dec 12 18:45:52.419228 sshd-session[5016]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:52.424367 systemd[1]: sshd@12-172.239.194.183:22-139.178.68.195:44470.service: Deactivated successfully. Dec 12 18:45:52.429954 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 18:45:52.432388 systemd-logind[1526]: Session 13 logged out. Waiting for processes to exit. Dec 12 18:45:52.434737 systemd-logind[1526]: Removed session 13. 
Dec 12 18:45:53.904201 kubelet[2724]: E1212 18:45:53.904153 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n46fl" podUID="af20f1b0-b34b-412e-a0a1-b4c0cada074e" Dec 12 18:45:57.483736 systemd[1]: Started sshd@13-172.239.194.183:22-139.178.68.195:44484.service - OpenSSH per-connection server daemon (139.178.68.195:44484). Dec 12 18:45:57.836904 sshd[5031]: Accepted publickey for core from 139.178.68.195 port 44484 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:45:57.839783 sshd-session[5031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:57.848660 systemd-logind[1526]: New session 14 of user core. Dec 12 18:45:57.855232 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 12 18:45:57.903054 kubelet[2724]: E1212 18:45:57.902978 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-9p2sj" podUID="307920b5-5337-43c8-8c09-6d8750b41212" Dec 12 18:45:57.903054 kubelet[2724]: E1212 18:45:57.902998 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8d85bd9b7-nqzx2" podUID="0dd6b358-9691-4ff7-9c07-9faa3b6a5832" Dec 12 18:45:58.180146 sshd[5034]: Connection closed by 139.178.68.195 port 44484 Dec 12 18:45:58.180947 sshd-session[5031]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:58.188665 systemd[1]: sshd@13-172.239.194.183:22-139.178.68.195:44484.service: Deactivated successfully. Dec 12 18:45:58.192086 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 18:45:58.193177 systemd-logind[1526]: Session 14 logged out. Waiting for processes to exit. Dec 12 18:45:58.195327 systemd-logind[1526]: Removed session 14. 
Dec 12 18:45:58.903993 kubelet[2724]: E1212 18:45:58.903936 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:45:59.904904 kubelet[2724]: E1212 18:45:59.904746 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675b66b756-wh4s2" podUID="387f967a-d27c-485c-aeed-91421d359fb6" Dec 12 18:46:03.249192 systemd[1]: Started sshd@14-172.239.194.183:22-139.178.68.195:34316.service - OpenSSH per-connection server daemon (139.178.68.195:34316). Dec 12 18:46:03.600958 sshd[5048]: Accepted publickey for core from 139.178.68.195 port 34316 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:46:03.602668 sshd-session[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:46:03.608289 systemd-logind[1526]: New session 15 of user core. Dec 12 18:46:03.615238 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 12 18:46:03.939048 sshd[5051]: Connection closed by 139.178.68.195 port 34316 Dec 12 18:46:03.940293 sshd-session[5048]: pam_unix(sshd:session): session closed for user core Dec 12 18:46:03.945280 systemd[1]: sshd@14-172.239.194.183:22-139.178.68.195:34316.service: Deactivated successfully. Dec 12 18:46:03.947307 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 18:46:03.948512 systemd-logind[1526]: Session 15 logged out. Waiting for processes to exit. Dec 12 18:46:03.950094 systemd-logind[1526]: Removed session 15. 
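The dns.go:153 warnings above mean the node's resolv.conf lists more nameservers than the kubelet will apply: only the first three are kept (the same limit the glibc resolver enforces), which is why exactly three addresses appear in the "applied nameserver line". A minimal sketch of that truncation, assuming the usual /etc/resolv.conf path on this host:

```python
#!/usr/bin/env python3
"""Show which resolv.conf nameservers fall outside the 3-entry limit.

Illustrates the "Nameserver limits exceeded" warning above: only the
first three nameserver lines are applied, the rest are omitted. The
/etc/resolv.conf path is an assumption about this host.
"""
MAX_NAMESERVERS = 3  # limit behind the kubelet warning

def split_nameservers(path="/etc/resolv.conf"):
    servers = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
    return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

if __name__ == "__main__":
    applied, omitted = split_nameservers()
    print("applied:", " ".join(applied))
    if omitted:
        print("omitted:", " ".join(omitted))
```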
Dec 12 18:46:04.904724 kubelet[2724]: E1212 18:46:04.904508 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:46:04.905084 kubelet[2724]: E1212 18:46:04.905033 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n46fl" podUID="af20f1b0-b34b-412e-a0a1-b4c0cada074e" Dec 12 18:46:05.903692 kubelet[2724]: E1212 18:46:05.903608 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rmclv" podUID="0ba47eaa-f04d-4e71-87de-91abc04e7d96" Dec 12 18:46:06.906798 kubelet[2724]: E1212 18:46:06.906134 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-npwn2" podUID="83ec1740-f7bd-4f51-a6be-a16783749dd3" Dec 12 18:46:08.999094 systemd[1]: Started sshd@15-172.239.194.183:22-139.178.68.195:34330.service - OpenSSH per-connection server daemon (139.178.68.195:34330). Dec 12 18:46:09.333993 sshd[5093]: Accepted publickey for core from 139.178.68.195 port 34330 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:46:09.334508 sshd-session[5093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:46:09.339876 systemd-logind[1526]: New session 16 of user core. Dec 12 18:46:09.344213 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 12 18:46:09.654267 sshd[5096]: Connection closed by 139.178.68.195 port 34330 Dec 12 18:46:09.655039 sshd-session[5093]: pam_unix(sshd:session): session closed for user core Dec 12 18:46:09.659651 systemd[1]: sshd@15-172.239.194.183:22-139.178.68.195:34330.service: Deactivated successfully. Dec 12 18:46:09.662939 systemd[1]: session-16.scope: Deactivated successfully. 
Dec 12 18:46:09.666429 systemd-logind[1526]: Session 16 logged out. Waiting for processes to exit. Dec 12 18:46:09.668510 systemd-logind[1526]: Removed session 16. Dec 12 18:46:09.903023 kubelet[2724]: E1212 18:46:09.902944 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8d85bd9b7-nqzx2" podUID="0dd6b358-9691-4ff7-9c07-9faa3b6a5832" Dec 12 18:46:11.903156 kubelet[2724]: E1212 18:46:11.902869 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-9p2sj" podUID="307920b5-5337-43c8-8c09-6d8750b41212" Dec 12 18:46:12.910192 kubelet[2724]: E1212 18:46:12.909428 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675b66b756-wh4s2" podUID="387f967a-d27c-485c-aeed-91421d359fb6" Dec 12 18:46:14.719336 systemd[1]: Started sshd@16-172.239.194.183:22-139.178.68.195:36002.service - OpenSSH per-connection server daemon (139.178.68.195:36002). Dec 12 18:46:15.061400 sshd[5109]: Accepted publickey for core from 139.178.68.195 port 36002 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:46:15.062207 sshd-session[5109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:46:15.070842 systemd-logind[1526]: New session 17 of user core. Dec 12 18:46:15.074239 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 12 18:46:15.380324 sshd[5117]: Connection closed by 139.178.68.195 port 36002 Dec 12 18:46:15.382367 sshd-session[5109]: pam_unix(sshd:session): session closed for user core Dec 12 18:46:15.387687 systemd-logind[1526]: Session 17 logged out. Waiting for processes to exit. Dec 12 18:46:15.388770 systemd[1]: sshd@16-172.239.194.183:22-139.178.68.195:36002.service: Deactivated successfully. 
Dec 12 18:46:15.391745 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 18:46:15.393950 systemd-logind[1526]: Removed session 17. Dec 12 18:46:15.904660 kubelet[2724]: E1212 18:46:15.904388 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n46fl" podUID="af20f1b0-b34b-412e-a0a1-b4c0cada074e" Dec 12 18:46:17.902737 kubelet[2724]: E1212 18:46:17.902665 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rmclv" podUID="0ba47eaa-f04d-4e71-87de-91abc04e7d96" Dec 12 18:46:19.904918 kubelet[2724]: E1212 18:46:19.904700 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-npwn2" podUID="83ec1740-f7bd-4f51-a6be-a16783749dd3" Dec 12 18:46:20.456372 systemd[1]: Started sshd@17-172.239.194.183:22-139.178.68.195:49570.service - OpenSSH per-connection server daemon (139.178.68.195:49570). Dec 12 18:46:20.800788 sshd[5130]: Accepted publickey for core from 139.178.68.195 port 49570 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:46:20.804224 sshd-session[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:46:20.811317 systemd-logind[1526]: New session 18 of user core. Dec 12 18:46:20.816684 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 12 18:46:21.145013 sshd[5133]: Connection closed by 139.178.68.195 port 49570 Dec 12 18:46:21.146440 sshd-session[5130]: pam_unix(sshd:session): session closed for user core Dec 12 18:46:21.154098 systemd[1]: sshd@17-172.239.194.183:22-139.178.68.195:49570.service: Deactivated successfully. Dec 12 18:46:21.158899 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 18:46:21.163773 systemd-logind[1526]: Session 18 logged out. Waiting for processes to exit. 
Dec 12 18:46:21.166746 systemd-logind[1526]: Removed session 18. Dec 12 18:46:21.211962 systemd[1]: Started sshd@18-172.239.194.183:22-139.178.68.195:49580.service - OpenSSH per-connection server daemon (139.178.68.195:49580). Dec 12 18:46:21.575166 sshd[5145]: Accepted publickey for core from 139.178.68.195 port 49580 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:46:21.576408 sshd-session[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:46:21.582386 systemd-logind[1526]: New session 19 of user core. Dec 12 18:46:21.590889 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 12 18:46:21.903187 kubelet[2724]: E1212 18:46:21.902995 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:46:21.906963 kubelet[2724]: E1212 18:46:21.906881 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8d85bd9b7-nqzx2" podUID="0dd6b358-9691-4ff7-9c07-9faa3b6a5832" Dec 12 18:46:21.995156 sshd[5148]: Connection closed by 139.178.68.195 port 49580 Dec 12 18:46:21.995919 sshd-session[5145]: pam_unix(sshd:session): session closed for user core Dec 12 18:46:22.001362 systemd[1]: sshd@18-172.239.194.183:22-139.178.68.195:49580.service: Deactivated successfully. Dec 12 18:46:22.001971 systemd-logind[1526]: Session 19 logged out. Waiting for processes to exit. Dec 12 18:46:22.004333 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 18:46:22.008232 systemd-logind[1526]: Removed session 19. Dec 12 18:46:22.064324 systemd[1]: Started sshd@19-172.239.194.183:22-139.178.68.195:49586.service - OpenSSH per-connection server daemon (139.178.68.195:49586). Dec 12 18:46:22.415792 sshd[5158]: Accepted publickey for core from 139.178.68.195 port 49586 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:46:22.419391 sshd-session[5158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:46:22.426060 systemd-logind[1526]: New session 20 of user core. Dec 12 18:46:22.432246 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 12 18:46:23.308732 sshd[5161]: Connection closed by 139.178.68.195 port 49586 Dec 12 18:46:23.309359 sshd-session[5158]: pam_unix(sshd:session): session closed for user core Dec 12 18:46:23.314961 systemd[1]: sshd@19-172.239.194.183:22-139.178.68.195:49586.service: Deactivated successfully. Dec 12 18:46:23.319043 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 18:46:23.321351 systemd-logind[1526]: Session 20 logged out. Waiting for processes to exit. Dec 12 18:46:23.324765 systemd-logind[1526]: Removed session 20. Dec 12 18:46:23.376308 systemd[1]: Started sshd@20-172.239.194.183:22-139.178.68.195:49592.service - OpenSSH per-connection server daemon (139.178.68.195:49592). 
Dec 12 18:46:23.727667 sshd[5180]: Accepted publickey for core from 139.178.68.195 port 49592 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:46:23.732020 sshd-session[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:46:23.740864 systemd-logind[1526]: New session 21 of user core. Dec 12 18:46:23.747250 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 12 18:46:23.903542 kubelet[2724]: E1212 18:46:23.903503 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-9p2sj" podUID="307920b5-5337-43c8-8c09-6d8750b41212" Dec 12 18:46:24.205994 sshd[5185]: Connection closed by 139.178.68.195 port 49592 Dec 12 18:46:24.206880 sshd-session[5180]: pam_unix(sshd:session): session closed for user core Dec 12 18:46:24.212639 systemd[1]: sshd@20-172.239.194.183:22-139.178.68.195:49592.service: Deactivated successfully. Dec 12 18:46:24.216469 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 18:46:24.219129 systemd-logind[1526]: Session 21 logged out. Waiting for processes to exit. Dec 12 18:46:24.220937 systemd-logind[1526]: Removed session 21. Dec 12 18:46:24.266302 systemd[1]: Started sshd@21-172.239.194.183:22-139.178.68.195:49596.service - OpenSSH per-connection server daemon (139.178.68.195:49596). Dec 12 18:46:24.619816 sshd[5195]: Accepted publickey for core from 139.178.68.195 port 49596 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:46:24.623065 sshd-session[5195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:46:24.628940 systemd-logind[1526]: New session 22 of user core. Dec 12 18:46:24.634493 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 12 18:46:24.943279 sshd[5198]: Connection closed by 139.178.68.195 port 49596 Dec 12 18:46:24.944181 sshd-session[5195]: pam_unix(sshd:session): session closed for user core Dec 12 18:46:24.948201 systemd-logind[1526]: Session 22 logged out. Waiting for processes to exit. Dec 12 18:46:24.948994 systemd[1]: sshd@21-172.239.194.183:22-139.178.68.195:49596.service: Deactivated successfully. Dec 12 18:46:24.951201 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 18:46:24.952829 systemd-logind[1526]: Removed session 22. 
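The ImagePullBackOff messages keep reappearing because the kubelet retries failed pulls on an exponential back-off rather than immediately. The sketch below only illustrates that doubling-with-a-cap pattern; the 10-second initial delay and 5-minute cap are commonly cited kubelet defaults and are assumptions here, not values read from this node's configuration.

```python
#!/usr/bin/env python3
"""Sketch of the doubling back-off pattern behind ImagePullBackOff.

Illustrative only: the initial delay and cap below are assumed default
values, not settings taken from this node.
"""
from datetime import timedelta

def backoff_schedule(initial=timedelta(seconds=10),
                     cap=timedelta(minutes=5),
                     attempts=8):
    # Delay doubles after each failed attempt until it hits the cap.
    delay = initial
    for attempt in range(1, attempts + 1):
        yield attempt, min(delay, cap)
        delay *= 2

if __name__ == "__main__":
    for attempt, delay in backoff_schedule():
        print(f"attempt {attempt}: wait {delay}")
```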
Dec 12 18:46:27.905065 kubelet[2724]: E1212 18:46:27.904988 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n46fl" podUID="af20f1b0-b34b-412e-a0a1-b4c0cada074e" Dec 12 18:46:27.905649 containerd[1559]: time="2025-12-12T18:46:27.905609036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:46:28.060300 containerd[1559]: time="2025-12-12T18:46:28.059340568Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:46:28.061389 containerd[1559]: time="2025-12-12T18:46:28.061300108Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:46:28.061444 containerd[1559]: time="2025-12-12T18:46:28.061398284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:46:28.063479 kubelet[2724]: E1212 18:46:28.063228 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:46:28.063479 kubelet[2724]: E1212 18:46:28.063280 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:46:28.063479 kubelet[2724]: E1212 18:46:28.063399 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:616f14a12ce949ddb0ea243c4dc4501f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kg97q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-675b66b756-wh4s2_calico-system(387f967a-d27c-485c-aeed-91421d359fb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:46:28.065366 containerd[1559]: time="2025-12-12T18:46:28.065322013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:46:28.237596 containerd[1559]: time="2025-12-12T18:46:28.237517199Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:46:28.238738 containerd[1559]: time="2025-12-12T18:46:28.238618569Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:46:28.238738 containerd[1559]: time="2025-12-12T18:46:28.238681047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:46:28.239771 kubelet[2724]: E1212 18:46:28.239410 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:46:28.239919 kubelet[2724]: E1212 18:46:28.239868 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:46:28.240099 kubelet[2724]: E1212 18:46:28.240041 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kg97q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-675b66b756-wh4s2_calico-system(387f967a-d27c-485c-aeed-91421d359fb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:46:28.241552 kubelet[2724]: E1212 18:46:28.241480 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675b66b756-wh4s2" podUID="387f967a-d27c-485c-aeed-91421d359fb6" Dec 12 18:46:30.008348 systemd[1]: Started sshd@22-172.239.194.183:22-139.178.68.195:49598.service - OpenSSH per-connection server daemon (139.178.68.195:49598). 
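The kuberuntime_manager.go "Unhandled Error" entries above dump the full Go &Container{...} struct of the container that failed to start, which makes them hard to scan by eye. Below is a small, assumption-laden helper for pulling just the container name and image out of such a dump; the sample string is a truncated copy of the whisker entry above, and the regex only covers the leading Name:/Image: fields.

```python
#!/usr/bin/env python3
"""Extract the container name and image from kubelet's &Container{...} dumps.

Minimal sketch for reading the "Unhandled Error" entries above; only the
leading Name: and Image: fields of the dumped struct are parsed.
"""
import re

# Truncated copy of one dump from the log, used as sample input.
SAMPLE = ('&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,'
          'Command:[],Args:[],WorkingDir:,...}')

PATTERN = re.compile(r"&Container\{Name:([^,]+),Image:([^,]+),")

def name_and_image(dump: str):
    match = PATTERN.search(dump)
    if not match:
        raise ValueError("no &Container{...} dump found")
    return match.group(1), match.group(2)

if __name__ == "__main__":
    name, image = name_and_image(SAMPLE)
    print(f"container {name} wanted image {image}")
```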
Dec 12 18:46:30.348339 sshd[5210]: Accepted publickey for core from 139.178.68.195 port 49598 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:46:30.349618 sshd-session[5210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:46:30.355351 systemd-logind[1526]: New session 23 of user core. Dec 12 18:46:30.361265 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 12 18:46:30.679231 sshd[5213]: Connection closed by 139.178.68.195 port 49598 Dec 12 18:46:30.679885 sshd-session[5210]: pam_unix(sshd:session): session closed for user core Dec 12 18:46:30.687180 systemd[1]: sshd@22-172.239.194.183:22-139.178.68.195:49598.service: Deactivated successfully. Dec 12 18:46:30.693272 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 18:46:30.699362 systemd-logind[1526]: Session 23 logged out. Waiting for processes to exit. Dec 12 18:46:30.702034 systemd-logind[1526]: Removed session 23. Dec 12 18:46:30.903147 kubelet[2724]: E1212 18:46:30.902884 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 12 18:46:30.904793 kubelet[2724]: E1212 18:46:30.904734 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rmclv" podUID="0ba47eaa-f04d-4e71-87de-91abc04e7d96" Dec 12 18:46:31.902740 containerd[1559]: time="2025-12-12T18:46:31.902669177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:46:32.049552 containerd[1559]: time="2025-12-12T18:46:32.049478712Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:46:32.050812 containerd[1559]: time="2025-12-12T18:46:32.050740670Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:46:32.051013 containerd[1559]: time="2025-12-12T18:46:32.050849706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:46:32.051206 kubelet[2724]: E1212 18:46:32.051162 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:46:32.051649 kubelet[2724]: E1212 18:46:32.051212 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 
12 18:46:32.051649 kubelet[2724]: E1212 18:46:32.051509 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxf6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6db8fdd69c-npwn2_calico-apiserver(83ec1740-f7bd-4f51-a6be-a16783749dd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:46:32.052897 kubelet[2724]: E1212 18:46:32.052844 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-npwn2" podUID="83ec1740-f7bd-4f51-a6be-a16783749dd3" Dec 12 18:46:33.902520 containerd[1559]: time="2025-12-12T18:46:33.902484713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:46:34.056479 containerd[1559]: time="2025-12-12T18:46:34.056357295Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:46:34.058964 containerd[1559]: time="2025-12-12T18:46:34.057630463Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:46:34.058964 containerd[1559]: time="2025-12-12T18:46:34.057724610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:46:34.059991 kubelet[2724]: E1212 18:46:34.059289 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:46:34.059991 kubelet[2724]: E1212 18:46:34.059363 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:46:34.059991 kubelet[2724]: E1212 18:46:34.059528 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhqtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8d85bd9b7-nqzx2_calico-system(0dd6b358-9691-4ff7-9c07-9faa3b6a5832): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:46:34.060654 kubelet[2724]: E1212 18:46:34.060616 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8d85bd9b7-nqzx2" podUID="0dd6b358-9691-4ff7-9c07-9faa3b6a5832" Dec 12 18:46:35.744317 systemd[1]: Started sshd@23-172.239.194.183:22-139.178.68.195:45836.service - OpenSSH per-connection server daemon (139.178.68.195:45836). Dec 12 18:46:35.904420 kubelet[2724]: E1212 18:46:35.904131 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db8fdd69c-9p2sj" podUID="307920b5-5337-43c8-8c09-6d8750b41212" Dec 12 18:46:36.093705 sshd[5227]: Accepted publickey for core from 139.178.68.195 port 45836 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:46:36.095739 sshd-session[5227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:46:36.101507 systemd-logind[1526]: New session 24 of user core. Dec 12 18:46:36.108261 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 12 18:46:36.394786 sshd[5230]: Connection closed by 139.178.68.195 port 45836 Dec 12 18:46:36.395161 sshd-session[5227]: pam_unix(sshd:session): session closed for user core Dec 12 18:46:36.401048 systemd[1]: sshd@23-172.239.194.183:22-139.178.68.195:45836.service: Deactivated successfully. Dec 12 18:46:36.403766 systemd[1]: session-24.scope: Deactivated successfully. Dec 12 18:46:36.405329 systemd-logind[1526]: Session 24 logged out. Waiting for processes to exit. 
Dec 12 18:46:36.407549 systemd-logind[1526]: Removed session 24. Dec 12 18:46:38.905754 containerd[1559]: time="2025-12-12T18:46:38.905713644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:46:39.054770 containerd[1559]: time="2025-12-12T18:46:39.054694697Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:46:39.056094 containerd[1559]: time="2025-12-12T18:46:39.056015046Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:46:39.056094 containerd[1559]: time="2025-12-12T18:46:39.056068124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:46:39.056677 kubelet[2724]: E1212 18:46:39.056636 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:46:39.057453 kubelet[2724]: E1212 18:46:39.057055 2724 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:46:39.057741 kubelet[2724]: E1212 18:46:39.057700 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ctjpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n46fl_calico-system(af20f1b0-b34b-412e-a0a1-b4c0cada074e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:46:39.059813 containerd[1559]: time="2025-12-12T18:46:39.059769110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:46:39.185497 containerd[1559]: time="2025-12-12T18:46:39.185172673Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:46:39.186459 containerd[1559]: time="2025-12-12T18:46:39.186425725Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:46:39.186641 containerd[1559]: time="2025-12-12T18:46:39.186536011Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:46:39.186975 kubelet[2724]: E1212 18:46:39.186924 2724 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:46:39.187201 kubelet[2724]: E1212 18:46:39.187097 2724 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:46:39.189168 kubelet[2724]: E1212 18:46:39.187469 2724 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ctjpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n46fl_calico-system(af20f1b0-b34b-412e-a0a1-b4c0cada074e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:46:39.190595 kubelet[2724]: E1212 18:46:39.190536 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-n46fl" podUID="af20f1b0-b34b-412e-a0a1-b4c0cada074e" Dec 12 18:46:41.458976 systemd[1]: Started sshd@24-172.239.194.183:22-139.178.68.195:32926.service - OpenSSH per-connection server daemon (139.178.68.195:32926). Dec 12 18:46:41.797533 sshd[5267]: Accepted publickey for core from 139.178.68.195 port 32926 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:46:41.799263 sshd-session[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:46:41.805006 systemd-logind[1526]: New session 25 of user core. Dec 12 18:46:41.810474 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 12 18:46:41.904372 kubelet[2724]: E1212 18:46:41.904284 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675b66b756-wh4s2" podUID="387f967a-d27c-485c-aeed-91421d359fb6" Dec 12 18:46:42.148719 sshd[5270]: Connection closed by 139.178.68.195 port 32926 Dec 12 18:46:42.149507 sshd-session[5267]: pam_unix(sshd:session): session closed for user core Dec 12 18:46:42.155054 systemd-logind[1526]: Session 25 logged out. Waiting for processes to exit. Dec 12 18:46:42.156502 systemd[1]: sshd@24-172.239.194.183:22-139.178.68.195:32926.service: Deactivated successfully. Dec 12 18:46:42.160660 systemd[1]: session-25.scope: Deactivated successfully. Dec 12 18:46:42.163521 systemd-logind[1526]: Removed session 25. Dec 12 18:46:42.903933 kubelet[2724]: E1212 18:46:42.903485 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"