Nov 5 00:11:51.440200 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 4 22:00:22 -00 2025
Nov 5 00:11:51.440273 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 5 00:11:51.440287 kernel: BIOS-provided physical RAM map:
Nov 5 00:11:51.440296 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Nov 5 00:11:51.440304 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Nov 5 00:11:51.440331 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 5 00:11:51.440343 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Nov 5 00:11:51.440362 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Nov 5 00:11:51.440375 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 5 00:11:51.440384 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 5 00:11:51.440393 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 5 00:11:51.440401 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 5 00:11:51.440410 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Nov 5 00:11:51.440437 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 5 00:11:51.440450 kernel: NX (Execute Disable) protection: active
Nov 5 00:11:51.440460 kernel: APIC: Static calls initialized
Nov 5 00:11:51.440470 kernel: SMBIOS 2.8 present.
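As a side note for anyone post-processing logs like this: the BIOS-e820 entries above can be machine-parsed to total the usable RAM the firmware reported. A minimal sketch, under the assumption that entries follow the `BIOS-e820: [mem 0xSTART-0xEND] type` format seen here; the `usable_bytes` helper name and the trimmed sample lines are illustrative, not part of the log:

```python
import re

# Match "BIOS-e820: [mem 0xSTART-0xEND] type"; START/END are inclusive bounds.
E820_RE = re.compile(r"BIOS-e820: \[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] (\w+)")

def usable_bytes(lines):
    """Sum the sizes of all e820 ranges typed 'usable'."""
    total = 0
    for line in lines:
        m = E820_RE.search(line)
        if m and m.group(3) == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1  # ranges are inclusive on both ends
    return total

# The three usable ranges from the map above.
log = [
    "BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable",
    "BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable",
    "BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable",
]
print(usable_bytes(log))  # 4294428672 bytes, just under 4 GiB for this guest
```

The total is slightly below a flat 4 GiB because of the legacy holes below 1 MiB and the reserved window at the top of the 32-bit range, which is consistent with the `Memory: 3984336K/4193772K available` line the kernel prints later.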
Nov 5 00:11:51.440479 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Nov 5 00:11:51.440508 kernel: DMI: Memory slots populated: 1/1
Nov 5 00:11:51.440519 kernel: Hypervisor detected: KVM
Nov 5 00:11:51.440528 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Nov 5 00:11:51.440537 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 5 00:11:51.440547 kernel: kvm-clock: using sched offset of 9979060260 cycles
Nov 5 00:11:51.440557 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 5 00:11:51.440567 kernel: tsc: Detected 2000.000 MHz processor
Nov 5 00:11:51.440577 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 5 00:11:51.440587 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 5 00:11:51.440621 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Nov 5 00:11:51.440667 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 5 00:11:51.440681 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 5 00:11:51.440691 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Nov 5 00:11:51.440701 kernel: Using GB pages for direct mapping
Nov 5 00:11:51.440711 kernel: ACPI: Early table checksum verification disabled
Nov 5 00:11:51.440720 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Nov 5 00:11:51.440751 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 00:11:51.440762 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 00:11:51.440772 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 00:11:51.440782 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 5 00:11:51.440792 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 00:11:51.440801 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 00:11:51.440846 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 00:11:51.440859 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 00:11:51.440869 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Nov 5 00:11:51.440880 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Nov 5 00:11:51.440890 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 5 00:11:51.440920 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Nov 5 00:11:51.440931 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Nov 5 00:11:51.440941 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Nov 5 00:11:51.440951 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Nov 5 00:11:51.440961 kernel: No NUMA configuration found
Nov 5 00:11:51.440971 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Nov 5 00:11:51.440981 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Nov 5 00:11:51.441024 kernel: Zone ranges:
Nov 5 00:11:51.441049 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 5 00:11:51.441067 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 5 00:11:51.441084 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Nov 5 00:11:51.441099 kernel: Device empty
Nov 5 00:11:51.441110 kernel: Movable zone start for each node
Nov 5 00:11:51.441120 kernel: Early memory node ranges
Nov 5 00:11:51.441130 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 5 00:11:51.441162 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Nov 5 00:11:51.441173 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Nov 5 00:11:51.441184 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Nov 5 00:11:51.441194 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 5 00:11:51.441204 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 5 00:11:51.441222 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Nov 5 00:11:51.441236 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 5 00:11:51.441265 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 5 00:11:51.441278 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 5 00:11:51.441289 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 5 00:11:51.441299 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 5 00:11:51.441309 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 5 00:11:51.441319 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 5 00:11:51.441329 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 5 00:11:51.441357 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 5 00:11:51.441370 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 5 00:11:51.441380 kernel: TSC deadline timer available
Nov 5 00:11:51.441390 kernel: CPU topo: Max. logical packages: 1
Nov 5 00:11:51.441400 kernel: CPU topo: Max. logical dies: 1
Nov 5 00:11:51.441410 kernel: CPU topo: Max. dies per package: 1
Nov 5 00:11:51.441420 kernel: CPU topo: Max. threads per core: 1
Nov 5 00:11:51.441447 kernel: CPU topo: Num. cores per package: 2
Nov 5 00:11:51.441460 kernel: CPU topo: Num. threads per package: 2
Nov 5 00:11:51.441470 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Nov 5 00:11:51.441480 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 5 00:11:51.441490 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 5 00:11:51.441500 kernel: kvm-guest: setup PV sched yield
Nov 5 00:11:51.441510 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 5 00:11:51.441520 kernel: Booting paravirtualized kernel on KVM
Nov 5 00:11:51.441547 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 5 00:11:51.441560 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 5 00:11:51.441570 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Nov 5 00:11:51.441581 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Nov 5 00:11:51.441591 kernel: pcpu-alloc: [0] 0 1
Nov 5 00:11:51.441601 kernel: kvm-guest: PV spinlocks enabled
Nov 5 00:11:51.441611 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 5 00:11:51.441659 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 5 00:11:51.441673 kernel: random: crng init done
Nov 5 00:11:51.441683 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 5 00:11:51.441693 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 5 00:11:51.441704 kernel: Fallback order for Node 0: 0
Nov 5 00:11:51.441714 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Nov 5 00:11:51.441724 kernel: Policy zone: Normal
Nov 5 00:11:51.441755 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 5 00:11:51.441767 kernel: software IO TLB: area num 2.
Nov 5 00:11:51.441777 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 5 00:11:51.441787 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 5 00:11:51.441797 kernel: ftrace: allocated 157 pages with 5 groups
Nov 5 00:11:51.441807 kernel: Dynamic Preempt: voluntary
Nov 5 00:11:51.441817 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 5 00:11:51.441848 kernel: rcu: RCU event tracing is enabled.
Nov 5 00:11:51.441860 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 5 00:11:51.441871 kernel: Trampoline variant of Tasks RCU enabled.
Nov 5 00:11:51.441881 kernel: Rude variant of Tasks RCU enabled.
Nov 5 00:11:51.441891 kernel: Tracing variant of Tasks RCU enabled.
Nov 5 00:11:51.441901 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 5 00:11:51.441912 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 5 00:11:51.441940 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 00:11:51.441997 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 00:11:51.442026 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 00:11:51.442038 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 5 00:11:51.442047 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 5 00:11:51.442056 kernel: Console: colour VGA+ 80x25
Nov 5 00:11:51.442065 kernel: printk: legacy console [tty0] enabled
Nov 5 00:11:51.442074 kernel: printk: legacy console [ttyS0] enabled
Nov 5 00:11:51.442083 kernel: ACPI: Core revision 20240827
Nov 5 00:11:51.442112 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 5 00:11:51.442122 kernel: APIC: Switch to symmetric I/O mode setup
Nov 5 00:11:51.442131 kernel: x2apic enabled
Nov 5 00:11:51.442154 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 5 00:11:51.442190 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 5 00:11:51.442202 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 5 00:11:51.442211 kernel: kvm-guest: setup PV IPIs
Nov 5 00:11:51.442220 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 5 00:11:51.442229 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Nov 5 00:11:51.442238 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Nov 5 00:11:51.442247 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 5 00:11:51.442274 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 5 00:11:51.442285 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 5 00:11:51.442295 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 5 00:11:51.442304 kernel: Spectre V2 : Mitigation: Retpolines
Nov 5 00:11:51.442313 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 5 00:11:51.442322 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 5 00:11:51.442331 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 5 00:11:51.442359 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 5 00:11:51.442370 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 5 00:11:51.442381 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 5 00:11:51.442390 kernel: active return thunk: srso_alias_return_thunk
Nov 5 00:11:51.442399 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 5 00:11:51.442408 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Nov 5 00:11:51.442417 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 5 00:11:51.442453 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 5 00:11:51.442473 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 5 00:11:51.442487 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 5 00:11:51.442497 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 5 00:11:51.442506 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 5 00:11:51.442515 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Nov 5 00:11:51.442524 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Nov 5 00:11:51.442564 kernel: Freeing SMP alternatives memory: 32K
Nov 5 00:11:51.442579 kernel: pid_max: default: 32768 minimum: 301
Nov 5 00:11:51.442588 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 5 00:11:51.442597 kernel: landlock: Up and running.
Nov 5 00:11:51.442606 kernel: SELinux: Initializing.
Nov 5 00:11:51.442615 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 00:11:51.442624 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 00:11:51.442676 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Nov 5 00:11:51.442686 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 5 00:11:51.442695 kernel: ... version: 0
Nov 5 00:11:51.442704 kernel: ... bit width: 48
Nov 5 00:11:51.442713 kernel: ... generic registers: 6
Nov 5 00:11:51.442722 kernel: ... value mask: 0000ffffffffffff
Nov 5 00:11:51.442730 kernel: ... max period: 00007fffffffffff
Nov 5 00:11:51.442758 kernel: ... fixed-purpose events: 0
Nov 5 00:11:51.442770 kernel: ... event mask: 000000000000003f
Nov 5 00:11:51.442779 kernel: signal: max sigframe size: 3376
Nov 5 00:11:51.442788 kernel: rcu: Hierarchical SRCU implementation.
Nov 5 00:11:51.442797 kernel: rcu: Max phase no-delay instances is 400.
Nov 5 00:11:51.442806 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 5 00:11:51.442815 kernel: smp: Bringing up secondary CPUs ...
Nov 5 00:11:51.442842 kernel: smpboot: x86: Booting SMP configuration:
Nov 5 00:11:51.442853 kernel: .... node #0, CPUs: #1
Nov 5 00:11:51.442863 kernel: smp: Brought up 1 node, 2 CPUs
Nov 5 00:11:51.442872 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Nov 5 00:11:51.442881 kernel: Memory: 3984336K/4193772K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15936K init, 2108K bss, 204760K reserved, 0K cma-reserved)
Nov 5 00:11:51.442890 kernel: devtmpfs: initialized
Nov 5 00:11:51.442899 kernel: x86/mm: Memory block size: 128MB
Nov 5 00:11:51.442908 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 5 00:11:51.442936 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 5 00:11:51.442946 kernel: pinctrl core: initialized pinctrl subsystem
Nov 5 00:11:51.442955 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 5 00:11:51.442965 kernel: audit: initializing netlink subsys (disabled)
Nov 5 00:11:51.442974 kernel: audit: type=2000 audit(1762301507.471:1): state=initialized audit_enabled=0 res=1
Nov 5 00:11:51.442983 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 5 00:11:51.442991 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 5 00:11:51.443020 kernel: cpuidle: using governor menu
Nov 5 00:11:51.443029 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 5 00:11:51.443038 kernel: dca service started, version 1.12.1
Nov 5 00:11:51.443047 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 5 00:11:51.443056 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 5 00:11:51.443064 kernel: PCI: Using configuration type 1 for base access
Nov 5 00:11:51.443073 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 5 00:11:51.443102 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 5 00:11:51.443111 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 5 00:11:51.443120 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 5 00:11:51.443128 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 5 00:11:51.443137 kernel: ACPI: Added _OSI(Module Device)
Nov 5 00:11:51.443146 kernel: ACPI: Added _OSI(Processor Device)
Nov 5 00:11:51.443154 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 5 00:11:51.443182 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 5 00:11:51.443192 kernel: ACPI: Interpreter enabled
Nov 5 00:11:51.443201 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 5 00:11:51.443209 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 5 00:11:51.443218 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 5 00:11:51.443227 kernel: PCI: Using E820 reservations for host bridge windows
Nov 5 00:11:51.443236 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 5 00:11:51.443264 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 5 00:11:51.443738 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 5 00:11:51.443991 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 5 00:11:51.444298 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 5 00:11:51.444314 kernel: PCI host bridge to bus 0000:00
Nov 5 00:11:51.444676 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 5 00:11:51.445025 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 5 00:11:51.445278 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 5 00:11:51.445525 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 5 00:11:51.445833 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 5 00:11:51.446084 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Nov 5 00:11:51.447040 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 5 00:11:51.447869 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 5 00:11:51.448140 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 5 00:11:51.448559 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Nov 5 00:11:51.448998 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Nov 5 00:11:51.449446 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Nov 5 00:11:51.449726 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 5 00:11:51.450005 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Nov 5 00:11:51.450261 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Nov 5 00:11:51.456176 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Nov 5 00:11:51.456480 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 5 00:11:51.456854 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 5 00:11:51.457095 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Nov 5 00:11:51.457330 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Nov 5 00:11:51.458414 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 5 00:11:51.458677 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Nov 5 00:11:51.458918 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 5 00:11:51.459177 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 5 00:11:51.459423 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 5 00:11:51.459684 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Nov 5 00:11:51.459944 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Nov 5 00:11:51.460204 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 5 00:11:51.460531 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Nov 5 00:11:51.460547 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 5 00:11:51.460557 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 5 00:11:51.460566 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 5 00:11:51.460575 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 5 00:11:51.460583 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 5 00:11:51.460592 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 5 00:11:51.460623 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 5 00:11:51.460659 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 5 00:11:51.460671 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 5 00:11:51.460680 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 5 00:11:51.460689 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 5 00:11:51.460697 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 5 00:11:51.460706 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 5 00:11:51.460736 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 5 00:11:51.460746 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 5 00:11:51.460754 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 5 00:11:51.460763 kernel: iommu: Default domain type: Translated
Nov 5 00:11:51.460772 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 5 00:11:51.460781 kernel: PCI: Using ACPI for IRQ routing
Nov 5 00:11:51.460789 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 5 00:11:51.460817 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Nov 5 00:11:51.460828 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Nov 5 00:11:51.461059 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 5 00:11:51.461275 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 5 00:11:51.461492 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 5 00:11:51.461504 kernel: vgaarb: loaded
Nov 5 00:11:51.461513 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 5 00:11:51.461545 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 5 00:11:51.461555 kernel: clocksource: Switched to clocksource kvm-clock
Nov 5 00:11:51.461564 kernel: VFS: Disk quotas dquot_6.6.0
Nov 5 00:11:51.461574 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 5 00:11:51.461582 kernel: pnp: PnP ACPI init
Nov 5 00:11:51.461964 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 5 00:11:51.462000 kernel: pnp: PnP ACPI: found 5 devices
Nov 5 00:11:51.462013 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 5 00:11:51.462022 kernel: NET: Registered PF_INET protocol family
Nov 5 00:11:51.462031 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 5 00:11:51.462040 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 5 00:11:51.462050 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 5 00:11:51.462059 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 5 00:11:51.462086 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 5 00:11:51.462098 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 5 00:11:51.462107 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 00:11:51.462117 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 00:11:51.462126 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 5 00:11:51.462135 kernel: NET: Registered PF_XDP protocol family
Nov 5 00:11:51.462354 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 5 00:11:51.462594 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 5 00:11:51.462839 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 5 00:11:51.463046 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 5 00:11:51.463251 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 5 00:11:51.463449 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Nov 5 00:11:51.463461 kernel: PCI: CLS 0 bytes, default 64
Nov 5 00:11:51.463470 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 5 00:11:51.463505 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Nov 5 00:11:51.463516 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Nov 5 00:11:51.463525 kernel: Initialise system trusted keyrings
Nov 5 00:11:51.463534 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 5 00:11:51.463543 kernel: Key type asymmetric registered
Nov 5 00:11:51.463552 kernel: Asymmetric key parser 'x509' registered
Nov 5 00:11:51.463561 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 5 00:11:51.463590 kernel: io scheduler mq-deadline registered
Nov 5 00:11:51.463600 kernel: io scheduler kyber registered
Nov 5 00:11:51.463609 kernel: io scheduler bfq registered
Nov 5 00:11:51.463618 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 5 00:11:51.463654 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 5 00:11:51.463670 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 5 00:11:51.463679 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 5 00:11:51.463711 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 5 00:11:51.463723 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 5 00:11:51.463732 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 5 00:11:51.463741 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 5 00:11:51.463995 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 5 00:11:51.464013 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 5 00:11:51.464229 kernel: rtc_cmos 00:03: registered as rtc0
Nov 5 00:11:51.464477 kernel: rtc_cmos 00:03: setting system clock to 2025-11-05T00:11:48 UTC (1762301508)
Nov 5 00:11:51.464806 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 5 00:11:51.464823 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 5 00:11:51.464833 kernel: NET: Registered PF_INET6 protocol family
Nov 5 00:11:51.464843 kernel: Segment Routing with IPv6
Nov 5 00:11:51.464852 kernel: In-situ OAM (IOAM) with IPv6
Nov 5 00:11:51.464882 kernel: NET: Registered PF_PACKET protocol family
Nov 5 00:11:51.464894 kernel: Key type dns_resolver registered
Nov 5 00:11:51.464904 kernel: IPI shorthand broadcast: enabled
Nov 5 00:11:51.464913 kernel: sched_clock: Marking stable (3610005140, 383420490)->(4144823190, -151397560)
Nov 5 00:11:51.464923 kernel: registered taskstats version 1
Nov 5 00:11:51.464933 kernel: Loading compiled-in X.509 certificates
Nov 5 00:11:51.464942 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: ace064fb6689a15889f35c6439909c760a72ef44'
Nov 5 00:11:51.464951 kernel: Demotion targets for Node 0: null
Nov 5 00:11:51.464980 kernel: Key type .fscrypt registered
Nov 5 00:11:51.464990 kernel: Key type fscrypt-provisioning registered
Nov 5 00:11:51.465000 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 5 00:11:51.465009 kernel: ima: Allocated hash algorithm: sha1
Nov 5 00:11:51.465018 kernel: ima: No architecture policies found
Nov 5 00:11:51.465027 kernel: clk: Disabling unused clocks
Nov 5 00:11:51.465036 kernel: Freeing unused kernel image (initmem) memory: 15936K
Nov 5 00:11:51.465064 kernel: Write protecting the kernel read-only data: 40960k
Nov 5 00:11:51.465074 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 5 00:11:51.465083 kernel: Run /init as init process
Nov 5 00:11:51.465092 kernel: with arguments:
Nov 5 00:11:51.465101 kernel: /init
Nov 5 00:11:51.465147 kernel: with environment:
Nov 5 00:11:51.465156 kernel: HOME=/
Nov 5 00:11:51.465262 kernel: TERM=linux
Nov 5 00:11:51.465292 kernel: SCSI subsystem initialized
Nov 5 00:11:51.465302 kernel: libata version 3.00 loaded.
Nov 5 00:11:51.465536 kernel: ahci 0000:00:1f.2: version 3.0
Nov 5 00:11:51.465549 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 5 00:11:51.465826 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 5 00:11:51.466064 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 5 00:11:51.466320 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 5 00:11:51.466619 kernel: scsi host0: ahci
Nov 5 00:11:51.466944 kernel: scsi host1: ahci
Nov 5 00:11:51.467209 kernel: scsi host2: ahci
Nov 5 00:11:51.467496 kernel: scsi host3: ahci
Nov 5 00:11:51.467869 kernel: scsi host4: ahci
Nov 5 00:11:51.468145 kernel: scsi host5: ahci
Nov 5 00:11:51.468181 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 24 lpm-pol 1
Nov 5 00:11:51.468193 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 24 lpm-pol 1
Nov 5 00:11:51.468203 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 24 lpm-pol 1
Nov 5 00:11:51.468212 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 24 lpm-pol 1
Nov 5 00:11:51.468242 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 24 lpm-pol 1
Nov 5 00:11:51.468253 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 24 lpm-pol 1
Nov 5 00:11:51.468262 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 5 00:11:51.468290 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 5 00:11:51.468301 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 5 00:11:51.468311 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 5 00:11:51.468320 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 5 00:11:51.468349 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 5 00:11:51.468723 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Nov 5 00:11:51.468969 kernel: scsi host6: Virtio SCSI HBA
Nov 5 00:11:51.469245 kernel: scsi 6:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Nov 5 00:11:51.469532 kernel: sd 6:0:0:0: Power-on or device reset occurred
Nov 5 00:11:51.469844 kernel: sd 6:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Nov 5 00:11:51.470141 kernel: sd 6:0:0:0: [sda] Write Protect is off
Nov 5 00:11:51.470419 kernel: sd 6:0:0:0: [sda] Mode Sense: 63 00 00 08
Nov 5 00:11:51.470713 kernel: sd 6:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 5 00:11:51.470730 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 5 00:11:51.470739 kernel: GPT:25804799 != 167739391
Nov 5 00:11:51.470749 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 5 00:11:51.470781 kernel: GPT:25804799 != 167739391
Nov 5 00:11:51.470791 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 5 00:11:51.470801 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 5 00:11:51.471101 kernel: sd 6:0:0:0: [sda] Attached SCSI disk
Nov 5 00:11:51.471118 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 5 00:11:51.471128 kernel: device-mapper: uevent: version 1.0.3
Nov 5 00:11:51.471158 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 5 00:11:51.471170 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 5 00:11:51.471179 kernel: raid6: avx2x4 gen() 30732 MB/s
Nov 5 00:11:51.471209 kernel: raid6: avx2x2 gen() 21801 MB/s
Nov 5 00:11:51.471219 kernel: raid6: avx2x1 gen() 13349 MB/s
Nov 5 00:11:51.471249 kernel: raid6: using algorithm avx2x4 gen() 30732 MB/s
Nov 5 00:11:51.471259 kernel: raid6: .... xor() 3026 MB/s, rmw enabled
Nov 5 00:11:51.471268 kernel: raid6: using avx2x2 recovery algorithm
Nov 5 00:11:51.471278 kernel: xor: automatically using best checksumming function avx
Nov 5 00:11:51.471287 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 5 00:11:51.471297 kernel: BTRFS: device fsid f719dc90-1cf7-4f08-a80f-0dda441372cc devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (172)
Nov 5 00:11:51.471306 kernel: BTRFS info (device dm-0): first mount of filesystem f719dc90-1cf7-4f08-a80f-0dda441372cc
Nov 5 00:11:51.471338 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 5 00:11:51.471348 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 5 00:11:51.471358 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 5 00:11:51.471367 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 5 00:11:51.471377 kernel: loop: module loaded
Nov 5 00:11:51.471386 kernel: loop0: detected capacity change from 0 to 100120
Nov 5 00:11:51.471395 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 5 00:11:51.471426 systemd[1]: Successfully made /usr/ read-only.
Nov 5 00:11:51.471440 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 00:11:51.471450 systemd[1]: Detected virtualization kvm.
Nov 5 00:11:51.471460 systemd[1]: Detected architecture x86-64.
Nov 5 00:11:51.471469 systemd[1]: Running in initrd.
Nov 5 00:11:51.471478 systemd[1]: No hostname configured, using default hostname.
Nov 5 00:11:51.471509 systemd[1]: Hostname set to .
Nov 5 00:11:51.471520 systemd[1]: Initializing machine ID from random generator.
Nov 5 00:11:51.471529 systemd[1]: Queued start job for default target initrd.target. Nov 5 00:11:51.471539 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 00:11:51.471548 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 00:11:51.471558 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 00:11:51.471587 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 5 00:11:51.471599 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 00:11:51.471610 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 5 00:11:51.471620 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 5 00:11:51.471666 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 00:11:51.471682 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 00:11:51.471715 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 5 00:11:51.471725 systemd[1]: Reached target paths.target - Path Units. Nov 5 00:11:51.471735 systemd[1]: Reached target slices.target - Slice Units. Nov 5 00:11:51.471745 systemd[1]: Reached target swap.target - Swaps. Nov 5 00:11:51.471755 systemd[1]: Reached target timers.target - Timer Units. Nov 5 00:11:51.471765 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 00:11:51.471775 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 00:11:51.471807 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 5 00:11:51.471817 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Nov 5 00:11:51.471827 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 00:11:51.471837 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 00:11:51.471846 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 00:11:51.471856 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 00:11:51.471885 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 5 00:11:51.471897 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 5 00:11:51.471907 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 00:11:51.471917 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 5 00:11:51.471927 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 5 00:11:51.471937 systemd[1]: Starting systemd-fsck-usr.service... Nov 5 00:11:51.471947 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 00:11:51.471977 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 00:11:51.471988 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 00:11:51.471998 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 5 00:11:51.472008 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 00:11:51.472038 systemd[1]: Finished systemd-fsck-usr.service. Nov 5 00:11:51.472048 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 00:11:51.472122 systemd-journald[305]: Collecting audit messages is disabled. Nov 5 00:11:51.472167 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Nov 5 00:11:51.472177 kernel: Bridge firewalling registered Nov 5 00:11:51.472188 systemd-journald[305]: Journal started Nov 5 00:11:51.472208 systemd-journald[305]: Runtime Journal (/run/log/journal/813bd64bad274ff4a387bef0130a94a0) is 8M, max 78.2M, 70.2M free. Nov 5 00:11:51.475663 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 00:11:51.475984 systemd-modules-load[309]: Inserted module 'br_netfilter' Nov 5 00:11:51.478671 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 00:11:51.491022 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 00:11:51.578901 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 00:11:51.586482 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 5 00:11:51.590849 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 00:11:51.601508 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 00:11:51.605939 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 00:11:51.632043 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 00:11:51.636625 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 00:11:51.639443 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 00:11:51.643200 systemd-tmpfiles[329]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 5 00:11:51.644001 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 5 00:11:51.650726 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Nov 5 00:11:51.661749 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 00:11:51.680692 dracut-cmdline[344]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2 Nov 5 00:11:51.733968 systemd-resolved[345]: Positive Trust Anchors: Nov 5 00:11:51.733988 systemd-resolved[345]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 00:11:51.733994 systemd-resolved[345]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 00:11:51.734023 systemd-resolved[345]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 00:11:51.855870 systemd-resolved[345]: Defaulting to hostname 'linux'. Nov 5 00:11:51.859604 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 00:11:51.861883 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 00:11:51.938713 kernel: Loading iSCSI transport class v2.0-870. 
Nov 5 00:11:51.961698 kernel: iscsi: registered transport (tcp) Nov 5 00:11:52.001614 kernel: iscsi: registered transport (qla4xxx) Nov 5 00:11:52.001807 kernel: QLogic iSCSI HBA Driver Nov 5 00:11:52.049847 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 00:11:52.103793 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 00:11:52.108385 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 00:11:52.179396 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 5 00:11:52.182715 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 5 00:11:52.185985 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 5 00:11:52.251149 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 5 00:11:52.258327 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 00:11:52.303890 systemd-udevd[581]: Using default interface naming scheme 'v257'. Nov 5 00:11:52.320747 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 00:11:52.326792 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 5 00:11:52.377615 dracut-pre-trigger[648]: rd.md=0: removing MD RAID activation Nov 5 00:11:52.380058 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 00:11:52.385842 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 00:11:52.436100 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 00:11:52.441807 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 5 00:11:52.462956 systemd-networkd[695]: lo: Link UP Nov 5 00:11:52.462970 systemd-networkd[695]: lo: Gained carrier Nov 5 00:11:52.464940 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 00:11:52.466149 systemd[1]: Reached target network.target - Network. Nov 5 00:11:52.649897 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 00:11:52.657441 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 5 00:11:52.992722 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 5 00:11:53.091938 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 00:11:53.201213 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 00:11:53.250347 kernel: cryptd: max_cpu_qlen set to 1000 Nov 5 00:11:53.248318 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 00:11:53.252077 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 00:11:53.307742 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Nov 5 00:11:53.315413 systemd-networkd[695]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 00:11:53.315428 systemd-networkd[695]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 5 00:11:53.318073 systemd-networkd[695]: eth0: Link UP Nov 5 00:11:53.319500 systemd-networkd[695]: eth0: Gained carrier Nov 5 00:11:53.319512 systemd-networkd[695]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 00:11:53.364673 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
Nov 5 00:11:53.477650 kernel: AES CTR mode by8 optimization enabled Nov 5 00:11:53.377610 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Nov 5 00:11:53.480879 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 5 00:11:53.482472 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 00:11:53.497083 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 5 00:11:53.499424 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 00:11:53.500958 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 00:11:53.503111 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 00:11:53.507789 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 5 00:11:53.511864 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 5 00:11:53.527219 disk-uuid[834]: Primary Header is updated. Nov 5 00:11:53.527219 disk-uuid[834]: Secondary Entries is updated. Nov 5 00:11:53.527219 disk-uuid[834]: Secondary Header is updated. Nov 5 00:11:53.538752 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 5 00:11:54.370769 systemd-networkd[695]: eth0: DHCPv4 address 172.232.14.37/24, gateway 172.232.14.1 acquired from 23.192.120.217 Nov 5 00:11:54.628363 disk-uuid[837]: Warning: The kernel is still using the old partition table. Nov 5 00:11:54.628363 disk-uuid[837]: The new table will be used at the next reboot or after you Nov 5 00:11:54.628363 disk-uuid[837]: run partprobe(8) or kpartx(8) Nov 5 00:11:54.628363 disk-uuid[837]: The operation has completed successfully. Nov 5 00:11:54.641825 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 5 00:11:54.642026 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Nov 5 00:11:54.645827 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 5 00:11:54.695813 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (855) Nov 5 00:11:54.700983 kernel: BTRFS info (device sda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 5 00:11:54.701025 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 00:11:54.709939 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 5 00:11:54.709977 kernel: BTRFS info (device sda6): turning on async discard Nov 5 00:11:54.714311 kernel: BTRFS info (device sda6): enabling free space tree Nov 5 00:11:54.727667 kernel: BTRFS info (device sda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 5 00:11:54.728333 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 5 00:11:54.732073 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 5 00:11:55.108617 systemd-networkd[695]: eth0: Gained IPv6LL Nov 5 00:11:55.522241 ignition[874]: Ignition 2.22.0 Nov 5 00:11:55.522267 ignition[874]: Stage: fetch-offline Nov 5 00:11:55.522359 ignition[874]: no configs at "/usr/lib/ignition/base.d" Nov 5 00:11:55.522390 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 5 00:11:55.525577 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 00:11:55.522541 ignition[874]: parsed url from cmdline: "" Nov 5 00:11:55.522547 ignition[874]: no config URL provided Nov 5 00:11:55.522555 ignition[874]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 00:11:55.522571 ignition[874]: no config at "/usr/lib/ignition/user.ign" Nov 5 00:11:55.530982 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 5 00:11:55.522578 ignition[874]: failed to fetch config: resource requires networking Nov 5 00:11:55.523140 ignition[874]: Ignition finished successfully Nov 5 00:11:55.724271 ignition[881]: Ignition 2.22.0 Nov 5 00:11:55.724296 ignition[881]: Stage: fetch Nov 5 00:11:55.724435 ignition[881]: no configs at "/usr/lib/ignition/base.d" Nov 5 00:11:55.724446 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 5 00:11:55.724530 ignition[881]: parsed url from cmdline: "" Nov 5 00:11:55.724537 ignition[881]: no config URL provided Nov 5 00:11:55.724546 ignition[881]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 00:11:55.724559 ignition[881]: no config at "/usr/lib/ignition/user.ign" Nov 5 00:11:55.724598 ignition[881]: PUT http://169.254.169.254/v1/token: attempt #1 Nov 5 00:11:55.825498 ignition[881]: PUT result: OK Nov 5 00:11:55.825606 ignition[881]: GET http://169.254.169.254/v1/user-data: attempt #1 Nov 5 00:11:55.932506 ignition[881]: GET result: OK Nov 5 00:11:55.932764 ignition[881]: parsing config with SHA512: fc3219e591a749e2608f014b5e5e2ef4f9ba7f7bc9d48b2e9e6be13aed1f518ce60f2d382169cce5a3d2213d2ff155adadfe16e796bc0c7aeb0732d6146ae2e1 Nov 5 00:11:55.940818 unknown[881]: fetched base config from "system" Nov 5 00:11:55.942065 unknown[881]: fetched base config from "system" Nov 5 00:11:55.942314 ignition[881]: fetch: fetch complete Nov 5 00:11:55.942077 unknown[881]: fetched user config from "akamai" Nov 5 00:11:55.942321 ignition[881]: fetch: fetch passed Nov 5 00:11:55.942368 ignition[881]: Ignition finished successfully Nov 5 00:11:55.947128 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 5 00:11:55.951132 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Nov 5 00:11:56.032112 ignition[888]: Ignition 2.22.0 Nov 5 00:11:56.032136 ignition[888]: Stage: kargs Nov 5 00:11:56.032877 ignition[888]: no configs at "/usr/lib/ignition/base.d" Nov 5 00:11:56.032893 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 5 00:11:56.034434 ignition[888]: kargs: kargs passed Nov 5 00:11:56.034500 ignition[888]: Ignition finished successfully Nov 5 00:11:56.039992 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 5 00:11:56.043554 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 5 00:11:56.082035 ignition[895]: Ignition 2.22.0 Nov 5 00:11:56.082108 ignition[895]: Stage: disks Nov 5 00:11:56.082302 ignition[895]: no configs at "/usr/lib/ignition/base.d" Nov 5 00:11:56.082319 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 5 00:11:56.083658 ignition[895]: disks: disks passed Nov 5 00:11:56.087450 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 5 00:11:56.083721 ignition[895]: Ignition finished successfully Nov 5 00:11:56.120058 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 5 00:11:56.122978 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 5 00:11:56.124261 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 00:11:56.126698 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 00:11:56.128772 systemd[1]: Reached target basic.target - Basic System. Nov 5 00:11:56.136936 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 5 00:11:56.216977 systemd-fsck[904]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Nov 5 00:11:56.220750 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 5 00:11:56.226364 systemd[1]: Mounting sysroot.mount - /sysroot... 
Nov 5 00:11:56.425773 kernel: EXT4-fs (sda9): mounted filesystem cfb29ed0-6faf-41a8-b421-3abc514e4975 r/w with ordered data mode. Quota mode: none. Nov 5 00:11:56.427930 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 5 00:11:56.430132 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 5 00:11:56.434570 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 00:11:56.437751 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 5 00:11:56.440997 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 5 00:11:56.442698 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 5 00:11:56.442740 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 00:11:56.456517 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 5 00:11:56.461128 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 5 00:11:56.469465 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (912) Nov 5 00:11:56.469503 kernel: BTRFS info (device sda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 5 00:11:56.469560 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 00:11:56.474905 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 5 00:11:56.474953 kernel: BTRFS info (device sda6): turning on async discard Nov 5 00:11:56.474979 kernel: BTRFS info (device sda6): enabling free space tree Nov 5 00:11:56.485981 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 5 00:11:56.599017 initrd-setup-root[936]: cut: /sysroot/etc/passwd: No such file or directory Nov 5 00:11:56.606570 initrd-setup-root[943]: cut: /sysroot/etc/group: No such file or directory Nov 5 00:11:56.612835 initrd-setup-root[950]: cut: /sysroot/etc/shadow: No such file or directory Nov 5 00:11:56.619795 initrd-setup-root[957]: cut: /sysroot/etc/gshadow: No such file or directory Nov 5 00:11:56.771456 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 5 00:11:56.774603 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 5 00:11:56.777825 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 5 00:11:56.798162 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 5 00:11:56.803880 kernel: BTRFS info (device sda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 5 00:11:56.841715 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 5 00:11:56.890187 ignition[1025]: INFO : Ignition 2.22.0 Nov 5 00:11:56.893107 ignition[1025]: INFO : Stage: mount Nov 5 00:11:56.893107 ignition[1025]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 00:11:56.893107 ignition[1025]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 5 00:11:56.896293 ignition[1025]: INFO : mount: mount passed Nov 5 00:11:56.896293 ignition[1025]: INFO : Ignition finished successfully Nov 5 00:11:56.896971 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 5 00:11:56.900744 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 5 00:11:57.431759 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 5 00:11:57.475675 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1037) Nov 5 00:11:57.480940 kernel: BTRFS info (device sda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 5 00:11:57.480981 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 00:11:57.492813 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 5 00:11:57.492956 kernel: BTRFS info (device sda6): turning on async discard Nov 5 00:11:57.495183 kernel: BTRFS info (device sda6): enabling free space tree Nov 5 00:11:57.499995 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 5 00:11:57.693259 ignition[1054]: INFO : Ignition 2.22.0 Nov 5 00:11:57.693259 ignition[1054]: INFO : Stage: files Nov 5 00:11:57.693259 ignition[1054]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 00:11:57.693259 ignition[1054]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 5 00:11:57.703987 ignition[1054]: DEBUG : files: compiled without relabeling support, skipping Nov 5 00:11:57.703987 ignition[1054]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 5 00:11:57.703987 ignition[1054]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 5 00:11:57.707915 ignition[1054]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 5 00:11:57.709519 ignition[1054]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 5 00:11:57.711092 ignition[1054]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 5 00:11:57.711060 unknown[1054]: wrote ssh authorized keys file for user: core Nov 5 00:11:57.713847 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 5 00:11:57.715800 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 5 00:11:57.956729 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 5 00:11:58.236409 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 5 00:11:58.236409 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 5 00:11:58.241110 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 5 00:11:58.241110 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 5 00:11:58.241110 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 5 00:11:58.241110 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 00:11:58.241110 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 00:11:58.241110 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 00:11:58.241110 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 00:11:58.251961 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 00:11:58.251961 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 00:11:58.251961 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 5 00:11:58.251961 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 5 00:11:58.251961 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 5 00:11:58.251961 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 5 00:11:58.692769 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 5 00:12:00.510470 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 5 00:12:00.510470 ignition[1054]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 5 00:12:00.515203 ignition[1054]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 00:12:00.515203 ignition[1054]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 00:12:00.515203 ignition[1054]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 5 00:12:00.515203 ignition[1054]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 5 00:12:00.532655 ignition[1054]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 5 00:12:00.532655 ignition[1054]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at 
"/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 5 00:12:00.532655 ignition[1054]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 5 00:12:00.532655 ignition[1054]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Nov 5 00:12:00.532655 ignition[1054]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Nov 5 00:12:00.532655 ignition[1054]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 5 00:12:00.532655 ignition[1054]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 5 00:12:00.532655 ignition[1054]: INFO : files: files passed Nov 5 00:12:00.532655 ignition[1054]: INFO : Ignition finished successfully Nov 5 00:12:00.527035 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 5 00:12:00.536122 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 5 00:12:00.545455 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 5 00:12:00.563836 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 5 00:12:00.564081 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 5 00:12:00.608676 initrd-setup-root-after-ignition[1086]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 00:12:00.608676 initrd-setup-root-after-ignition[1086]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 5 00:12:00.613702 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 00:12:00.615756 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 00:12:00.617483 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Nov 5 00:12:00.620378 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 5 00:12:00.687044 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 5 00:12:00.687260 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 5 00:12:00.690072 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 5 00:12:00.691500 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 5 00:12:00.694345 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 5 00:12:00.696815 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 5 00:12:00.730521 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 00:12:00.734000 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 5 00:12:00.770207 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 00:12:00.770372 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 5 00:12:00.771622 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 00:12:00.774221 systemd[1]: Stopped target timers.target - Timer Units.
Nov 5 00:12:00.776215 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 5 00:12:00.776466 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 00:12:00.779252 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 5 00:12:00.780920 systemd[1]: Stopped target basic.target - Basic System.
Nov 5 00:12:00.782784 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 5 00:12:00.785132 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 00:12:00.787436 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 5 00:12:00.789787 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 00:12:00.791775 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 5 00:12:00.793815 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 00:12:00.796026 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 5 00:12:00.798099 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 5 00:12:00.800081 systemd[1]: Stopped target swap.target - Swaps.
Nov 5 00:12:00.802469 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 5 00:12:00.802720 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 00:12:00.805369 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 5 00:12:00.806900 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 00:12:00.808749 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 5 00:12:00.808993 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 00:12:00.810790 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 5 00:12:00.811028 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 5 00:12:00.813564 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 5 00:12:00.813793 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 00:12:00.815175 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 5 00:12:00.815399 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 5 00:12:00.819843 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 5 00:12:00.823911 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 5 00:12:00.825982 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 5 00:12:00.826143 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 00:12:00.829129 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 5 00:12:00.829252 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 00:12:00.834815 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 5 00:12:00.835047 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 00:12:00.853573 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 5 00:12:00.853767 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 5 00:12:00.899460 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 5 00:12:00.906027 ignition[1110]: INFO : Ignition 2.22.0
Nov 5 00:12:00.906027 ignition[1110]: INFO : Stage: umount
Nov 5 00:12:00.909283 ignition[1110]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 00:12:00.909283 ignition[1110]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 5 00:12:00.909283 ignition[1110]: INFO : umount: umount passed
Nov 5 00:12:00.909283 ignition[1110]: INFO : Ignition finished successfully
Nov 5 00:12:00.911566 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 5 00:12:00.911797 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 5 00:12:00.918740 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 5 00:12:00.919011 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 5 00:12:00.921948 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 5 00:12:00.922066 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 5 00:12:00.926937 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 5 00:12:00.927094 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 5 00:12:00.932706 systemd[1]: Stopped target network.target - Network.
Nov 5 00:12:00.941281 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 5 00:12:00.941457 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 00:12:00.945054 systemd[1]: Stopped target paths.target - Path Units.
Nov 5 00:12:00.947666 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 5 00:12:00.952689 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 00:12:00.953733 systemd[1]: Stopped target slices.target - Slice Units.
Nov 5 00:12:00.955915 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 5 00:12:00.957792 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 5 00:12:00.957858 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 00:12:00.959520 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 5 00:12:00.959581 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 00:12:00.961246 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 5 00:12:00.961341 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 5 00:12:00.963061 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 5 00:12:00.963138 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 5 00:12:00.965096 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 5 00:12:00.967114 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 5 00:12:00.969777 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 5 00:12:00.969920 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 5 00:12:00.972094 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 5 00:12:00.972190 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 5 00:12:00.978518 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 5 00:12:00.978789 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 5 00:12:00.986584 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 5 00:12:00.987977 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 5 00:12:00.988034 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 00:12:00.991279 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 5 00:12:00.994110 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 5 00:12:00.994195 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 00:12:01.001433 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 00:12:01.004195 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 5 00:12:01.008723 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 5 00:12:01.021019 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 5 00:12:01.021381 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 5 00:12:01.022885 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 5 00:12:01.022970 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 5 00:12:01.035034 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 5 00:12:01.036525 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 00:12:01.040173 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 5 00:12:01.040479 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 5 00:12:01.043798 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 5 00:12:01.043852 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 00:12:01.045102 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 5 00:12:01.045210 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 00:12:01.047233 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 5 00:12:01.047315 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 5 00:12:01.049108 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 5 00:12:01.049175 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 00:12:01.053839 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 5 00:12:01.055472 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 5 00:12:01.055543 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 00:12:01.058061 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 5 00:12:01.058123 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 00:12:01.061136 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 5 00:12:01.061210 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 00:12:01.063778 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 5 00:12:01.063879 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 00:12:01.065218 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 00:12:01.065306 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 00:12:01.068779 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 5 00:12:01.068911 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 5 00:12:01.082267 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 5 00:12:01.082420 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 5 00:12:01.085017 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 5 00:12:01.087223 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 5 00:12:01.114144 systemd[1]: Switching root.
Nov 5 00:12:01.160918 systemd-journald[305]: Journal stopped
Nov 5 00:12:02.951998 systemd-journald[305]: Received SIGTERM from PID 1 (systemd).
Nov 5 00:12:02.952066 kernel: SELinux: policy capability network_peer_controls=1
Nov 5 00:12:02.952098 kernel: SELinux: policy capability open_perms=1
Nov 5 00:12:02.952126 kernel: SELinux: policy capability extended_socket_class=1
Nov 5 00:12:02.952160 kernel: SELinux: policy capability always_check_network=0
Nov 5 00:12:02.952190 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 5 00:12:02.952212 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 5 00:12:02.952297 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 5 00:12:02.952335 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 5 00:12:02.952355 kernel: SELinux: policy capability userspace_initial_context=0
Nov 5 00:12:02.952416 kernel: audit: type=1403 audit(1762301521.336:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 5 00:12:02.952443 systemd[1]: Successfully loaded SELinux policy in 105.798ms.
Nov 5 00:12:02.952461 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.356ms.
Nov 5 00:12:02.952481 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 00:12:02.952506 systemd[1]: Detected virtualization kvm.
Nov 5 00:12:02.952524 systemd[1]: Detected architecture x86-64.
Nov 5 00:12:02.952541 systemd[1]: Detected first boot.
Nov 5 00:12:02.952560 systemd[1]: Initializing machine ID from random generator.
Nov 5 00:12:02.952577 kernel: Guest personality initialized and is inactive
Nov 5 00:12:02.952594 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 5 00:12:02.952615 kernel: Initialized host personality
Nov 5 00:12:02.952670 kernel: NET: Registered PF_VSOCK protocol family
Nov 5 00:12:02.952698 zram_generator::config[1155]: No configuration found.
Nov 5 00:12:02.952718 systemd[1]: Populated /etc with preset unit settings.
Nov 5 00:12:02.952736 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 5 00:12:02.952760 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 5 00:12:02.952779 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 5 00:12:02.952799 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 5 00:12:02.952816 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 5 00:12:02.952834 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 5 00:12:02.952854 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 5 00:12:02.952878 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 5 00:12:02.952896 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 5 00:12:02.952915 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 5 00:12:02.952933 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 5 00:12:02.952952 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 00:12:02.952970 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 00:12:02.952988 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 5 00:12:02.953010 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 5 00:12:02.953030 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 5 00:12:02.953054 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 00:12:02.953073 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 5 00:12:02.953093 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 00:12:02.953112 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 00:12:02.953134 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 5 00:12:02.953152 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 5 00:12:02.953170 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 5 00:12:02.953189 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 5 00:12:02.953207 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 00:12:02.953225 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 00:12:02.953248 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 00:12:02.953266 systemd[1]: Reached target swap.target - Swaps.
Nov 5 00:12:02.953285 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 5 00:12:02.953302 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 5 00:12:02.953321 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 5 00:12:02.953339 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 00:12:02.953363 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 00:12:02.953381 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 00:12:02.953399 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 5 00:12:02.953418 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 5 00:12:02.953437 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 5 00:12:02.953459 systemd[1]: Mounting media.mount - External Media Directory...
Nov 5 00:12:02.953478 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 00:12:02.953497 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 5 00:12:02.953515 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 5 00:12:02.953533 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 5 00:12:02.953552 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 5 00:12:02.953574 systemd[1]: Reached target machines.target - Containers.
Nov 5 00:12:02.953593 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 5 00:12:02.953612 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 00:12:02.953658 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 00:12:02.953689 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 5 00:12:02.953710 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 00:12:02.953750 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 00:12:02.953779 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 00:12:02.953798 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 5 00:12:02.953817 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 00:12:02.953836 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 5 00:12:02.953855 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 5 00:12:02.953873 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 5 00:12:02.953892 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 5 00:12:02.953915 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 5 00:12:02.953935 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 00:12:02.953954 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 00:12:02.953972 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 00:12:02.954010 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 00:12:02.954051 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 5 00:12:02.954148 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 5 00:12:02.954172 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 00:12:02.954191 kernel: fuse: init (API version 7.41)
Nov 5 00:12:02.954211 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 00:12:02.954231 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 5 00:12:02.954249 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 5 00:12:02.954267 systemd[1]: Mounted media.mount - External Media Directory.
Nov 5 00:12:02.954290 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 5 00:12:02.954308 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 5 00:12:02.954326 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 5 00:12:02.954344 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 00:12:02.954362 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 5 00:12:02.954380 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 5 00:12:02.954413 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 00:12:02.954445 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 00:12:02.954465 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 00:12:02.954485 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 00:12:02.954503 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 5 00:12:02.954559 systemd-journald[1232]: Collecting audit messages is disabled.
Nov 5 00:12:02.954606 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 5 00:12:02.954626 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 00:12:02.954678 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 00:12:02.954756 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 5 00:12:02.954800 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 00:12:02.954830 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 00:12:02.954851 systemd-journald[1232]: Journal started
Nov 5 00:12:02.954915 systemd-journald[1232]: Runtime Journal (/run/log/journal/5e6cd1ddf39f480cb19d9bda83b5aef0) is 8M, max 78.2M, 70.2M free.
Nov 5 00:12:02.324170 systemd[1]: Queued start job for default target multi-user.target.
Nov 5 00:12:02.351886 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 5 00:12:02.352737 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 5 00:12:02.964766 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 00:12:02.971389 kernel: ACPI: bus type drm_connector registered
Nov 5 00:12:02.969728 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 5 00:12:02.974030 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 00:12:02.974796 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 00:12:02.977051 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 5 00:12:02.996690 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 00:12:02.998521 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 5 00:12:02.999796 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 5 00:12:02.999946 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 00:12:03.002461 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 5 00:12:03.003905 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 00:12:03.007844 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 5 00:12:03.011826 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 5 00:12:03.013066 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 00:12:03.015826 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 5 00:12:03.017019 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 00:12:03.021037 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 00:12:03.047932 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 5 00:12:03.055888 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 5 00:12:03.124426 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 5 00:12:03.125958 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 5 00:12:03.129989 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 5 00:12:03.135682 kernel: loop1: detected capacity change from 0 to 128048
Nov 5 00:12:03.137709 systemd-journald[1232]: Time spent on flushing to /var/log/journal/5e6cd1ddf39f480cb19d9bda83b5aef0 is 127.778ms for 985 entries.
Nov 5 00:12:03.137709 systemd-journald[1232]: System Journal (/var/log/journal/5e6cd1ddf39f480cb19d9bda83b5aef0) is 8M, max 588.1M, 580.1M free.
Nov 5 00:12:03.475890 systemd-journald[1232]: Received client request to flush runtime journal.
Nov 5 00:12:03.369676 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 00:12:03.378849 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 5 00:12:03.382769 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 5 00:12:03.435121 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 5 00:12:03.471939 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 5 00:12:03.482164 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 5 00:12:03.522886 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 00:12:03.526126 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 5 00:12:03.530241 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 5 00:12:03.543748 kernel: loop2: detected capacity change from 0 to 110984
Nov 5 00:12:03.566203 systemd-tmpfiles[1279]: ACLs are not supported, ignoring.
Nov 5 00:12:03.566844 systemd-tmpfiles[1279]: ACLs are not supported, ignoring.
Nov 5 00:12:03.580116 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 00:12:03.586695 kernel: loop3: detected capacity change from 0 to 229808
Nov 5 00:12:03.587371 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 5 00:12:03.632855 kernel: loop4: detected capacity change from 0 to 8
Nov 5 00:12:03.659859 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 5 00:12:03.668210 kernel: loop5: detected capacity change from 0 to 128048
Nov 5 00:12:03.668486 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 00:12:03.675899 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 00:12:03.692574 kernel: loop6: detected capacity change from 0 to 110984
Nov 5 00:12:03.703214 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 5 00:12:03.784395 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Nov 5 00:12:03.788677 kernel: loop7: detected capacity change from 0 to 229808
Nov 5 00:12:03.787587 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Nov 5 00:12:03.798564 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 00:12:03.863686 kernel: loop1: detected capacity change from 0 to 8
Nov 5 00:12:03.894591 (sd-merge)[1303]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-akamai.raw'.
Nov 5 00:12:03.923488 (sd-merge)[1303]: Merged extensions into '/usr'.
Nov 5 00:12:03.929494 systemd[1]: Reload requested from client PID 1278 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 5 00:12:03.929616 systemd[1]: Reloading...
Nov 5 00:12:04.383691 zram_generator::config[1340]: No configuration found.
Nov 5 00:12:04.507131 systemd-resolved[1304]: Positive Trust Anchors:
Nov 5 00:12:04.507771 systemd-resolved[1304]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 00:12:04.507844 systemd-resolved[1304]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 00:12:04.507925 systemd-resolved[1304]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 00:12:04.534423 systemd-resolved[1304]: Defaulting to hostname 'linux'.
Nov 5 00:12:04.771184 systemd[1]: Reloading finished in 840 ms.
Nov 5 00:12:04.806731 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 5 00:12:04.808199 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 00:12:04.809561 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 5 00:12:04.811016 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 5 00:12:04.831797 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 00:12:04.860353 systemd[1]: Starting ensure-sysext.service...
Nov 5 00:12:04.864027 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 00:12:04.872144 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 00:12:04.910145 systemd[1]: Reload requested from client PID 1384 ('systemctl') (unit ensure-sysext.service)...
Nov 5 00:12:04.910230 systemd[1]: Reloading...
Nov 5 00:12:04.952275 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 5 00:12:04.952357 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 5 00:12:04.954864 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 5 00:12:04.955308 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 5 00:12:04.957208 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 5 00:12:04.957518 systemd-tmpfiles[1385]: ACLs are not supported, ignoring.
Nov 5 00:12:04.957611 systemd-tmpfiles[1385]: ACLs are not supported, ignoring.
Nov 5 00:12:04.978017 systemd-tmpfiles[1385]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 00:12:04.979884 systemd-tmpfiles[1385]: Skipping /boot
Nov 5 00:12:04.982308 systemd-udevd[1386]: Using default interface naming scheme 'v257'.
Nov 5 00:12:05.035134 systemd-tmpfiles[1385]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 00:12:05.035152 systemd-tmpfiles[1385]: Skipping /boot
Nov 5 00:12:05.105672 zram_generator::config[1429]: No configuration found.
Nov 5 00:12:05.416841 kernel: mousedev: PS/2 mouse device common for all mice
Nov 5 00:12:05.446469 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 5 00:12:05.447440 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 5 00:12:05.578673 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Nov 5 00:12:05.579495 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 5 00:12:05.582020 systemd[1]: Reloading finished in 670 ms.
Nov 5 00:12:05.600467 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 00:12:05.605694 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 00:12:05.647811 kernel: ACPI: button: Power Button [PWRF]
Nov 5 00:12:05.678051 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 00:12:05.685077 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 00:12:05.689790 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 5 00:12:05.691924 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 00:12:05.747251 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 5 00:12:05.770134 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 00:12:05.779735 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 00:12:05.834750 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 00:12:05.837304 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 00:12:05.880000 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 00:12:05.887738 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 5 00:12:05.982073 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 00:12:06.006548 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 5 00:12:06.007806 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 00:12:06.013357 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 00:12:06.013664 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 00:12:06.016458 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 00:12:06.018974 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 00:12:06.081228 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 00:12:06.093108 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 00:12:06.094179 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 00:12:06.129506 kernel: EDAC MC: Ver: 3.0.0
Nov 5 00:12:06.120476 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 00:12:06.121110 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 00:12:06.125879 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 00:12:06.130841 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 00:12:06.137820 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 00:12:06.142063 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 00:12:06.143896 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 00:12:06.144722 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 00:12:06.382208 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 00:12:06.384271 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 00:12:06.404979 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 5 00:12:06.412373 systemd[1]: Finished ensure-sysext.service.
Nov 5 00:12:06.424043 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 5 00:12:06.426099 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 5 00:12:06.498983 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 00:12:06.499324 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 00:12:06.506012 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 00:12:06.506340 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 00:12:06.521453 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 5 00:12:06.524215 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 5 00:12:06.537049 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 5 00:12:06.540160 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 5 00:12:06.585617 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 00:12:06.586235 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 00:12:06.599385 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 00:12:06.607207 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 00:12:06.607667 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 00:12:06.609946 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 00:12:06.610220 augenrules[1556]: No rules
Nov 5 00:12:06.612080 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 00:12:06.616948 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 00:12:06.690081 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 5 00:12:06.858194 systemd-networkd[1524]: lo: Link UP
Nov 5 00:12:06.858217 systemd-networkd[1524]: lo: Gained carrier
Nov 5 00:12:06.881231 systemd-networkd[1524]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 00:12:06.881767 systemd-networkd[1524]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 00:12:06.884099 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 5 00:12:06.885150 systemd-networkd[1524]: eth0: Link UP
Nov 5 00:12:06.885496 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 00:12:06.886409 systemd-networkd[1524]: eth0: Gained carrier
Nov 5 00:12:06.886489 systemd-networkd[1524]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 00:12:06.886662 systemd[1]: Reached target network.target - Network.
Nov 5 00:12:06.888142 systemd[1]: Reached target time-set.target - System Time Set.
Nov 5 00:12:06.909263 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 5 00:12:06.951027 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 5 00:12:07.052170 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 00:12:07.098292 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 5 00:12:07.816375 ldconfig[1505]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 5 00:12:07.826685 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 5 00:12:07.831577 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 5 00:12:07.886502 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 5 00:12:07.887870 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 00:12:07.889178 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 5 00:12:07.890296 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 5 00:12:07.891365 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 5 00:12:07.893077 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 5 00:12:07.894198 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 5 00:12:07.895229 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 5 00:12:07.896407 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 5 00:12:07.896457 systemd[1]: Reached target paths.target - Path Units.
Nov 5 00:12:07.897461 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 00:12:07.901195 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 5 00:12:07.905231 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 5 00:12:07.910456 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 5 00:12:07.911808 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 5 00:12:07.912786 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 5 00:12:07.924712 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 5 00:12:07.926980 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 5 00:12:07.928921 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 5 00:12:07.931058 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 00:12:07.932204 systemd[1]: Reached target basic.target - Basic System.
Nov 5 00:12:07.933163 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 5 00:12:07.933230 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 5 00:12:07.935005 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 5 00:12:07.938283 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 5 00:12:07.945592 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 5 00:12:07.952550 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 5 00:12:07.957867 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 5 00:12:07.963115 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 5 00:12:07.991508 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 5 00:12:07.996331 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 5 00:12:08.004125 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 5 00:12:08.009545 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 5 00:12:08.074891 jq[1583]: false
Nov 5 00:12:08.228012 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 5 00:12:08.239967 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 5 00:12:08.279426 google_oslogin_nss_cache[1585]: oslogin_cache_refresh[1585]: Refreshing passwd entry cache
Nov 5 00:12:08.262686 oslogin_cache_refresh[1585]: Refreshing passwd entry cache
Nov 5 00:12:08.281523 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 5 00:12:08.283289 oslogin_cache_refresh[1585]: Failure getting users, quitting
Nov 5 00:12:08.289919 google_oslogin_nss_cache[1585]: oslogin_cache_refresh[1585]: Failure getting users, quitting
Nov 5 00:12:08.289919 google_oslogin_nss_cache[1585]: oslogin_cache_refresh[1585]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 5 00:12:08.289919 google_oslogin_nss_cache[1585]: oslogin_cache_refresh[1585]: Refreshing group entry cache
Nov 5 00:12:08.289919 google_oslogin_nss_cache[1585]: oslogin_cache_refresh[1585]: Failure getting groups, quitting
Nov 5 00:12:08.289919 google_oslogin_nss_cache[1585]: oslogin_cache_refresh[1585]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 5 00:12:08.284905 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 5 00:12:08.283320 oslogin_cache_refresh[1585]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 5 00:12:08.286118 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 5 00:12:08.283435 oslogin_cache_refresh[1585]: Refreshing group entry cache
Nov 5 00:12:08.284336 oslogin_cache_refresh[1585]: Failure getting groups, quitting
Nov 5 00:12:08.284350 oslogin_cache_refresh[1585]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 5 00:12:08.293463 systemd[1]: Starting update-engine.service - Update Engine...
Nov 5 00:12:08.299524 extend-filesystems[1584]: Found /dev/sda6
Nov 5 00:12:08.325616 extend-filesystems[1584]: Found /dev/sda9
Nov 5 00:12:08.325616 extend-filesystems[1584]: Checking size of /dev/sda9
Nov 5 00:12:08.330168 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 5 00:12:08.370489 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 5 00:12:08.372473 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 5 00:12:08.373396 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 5 00:12:08.374029 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 5 00:12:08.374429 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 5 00:12:08.405849 systemd[1]: motdgen.service: Deactivated successfully.
Nov 5 00:12:08.406400 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 5 00:12:08.411591 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 5 00:12:08.411966 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 5 00:12:08.426617 jq[1599]: true
Nov 5 00:12:08.465016 systemd-networkd[1524]: eth0: Gained IPv6LL
Nov 5 00:12:08.474278 systemd-timesyncd[1541]: Network configuration changed, trying to establish connection.
Nov 5 00:12:08.478303 (ntainerd)[1625]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 5 00:12:08.536551 extend-filesystems[1584]: Resized partition /dev/sda9
Nov 5 00:12:08.538098 coreos-metadata[1580]: Nov 05 00:12:08.537 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Nov 5 00:12:08.538856 tar[1611]: linux-amd64/LICENSE
Nov 5 00:12:08.540181 tar[1611]: linux-amd64/helm
Nov 5 00:12:08.576876 extend-filesystems[1631]: resize2fs 1.47.3 (8-Jul-2025)
Nov 5 00:12:08.591888 jq[1621]: true
Nov 5 00:12:08.769258 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19377147 blocks
Nov 5 00:12:08.808663 update_engine[1598]: I20251105 00:12:08.791249 1598 main.cc:92] Flatcar Update Engine starting
Nov 5 00:12:08.899326 dbus-daemon[1581]: [system] SELinux support is enabled
Nov 5 00:12:08.900622 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 5 00:12:08.911481 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 5 00:12:08.911573 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 5 00:12:08.916803 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 5 00:12:08.916849 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 5 00:12:08.942383 systemd[1]: Started update-engine.service - Update Engine.
Nov 5 00:12:08.944105 update_engine[1598]: I20251105 00:12:08.944001 1598 update_check_scheduler.cc:74] Next update check in 5m0s
Nov 5 00:12:09.019226 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 5 00:12:09.346055 systemd-logind[1595]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 5 00:12:09.346148 systemd-logind[1595]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 5 00:12:09.417787 systemd-logind[1595]: New seat seat0.
Nov 5 00:12:09.419753 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 5 00:12:09.455117 sshd_keygen[1620]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 5 00:12:09.481600 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 5 00:12:09.486106 systemd-networkd[1524]: eth0: DHCPv4 address 172.232.14.37/24, gateway 172.232.14.1 acquired from 23.192.120.217
Nov 5 00:12:09.491057 dbus-daemon[1581]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1524 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Nov 5 00:12:09.491831 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 5 00:12:09.511175 systemd-timesyncd[1541]: Network configuration changed, trying to establish connection.
Nov 5 00:12:09.564496 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Nov 5 00:12:09.570166 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 5 00:12:09.575407 systemd[1]: Reached target network-online.target - Network is Online.
Nov 5 00:12:09.589484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 00:12:09.603531 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 5 00:12:09.611832 bash[1653]: Updated "/home/core/.ssh/authorized_keys"
Nov 5 00:12:09.625512 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 5 00:12:09.642937 systemd[1]: Starting sshkeys.service...
Nov 5 00:12:09.896887 systemd[1]: issuegen.service: Deactivated successfully.
Nov 5 00:12:09.897594 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 5 00:12:09.911093 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 5 00:12:09.982034 coreos-metadata[1580]: Nov 05 00:12:09.978 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Nov 5 00:12:10.152713 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 5 00:12:10.211625 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 5 00:12:10.222195 coreos-metadata[1580]: Nov 05 00:12:10.220 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
Nov 5 00:12:10.226801 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 5 00:12:10.234992 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 5 00:12:10.265263 locksmithd[1639]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 5 00:12:10.325524 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 5 00:12:10.350353 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 5 00:12:10.351848 systemd[1]: Reached target getty.target - Login Prompts.
Nov 5 00:12:10.432029 systemd[1]: Started sshd@0-172.232.14.37:22-139.178.68.195:51826.service - OpenSSH per-connection server daemon (139.178.68.195:51826).
Nov 5 00:12:10.616775 coreos-metadata[1580]: Nov 05 00:12:10.536 INFO Fetch successful
Nov 5 00:12:10.616775 coreos-metadata[1580]: Nov 05 00:12:10.542 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
Nov 5 00:12:10.805687 coreos-metadata[1580]: Nov 05 00:12:10.803 INFO Fetch successful
Nov 5 00:12:10.853680 kernel: EXT4-fs (sda9): resized filesystem to 19377147
Nov 5 00:12:10.882502 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 5 00:12:10.927514 extend-filesystems[1631]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Nov 5 00:12:10.927514 extend-filesystems[1631]: old_desc_blocks = 1, new_desc_blocks = 10
Nov 5 00:12:10.927514 extend-filesystems[1631]: The filesystem on /dev/sda9 is now 19377147 (4k) blocks long.
Nov 5 00:12:10.963359 extend-filesystems[1584]: Resized filesystem in /dev/sda9
Nov 5 00:12:10.928903 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 5 00:12:10.929692 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 5 00:12:11.007786 coreos-metadata[1689]: Nov 05 00:12:10.984 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Nov 5 00:12:11.032872 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Nov 5 00:12:11.049477 dbus-daemon[1581]: [system] Successfully activated service 'org.freedesktop.hostname1'
Nov 5 00:12:11.093928 dbus-daemon[1581]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1663 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Nov 5 00:12:11.122783 systemd[1]: Starting polkit.service - Authorization Manager...
Nov 5 00:12:11.232013 coreos-metadata[1689]: Nov 05 00:12:11.227 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
Nov 5 00:12:11.452301 coreos-metadata[1689]: Nov 05 00:12:11.452 INFO Fetch successful
Nov 5 00:12:11.537150 systemd-timesyncd[1541]: Network configuration changed, trying to establish connection.
Nov 5 00:12:11.557092 update-ssh-keys[1717]: Updated "/home/core/.ssh/authorized_keys"
Nov 5 00:12:11.621707 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 5 00:12:11.635768 systemd[1]: Finished sshkeys.service.
Nov 5 00:12:11.639803 containerd[1625]: time="2025-11-05T00:12:11Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 5 00:12:11.644297 containerd[1625]: time="2025-11-05T00:12:11.644223160Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Nov 5 00:12:11.685683 sshd[1694]: Accepted publickey for core from 139.178.68.195 port 51826 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:12:11.680966 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:12:11.707803 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 5 00:12:11.712712 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 5 00:12:11.731009 polkitd[1713]: Started polkitd version 126
Nov 5 00:12:11.741539 polkitd[1713]: Loading rules from directory /etc/polkit-1/rules.d
Nov 5 00:12:11.742177 polkitd[1713]: Loading rules from directory /run/polkit-1/rules.d
Nov 5 00:12:11.742252 polkitd[1713]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Nov 5 00:12:11.742497 polkitd[1713]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Nov 5 00:12:11.742544 polkitd[1713]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Nov 5 00:12:11.742588 polkitd[1713]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 5 00:12:11.744732 polkitd[1713]: Finished loading, compiling and executing 2 rules
Nov 5 00:12:11.746397 dbus-daemon[1581]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Nov 5 00:12:11.747389 polkitd[1713]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 5 00:12:11.749947 systemd[1]: Started polkit.service - Authorization Manager.
Nov 5 00:12:11.753970 systemd-logind[1595]: New session 1 of user core.
Nov 5 00:12:11.782892 containerd[1625]: time="2025-11-05T00:12:11.770571950Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t=1.94088ms
Nov 5 00:12:11.782892 containerd[1625]: time="2025-11-05T00:12:11.771305650Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 5 00:12:11.782892 containerd[1625]: time="2025-11-05T00:12:11.772423200Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 5 00:12:11.782892 containerd[1625]: time="2025-11-05T00:12:11.772877350Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 5 00:12:11.782892 containerd[1625]: time="2025-11-05T00:12:11.772935830Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 5 00:12:11.782892 containerd[1625]: time="2025-11-05T00:12:11.773020110Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 5 00:12:11.782892 containerd[1625]: time="2025-11-05T00:12:11.773203020Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 5 00:12:11.782892 containerd[1625]: time="2025-11-05T00:12:11.773231680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 5 00:12:11.782892 containerd[1625]: time="2025-11-05T00:12:11.773604450Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 5 00:12:11.782892 containerd[1625]: time="2025-11-05T00:12:11.773627390Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 5 00:12:11.782892 containerd[1625]: time="2025-11-05T00:12:11.773673730Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 5 00:12:11.782892 containerd[1625]: time="2025-11-05T00:12:11.773691090Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 5 00:12:11.785037 containerd[1625]: time="2025-11-05T00:12:11.773853660Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 5 00:12:11.785037 containerd[1625]: time="2025-11-05T00:12:11.774372990Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 5 00:12:11.785037 containerd[1625]: time="2025-11-05T00:12:11.774474710Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 5 00:12:11.785037 containerd[1625]: time="2025-11-05T00:12:11.774493190Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 5 00:12:11.785037 containerd[1625]: time="2025-11-05T00:12:11.774629410Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 5 00:12:11.785037 containerd[1625]: time="2025-11-05T00:12:11.775385980Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 5 00:12:11.785037 containerd[1625]: time="2025-11-05T00:12:11.775470630Z" level=info msg="metadata content store policy set" policy=shared
Nov 5 00:12:11.791021 containerd[1625]: time="2025-11-05T00:12:11.790961540Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 5 00:12:11.791815 containerd[1625]: time="2025-11-05T00:12:11.791783630Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 5 00:12:11.792057 containerd[1625]: time="2025-11-05T00:12:11.792026270Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 5 00:12:11.792214 containerd[1625]: time="2025-11-05T00:12:11.792144480Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 5 00:12:11.792367 containerd[1625]: time="2025-11-05T00:12:11.792337440Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 5 00:12:11.793247 containerd[1625]: time="2025-11-05T00:12:11.793214700Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 5 00:12:11.793377 containerd[1625]: time="2025-11-05T00:12:11.793351940Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 5 00:12:11.793728 containerd[1625]: time="2025-11-05T00:12:11.793697830Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 5 00:12:11.793848 containerd[1625]: time="2025-11-05T00:12:11.793801640Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 5 00:12:11.797691 containerd[1625]: time="2025-11-05T00:12:11.796536140Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 5 00:12:11.797691 containerd[1625]: time="2025-11-05T00:12:11.796565340Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 5 00:12:11.797691 containerd[1625]: time="2025-11-05T00:12:11.796676110Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 5 00:12:11.797691 containerd[1625]: time="2025-11-05T00:12:11.796904190Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 5 00:12:11.797691 containerd[1625]: time="2025-11-05T00:12:11.796948600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 5 00:12:11.797691 containerd[1625]: time="2025-11-05T00:12:11.796977090Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 5 00:12:11.797691 containerd[1625]: time="2025-11-05T00:12:11.797006590Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 5 00:12:11.797691 containerd[1625]: time="2025-11-05T00:12:11.797030170Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 5 00:12:11.797691 containerd[1625]: time="2025-11-05T00:12:11.797049980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 5 00:12:11.797691 containerd[1625]: time="2025-11-05T00:12:11.797077170Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 5 00:12:11.797691 containerd[1625]: time="2025-11-05T00:12:11.797119200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 5 00:12:11.797691 containerd[1625]: time="2025-11-05T00:12:11.797154820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 5 00:12:11.797691 containerd[1625]: time="2025-11-05T00:12:11.797179440Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 5 00:12:11.797691 containerd[1625]: time="2025-11-05T00:12:11.797207330Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 5 00:12:11.797691 containerd[1625]: time="2025-11-05T00:12:11.797473590Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 5 00:12:11.797670 systemd-hostnamed[1663]: Hostname set to <172-232-14-37> (transient)
Nov 5 00:12:11.798939 containerd[1625]: time="2025-11-05T00:12:11.797506130Z" level=info msg="Start snapshots syncer"
Nov 5 00:12:11.798939 containerd[1625]: time="2025-11-05T00:12:11.797569940Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 5 00:12:11.797952 systemd-resolved[1304]: System hostname changed to '172-232-14-37'.
Nov 5 00:12:11.802617 containerd[1625]: time="2025-11-05T00:12:11.802008700Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUn
privilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 5 00:12:11.802617 containerd[1625]: time="2025-11-05T00:12:11.802147900Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 5 00:12:11.817899 containerd[1625]: time="2025-11-05T00:12:11.803268320Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 5 00:12:11.818658 containerd[1625]: time="2025-11-05T00:12:11.818201190Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 5 00:12:11.818658 containerd[1625]: time="2025-11-05T00:12:11.818276520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 5 00:12:11.818658 containerd[1625]: time="2025-11-05T00:12:11.818294970Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 5 00:12:11.818658 containerd[1625]: time="2025-11-05T00:12:11.818401270Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 5 00:12:11.818658 containerd[1625]: time="2025-11-05T00:12:11.818463520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 5 00:12:11.818658 containerd[1625]: time="2025-11-05T00:12:11.818485500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 5 00:12:11.818658 containerd[1625]: time="2025-11-05T00:12:11.818509050Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local 
type=io.containerd.transfer.v1 Nov 5 00:12:11.818658 containerd[1625]: time="2025-11-05T00:12:11.818589010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 5 00:12:11.820606 containerd[1625]: time="2025-11-05T00:12:11.818618390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 5 00:12:11.820606 containerd[1625]: time="2025-11-05T00:12:11.819056290Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 5 00:12:11.820606 containerd[1625]: time="2025-11-05T00:12:11.819215960Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 00:12:11.820606 containerd[1625]: time="2025-11-05T00:12:11.819239290Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 00:12:11.820606 containerd[1625]: time="2025-11-05T00:12:11.819620530Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 00:12:11.820606 containerd[1625]: time="2025-11-05T00:12:11.819724530Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 00:12:11.820606 containerd[1625]: time="2025-11-05T00:12:11.819894730Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 5 00:12:11.820606 containerd[1625]: time="2025-11-05T00:12:11.819916970Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 5 00:12:11.820606 containerd[1625]: time="2025-11-05T00:12:11.820012300Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 5 00:12:11.820606 containerd[1625]: 
time="2025-11-05T00:12:11.820192550Z" level=info msg="runtime interface created" Nov 5 00:12:11.820606 containerd[1625]: time="2025-11-05T00:12:11.820209900Z" level=info msg="created NRI interface" Nov 5 00:12:11.820606 containerd[1625]: time="2025-11-05T00:12:11.820258160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 5 00:12:11.820606 containerd[1625]: time="2025-11-05T00:12:11.820282910Z" level=info msg="Connect containerd service" Nov 5 00:12:11.820606 containerd[1625]: time="2025-11-05T00:12:11.820370820Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 5 00:12:11.825911 containerd[1625]: time="2025-11-05T00:12:11.825885740Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 00:12:11.881457 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 5 00:12:11.905540 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 5 00:12:12.098280 (systemd)[1741]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 5 00:12:12.105339 systemd-logind[1595]: New session c1 of user core. Nov 5 00:12:12.260364 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 5 00:12:12.264287 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 5 00:12:12.718919 systemd[1741]: Queued start job for default target default.target. Nov 5 00:12:12.927361 systemd[1741]: Created slice app.slice - User Application Slice. Nov 5 00:12:12.929567 systemd[1741]: Reached target paths.target - Paths. Nov 5 00:12:12.929780 systemd[1741]: Reached target timers.target - Timers. 
Nov 5 00:12:12.950804 systemd[1741]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 5 00:12:12.970074 systemd[1741]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 5 00:12:12.971079 systemd[1741]: Reached target sockets.target - Sockets.
Nov 5 00:12:12.971281 systemd[1741]: Reached target basic.target - Basic System.
Nov 5 00:12:12.971513 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 5 00:12:12.975444 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 5 00:12:12.978454 systemd[1741]: Reached target default.target - Main User Target.
Nov 5 00:12:12.978565 systemd[1741]: Startup finished in 843ms.
Nov 5 00:12:13.421819 containerd[1625]: time="2025-11-05T00:12:13.421219350Z" level=info msg="Start subscribing containerd event"
Nov 5 00:12:13.423793 systemd[1]: Started sshd@1-172.232.14.37:22-139.178.68.195:49288.service - OpenSSH per-connection server daemon (139.178.68.195:49288).
Nov 5 00:12:13.454443 containerd[1625]: time="2025-11-05T00:12:13.448091100Z" level=info msg="Start recovering state"
Nov 5 00:12:13.456671 containerd[1625]: time="2025-11-05T00:12:13.455818070Z" level=info msg="Start event monitor"
Nov 5 00:12:13.456671 containerd[1625]: time="2025-11-05T00:12:13.455923800Z" level=info msg="Start cni network conf syncer for default"
Nov 5 00:12:13.456671 containerd[1625]: time="2025-11-05T00:12:13.455986920Z" level=info msg="Start streaming server"
Nov 5 00:12:13.456671 containerd[1625]: time="2025-11-05T00:12:13.456102160Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Nov 5 00:12:13.456671 containerd[1625]: time="2025-11-05T00:12:13.456498050Z" level=info msg="runtime interface starting up..."
Nov 5 00:12:13.456671 containerd[1625]: time="2025-11-05T00:12:13.456556000Z" level=info msg="starting plugins..."
Nov 5 00:12:13.460715 containerd[1625]: time="2025-11-05T00:12:13.460683460Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Nov 5 00:12:13.479312 containerd[1625]: time="2025-11-05T00:12:13.479240150Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 5 00:12:13.480665 containerd[1625]: time="2025-11-05T00:12:13.479873670Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 5 00:12:13.484014 containerd[1625]: time="2025-11-05T00:12:13.481919240Z" level=info msg="containerd successfully booted in 1.851372s"
Nov 5 00:12:13.482131 systemd[1]: Started containerd.service - containerd container runtime.
Nov 5 00:12:13.490702 tar[1611]: linux-amd64/README.md
Nov 5 00:12:13.561949 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 5 00:12:13.939805 sshd[1769]: Accepted publickey for core from 139.178.68.195 port 49288 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:12:13.942149 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:12:13.952582 systemd-logind[1595]: New session 2 of user core.
Nov 5 00:12:13.959821 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 5 00:12:14.196866 sshd[1775]: Connection closed by 139.178.68.195 port 49288
Nov 5 00:12:14.198979 sshd-session[1769]: pam_unix(sshd:session): session closed for user core
Nov 5 00:12:14.217043 systemd[1]: sshd@1-172.232.14.37:22-139.178.68.195:49288.service: Deactivated successfully.
Nov 5 00:12:14.225114 systemd[1]: session-2.scope: Deactivated successfully.
Nov 5 00:12:14.229930 systemd-logind[1595]: Session 2 logged out. Waiting for processes to exit.
Nov 5 00:12:14.234099 systemd-logind[1595]: Removed session 2.
Nov 5 00:12:14.301738 systemd[1]: Started sshd@2-172.232.14.37:22-139.178.68.195:49300.service - OpenSSH per-connection server daemon (139.178.68.195:49300).
Nov 5 00:12:14.668864 sshd[1781]: Accepted publickey for core from 139.178.68.195 port 49300 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:12:14.668114 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:12:14.678042 systemd-logind[1595]: New session 3 of user core.
Nov 5 00:12:14.685973 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 5 00:12:14.998733 sshd[1784]: Connection closed by 139.178.68.195 port 49300
Nov 5 00:12:14.999772 sshd-session[1781]: pam_unix(sshd:session): session closed for user core
Nov 5 00:12:15.015404 systemd[1]: sshd@2-172.232.14.37:22-139.178.68.195:49300.service: Deactivated successfully.
Nov 5 00:12:15.021185 systemd[1]: session-3.scope: Deactivated successfully.
Nov 5 00:12:15.025775 systemd-logind[1595]: Session 3 logged out. Waiting for processes to exit.
Nov 5 00:12:15.029881 systemd-logind[1595]: Removed session 3.
Nov 5 00:12:15.848793 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 00:12:15.850159 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 5 00:12:15.851601 systemd[1]: Startup finished in 5.607s (kernel) + 10.700s (initrd) + 14.618s (userspace) = 30.926s.
Nov 5 00:12:15.863288 (kubelet)[1798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 00:12:17.764851 kubelet[1798]: E1105 00:12:17.764113 1798 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 00:12:17.776682 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 00:12:17.777459 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 00:12:17.780715 systemd[1]: kubelet.service: Consumed 4.043s CPU time, 268M memory peak.
Nov 5 00:12:25.071307 systemd[1]: Started sshd@3-172.232.14.37:22-139.178.68.195:35414.service - OpenSSH per-connection server daemon (139.178.68.195:35414).
Nov 5 00:12:25.448840 sshd[1805]: Accepted publickey for core from 139.178.68.195 port 35414 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:12:25.451441 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:12:25.463376 systemd-logind[1595]: New session 4 of user core.
Nov 5 00:12:25.471924 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 5 00:12:25.713085 sshd[1808]: Connection closed by 139.178.68.195 port 35414
Nov 5 00:12:25.715093 sshd-session[1805]: pam_unix(sshd:session): session closed for user core
Nov 5 00:12:25.721187 systemd[1]: sshd@3-172.232.14.37:22-139.178.68.195:35414.service: Deactivated successfully.
Nov 5 00:12:25.727743 systemd[1]: session-4.scope: Deactivated successfully.
Nov 5 00:12:25.731131 systemd-logind[1595]: Session 4 logged out. Waiting for processes to exit.
Nov 5 00:12:25.732932 systemd-logind[1595]: Removed session 4.
Nov 5 00:12:25.780940 systemd[1]: Started sshd@4-172.232.14.37:22-139.178.68.195:35422.service - OpenSSH per-connection server daemon (139.178.68.195:35422).
Nov 5 00:12:26.177809 sshd[1814]: Accepted publickey for core from 139.178.68.195 port 35422 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:12:26.180072 sshd-session[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:12:26.190132 systemd-logind[1595]: New session 5 of user core.
Nov 5 00:12:26.201971 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 5 00:12:26.433383 sshd[1817]: Connection closed by 139.178.68.195 port 35422
Nov 5 00:12:26.434623 sshd-session[1814]: pam_unix(sshd:session): session closed for user core
Nov 5 00:12:26.440239 systemd[1]: sshd@4-172.232.14.37:22-139.178.68.195:35422.service: Deactivated successfully.
Nov 5 00:12:26.443133 systemd[1]: session-5.scope: Deactivated successfully.
Nov 5 00:12:26.446405 systemd-logind[1595]: Session 5 logged out. Waiting for processes to exit.
Nov 5 00:12:26.447921 systemd-logind[1595]: Removed session 5.
Nov 5 00:12:26.502242 systemd[1]: Started sshd@5-172.232.14.37:22-139.178.68.195:35426.service - OpenSSH per-connection server daemon (139.178.68.195:35426).
Nov 5 00:12:26.866767 sshd[1823]: Accepted publickey for core from 139.178.68.195 port 35426 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:12:26.868797 sshd-session[1823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:12:26.876160 systemd-logind[1595]: New session 6 of user core.
Nov 5 00:12:26.884046 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 5 00:12:27.123693 sshd[1826]: Connection closed by 139.178.68.195 port 35426
Nov 5 00:12:27.125121 sshd-session[1823]: pam_unix(sshd:session): session closed for user core
Nov 5 00:12:27.131484 systemd[1]: sshd@5-172.232.14.37:22-139.178.68.195:35426.service: Deactivated successfully.
Nov 5 00:12:27.134107 systemd[1]: session-6.scope: Deactivated successfully.
Nov 5 00:12:27.136111 systemd-logind[1595]: Session 6 logged out. Waiting for processes to exit.
Nov 5 00:12:27.138232 systemd-logind[1595]: Removed session 6.
Nov 5 00:12:27.194237 systemd[1]: Started sshd@6-172.232.14.37:22-139.178.68.195:35434.service - OpenSSH per-connection server daemon (139.178.68.195:35434).
Nov 5 00:12:27.561492 sshd[1832]: Accepted publickey for core from 139.178.68.195 port 35434 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:12:27.564400 sshd-session[1832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:12:27.572906 systemd-logind[1595]: New session 7 of user core.
Nov 5 00:12:27.582899 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 5 00:12:27.782675 sudo[1836]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 5 00:12:27.783119 sudo[1836]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 00:12:27.784433 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 5 00:12:27.791531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 00:12:27.803347 sudo[1836]: pam_unix(sudo:session): session closed for user root
Nov 5 00:12:27.855722 sshd[1835]: Connection closed by 139.178.68.195 port 35434
Nov 5 00:12:27.857960 sshd-session[1832]: pam_unix(sshd:session): session closed for user core
Nov 5 00:12:27.866153 systemd[1]: sshd@6-172.232.14.37:22-139.178.68.195:35434.service: Deactivated successfully.
Nov 5 00:12:27.871060 systemd[1]: session-7.scope: Deactivated successfully.
Nov 5 00:12:27.873705 systemd-logind[1595]: Session 7 logged out. Waiting for processes to exit.
Nov 5 00:12:27.877598 systemd-logind[1595]: Removed session 7.
Nov 5 00:12:27.925970 systemd[1]: Started sshd@7-172.232.14.37:22-139.178.68.195:35436.service - OpenSSH per-connection server daemon (139.178.68.195:35436).
Nov 5 00:12:28.286909 sshd[1845]: Accepted publickey for core from 139.178.68.195 port 35436 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:12:28.291041 sshd-session[1845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:12:28.299576 systemd-logind[1595]: New session 8 of user core.
Nov 5 00:12:28.309879 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 5 00:12:28.467973 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 00:12:28.483066 (kubelet)[1854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 00:12:28.510804 sudo[1856]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 5 00:12:28.511400 sudo[1856]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 00:12:28.532873 sudo[1856]: pam_unix(sudo:session): session closed for user root
Nov 5 00:12:28.549148 sudo[1855]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 5 00:12:28.549595 sudo[1855]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 00:12:28.573787 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 00:12:28.694540 augenrules[1883]: No rules
Nov 5 00:12:28.695923 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 00:12:28.697832 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 00:12:28.699600 sudo[1855]: pam_unix(sudo:session): session closed for user root
Nov 5 00:12:28.757205 sshd[1848]: Connection closed by 139.178.68.195 port 35436
Nov 5 00:12:28.758143 sshd-session[1845]: pam_unix(sshd:session): session closed for user core
Nov 5 00:12:28.771726 kubelet[1854]: E1105 00:12:28.770549 1854 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 00:12:28.775348 systemd[1]: sshd@7-172.232.14.37:22-139.178.68.195:35436.service: Deactivated successfully.
Nov 5 00:12:28.780241 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 00:12:28.780563 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 00:12:28.781428 systemd[1]: kubelet.service: Consumed 877ms CPU time, 111M memory peak.
Nov 5 00:12:28.782386 systemd[1]: session-8.scope: Deactivated successfully.
Nov 5 00:12:28.786570 systemd-logind[1595]: Session 8 logged out. Waiting for processes to exit.
Nov 5 00:12:28.789973 systemd-logind[1595]: Removed session 8.
Nov 5 00:12:28.822757 systemd[1]: Started sshd@8-172.232.14.37:22-139.178.68.195:35450.service - OpenSSH per-connection server daemon (139.178.68.195:35450).
Nov 5 00:12:29.186711 sshd[1893]: Accepted publickey for core from 139.178.68.195 port 35450 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:12:29.188898 sshd-session[1893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:12:29.209024 systemd-logind[1595]: New session 9 of user core.
Nov 5 00:12:29.225896 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 5 00:12:29.390714 sudo[1897]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 5 00:12:29.391232 sudo[1897]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 00:12:31.722661 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 5 00:12:31.762246 (dockerd)[1917]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 5 00:12:33.202540 dockerd[1917]: time="2025-11-05T00:12:33.202264320Z" level=info msg="Starting up"
Nov 5 00:12:33.206570 dockerd[1917]: time="2025-11-05T00:12:33.206405230Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 5 00:12:33.267674 dockerd[1917]: time="2025-11-05T00:12:33.267519070Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 5 00:12:33.319423 systemd[1]: var-lib-docker-metacopy\x2dcheck2133708766-merged.mount: Deactivated successfully.
Nov 5 00:12:33.353977 dockerd[1917]: time="2025-11-05T00:12:33.353324150Z" level=info msg="Loading containers: start."
Nov 5 00:12:33.389915 kernel: Initializing XFRM netlink socket
Nov 5 00:12:33.792933 systemd-timesyncd[1541]: Network configuration changed, trying to establish connection.
Nov 5 00:12:33.878543 systemd-networkd[1524]: docker0: Link UP
Nov 5 00:12:33.884889 dockerd[1917]: time="2025-11-05T00:12:33.884749240Z" level=info msg="Loading containers: done."
Nov 5 00:12:35.236821 systemd-timesyncd[1541]: Contacted time server [2600:3c00::2000:88ff:feea:9770]:123 (2.flatcar.pool.ntp.org).
Nov 5 00:12:35.237591 systemd-resolved[1304]: Clock change detected. Flushing caches.
Nov 5 00:12:35.237763 systemd-timesyncd[1541]: Initial clock synchronization to Wed 2025-11-05 00:12:35.234534 UTC.
Nov 5 00:12:35.268588 dockerd[1917]: time="2025-11-05T00:12:35.268501506Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 5 00:12:35.268817 dockerd[1917]: time="2025-11-05T00:12:35.268637606Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 5 00:12:35.268817 dockerd[1917]: time="2025-11-05T00:12:35.268788286Z" level=info msg="Initializing buildkit"
Nov 5 00:12:35.304587 dockerd[1917]: time="2025-11-05T00:12:35.304406666Z" level=info msg="Completed buildkit initialization"
Nov 5 00:12:35.311567 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 5 00:12:35.338523 dockerd[1917]: time="2025-11-05T00:12:35.310708756Z" level=info msg="Daemon has completed initialization"
Nov 5 00:12:35.339089 dockerd[1917]: time="2025-11-05T00:12:35.338740686Z" level=info msg="API listen on /run/docker.sock"
Nov 5 00:12:35.651180 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1158735964-merged.mount: Deactivated successfully.
Nov 5 00:12:37.071370 containerd[1625]: time="2025-11-05T00:12:37.070507316Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Nov 5 00:12:38.380652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2429901666.mount: Deactivated successfully.
Nov 5 00:12:40.335986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 5 00:12:40.357180 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 00:12:41.185728 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 00:12:41.200096 (kubelet)[2193]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 00:12:41.523274 kubelet[2193]: E1105 00:12:41.520947 2193 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 00:12:41.533287 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 00:12:41.535102 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 00:12:41.536590 systemd[1]: kubelet.service: Consumed 1.003s CPU time, 108.7M memory peak.
Nov 5 00:12:41.970296 containerd[1625]: time="2025-11-05T00:12:41.968804506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:12:41.977013 containerd[1625]: time="2025-11-05T00:12:41.971623026Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893"
Nov 5 00:12:41.977013 containerd[1625]: time="2025-11-05T00:12:41.976059356Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:12:41.984263 containerd[1625]: time="2025-11-05T00:12:41.984168796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:12:41.986366 containerd[1625]: time="2025-11-05T00:12:41.986146036Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 4.91510131s"
Nov 5 00:12:41.986671 containerd[1625]: time="2025-11-05T00:12:41.986625416Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Nov 5 00:12:41.998801 containerd[1625]: time="2025-11-05T00:12:41.998691066Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Nov 5 00:12:43.187120 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 5 00:12:45.196600 containerd[1625]: time="2025-11-05T00:12:45.195457346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:12:45.203624 containerd[1625]: time="2025-11-05T00:12:45.199056926Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844"
Nov 5 00:12:45.205119 containerd[1625]: time="2025-11-05T00:12:45.204997886Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:12:45.212149 containerd[1625]: time="2025-11-05T00:12:45.212028706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:12:45.215267 containerd[1625]: time="2025-11-05T00:12:45.214071316Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 3.21526202s"
Nov 5 00:12:45.215267 containerd[1625]: time="2025-11-05T00:12:45.214406866Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Nov 5 00:12:45.226923 containerd[1625]: time="2025-11-05T00:12:45.226853416Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Nov 5 00:12:48.032519 containerd[1625]: time="2025-11-05T00:12:48.030625356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:12:48.032519 containerd[1625]: time="2025-11-05T00:12:48.031365996Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568"
Nov 5 00:12:48.037513 containerd[1625]: time="2025-11-05T00:12:48.035730356Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:12:48.040333 containerd[1625]: time="2025-11-05T00:12:48.040263346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:12:48.042451 containerd[1625]: time="2025-11-05T00:12:48.041740426Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 2.81480313s"
Nov 5 00:12:48.042451 containerd[1625]: time="2025-11-05T00:12:48.041999576Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Nov 5 00:12:48.047752 containerd[1625]: time="2025-11-05T00:12:48.047699266Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Nov 5 00:12:50.537443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3134328469.mount: Deactivated successfully.
Nov 5 00:12:51.566593 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 5 00:12:51.592132 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 00:12:52.211425 containerd[1625]: time="2025-11-05T00:12:52.208838776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:12:52.214483 containerd[1625]: time="2025-11-05T00:12:52.214378416Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Nov 5 00:12:52.217845 containerd[1625]: time="2025-11-05T00:12:52.217744386Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:12:52.222808 containerd[1625]: time="2025-11-05T00:12:52.222714986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:12:52.224694 containerd[1625]: time="2025-11-05T00:12:52.224628206Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 4.17680921s"
Nov 5 00:12:52.225804 containerd[1625]: time="2025-11-05T00:12:52.225215866Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Nov 5 00:12:52.258585 containerd[1625]: time="2025-11-05T00:12:52.258363436Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Nov 5 00:12:52.547964 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 00:12:52.562915 (kubelet)[2227]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 00:12:52.858274 kubelet[2227]: E1105 00:12:52.858029 2227 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 00:12:52.864906 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 00:12:52.865223 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 00:12:52.867127 systemd[1]: kubelet.service: Consumed 1.267s CPU time, 109.1M memory peak.
Nov 5 00:12:53.156204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3931284297.mount: Deactivated successfully.
Nov 5 00:12:55.132220 containerd[1625]: time="2025-11-05T00:12:55.131845286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:12:55.136808 containerd[1625]: time="2025-11-05T00:12:55.134343816Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Nov 5 00:12:55.138289 containerd[1625]: time="2025-11-05T00:12:55.137874106Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:12:55.141278 containerd[1625]: time="2025-11-05T00:12:55.140433726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:12:55.143000 containerd[1625]: time="2025-11-05T00:12:55.142540276Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.88396978s"
Nov 5 00:12:55.143000 containerd[1625]: time="2025-11-05T00:12:55.142783716Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Nov 5 00:12:55.147471 containerd[1625]: time="2025-11-05T00:12:55.147407306Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 5 00:12:55.273532 update_engine[1598]: I20251105 00:12:55.273011 1598 update_attempter.cc:509] Updating boot flags...
Nov 5 00:12:56.018188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2568657375.mount: Deactivated successfully.
Nov 5 00:12:56.025453 containerd[1625]: time="2025-11-05T00:12:56.025354276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 00:12:56.026585 containerd[1625]: time="2025-11-05T00:12:56.026538036Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Nov 5 00:12:56.027439 containerd[1625]: time="2025-11-05T00:12:56.027349966Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 00:12:56.029854 containerd[1625]: time="2025-11-05T00:12:56.029781666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 00:12:56.033265 containerd[1625]: time="2025-11-05T00:12:56.031201156Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 883.7451ms"
Nov 5 00:12:56.033265 containerd[1625]: time="2025-11-05T00:12:56.031287276Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 5 00:12:56.036759 containerd[1625]: time="2025-11-05T00:12:56.036720156Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Nov 5 00:12:56.740627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2133673749.mount: Deactivated successfully.
Nov 5 00:13:01.188902 containerd[1625]: time="2025-11-05T00:13:01.188314766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:13:01.200091 containerd[1625]: time="2025-11-05T00:13:01.191730686Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433"
Nov 5 00:13:01.200091 containerd[1625]: time="2025-11-05T00:13:01.195265736Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:13:01.201339 containerd[1625]: time="2025-11-05T00:13:01.200345906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:13:01.203977 containerd[1625]: time="2025-11-05T00:13:01.203873606Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 5.16710666s"
Nov 5 00:13:01.204131 containerd[1625]: time="2025-11-05T00:13:01.204072316Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Nov 5 00:13:03.063042 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Nov 5 00:13:03.074685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 00:13:04.095980 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 00:13:04.123669 (kubelet)[2394]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 00:13:04.339273 kubelet[2394]: E1105 00:13:04.338858 2394 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 00:13:04.348016 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 00:13:04.348634 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 00:13:04.350362 systemd[1]: kubelet.service: Consumed 1.093s CPU time, 108.2M memory peak.
Nov 5 00:13:05.076852 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 00:13:05.077461 systemd[1]: kubelet.service: Consumed 1.093s CPU time, 108.2M memory peak.
Nov 5 00:13:05.088961 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 00:13:05.189112 systemd[1]: Reload requested from client PID 2409 ('systemctl') (unit session-9.scope)...
Nov 5 00:13:05.189279 systemd[1]: Reloading...
Nov 5 00:13:05.650302 zram_generator::config[2451]: No configuration found.
Nov 5 00:13:06.109762 systemd[1]: Reloading finished in 919 ms.
Nov 5 00:13:06.205003 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 5 00:13:06.206434 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 5 00:13:06.207453 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 00:13:06.208122 systemd[1]: kubelet.service: Consumed 697ms CPU time, 98.4M memory peak.
Nov 5 00:13:06.211367 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 00:13:06.671421 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 00:13:06.689181 (kubelet)[2508]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 5 00:13:06.820286 kubelet[2508]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 00:13:06.820286 kubelet[2508]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 5 00:13:06.820286 kubelet[2508]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 00:13:06.820286 kubelet[2508]: I1105 00:13:06.819873 2508 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 5 00:13:07.038891 kubelet[2508]: I1105 00:13:07.038689 2508 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 5 00:13:07.038891 kubelet[2508]: I1105 00:13:07.038735 2508 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 5 00:13:07.039605 kubelet[2508]: I1105 00:13:07.039554 2508 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 5 00:13:07.080740 kubelet[2508]: E1105 00:13:07.080647 2508 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.232.14.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.232.14.37:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 5 00:13:07.089179 kubelet[2508]: I1105 00:13:07.088727 2508 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 5 00:13:07.111675 kubelet[2508]: I1105 00:13:07.111626 2508 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 5 00:13:07.122441 kubelet[2508]: I1105 00:13:07.122409 2508 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 5 00:13:07.123473 kubelet[2508]: I1105 00:13:07.123406 2508 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 5 00:13:07.123938 kubelet[2508]: I1105 00:13:07.123466 2508 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-14-37","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 5 00:13:07.124501 kubelet[2508]: I1105 00:13:07.124055 2508 topology_manager.go:138] "Creating topology manager with none policy"
Nov 5 00:13:07.124501 kubelet[2508]: I1105 00:13:07.124085 2508 container_manager_linux.go:303] "Creating device plugin manager"
Nov 5 00:13:07.124634 kubelet[2508]: I1105 00:13:07.124580 2508 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 00:13:07.128137 kubelet[2508]: I1105 00:13:07.127841 2508 kubelet.go:480] "Attempting to sync node with API server"
Nov 5 00:13:07.128137 kubelet[2508]: I1105 00:13:07.127905 2508 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 5 00:13:07.128137 kubelet[2508]: I1105 00:13:07.128127 2508 kubelet.go:386] "Adding apiserver pod source"
Nov 5 00:13:07.128398 kubelet[2508]: I1105 00:13:07.128264 2508 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 5 00:13:07.135567 kubelet[2508]: E1105 00:13:07.135349 2508 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.232.14.37:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-14-37&limit=500&resourceVersion=0\": dial tcp 172.232.14.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 5 00:13:07.137327 kubelet[2508]: E1105 00:13:07.137070 2508 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.232.14.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.232.14.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 5 00:13:07.137600 kubelet[2508]: I1105 00:13:07.137561 2508 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 5 00:13:07.138599 kubelet[2508]: I1105 00:13:07.138567 2508 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 5 00:13:07.139791 kubelet[2508]: W1105 00:13:07.139760 2508 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 5 00:13:07.148836 kubelet[2508]: I1105 00:13:07.148791 2508 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 5 00:13:07.148973 kubelet[2508]: I1105 00:13:07.148944 2508 server.go:1289] "Started kubelet"
Nov 5 00:13:07.151575 kubelet[2508]: I1105 00:13:07.150897 2508 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 5 00:13:07.154359 kubelet[2508]: I1105 00:13:07.154330 2508 server.go:317] "Adding debug handlers to kubelet server"
Nov 5 00:13:07.156169 kubelet[2508]: I1105 00:13:07.154306 2508 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 5 00:13:07.157495 kubelet[2508]: I1105 00:13:07.157468 2508 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 5 00:13:07.163548 kubelet[2508]: E1105 00:13:07.161877 2508 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.232.14.37:6443/api/v1/namespaces/default/events\": dial tcp 172.232.14.37:6443: connect: connection refused" event="&Event{ObjectMeta:{172-232-14-37.1874f3f4d91d5620 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-232-14-37,UID:172-232-14-37,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-232-14-37,},FirstTimestamp:2025-11-05 00:13:07.148854816 +0000 UTC m=+0.439811661,LastTimestamp:2025-11-05 00:13:07.148854816 +0000 UTC m=+0.439811661,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-232-14-37,}"
Nov 5 00:13:07.164015 kubelet[2508]: I1105 00:13:07.163994 2508 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 5 00:13:07.171824 kubelet[2508]: I1105 00:13:07.164004 2508 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 5 00:13:07.173245 kubelet[2508]: E1105 00:13:07.170962 2508 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-232-14-37\" not found"
Nov 5 00:13:07.173410 kubelet[2508]: I1105 00:13:07.172480 2508 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 5 00:13:07.174024 kubelet[2508]: I1105 00:13:07.172654 2508 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 5 00:13:07.174945 kubelet[2508]: I1105 00:13:07.174892 2508 reconciler.go:26] "Reconciler: start to sync state"
Nov 5 00:13:07.175490 kubelet[2508]: E1105 00:13:07.175456 2508 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.232.14.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.232.14.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 5 00:13:07.175636 kubelet[2508]: E1105 00:13:07.175589 2508 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.14.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-14-37?timeout=10s\": dial tcp 172.232.14.37:6443: connect: connection refused" interval="200ms"
Nov 5 00:13:07.176145 kubelet[2508]: I1105 00:13:07.176099 2508 factory.go:223] Registration of the systemd container factory successfully
Nov 5 00:13:07.177289 kubelet[2508]: I1105 00:13:07.176210 2508 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 5 00:13:07.179268 kubelet[2508]: E1105 00:13:07.178956 2508 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 5 00:13:07.180636 kubelet[2508]: I1105 00:13:07.180617 2508 factory.go:223] Registration of the containerd container factory successfully
Nov 5 00:13:07.211919 kubelet[2508]: I1105 00:13:07.211867 2508 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 5 00:13:07.211919 kubelet[2508]: I1105 00:13:07.211896 2508 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 5 00:13:07.211919 kubelet[2508]: I1105 00:13:07.211927 2508 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 00:13:07.213943 kubelet[2508]: I1105 00:13:07.213901 2508 policy_none.go:49] "None policy: Start"
Nov 5 00:13:07.214099 kubelet[2508]: I1105 00:13:07.214012 2508 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 5 00:13:07.214167 kubelet[2508]: I1105 00:13:07.214108 2508 state_mem.go:35] "Initializing new in-memory state store"
Nov 5 00:13:07.229709 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 5 00:13:07.246725 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 5 00:13:07.248342 kubelet[2508]: I1105 00:13:07.248071 2508 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 5 00:13:07.251284 kubelet[2508]: I1105 00:13:07.250523 2508 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 5 00:13:07.251284 kubelet[2508]: I1105 00:13:07.250633 2508 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 5 00:13:07.251284 kubelet[2508]: I1105 00:13:07.250695 2508 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 5 00:13:07.251284 kubelet[2508]: I1105 00:13:07.250775 2508 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 5 00:13:07.251284 kubelet[2508]: E1105 00:13:07.250887 2508 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 5 00:13:07.256407 kubelet[2508]: E1105 00:13:07.256325 2508 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.232.14.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.232.14.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 5 00:13:07.273941 kubelet[2508]: E1105 00:13:07.273657 2508 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-232-14-37\" not found"
Nov 5 00:13:07.278650 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 5 00:13:07.290895 kubelet[2508]: E1105 00:13:07.290766 2508 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 5 00:13:07.299136 kubelet[2508]: I1105 00:13:07.299107 2508 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 5 00:13:07.299381 kubelet[2508]: I1105 00:13:07.299187 2508 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 5 00:13:07.311720 kubelet[2508]: I1105 00:13:07.311337 2508 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 5 00:13:07.314187 kubelet[2508]: E1105 00:13:07.314139 2508 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 5 00:13:07.316462 kubelet[2508]: E1105 00:13:07.316432 2508 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-232-14-37\" not found"
Nov 5 00:13:07.373182 systemd[1]: Created slice kubepods-burstable-podbe31bb5abc23afcf3cf06caf8877b58f.slice - libcontainer container kubepods-burstable-podbe31bb5abc23afcf3cf06caf8877b58f.slice.
Nov 5 00:13:07.376296 kubelet[2508]: E1105 00:13:07.376178 2508 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.14.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-14-37?timeout=10s\": dial tcp 172.232.14.37:6443: connect: connection refused" interval="400ms"
Nov 5 00:13:07.378342 kubelet[2508]: I1105 00:13:07.378221 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/47b6ef960b09f1b3c04e5eabc4beaf4b-kubeconfig\") pod \"kube-scheduler-172-232-14-37\" (UID: \"47b6ef960b09f1b3c04e5eabc4beaf4b\") " pod="kube-system/kube-scheduler-172-232-14-37"
Nov 5 00:13:07.378511 kubelet[2508]: I1105 00:13:07.378354 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/be31bb5abc23afcf3cf06caf8877b58f-ca-certs\") pod \"kube-apiserver-172-232-14-37\" (UID: \"be31bb5abc23afcf3cf06caf8877b58f\") " pod="kube-system/kube-apiserver-172-232-14-37"
Nov 5 00:13:07.378511 kubelet[2508]: I1105 00:13:07.378427 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/be31bb5abc23afcf3cf06caf8877b58f-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-14-37\" (UID: \"be31bb5abc23afcf3cf06caf8877b58f\") " pod="kube-system/kube-apiserver-172-232-14-37"
Nov 5 00:13:07.378511 kubelet[2508]: I1105 00:13:07.378498 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf500f8920d2e2f9117bc1886a592fcb-kubeconfig\") pod \"kube-controller-manager-172-232-14-37\" (UID: \"cf500f8920d2e2f9117bc1886a592fcb\") " pod="kube-system/kube-controller-manager-172-232-14-37"
Nov 5 00:13:07.378668 kubelet[2508]: I1105 00:13:07.378526 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf500f8920d2e2f9117bc1886a592fcb-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-14-37\" (UID: \"cf500f8920d2e2f9117bc1886a592fcb\") " pod="kube-system/kube-controller-manager-172-232-14-37"
Nov 5 00:13:07.378668 kubelet[2508]: I1105 00:13:07.378571 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/be31bb5abc23afcf3cf06caf8877b58f-k8s-certs\") pod \"kube-apiserver-172-232-14-37\" (UID: \"be31bb5abc23afcf3cf06caf8877b58f\") " pod="kube-system/kube-apiserver-172-232-14-37"
Nov 5 00:13:07.378668 kubelet[2508]: I1105 00:13:07.378609 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf500f8920d2e2f9117bc1886a592fcb-ca-certs\") pod \"kube-controller-manager-172-232-14-37\" (UID: \"cf500f8920d2e2f9117bc1886a592fcb\") " pod="kube-system/kube-controller-manager-172-232-14-37"
Nov 5 00:13:07.378668 kubelet[2508]: I1105 00:13:07.378628 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cf500f8920d2e2f9117bc1886a592fcb-flexvolume-dir\") pod \"kube-controller-manager-172-232-14-37\" (UID: \"cf500f8920d2e2f9117bc1886a592fcb\") " pod="kube-system/kube-controller-manager-172-232-14-37"
Nov 5 00:13:07.378668 kubelet[2508]: I1105 00:13:07.378662 2508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf500f8920d2e2f9117bc1886a592fcb-k8s-certs\") pod \"kube-controller-manager-172-232-14-37\" (UID: \"cf500f8920d2e2f9117bc1886a592fcb\") " pod="kube-system/kube-controller-manager-172-232-14-37"
Nov 5 00:13:07.381208 kubelet[2508]: E1105 00:13:07.380846 2508 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-14-37\" not found" node="172-232-14-37"
Nov 5 00:13:07.384024 systemd[1]: Created slice kubepods-burstable-podcf500f8920d2e2f9117bc1886a592fcb.slice - libcontainer container kubepods-burstable-podcf500f8920d2e2f9117bc1886a592fcb.slice.
Nov 5 00:13:07.397751 kubelet[2508]: E1105 00:13:07.397679 2508 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-14-37\" not found" node="172-232-14-37"
Nov 5 00:13:07.403370 systemd[1]: Created slice kubepods-burstable-pod47b6ef960b09f1b3c04e5eabc4beaf4b.slice - libcontainer container kubepods-burstable-pod47b6ef960b09f1b3c04e5eabc4beaf4b.slice.
Nov 5 00:13:07.406857 kubelet[2508]: E1105 00:13:07.406799 2508 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-14-37\" not found" node="172-232-14-37"
Nov 5 00:13:07.412504 kubelet[2508]: I1105 00:13:07.412460 2508 kubelet_node_status.go:75] "Attempting to register node" node="172-232-14-37"
Nov 5 00:13:07.413142 kubelet[2508]: E1105 00:13:07.413091 2508 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.14.37:6443/api/v1/nodes\": dial tcp 172.232.14.37:6443: connect: connection refused" node="172-232-14-37"
Nov 5 00:13:07.616024 kubelet[2508]: I1105 00:13:07.615979 2508 kubelet_node_status.go:75] "Attempting to register node" node="172-232-14-37"
Nov 5 00:13:07.616495 kubelet[2508]: E1105 00:13:07.616405 2508 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.14.37:6443/api/v1/nodes\": dial tcp 172.232.14.37:6443: connect: connection refused" node="172-232-14-37"
Nov 5 00:13:07.682180 kubelet[2508]: E1105 00:13:07.682113 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Nov 5 00:13:07.684199 containerd[1625]: time="2025-11-05T00:13:07.683947976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-14-37,Uid:be31bb5abc23afcf3cf06caf8877b58f,Namespace:kube-system,Attempt:0,}"
Nov 5 00:13:07.699135 kubelet[2508]: E1105 00:13:07.699079 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Nov 5 00:13:07.700512 containerd[1625]: time="2025-11-05T00:13:07.700084376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-14-37,Uid:cf500f8920d2e2f9117bc1886a592fcb,Namespace:kube-system,Attempt:0,}"
Nov 5 00:13:07.708928 kubelet[2508]: E1105 00:13:07.708887 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Nov 5 00:13:07.736768 containerd[1625]: time="2025-11-05T00:13:07.724434306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-14-37,Uid:47b6ef960b09f1b3c04e5eabc4beaf4b,Namespace:kube-system,Attempt:0,}"
Nov 5 00:13:07.879587 containerd[1625]: time="2025-11-05T00:13:07.877532426Z" level=info msg="connecting to shim 73e699e1d796f2711ba5b88dfcaa117e8ee6c618866fac90e84534f02d818cbb" address="unix:///run/containerd/s/b5717248b23b7a8c0577085d790a35c0c650a8d4867c1159e81243e2fdc48c14" namespace=k8s.io protocol=ttrpc version=3
Nov 5 00:13:07.879746 kubelet[2508]: E1105 00:13:07.879385 2508 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.14.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-14-37?timeout=10s\": dial tcp 172.232.14.37:6443: connect: connection refused" interval="800ms"
Nov 5 00:13:07.898256 containerd[1625]: time="2025-11-05T00:13:07.898142286Z" level=info msg="connecting to shim 1495cd71385376b0c242a5637680a9c4bfaf68cb72595fdcb823f4ee912bda94" address="unix:///run/containerd/s/6c51443496ab1b093051f5d21513c1d2e1a1bef98720d25d8340596f8d8880c1" namespace=k8s.io protocol=ttrpc version=3
Nov 5 00:13:07.934843 containerd[1625]: time="2025-11-05T00:13:07.934785996Z" level=info msg="connecting to shim 66046e9cbfcb473585d2c78fba86a00f834d05fbd590cefa9ab64c7037b5a757" address="unix:///run/containerd/s/02b11a7db6f580a464603afc98575f39258912454de1a2b47d57234b239116b1" namespace=k8s.io protocol=ttrpc version=3
Nov 5 00:13:08.020723 kubelet[2508]: I1105 00:13:08.020670 2508 kubelet_node_status.go:75] "Attempting to register node" node="172-232-14-37"
Nov 5 00:13:08.021460 kubelet[2508]: E1105 00:13:08.021412 2508 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.14.37:6443/api/v1/nodes\": dial tcp 172.232.14.37:6443: connect: connection refused" node="172-232-14-37"
Nov 5 00:13:08.027101 systemd[1]: Started cri-containerd-1495cd71385376b0c242a5637680a9c4bfaf68cb72595fdcb823f4ee912bda94.scope - libcontainer container 1495cd71385376b0c242a5637680a9c4bfaf68cb72595fdcb823f4ee912bda94.
Nov 5 00:13:08.037758 systemd[1]: Started cri-containerd-73e699e1d796f2711ba5b88dfcaa117e8ee6c618866fac90e84534f02d818cbb.scope - libcontainer container 73e699e1d796f2711ba5b88dfcaa117e8ee6c618866fac90e84534f02d818cbb.
Nov 5 00:13:08.073424 kubelet[2508]: E1105 00:13:08.073366 2508 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.232.14.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.232.14.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 5 00:13:08.111700 kubelet[2508]: E1105 00:13:08.111549 2508 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.232.14.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.232.14.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 5 00:13:08.552854 kubelet[2508]: E1105 00:13:08.552001 2508 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.232.14.37:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-14-37&limit=500&resourceVersion=0\": dial tcp 172.232.14.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 5 00:13:08.582797 systemd[1]: Started cri-containerd-66046e9cbfcb473585d2c78fba86a00f834d05fbd590cefa9ab64c7037b5a757.scope - libcontainer container 66046e9cbfcb473585d2c78fba86a00f834d05fbd590cefa9ab64c7037b5a757.
Nov 5 00:13:08.603617 containerd[1625]: time="2025-11-05T00:13:08.603465943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-14-37,Uid:be31bb5abc23afcf3cf06caf8877b58f,Namespace:kube-system,Attempt:0,} returns sandbox id \"73e699e1d796f2711ba5b88dfcaa117e8ee6c618866fac90e84534f02d818cbb\""
Nov 5 00:13:08.609779 kubelet[2508]: E1105 00:13:08.609640 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Nov 5 00:13:08.631272 containerd[1625]: time="2025-11-05T00:13:08.630901015Z" level=info msg="CreateContainer within sandbox \"73e699e1d796f2711ba5b88dfcaa117e8ee6c618866fac90e84534f02d818cbb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 5 00:13:08.655409 containerd[1625]: time="2025-11-05T00:13:08.655343513Z" level=info msg="Container 64dbadfa35a6cab4642f4a9aca966a4992aab3835cbae17808b7f17e1a0bc8af: CDI devices from CRI Config.CDIDevices: []"
Nov 5 00:13:08.671297 containerd[1625]: time="2025-11-05T00:13:08.669859204Z" level=info msg="CreateContainer within sandbox \"73e699e1d796f2711ba5b88dfcaa117e8ee6c618866fac90e84534f02d818cbb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"64dbadfa35a6cab4642f4a9aca966a4992aab3835cbae17808b7f17e1a0bc8af\""
Nov 5 00:13:08.683190 kubelet[2508]: E1105 00:13:08.683120 2508 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.14.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-14-37?timeout=10s\": dial tcp 172.232.14.37:6443: connect: connection refused" interval="1.6s"
Nov 5 00:13:08.684274 containerd[1625]: time="2025-11-05T00:13:08.683662309Z" level=info msg="StartContainer for \"64dbadfa35a6cab4642f4a9aca966a4992aab3835cbae17808b7f17e1a0bc8af\""
Nov 5 00:13:08.689510 containerd[1625]: time="2025-11-05T00:13:08.689468039Z" level=info msg="connecting to shim 64dbadfa35a6cab4642f4a9aca966a4992aab3835cbae17808b7f17e1a0bc8af" address="unix:///run/containerd/s/b5717248b23b7a8c0577085d790a35c0c650a8d4867c1159e81243e2fdc48c14" protocol=ttrpc version=3
Nov 5 00:13:08.765284 kubelet[2508]: E1105 00:13:08.765075 2508 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.232.14.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.232.14.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 5 00:13:08.776595 systemd[1]: Started cri-containerd-64dbadfa35a6cab4642f4a9aca966a4992aab3835cbae17808b7f17e1a0bc8af.scope - libcontainer container 64dbadfa35a6cab4642f4a9aca966a4992aab3835cbae17808b7f17e1a0bc8af.
Nov 5 00:13:08.783247 containerd[1625]: time="2025-11-05T00:13:08.783165428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-14-37,Uid:cf500f8920d2e2f9117bc1886a592fcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"1495cd71385376b0c242a5637680a9c4bfaf68cb72595fdcb823f4ee912bda94\""
Nov 5 00:13:08.784843 kubelet[2508]: E1105 00:13:08.784621 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Nov 5 00:13:08.808768 containerd[1625]: time="2025-11-05T00:13:08.806214733Z" level=info msg="CreateContainer within sandbox \"1495cd71385376b0c242a5637680a9c4bfaf68cb72595fdcb823f4ee912bda94\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 5 00:13:08.827689 containerd[1625]: time="2025-11-05T00:13:08.827599366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-14-37,Uid:47b6ef960b09f1b3c04e5eabc4beaf4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"66046e9cbfcb473585d2c78fba86a00f834d05fbd590cefa9ab64c7037b5a757\""
Nov 5 00:13:08.833937 kubelet[2508]: I1105 00:13:08.833529 2508 kubelet_node_status.go:75] "Attempting to register node" node="172-232-14-37"
Nov 5 00:13:08.834036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3212204875.mount: Deactivated successfully.
Nov 5 00:13:08.834476 kubelet[2508]: E1105 00:13:08.834449 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Nov 5 00:13:08.838268 kubelet[2508]: E1105 00:13:08.836630 2508 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.14.37:6443/api/v1/nodes\": dial tcp 172.232.14.37:6443: connect: connection refused" node="172-232-14-37"
Nov 5 00:13:08.846052 containerd[1625]: time="2025-11-05T00:13:08.845802837Z" level=info msg="Container 2707ebe438204ac0c5c9f3157c6978d2e6e4285bee79cca27e54bd637ae9255b: CDI devices from CRI Config.CDIDevices: []"
Nov 5 00:13:08.848177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1079045520.mount: Deactivated successfully.
Nov 5 00:13:08.849831 containerd[1625]: time="2025-11-05T00:13:08.847861075Z" level=info msg="CreateContainer within sandbox \"66046e9cbfcb473585d2c78fba86a00f834d05fbd590cefa9ab64c7037b5a757\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 5 00:13:08.863905 containerd[1625]: time="2025-11-05T00:13:08.863830572Z" level=info msg="CreateContainer within sandbox \"1495cd71385376b0c242a5637680a9c4bfaf68cb72595fdcb823f4ee912bda94\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2707ebe438204ac0c5c9f3157c6978d2e6e4285bee79cca27e54bd637ae9255b\""
Nov 5 00:13:08.867172 containerd[1625]: time="2025-11-05T00:13:08.865572558Z" level=info msg="StartContainer for \"2707ebe438204ac0c5c9f3157c6978d2e6e4285bee79cca27e54bd637ae9255b\""
Nov 5 00:13:08.867172 containerd[1625]: time="2025-11-05T00:13:08.867098386Z" level=info msg="connecting to shim 2707ebe438204ac0c5c9f3157c6978d2e6e4285bee79cca27e54bd637ae9255b" address="unix:///run/containerd/s/6c51443496ab1b093051f5d21513c1d2e1a1bef98720d25d8340596f8d8880c1" protocol=ttrpc version=3
Nov 5 00:13:08.869048 containerd[1625]: time="2025-11-05T00:13:08.869021369Z" level=info msg="Container 709760668bcc48903e47d5a551d79bd26039edbbe22c279c4967a53768a9856c: CDI devices from CRI Config.CDIDevices: []"
Nov 5 00:13:08.883262 containerd[1625]: time="2025-11-05T00:13:08.883188127Z" level=info msg="CreateContainer within sandbox \"66046e9cbfcb473585d2c78fba86a00f834d05fbd590cefa9ab64c7037b5a757\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"709760668bcc48903e47d5a551d79bd26039edbbe22c279c4967a53768a9856c\""
Nov 5 00:13:08.884262 containerd[1625]: time="2025-11-05T00:13:08.884182815Z" level=info msg="StartContainer for \"709760668bcc48903e47d5a551d79bd26039edbbe22c279c4967a53768a9856c\""
Nov 5 00:13:08.885906 containerd[1625]: time="2025-11-05T00:13:08.885875859Z" level=info msg="connecting to shim 709760668bcc48903e47d5a551d79bd26039edbbe22c279c4967a53768a9856c" address="unix:///run/containerd/s/02b11a7db6f580a464603afc98575f39258912454de1a2b47d57234b239116b1" protocol=ttrpc version=3
Nov 5 00:13:08.954585 systemd[1]: Started cri-containerd-2707ebe438204ac0c5c9f3157c6978d2e6e4285bee79cca27e54bd637ae9255b.scope - libcontainer container 2707ebe438204ac0c5c9f3157c6978d2e6e4285bee79cca27e54bd637ae9255b.
Nov 5 00:13:08.979462 systemd[1]: Started cri-containerd-709760668bcc48903e47d5a551d79bd26039edbbe22c279c4967a53768a9856c.scope - libcontainer container 709760668bcc48903e47d5a551d79bd26039edbbe22c279c4967a53768a9856c.
Nov 5 00:13:08.988360 containerd[1625]: time="2025-11-05T00:13:08.988284289Z" level=info msg="StartContainer for \"64dbadfa35a6cab4642f4a9aca966a4992aab3835cbae17808b7f17e1a0bc8af\" returns successfully"
Nov 5 00:13:09.175117 kubelet[2508]: E1105 00:13:09.174925 2508 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.232.14.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.232.14.37:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 5 00:13:09.208521 containerd[1625]: time="2025-11-05T00:13:09.208461880Z" level=info msg="StartContainer for \"2707ebe438204ac0c5c9f3157c6978d2e6e4285bee79cca27e54bd637ae9255b\" returns successfully"
Nov 5 00:13:09.212840 containerd[1625]: time="2025-11-05T00:13:09.212778383Z" level=info msg="StartContainer for \"709760668bcc48903e47d5a551d79bd26039edbbe22c279c4967a53768a9856c\" returns successfully"
Nov 5 00:13:09.282278 kubelet[2508]: E1105 00:13:09.281653 2508 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-14-37\" not found" node="172-232-14-37"
Nov 5 00:13:09.285310 kubelet[2508]: E1105 00:13:09.285286 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Nov 5 00:13:09.288850 kubelet[2508]: E1105 00:13:09.288799 2508 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-14-37\" not found" node="172-232-14-37"
Nov 5 00:13:09.289078 kubelet[2508]: E1105 00:13:09.288954 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Nov 5 00:13:09.292011 kubelet[2508]: E1105 00:13:09.291988 2508 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-14-37\" not found" node="172-232-14-37"
Nov 5 00:13:09.292149 kubelet[2508]: E1105 00:13:09.292137 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Nov 5 00:13:10.353302 kubelet[2508]: E1105 00:13:10.352666 2508 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-14-37\" not found" node="172-232-14-37"
Nov 5 00:13:10.357267 kubelet[2508]: E1105 00:13:10.355534 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Nov 5 00:13:10.360212 kubelet[2508]: E1105 00:13:10.360187 2508 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-14-37\" not found" node="172-232-14-37"
Nov 5 00:13:10.360512 kubelet[2508]: E1105 00:13:10.360493 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Nov 5 00:13:10.442304 kubelet[2508]: I1105 00:13:10.442113 2508 kubelet_node_status.go:75] "Attempting to register node" node="172-232-14-37"
Nov 5 00:13:12.641583 kubelet[2508]: E1105 00:13:12.641304 2508 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-232-14-37\" not found" node="172-232-14-37"
Nov 5 00:13:12.756970 kubelet[2508]: E1105 00:13:12.756513 2508 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{172-232-14-37.1874f3f4d91d5620 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-232-14-37,UID:172-232-14-37,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-232-14-37,},FirstTimestamp:2025-11-05 00:13:07.148854816 +0000 UTC m=+0.439811661,LastTimestamp:2025-11-05 00:13:07.148854816 +0000 UTC m=+0.439811661,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-232-14-37,}"
Nov 5 00:13:12.821614 kubelet[2508]: I1105 00:13:12.821375 2508 kubelet_node_status.go:78] "Successfully registered node" node="172-232-14-37"
Nov 5 00:13:12.821614 kubelet[2508]: E1105 00:13:12.821440 2508 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-232-14-37\": node \"172-232-14-37\" not found"
Nov 5 00:13:12.873714 kubelet[2508]: I1105 00:13:12.873648 2508 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-14-37"
Nov 5 00:13:12.978383 kubelet[2508]: E1105 00:13:12.976632 2508 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-232-14-37\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-232-14-37"
Nov 5 00:13:12.978383 kubelet[2508]: I1105 00:13:12.976672 2508 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-232-14-37"
Nov 5 00:13:12.980885 kubelet[2508]: E1105 00:13:12.980861 2508 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-232-14-37\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-232-14-37"
Nov 5 00:13:12.981611 kubelet[2508]: I1105 00:13:12.981402 2508 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-14-37"
Nov 5 00:13:12.984383 kubelet[2508]: E1105 00:13:12.984354 2508 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-232-14-37\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-232-14-37"
Nov 5 00:13:13.283020 kubelet[2508]: I1105 00:13:13.282161 2508 apiserver.go:52] "Watching apiserver"
Nov 5 00:13:13.374640 kubelet[2508]: I1105 00:13:13.374553 2508 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 5 00:13:14.801882 systemd[1]: Reload requested from client PID 2788 ('systemctl') (unit session-9.scope)...
Nov 5 00:13:14.801990 systemd[1]: Reloading...
Nov 5 00:13:15.092290 zram_generator::config[2835]: No configuration found.
Nov 5 00:13:15.436052 systemd[1]: Reloading finished in 633 ms.
Nov 5 00:13:15.479865 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 00:13:15.507497 systemd[1]: kubelet.service: Deactivated successfully.
Nov 5 00:13:15.508632 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 00:13:15.508816 systemd[1]: kubelet.service: Consumed 1.500s CPU time, 130.9M memory peak.
Nov 5 00:13:15.516370 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 00:13:16.211380 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 00:13:16.233899 (kubelet)[2883]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 5 00:13:16.397296 kubelet[2883]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 00:13:16.397296 kubelet[2883]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 5 00:13:16.397296 kubelet[2883]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 00:13:16.397296 kubelet[2883]: I1105 00:13:16.397042 2883 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 5 00:13:16.418058 kubelet[2883]: I1105 00:13:16.417899 2883 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 5 00:13:16.418058 kubelet[2883]: I1105 00:13:16.418003 2883 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 5 00:13:16.418999 kubelet[2883]: I1105 00:13:16.418911 2883 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 5 00:13:16.424491 kubelet[2883]: I1105 00:13:16.424060 2883 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Nov 5 00:13:16.431853 kubelet[2883]: I1105 00:13:16.430546 2883 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 5 00:13:16.456462 kubelet[2883]: I1105 00:13:16.456399 2883 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 5 00:13:16.480867 kubelet[2883]: I1105 00:13:16.480727 2883 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 5 00:13:16.483290 kubelet[2883]: I1105 00:13:16.482697 2883 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 5 00:13:16.483290 kubelet[2883]: I1105 00:13:16.482754 2883 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-14-37","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 5 00:13:16.483290 kubelet[2883]: I1105 00:13:16.483165 2883 topology_manager.go:138] "Creating topology manager with none policy"
Nov 5 00:13:16.483290 kubelet[2883]: I1105 00:13:16.483190 2883 container_manager_linux.go:303] "Creating device plugin manager"
Nov 5 00:13:16.487290 kubelet[2883]: I1105 00:13:16.484254 2883 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 00:13:16.487290 kubelet[2883]: I1105 00:13:16.484966 2883 kubelet.go:480] "Attempting to sync node with API server"
Nov 5 00:13:16.487290 kubelet[2883]: I1105 00:13:16.485010 2883 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 5 00:13:16.487290 kubelet[2883]: I1105 00:13:16.485132 2883 kubelet.go:386] "Adding apiserver pod source"
Nov 5 00:13:16.487607 kubelet[2883]: I1105 00:13:16.485220 2883 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 5 00:13:16.494063 kubelet[2883]: I1105 00:13:16.494016 2883 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 5 00:13:16.495309 kubelet[2883]: I1105 00:13:16.494787 2883 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 5 00:13:16.515488 kubelet[2883]: I1105 00:13:16.515399 2883 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 5 00:13:16.515627 kubelet[2883]: I1105 00:13:16.515543 2883 server.go:1289] "Started kubelet"
Nov 5 00:13:16.527293 kubelet[2883]: I1105 00:13:16.526801 2883 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 5 00:13:16.534262 kubelet[2883]: I1105 00:13:16.532523 2883 server.go:317] "Adding debug handlers to kubelet server"
Nov 5 00:13:16.547040 kubelet[2883]: I1105 00:13:16.546718 2883 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 5 00:13:16.547638 kubelet[2883]: I1105 00:13:16.532952 2883 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 5 00:13:16.561554 kubelet[2883]: I1105 00:13:16.560012 2883 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 5 00:13:16.577470 kubelet[2883]: I1105 00:13:16.569206 2883 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 5 00:13:16.577933 kubelet[2883]: I1105 00:13:16.569836 2883 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 5 00:13:16.579145 kubelet[2883]: E1105 00:13:16.570354 2883 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-232-14-37\" not found"
Nov 5 00:13:16.579145 kubelet[2883]: I1105 00:13:16.571429 2883 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 5 00:13:16.579145 kubelet[2883]: I1105 00:13:16.578569 2883 reconciler.go:26] "Reconciler: start to sync state"
Nov 5 00:13:16.580198 kubelet[2883]: I1105 00:13:16.580127 2883 factory.go:223] Registration of the systemd container factory successfully
Nov 5 00:13:16.589873 kubelet[2883]: E1105 00:13:16.589487 2883 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 5 00:13:16.590532 kubelet[2883]: I1105 00:13:16.590222 2883 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 5 00:13:16.595168 kubelet[2883]: I1105 00:13:16.595087 2883 factory.go:223] Registration of the containerd container factory successfully
Nov 5 00:13:16.671093 kubelet[2883]: I1105 00:13:16.671005 2883 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 5 00:13:16.690424 kubelet[2883]: I1105 00:13:16.688851 2883 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 5 00:13:16.690424 kubelet[2883]: I1105 00:13:16.688925 2883 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 5 00:13:16.690424 kubelet[2883]: I1105 00:13:16.688977 2883 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 5 00:13:16.690424 kubelet[2883]: I1105 00:13:16.688999 2883 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 5 00:13:16.690424 kubelet[2883]: E1105 00:13:16.689110 2883 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 5 00:13:16.790360 kubelet[2883]: E1105 00:13:16.790198 2883 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 5 00:13:16.799316 kubelet[2883]: I1105 00:13:16.798906 2883 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 5 00:13:16.799316 kubelet[2883]: I1105 00:13:16.798925 2883 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 5 00:13:16.799316 kubelet[2883]: I1105 00:13:16.798958 2883 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 00:13:16.799316 kubelet[2883]: I1105 00:13:16.799145 2883 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 5 00:13:16.799316 kubelet[2883]: I1105 00:13:16.799175 2883 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 5 00:13:16.800005 kubelet[2883]: I1105 00:13:16.799632 2883 policy_none.go:49] "None policy: Start"
Nov 5 00:13:16.800005 kubelet[2883]: I1105 00:13:16.799698 2883 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 5 00:13:16.800005 kubelet[2883]: I1105 00:13:16.799752 2883 state_mem.go:35] "Initializing new in-memory state store"
Nov 5 00:13:16.800005 kubelet[2883]: I1105 00:13:16.799898 2883 state_mem.go:75] "Updated machine memory state"
Nov 5 00:13:16.811068 kubelet[2883]: E1105 00:13:16.810189 2883 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 5 00:13:16.811068 kubelet[2883]: I1105 00:13:16.810526 2883 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 5 00:13:16.811068 kubelet[2883]: I1105 00:13:16.810567 2883 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 5 00:13:16.814051 kubelet[2883]: I1105 00:13:16.813361 2883 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 5 00:13:16.817761 kubelet[2883]: E1105 00:13:16.817738 2883 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 5 00:13:16.988468 kubelet[2883]: I1105 00:13:16.988337 2883 kubelet_node_status.go:75] "Attempting to register node" node="172-232-14-37"
Nov 5 00:13:16.994523 kubelet[2883]: I1105 00:13:16.994484 2883 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-232-14-37"
Nov 5 00:13:16.995056 kubelet[2883]: I1105 00:13:16.995027 2883 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-14-37"
Nov 5 00:13:16.995469 kubelet[2883]: I1105 00:13:16.994882 2883 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-14-37"
Nov 5 00:13:17.043042 kubelet[2883]: I1105 00:13:17.042530 2883 kubelet_node_status.go:124] "Node was previously registered" node="172-232-14-37"
Nov 5 00:13:17.043042 kubelet[2883]: I1105 00:13:17.042683 2883 kubelet_node_status.go:78] "Successfully registered node" node="172-232-14-37"
Nov 5 00:13:17.083716 kubelet[2883]: I1105 00:13:17.083635 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf500f8920d2e2f9117bc1886a592fcb-kubeconfig\") pod \"kube-controller-manager-172-232-14-37\" (UID: \"cf500f8920d2e2f9117bc1886a592fcb\") " pod="kube-system/kube-controller-manager-172-232-14-37"
Nov 5 00:13:17.085672 kubelet[2883]: I1105 00:13:17.085495 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/47b6ef960b09f1b3c04e5eabc4beaf4b-kubeconfig\") pod \"kube-scheduler-172-232-14-37\" (UID: \"47b6ef960b09f1b3c04e5eabc4beaf4b\") " pod="kube-system/kube-scheduler-172-232-14-37"
Nov 5 00:13:17.086593 kubelet[2883]: I1105 00:13:17.086488 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/be31bb5abc23afcf3cf06caf8877b58f-ca-certs\") pod \"kube-apiserver-172-232-14-37\" (UID: \"be31bb5abc23afcf3cf06caf8877b58f\") " pod="kube-system/kube-apiserver-172-232-14-37"
Nov 5 00:13:17.089269 kubelet[2883]: I1105 00:13:17.088746 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/be31bb5abc23afcf3cf06caf8877b58f-k8s-certs\") pod \"kube-apiserver-172-232-14-37\" (UID: \"be31bb5abc23afcf3cf06caf8877b58f\") " pod="kube-system/kube-apiserver-172-232-14-37"
Nov 5 00:13:17.089269 kubelet[2883]: I1105 00:13:17.088780 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf500f8920d2e2f9117bc1886a592fcb-ca-certs\") pod \"kube-controller-manager-172-232-14-37\" (UID: \"cf500f8920d2e2f9117bc1886a592fcb\") " pod="kube-system/kube-controller-manager-172-232-14-37"
Nov 5 00:13:17.089269 kubelet[2883]: I1105 00:13:17.088797 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cf500f8920d2e2f9117bc1886a592fcb-flexvolume-dir\") pod \"kube-controller-manager-172-232-14-37\" (UID: \"cf500f8920d2e2f9117bc1886a592fcb\") " pod="kube-system/kube-controller-manager-172-232-14-37"
Nov 5 00:13:17.089269 kubelet[2883]: I1105 00:13:17.088816 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf500f8920d2e2f9117bc1886a592fcb-k8s-certs\") pod \"kube-controller-manager-172-232-14-37\" (UID: \"cf500f8920d2e2f9117bc1886a592fcb\") " pod="kube-system/kube-controller-manager-172-232-14-37"
Nov 5 00:13:17.089269 kubelet[2883]: I1105 00:13:17.088831 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf500f8920d2e2f9117bc1886a592fcb-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-14-37\" (UID: \"cf500f8920d2e2f9117bc1886a592fcb\") " pod="kube-system/kube-controller-manager-172-232-14-37"
Nov 5 00:13:17.089610 kubelet[2883]: I1105 00:13:17.088931 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/be31bb5abc23afcf3cf06caf8877b58f-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-14-37\" (UID: \"be31bb5abc23afcf3cf06caf8877b58f\") " pod="kube-system/kube-apiserver-172-232-14-37"
Nov 5 00:13:17.337957 kubelet[2883]: E1105 00:13:17.337767 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Nov 5 00:13:17.340860 kubelet[2883]: E1105 00:13:17.340752 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Nov 5 00:13:17.343019 kubelet[2883]: E1105 00:13:17.342820 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Nov 5 00:13:17.507275 kubelet[2883]: I1105 00:13:17.506686 2883 apiserver.go:52] "Watching apiserver"
Nov 5 00:13:17.578981 kubelet[2883]: I1105 00:13:17.578904 2883 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 5 00:13:17.785119 kubelet[2883]: E1105 00:13:17.784976 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Nov 5 00:13:17.786320 kubelet[2883]: I1105 00:13:17.785983 2883 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-14-37"
Nov 5 00:13:17.789159 kubelet[2883]: E1105 00:13:17.788782 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Nov 5 00:13:17.832760 kubelet[2883]: E1105 00:13:17.832426 2883 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-232-14-37\" already exists" pod="kube-system/kube-apiserver-172-232-14-37"
Nov 5 00:13:17.832760 kubelet[2883]: E1105 00:13:17.832651 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Nov 5 00:13:18.079636 kubelet[2883]: I1105 00:13:18.078593 2883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-232-14-37" podStartSLOduration=1.07849178 podStartE2EDuration="1.07849178s" podCreationTimestamp="2025-11-05 00:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 00:13:18.050503723 +0000 UTC m=+1.779846379" watchObservedRunningTime="2025-11-05 00:13:18.07849178 +0000 UTC m=+1.807834436"
Nov 5 00:13:18.105671 kubelet[2883]: I1105 00:13:18.105174 2883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-232-14-37" podStartSLOduration=1.10512058 podStartE2EDuration="1.10512058s" podCreationTimestamp="2025-11-05 00:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 00:13:18.081459879 +0000 UTC m=+1.810802525" watchObservedRunningTime="2025-11-05 00:13:18.10512058 +0000 UTC m=+1.834463236"
Nov 5 00:13:18.168734 kubelet[2883]: I1105 00:13:18.168644 2883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-232-14-37" podStartSLOduration=1.168589065 podStartE2EDuration="1.168589065s" podCreationTimestamp="2025-11-05 00:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 00:13:18.108190452 +0000 UTC m=+1.837533108" watchObservedRunningTime="2025-11-05 00:13:18.168589065 +0000 UTC m=+1.897931721"
Nov 5 00:13:18.791805 kubelet[2883]: E1105 00:13:18.791757 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Nov 5 00:13:18.792616 kubelet[2883]: E1105 00:13:18.792458 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Nov 5 00:13:20.896904 kubelet[2883]: I1105 00:13:20.896625 2883 kuberuntime_manager.go:1746] "Updating runtime
config through cri with podcidr" CIDR="192.168.0.0/24" Nov 5 00:13:20.900753 kubelet[2883]: I1105 00:13:20.898052 2883 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 5 00:13:20.900829 containerd[1625]: time="2025-11-05T00:13:20.897577811Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 5 00:13:21.526042 systemd[1]: Created slice kubepods-besteffort-podf8cbc946_0268_4092_8d19_7ce2088da700.slice - libcontainer container kubepods-besteffort-podf8cbc946_0268_4092_8d19_7ce2088da700.slice. Nov 5 00:13:21.569791 kubelet[2883]: I1105 00:13:21.569736 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8cbc946-0268-4092-8d19-7ce2088da700-lib-modules\") pod \"kube-proxy-hcqqj\" (UID: \"f8cbc946-0268-4092-8d19-7ce2088da700\") " pod="kube-system/kube-proxy-hcqqj" Nov 5 00:13:21.570258 kubelet[2883]: I1105 00:13:21.570151 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzh7h\" (UniqueName: \"kubernetes.io/projected/f8cbc946-0268-4092-8d19-7ce2088da700-kube-api-access-wzh7h\") pod \"kube-proxy-hcqqj\" (UID: \"f8cbc946-0268-4092-8d19-7ce2088da700\") " pod="kube-system/kube-proxy-hcqqj" Nov 5 00:13:21.570400 kubelet[2883]: I1105 00:13:21.570365 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f8cbc946-0268-4092-8d19-7ce2088da700-kube-proxy\") pod \"kube-proxy-hcqqj\" (UID: \"f8cbc946-0268-4092-8d19-7ce2088da700\") " pod="kube-system/kube-proxy-hcqqj" Nov 5 00:13:21.570531 kubelet[2883]: I1105 00:13:21.570498 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/f8cbc946-0268-4092-8d19-7ce2088da700-xtables-lock\") pod \"kube-proxy-hcqqj\" (UID: \"f8cbc946-0268-4092-8d19-7ce2088da700\") " pod="kube-system/kube-proxy-hcqqj" Nov 5 00:13:21.682118 kubelet[2883]: E1105 00:13:21.682018 2883 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 5 00:13:21.682118 kubelet[2883]: E1105 00:13:21.682105 2883 projected.go:194] Error preparing data for projected volume kube-api-access-wzh7h for pod kube-system/kube-proxy-hcqqj: configmap "kube-root-ca.crt" not found Nov 5 00:13:21.683366 kubelet[2883]: E1105 00:13:21.682222 2883 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f8cbc946-0268-4092-8d19-7ce2088da700-kube-api-access-wzh7h podName:f8cbc946-0268-4092-8d19-7ce2088da700 nodeName:}" failed. No retries permitted until 2025-11-05 00:13:22.182196683 +0000 UTC m=+5.911539319 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wzh7h" (UniqueName: "kubernetes.io/projected/f8cbc946-0268-4092-8d19-7ce2088da700-kube-api-access-wzh7h") pod "kube-proxy-hcqqj" (UID: "f8cbc946-0268-4092-8d19-7ce2088da700") : configmap "kube-root-ca.crt" not found Nov 5 00:13:22.156043 systemd[1]: Created slice kubepods-besteffort-podcc5591aa_3d5b_4cf3_ab3a_4c27e1bd2e68.slice - libcontainer container kubepods-besteffort-podcc5591aa_3d5b_4cf3_ab3a_4c27e1bd2e68.slice. 
Nov 5 00:13:22.277082 kubelet[2883]: I1105 00:13:22.276959 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cc5591aa-3d5b-4cf3-ab3a-4c27e1bd2e68-var-lib-calico\") pod \"tigera-operator-7dcd859c48-g9qgn\" (UID: \"cc5591aa-3d5b-4cf3-ab3a-4c27e1bd2e68\") " pod="tigera-operator/tigera-operator-7dcd859c48-g9qgn" Nov 5 00:13:22.278350 kubelet[2883]: I1105 00:13:22.278317 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4nv6\" (UniqueName: \"kubernetes.io/projected/cc5591aa-3d5b-4cf3-ab3a-4c27e1bd2e68-kube-api-access-p4nv6\") pod \"tigera-operator-7dcd859c48-g9qgn\" (UID: \"cc5591aa-3d5b-4cf3-ab3a-4c27e1bd2e68\") " pod="tigera-operator/tigera-operator-7dcd859c48-g9qgn" Nov 5 00:13:22.440979 kubelet[2883]: E1105 00:13:22.440802 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:13:22.444415 containerd[1625]: time="2025-11-05T00:13:22.444129186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hcqqj,Uid:f8cbc946-0268-4092-8d19-7ce2088da700,Namespace:kube-system,Attempt:0,}" Nov 5 00:13:22.467257 containerd[1625]: time="2025-11-05T00:13:22.466598251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-g9qgn,Uid:cc5591aa-3d5b-4cf3-ab3a-4c27e1bd2e68,Namespace:tigera-operator,Attempt:0,}" Nov 5 00:13:22.494522 containerd[1625]: time="2025-11-05T00:13:22.494455100Z" level=info msg="connecting to shim e617c71d5fb29dab0e7a5d0fd62796218035cca609b6c39e7ac595588ca66d8b" address="unix:///run/containerd/s/2b79ab877419e7a081b85c06caeb0b1d7081c589c9285c6e8fdb883faead98c6" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:13:22.569436 containerd[1625]: time="2025-11-05T00:13:22.569313461Z" level=info 
msg="connecting to shim d6c5b43e7d328c063b1c6e6bf1ac36ba6cc1fa0b6299b06fa5c7b7659c067402" address="unix:///run/containerd/s/2cb4d302c497094786dfa26bc6630a9bf72139d4e4191c6d7611ca5e037ad4de" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:13:22.638603 systemd[1]: Started cri-containerd-d6c5b43e7d328c063b1c6e6bf1ac36ba6cc1fa0b6299b06fa5c7b7659c067402.scope - libcontainer container d6c5b43e7d328c063b1c6e6bf1ac36ba6cc1fa0b6299b06fa5c7b7659c067402. Nov 5 00:13:22.654017 systemd[1]: Started cri-containerd-e617c71d5fb29dab0e7a5d0fd62796218035cca609b6c39e7ac595588ca66d8b.scope - libcontainer container e617c71d5fb29dab0e7a5d0fd62796218035cca609b6c39e7ac595588ca66d8b. Nov 5 00:13:22.784476 containerd[1625]: time="2025-11-05T00:13:22.784287820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hcqqj,Uid:f8cbc946-0268-4092-8d19-7ce2088da700,Namespace:kube-system,Attempt:0,} returns sandbox id \"e617c71d5fb29dab0e7a5d0fd62796218035cca609b6c39e7ac595588ca66d8b\"" Nov 5 00:13:22.786421 kubelet[2883]: E1105 00:13:22.786377 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:13:22.795502 containerd[1625]: time="2025-11-05T00:13:22.795372681Z" level=info msg="CreateContainer within sandbox \"e617c71d5fb29dab0e7a5d0fd62796218035cca609b6c39e7ac595588ca66d8b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 5 00:13:22.821322 containerd[1625]: time="2025-11-05T00:13:22.821265749Z" level=info msg="Container 089f6ab07948a4832548fdc10d000ec7ffb3a2e5ab3cf28319a746c12c874d04: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:13:22.847118 containerd[1625]: time="2025-11-05T00:13:22.847030515Z" level=info msg="CreateContainer within sandbox \"e617c71d5fb29dab0e7a5d0fd62796218035cca609b6c39e7ac595588ca66d8b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"089f6ab07948a4832548fdc10d000ec7ffb3a2e5ab3cf28319a746c12c874d04\"" Nov 5 00:13:22.848300 containerd[1625]: time="2025-11-05T00:13:22.848200093Z" level=info msg="StartContainer for \"089f6ab07948a4832548fdc10d000ec7ffb3a2e5ab3cf28319a746c12c874d04\"" Nov 5 00:13:22.853461 containerd[1625]: time="2025-11-05T00:13:22.853054068Z" level=info msg="connecting to shim 089f6ab07948a4832548fdc10d000ec7ffb3a2e5ab3cf28319a746c12c874d04" address="unix:///run/containerd/s/2b79ab877419e7a081b85c06caeb0b1d7081c589c9285c6e8fdb883faead98c6" protocol=ttrpc version=3 Nov 5 00:13:22.868025 containerd[1625]: time="2025-11-05T00:13:22.867957107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-g9qgn,Uid:cc5591aa-3d5b-4cf3-ab3a-4c27e1bd2e68,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d6c5b43e7d328c063b1c6e6bf1ac36ba6cc1fa0b6299b06fa5c7b7659c067402\"" Nov 5 00:13:22.875197 containerd[1625]: time="2025-11-05T00:13:22.875146168Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 5 00:13:22.891725 systemd[1]: Started cri-containerd-089f6ab07948a4832548fdc10d000ec7ffb3a2e5ab3cf28319a746c12c874d04.scope - libcontainer container 089f6ab07948a4832548fdc10d000ec7ffb3a2e5ab3cf28319a746c12c874d04. Nov 5 00:13:23.094177 containerd[1625]: time="2025-11-05T00:13:23.094070950Z" level=info msg="StartContainer for \"089f6ab07948a4832548fdc10d000ec7ffb3a2e5ab3cf28319a746c12c874d04\" returns successfully" Nov 5 00:13:23.817164 kubelet[2883]: E1105 00:13:23.817097 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:13:23.924026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3625532720.mount: Deactivated successfully. 
Nov 5 00:13:24.771431 kubelet[2883]: E1105 00:13:24.770726 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:13:24.801049 kubelet[2883]: I1105 00:13:24.800901 2883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hcqqj" podStartSLOduration=3.800831841 podStartE2EDuration="3.800831841s" podCreationTimestamp="2025-11-05 00:13:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 00:13:23.839372502 +0000 UTC m=+7.568715158" watchObservedRunningTime="2025-11-05 00:13:24.800831841 +0000 UTC m=+8.530174497" Nov 5 00:13:24.826082 kubelet[2883]: E1105 00:13:24.826003 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:13:24.838370 kubelet[2883]: E1105 00:13:24.838277 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:13:25.832288 kubelet[2883]: E1105 00:13:25.831873 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:13:25.984610 containerd[1625]: time="2025-11-05T00:13:25.984458499Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:13:25.986424 containerd[1625]: time="2025-11-05T00:13:25.986285172Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 5 00:13:25.987136 containerd[1625]: 
time="2025-11-05T00:13:25.987105563Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:13:25.992655 containerd[1625]: time="2025-11-05T00:13:25.992595802Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:13:25.994821 containerd[1625]: time="2025-11-05T00:13:25.994590287Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.119347158s" Nov 5 00:13:25.994821 containerd[1625]: time="2025-11-05T00:13:25.994640198Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 5 00:13:26.003049 containerd[1625]: time="2025-11-05T00:13:26.003011543Z" level=info msg="CreateContainer within sandbox \"d6c5b43e7d328c063b1c6e6bf1ac36ba6cc1fa0b6299b06fa5c7b7659c067402\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 5 00:13:26.022224 containerd[1625]: time="2025-11-05T00:13:26.020339149Z" level=info msg="Container 0d6d4270d001df91e717dbd46f55484f83bfe83a257215663b88dd9456d1a97d: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:13:26.034837 containerd[1625]: time="2025-11-05T00:13:26.034801151Z" level=info msg="CreateContainer within sandbox \"d6c5b43e7d328c063b1c6e6bf1ac36ba6cc1fa0b6299b06fa5c7b7659c067402\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0d6d4270d001df91e717dbd46f55484f83bfe83a257215663b88dd9456d1a97d\"" Nov 5 00:13:26.036867 
containerd[1625]: time="2025-11-05T00:13:26.036838845Z" level=info msg="StartContainer for \"0d6d4270d001df91e717dbd46f55484f83bfe83a257215663b88dd9456d1a97d\"" Nov 5 00:13:26.039557 containerd[1625]: time="2025-11-05T00:13:26.039527117Z" level=info msg="connecting to shim 0d6d4270d001df91e717dbd46f55484f83bfe83a257215663b88dd9456d1a97d" address="unix:///run/containerd/s/2cb4d302c497094786dfa26bc6630a9bf72139d4e4191c6d7611ca5e037ad4de" protocol=ttrpc version=3 Nov 5 00:13:26.142957 systemd[1]: Started cri-containerd-0d6d4270d001df91e717dbd46f55484f83bfe83a257215663b88dd9456d1a97d.scope - libcontainer container 0d6d4270d001df91e717dbd46f55484f83bfe83a257215663b88dd9456d1a97d. Nov 5 00:13:26.254150 containerd[1625]: time="2025-11-05T00:13:26.254057477Z" level=info msg="StartContainer for \"0d6d4270d001df91e717dbd46f55484f83bfe83a257215663b88dd9456d1a97d\" returns successfully" Nov 5 00:13:26.855465 kubelet[2883]: I1105 00:13:26.855354 2883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-g9qgn" podStartSLOduration=1.730734159 podStartE2EDuration="4.855196402s" podCreationTimestamp="2025-11-05 00:13:22 +0000 UTC" firstStartedPulling="2025-11-05 00:13:22.871879097 +0000 UTC m=+6.601221753" lastFinishedPulling="2025-11-05 00:13:25.99634133 +0000 UTC m=+9.725683996" observedRunningTime="2025-11-05 00:13:26.854578115 +0000 UTC m=+10.583920771" watchObservedRunningTime="2025-11-05 00:13:26.855196402 +0000 UTC m=+10.584539058" Nov 5 00:13:27.304702 kubelet[2883]: E1105 00:13:27.304001 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:13:27.449407 kubelet[2883]: E1105 00:13:27.448994 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 
172.232.0.18" Nov 5 00:13:27.846213 kubelet[2883]: E1105 00:13:27.846099 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:13:27.848426 kubelet[2883]: E1105 00:13:27.847926 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:13:35.686124 sudo[1897]: pam_unix(sudo:session): session closed for user root Nov 5 00:13:35.756575 sshd[1896]: Connection closed by 139.178.68.195 port 35450 Nov 5 00:13:35.758730 sshd-session[1893]: pam_unix(sshd:session): session closed for user core Nov 5 00:13:35.782016 systemd[1]: sshd@8-172.232.14.37:22-139.178.68.195:35450.service: Deactivated successfully. Nov 5 00:13:35.798089 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 00:13:35.800438 systemd[1]: session-9.scope: Consumed 10.742s CPU time, 233.3M memory peak. Nov 5 00:13:35.804574 systemd-logind[1595]: Session 9 logged out. Waiting for processes to exit. Nov 5 00:13:35.809434 systemd-logind[1595]: Removed session 9. 
Nov 5 00:13:43.520961 kubelet[2883]: I1105 00:13:43.512316 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkjzk\" (UniqueName: \"kubernetes.io/projected/e2bb4a39-0b03-4374-b1b4-352f947a75d8-kube-api-access-zkjzk\") pod \"calico-typha-b7fdb8795-nq7ch\" (UID: \"e2bb4a39-0b03-4374-b1b4-352f947a75d8\") " pod="calico-system/calico-typha-b7fdb8795-nq7ch" Nov 5 00:13:43.520961 kubelet[2883]: I1105 00:13:43.512487 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e2bb4a39-0b03-4374-b1b4-352f947a75d8-typha-certs\") pod \"calico-typha-b7fdb8795-nq7ch\" (UID: \"e2bb4a39-0b03-4374-b1b4-352f947a75d8\") " pod="calico-system/calico-typha-b7fdb8795-nq7ch" Nov 5 00:13:43.520961 kubelet[2883]: I1105 00:13:43.512520 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2bb4a39-0b03-4374-b1b4-352f947a75d8-tigera-ca-bundle\") pod \"calico-typha-b7fdb8795-nq7ch\" (UID: \"e2bb4a39-0b03-4374-b1b4-352f947a75d8\") " pod="calico-system/calico-typha-b7fdb8795-nq7ch" Nov 5 00:13:43.550526 systemd[1]: Created slice kubepods-besteffort-pode2bb4a39_0b03_4374_b1b4_352f947a75d8.slice - libcontainer container kubepods-besteffort-pode2bb4a39_0b03_4374_b1b4_352f947a75d8.slice. Nov 5 00:13:43.854379 systemd[1]: Created slice kubepods-besteffort-pod2942ecb1_2306_4433_a773_a4a350b0ff24.slice - libcontainer container kubepods-besteffort-pod2942ecb1_2306_4433_a773_a4a350b0ff24.slice. 
Nov 5 00:13:43.869798 kubelet[2883]: E1105 00:13:43.869741 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:13:43.875831 containerd[1625]: time="2025-11-05T00:13:43.875106768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b7fdb8795-nq7ch,Uid:e2bb4a39-0b03-4374-b1b4-352f947a75d8,Namespace:calico-system,Attempt:0,}" Nov 5 00:13:43.916482 kubelet[2883]: I1105 00:13:43.916023 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2942ecb1-2306-4433-a773-a4a350b0ff24-policysync\") pod \"calico-node-w8l76\" (UID: \"2942ecb1-2306-4433-a773-a4a350b0ff24\") " pod="calico-system/calico-node-w8l76" Nov 5 00:13:43.916482 kubelet[2883]: I1105 00:13:43.916179 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2942ecb1-2306-4433-a773-a4a350b0ff24-var-run-calico\") pod \"calico-node-w8l76\" (UID: \"2942ecb1-2306-4433-a773-a4a350b0ff24\") " pod="calico-system/calico-node-w8l76" Nov 5 00:13:43.916482 kubelet[2883]: I1105 00:13:43.916206 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2942ecb1-2306-4433-a773-a4a350b0ff24-xtables-lock\") pod \"calico-node-w8l76\" (UID: \"2942ecb1-2306-4433-a773-a4a350b0ff24\") " pod="calico-system/calico-node-w8l76" Nov 5 00:13:43.919336 kubelet[2883]: I1105 00:13:43.917333 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2942ecb1-2306-4433-a773-a4a350b0ff24-node-certs\") pod \"calico-node-w8l76\" (UID: \"2942ecb1-2306-4433-a773-a4a350b0ff24\") " 
pod="calico-system/calico-node-w8l76" Nov 5 00:13:43.919336 kubelet[2883]: I1105 00:13:43.918694 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2942ecb1-2306-4433-a773-a4a350b0ff24-cni-bin-dir\") pod \"calico-node-w8l76\" (UID: \"2942ecb1-2306-4433-a773-a4a350b0ff24\") " pod="calico-system/calico-node-w8l76" Nov 5 00:13:43.919336 kubelet[2883]: I1105 00:13:43.918795 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2942ecb1-2306-4433-a773-a4a350b0ff24-cni-net-dir\") pod \"calico-node-w8l76\" (UID: \"2942ecb1-2306-4433-a773-a4a350b0ff24\") " pod="calico-system/calico-node-w8l76" Nov 5 00:13:43.922182 kubelet[2883]: I1105 00:13:43.921399 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2942ecb1-2306-4433-a773-a4a350b0ff24-flexvol-driver-host\") pod \"calico-node-w8l76\" (UID: \"2942ecb1-2306-4433-a773-a4a350b0ff24\") " pod="calico-system/calico-node-w8l76" Nov 5 00:13:43.922182 kubelet[2883]: I1105 00:13:43.921556 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2942ecb1-2306-4433-a773-a4a350b0ff24-tigera-ca-bundle\") pod \"calico-node-w8l76\" (UID: \"2942ecb1-2306-4433-a773-a4a350b0ff24\") " pod="calico-system/calico-node-w8l76" Nov 5 00:13:43.922182 kubelet[2883]: I1105 00:13:43.921644 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2942ecb1-2306-4433-a773-a4a350b0ff24-lib-modules\") pod \"calico-node-w8l76\" (UID: \"2942ecb1-2306-4433-a773-a4a350b0ff24\") " pod="calico-system/calico-node-w8l76" Nov 5 00:13:43.922182 
kubelet[2883]: I1105 00:13:43.921771 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2942ecb1-2306-4433-a773-a4a350b0ff24-var-lib-calico\") pod \"calico-node-w8l76\" (UID: \"2942ecb1-2306-4433-a773-a4a350b0ff24\") " pod="calico-system/calico-node-w8l76" Nov 5 00:13:43.922182 kubelet[2883]: I1105 00:13:43.921916 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7vcw\" (UniqueName: \"kubernetes.io/projected/2942ecb1-2306-4433-a773-a4a350b0ff24-kube-api-access-t7vcw\") pod \"calico-node-w8l76\" (UID: \"2942ecb1-2306-4433-a773-a4a350b0ff24\") " pod="calico-system/calico-node-w8l76" Nov 5 00:13:43.922687 kubelet[2883]: I1105 00:13:43.922098 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2942ecb1-2306-4433-a773-a4a350b0ff24-cni-log-dir\") pod \"calico-node-w8l76\" (UID: \"2942ecb1-2306-4433-a773-a4a350b0ff24\") " pod="calico-system/calico-node-w8l76" Nov 5 00:13:43.989263 containerd[1625]: time="2025-11-05T00:13:43.988957280Z" level=info msg="connecting to shim 19ca73b0cdde80022448e703ed7f551d00262ef530cd6db5fed722dfcf02474f" address="unix:///run/containerd/s/5a33544ba9f151d8219bc3973160b3ebeb55e8d916800f92176172e0cd6867c0" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:13:44.045549 kubelet[2883]: E1105 00:13:44.044891 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.045549 kubelet[2883]: W1105 00:13:44.044991 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.046782 kubelet[2883]: E1105 00:13:44.046550 2883 plugins.go:703] "Error 
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.050704 kubelet[2883]: E1105 00:13:44.050666 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.050820 kubelet[2883]: W1105 00:13:44.050695 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.050820 kubelet[2883]: E1105 00:13:44.050774 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.052737 kubelet[2883]: E1105 00:13:44.051943 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.052737 kubelet[2883]: W1105 00:13:44.051961 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.052737 kubelet[2883]: E1105 00:13:44.051976 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.052737 kubelet[2883]: E1105 00:13:44.052645 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.052737 kubelet[2883]: W1105 00:13:44.052656 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.052737 kubelet[2883]: E1105 00:13:44.052667 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.054148 kubelet[2883]: E1105 00:13:44.054124 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.054148 kubelet[2883]: W1105 00:13:44.054139 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.054148 kubelet[2883]: E1105 00:13:44.054150 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.057064 kubelet[2883]: E1105 00:13:44.056320 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.057064 kubelet[2883]: W1105 00:13:44.056370 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.057064 kubelet[2883]: E1105 00:13:44.056386 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.058791 kubelet[2883]: E1105 00:13:44.058761 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.058791 kubelet[2883]: W1105 00:13:44.058782 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.058949 kubelet[2883]: E1105 00:13:44.058797 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.060259 kubelet[2883]: E1105 00:13:44.059186 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.060259 kubelet[2883]: W1105 00:13:44.059289 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.060259 kubelet[2883]: E1105 00:13:44.059299 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.060259 kubelet[2883]: E1105 00:13:44.060196 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.060259 kubelet[2883]: W1105 00:13:44.060209 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.061081 kubelet[2883]: E1105 00:13:44.060808 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.062090 kubelet[2883]: E1105 00:13:44.062061 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.062174 kubelet[2883]: W1105 00:13:44.062082 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.062174 kubelet[2883]: E1105 00:13:44.062158 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.062991 kubelet[2883]: E1105 00:13:44.062962 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.062991 kubelet[2883]: W1105 00:13:44.062982 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.063158 kubelet[2883]: E1105 00:13:44.062996 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.063761 kubelet[2883]: E1105 00:13:44.063730 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.063761 kubelet[2883]: W1105 00:13:44.063752 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.063946 kubelet[2883]: E1105 00:13:44.063766 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.064806 kubelet[2883]: E1105 00:13:44.064776 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.064806 kubelet[2883]: W1105 00:13:44.064799 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.064948 kubelet[2883]: E1105 00:13:44.064812 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.066260 kubelet[2883]: E1105 00:13:44.066016 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.066260 kubelet[2883]: W1105 00:13:44.066031 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.066260 kubelet[2883]: E1105 00:13:44.066166 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.066938 kubelet[2883]: E1105 00:13:44.066906 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.066938 kubelet[2883]: W1105 00:13:44.066927 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.066938 kubelet[2883]: E1105 00:13:44.066940 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.069347 kubelet[2883]: E1105 00:13:44.068892 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.069347 kubelet[2883]: W1105 00:13:44.068911 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.069347 kubelet[2883]: E1105 00:13:44.068922 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.071060 kubelet[2883]: E1105 00:13:44.070288 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.071060 kubelet[2883]: W1105 00:13:44.070303 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.071060 kubelet[2883]: E1105 00:13:44.070313 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.071060 kubelet[2883]: E1105 00:13:44.071054 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.071060 kubelet[2883]: W1105 00:13:44.071068 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.071550 kubelet[2883]: E1105 00:13:44.071079 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.102110 kubelet[2883]: E1105 00:13:44.101650 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.102110 kubelet[2883]: W1105 00:13:44.101699 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.102110 kubelet[2883]: E1105 00:13:44.101746 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.148902 systemd[1]: Started cri-containerd-19ca73b0cdde80022448e703ed7f551d00262ef530cd6db5fed722dfcf02474f.scope - libcontainer container 19ca73b0cdde80022448e703ed7f551d00262ef530cd6db5fed722dfcf02474f. 
Nov 5 00:13:44.163936 kubelet[2883]: E1105 00:13:44.162924 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:13:44.163936 kubelet[2883]: E1105 00:13:44.163419 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rhc65" podUID="14de6d4c-7243-4b75-9a89-9c47bcb946c9" Nov 5 00:13:44.165600 containerd[1625]: time="2025-11-05T00:13:44.165557660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w8l76,Uid:2942ecb1-2306-4433-a773-a4a350b0ff24,Namespace:calico-system,Attempt:0,}" Nov 5 00:13:44.219216 kubelet[2883]: E1105 00:13:44.219166 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.220097 kubelet[2883]: W1105 00:13:44.220061 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.221042 kubelet[2883]: E1105 00:13:44.221014 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.221466 kubelet[2883]: E1105 00:13:44.221452 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.221624 kubelet[2883]: W1105 00:13:44.221607 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.222287 kubelet[2883]: E1105 00:13:44.222267 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.223303 kubelet[2883]: E1105 00:13:44.223286 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.223381 kubelet[2883]: W1105 00:13:44.223367 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.224185 kubelet[2883]: E1105 00:13:44.224162 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.224961 kubelet[2883]: E1105 00:13:44.224944 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.225290 kubelet[2883]: W1105 00:13:44.225272 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.225973 kubelet[2883]: E1105 00:13:44.225348 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.226781 kubelet[2883]: E1105 00:13:44.226765 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.226893 kubelet[2883]: W1105 00:13:44.226875 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.227069 kubelet[2883]: E1105 00:13:44.226998 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.227804 kubelet[2883]: E1105 00:13:44.227785 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.227895 kubelet[2883]: W1105 00:13:44.227878 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.228173 kubelet[2883]: E1105 00:13:44.227978 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.229360 kubelet[2883]: E1105 00:13:44.229217 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.229360 kubelet[2883]: W1105 00:13:44.229258 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.229360 kubelet[2883]: E1105 00:13:44.229273 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.230865 kubelet[2883]: E1105 00:13:44.230790 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.232111 kubelet[2883]: W1105 00:13:44.231027 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.232111 kubelet[2883]: E1105 00:13:44.231046 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.232337 kubelet[2883]: E1105 00:13:44.232310 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.232463 kubelet[2883]: W1105 00:13:44.232425 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.232854 kubelet[2883]: E1105 00:13:44.232723 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.233105 kubelet[2883]: E1105 00:13:44.233089 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.233186 kubelet[2883]: W1105 00:13:44.233172 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.233288 kubelet[2883]: E1105 00:13:44.233254 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.233661 kubelet[2883]: E1105 00:13:44.233604 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.233983 kubelet[2883]: W1105 00:13:44.233837 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.233983 kubelet[2883]: E1105 00:13:44.233860 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.234394 kubelet[2883]: E1105 00:13:44.234151 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.234394 kubelet[2883]: W1105 00:13:44.234162 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.234394 kubelet[2883]: E1105 00:13:44.234173 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.234855 kubelet[2883]: E1105 00:13:44.234786 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.234855 kubelet[2883]: W1105 00:13:44.234800 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.234855 kubelet[2883]: E1105 00:13:44.234810 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.235676 kubelet[2883]: E1105 00:13:44.235580 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.235676 kubelet[2883]: W1105 00:13:44.235596 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.235676 kubelet[2883]: E1105 00:13:44.235610 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.236133 kubelet[2883]: E1105 00:13:44.236042 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.236133 kubelet[2883]: W1105 00:13:44.236055 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.236133 kubelet[2883]: E1105 00:13:44.236065 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.237037 kubelet[2883]: E1105 00:13:44.236949 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.237037 kubelet[2883]: W1105 00:13:44.236963 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.237037 kubelet[2883]: E1105 00:13:44.236977 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.237602 kubelet[2883]: E1105 00:13:44.237523 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.237602 kubelet[2883]: W1105 00:13:44.237543 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.237602 kubelet[2883]: E1105 00:13:44.237554 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.238186 kubelet[2883]: E1105 00:13:44.238007 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.238186 kubelet[2883]: W1105 00:13:44.238027 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.238186 kubelet[2883]: E1105 00:13:44.238049 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.238625 kubelet[2883]: E1105 00:13:44.238604 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.238744 kubelet[2883]: W1105 00:13:44.238723 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.238916 kubelet[2883]: E1105 00:13:44.238826 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.239281 kubelet[2883]: E1105 00:13:44.239260 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.239454 kubelet[2883]: W1105 00:13:44.239347 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.239454 kubelet[2883]: E1105 00:13:44.239366 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.242070 kubelet[2883]: E1105 00:13:44.242053 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.242629 kubelet[2883]: W1105 00:13:44.242603 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.242760 kubelet[2883]: E1105 00:13:44.242737 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.242883 kubelet[2883]: I1105 00:13:44.242858 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpsls\" (UniqueName: \"kubernetes.io/projected/14de6d4c-7243-4b75-9a89-9c47bcb946c9-kube-api-access-cpsls\") pod \"csi-node-driver-rhc65\" (UID: \"14de6d4c-7243-4b75-9a89-9c47bcb946c9\") " pod="calico-system/csi-node-driver-rhc65" Nov 5 00:13:44.243876 kubelet[2883]: E1105 00:13:44.243853 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.244013 kubelet[2883]: W1105 00:13:44.243975 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.244013 kubelet[2883]: E1105 00:13:44.243996 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.245329 kubelet[2883]: E1105 00:13:44.245279 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.245329 kubelet[2883]: W1105 00:13:44.245295 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.245329 kubelet[2883]: E1105 00:13:44.245306 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.247323 kubelet[2883]: E1105 00:13:44.247274 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.247323 kubelet[2883]: W1105 00:13:44.247292 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.247323 kubelet[2883]: E1105 00:13:44.247304 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.248094 kubelet[2883]: I1105 00:13:44.247779 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14de6d4c-7243-4b75-9a89-9c47bcb946c9-kubelet-dir\") pod \"csi-node-driver-rhc65\" (UID: \"14de6d4c-7243-4b75-9a89-9c47bcb946c9\") " pod="calico-system/csi-node-driver-rhc65" Nov 5 00:13:44.248374 kubelet[2883]: E1105 00:13:44.248329 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.248374 kubelet[2883]: W1105 00:13:44.248345 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.248374 kubelet[2883]: E1105 00:13:44.248357 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.250007 kubelet[2883]: E1105 00:13:44.249493 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.250007 kubelet[2883]: W1105 00:13:44.249510 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.250217 kubelet[2883]: E1105 00:13:44.249522 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.250935 kubelet[2883]: E1105 00:13:44.250865 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.250935 kubelet[2883]: W1105 00:13:44.250882 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.250935 kubelet[2883]: E1105 00:13:44.250894 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.251555 kubelet[2883]: I1105 00:13:44.251335 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/14de6d4c-7243-4b75-9a89-9c47bcb946c9-varrun\") pod \"csi-node-driver-rhc65\" (UID: \"14de6d4c-7243-4b75-9a89-9c47bcb946c9\") " pod="calico-system/csi-node-driver-rhc65" Nov 5 00:13:44.252221 kubelet[2883]: E1105 00:13:44.252202 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.252338 kubelet[2883]: W1105 00:13:44.252321 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.252429 kubelet[2883]: E1105 00:13:44.252408 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.252578 kubelet[2883]: I1105 00:13:44.252538 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/14de6d4c-7243-4b75-9a89-9c47bcb946c9-socket-dir\") pod \"csi-node-driver-rhc65\" (UID: \"14de6d4c-7243-4b75-9a89-9c47bcb946c9\") " pod="calico-system/csi-node-driver-rhc65" Nov 5 00:13:44.253516 kubelet[2883]: E1105 00:13:44.253458 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.253516 kubelet[2883]: W1105 00:13:44.253475 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.253516 kubelet[2883]: E1105 00:13:44.253487 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.254806 kubelet[2883]: E1105 00:13:44.254722 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.254806 kubelet[2883]: W1105 00:13:44.254736 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.254806 kubelet[2883]: E1105 00:13:44.254747 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.255727 kubelet[2883]: E1105 00:13:44.255358 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.255727 kubelet[2883]: W1105 00:13:44.255385 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.255727 kubelet[2883]: E1105 00:13:44.255398 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.256014 kubelet[2883]: I1105 00:13:44.255991 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/14de6d4c-7243-4b75-9a89-9c47bcb946c9-registration-dir\") pod \"csi-node-driver-rhc65\" (UID: \"14de6d4c-7243-4b75-9a89-9c47bcb946c9\") " pod="calico-system/csi-node-driver-rhc65" Nov 5 00:13:44.256707 kubelet[2883]: E1105 00:13:44.256668 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.256707 kubelet[2883]: W1105 00:13:44.256682 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.256707 kubelet[2883]: E1105 00:13:44.256693 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.257982 kubelet[2883]: E1105 00:13:44.257813 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.257982 kubelet[2883]: W1105 00:13:44.257835 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.257982 kubelet[2883]: E1105 00:13:44.257856 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.259336 kubelet[2883]: E1105 00:13:44.259259 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.259336 kubelet[2883]: W1105 00:13:44.259276 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.259336 kubelet[2883]: E1105 00:13:44.259287 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.262119 kubelet[2883]: E1105 00:13:44.262057 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.262119 kubelet[2883]: W1105 00:13:44.262079 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.262119 kubelet[2883]: E1105 00:13:44.262091 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.367625 kubelet[2883]: E1105 00:13:44.366925 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.367625 kubelet[2883]: W1105 00:13:44.367139 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.367625 kubelet[2883]: E1105 00:13:44.367305 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.369878 kubelet[2883]: E1105 00:13:44.369409 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.369878 kubelet[2883]: W1105 00:13:44.369480 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.370091 kubelet[2883]: E1105 00:13:44.370059 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.372114 kubelet[2883]: E1105 00:13:44.372086 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.372114 kubelet[2883]: W1105 00:13:44.372109 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.373659 kubelet[2883]: E1105 00:13:44.372127 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.373659 kubelet[2883]: E1105 00:13:44.373618 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.373659 kubelet[2883]: W1105 00:13:44.373633 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.374256 kubelet[2883]: E1105 00:13:44.373653 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.375464 kubelet[2883]: E1105 00:13:44.375430 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.375786 kubelet[2883]: W1105 00:13:44.375454 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.375870 kubelet[2883]: E1105 00:13:44.375818 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.378749 kubelet[2883]: E1105 00:13:44.378713 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.378749 kubelet[2883]: W1105 00:13:44.378740 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.378886 kubelet[2883]: E1105 00:13:44.378755 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.379073 kubelet[2883]: E1105 00:13:44.379027 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.379073 kubelet[2883]: W1105 00:13:44.379050 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.379073 kubelet[2883]: E1105 00:13:44.379063 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.381011 kubelet[2883]: E1105 00:13:44.380965 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.381011 kubelet[2883]: W1105 00:13:44.380991 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.381011 kubelet[2883]: E1105 00:13:44.381007 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.382197 kubelet[2883]: E1105 00:13:44.382163 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.382197 kubelet[2883]: W1105 00:13:44.382186 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.382197 kubelet[2883]: E1105 00:13:44.382203 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.384325 kubelet[2883]: E1105 00:13:44.384296 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.384325 kubelet[2883]: W1105 00:13:44.384318 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.384444 kubelet[2883]: E1105 00:13:44.384333 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.384664 kubelet[2883]: E1105 00:13:44.384635 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.384664 kubelet[2883]: W1105 00:13:44.384660 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.384749 kubelet[2883]: E1105 00:13:44.384676 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.387529 kubelet[2883]: E1105 00:13:44.387453 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.387529 kubelet[2883]: W1105 00:13:44.387476 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.387529 kubelet[2883]: E1105 00:13:44.387491 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.391459 kubelet[2883]: E1105 00:13:44.391312 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.391459 kubelet[2883]: W1105 00:13:44.391329 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.391459 kubelet[2883]: E1105 00:13:44.391342 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.392987 kubelet[2883]: E1105 00:13:44.392377 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.392987 kubelet[2883]: W1105 00:13:44.392392 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.392987 kubelet[2883]: E1105 00:13:44.392403 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.393448 kubelet[2883]: E1105 00:13:44.393406 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.393448 kubelet[2883]: W1105 00:13:44.393432 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.393448 kubelet[2883]: E1105 00:13:44.393447 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.394016 containerd[1625]: time="2025-11-05T00:13:44.393885019Z" level=info msg="connecting to shim 04baba097131aa94ffa5c2a8a72a1659ed8776318f729464da6ab233ac65c88d" address="unix:///run/containerd/s/32e2f0e88903c3601cdc29ee42df16bca6d67912a46f3fbf373ac1712698205a" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:13:44.394858 kubelet[2883]: E1105 00:13:44.394828 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.394858 kubelet[2883]: W1105 00:13:44.394849 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.394976 kubelet[2883]: E1105 00:13:44.394897 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.396426 kubelet[2883]: E1105 00:13:44.396369 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.396426 kubelet[2883]: W1105 00:13:44.396385 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.396426 kubelet[2883]: E1105 00:13:44.396397 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.399328 kubelet[2883]: E1105 00:13:44.399269 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.399328 kubelet[2883]: W1105 00:13:44.399286 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.399328 kubelet[2883]: E1105 00:13:44.399298 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.423420 kubelet[2883]: E1105 00:13:44.415816 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.423420 kubelet[2883]: W1105 00:13:44.415884 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.423420 kubelet[2883]: E1105 00:13:44.415965 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.423420 kubelet[2883]: E1105 00:13:44.416782 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.423420 kubelet[2883]: W1105 00:13:44.416798 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.423420 kubelet[2883]: E1105 00:13:44.416815 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.423420 kubelet[2883]: E1105 00:13:44.417246 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.423420 kubelet[2883]: W1105 00:13:44.417276 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.423420 kubelet[2883]: E1105 00:13:44.417292 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.423420 kubelet[2883]: E1105 00:13:44.417646 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.424087 kubelet[2883]: W1105 00:13:44.417660 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.424087 kubelet[2883]: E1105 00:13:44.417673 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.424087 kubelet[2883]: E1105 00:13:44.417994 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.424087 kubelet[2883]: W1105 00:13:44.418004 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.424087 kubelet[2883]: E1105 00:13:44.418014 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.424087 kubelet[2883]: E1105 00:13:44.418494 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.424087 kubelet[2883]: W1105 00:13:44.418507 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.424087 kubelet[2883]: E1105 00:13:44.418520 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.424087 kubelet[2883]: E1105 00:13:44.418897 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.424087 kubelet[2883]: W1105 00:13:44.418908 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.424567 kubelet[2883]: E1105 00:13:44.418917 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:44.497084 kubelet[2883]: E1105 00:13:44.497051 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:44.497317 kubelet[2883]: W1105 00:13:44.497296 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:44.497441 kubelet[2883]: E1105 00:13:44.497412 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:44.617612 systemd[1]: Started cri-containerd-04baba097131aa94ffa5c2a8a72a1659ed8776318f729464da6ab233ac65c88d.scope - libcontainer container 04baba097131aa94ffa5c2a8a72a1659ed8776318f729464da6ab233ac65c88d. Nov 5 00:13:44.774842 containerd[1625]: time="2025-11-05T00:13:44.774582955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b7fdb8795-nq7ch,Uid:e2bb4a39-0b03-4374-b1b4-352f947a75d8,Namespace:calico-system,Attempt:0,} returns sandbox id \"19ca73b0cdde80022448e703ed7f551d00262ef530cd6db5fed722dfcf02474f\"" Nov 5 00:13:44.776915 kubelet[2883]: E1105 00:13:44.776463 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:13:44.780945 containerd[1625]: time="2025-11-05T00:13:44.780905799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 5 00:13:44.828182 containerd[1625]: time="2025-11-05T00:13:44.828085134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w8l76,Uid:2942ecb1-2306-4433-a773-a4a350b0ff24,Namespace:calico-system,Attempt:0,} returns sandbox id \"04baba097131aa94ffa5c2a8a72a1659ed8776318f729464da6ab233ac65c88d\"" Nov 5 
00:13:44.830420 kubelet[2883]: E1105 00:13:44.830389 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:13:45.696495 kubelet[2883]: E1105 00:13:45.696343 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rhc65" podUID="14de6d4c-7243-4b75-9a89-9c47bcb946c9" Nov 5 00:13:45.739528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1911656391.mount: Deactivated successfully. Nov 5 00:13:47.695765 kubelet[2883]: E1105 00:13:47.690209 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rhc65" podUID="14de6d4c-7243-4b75-9a89-9c47bcb946c9" Nov 5 00:13:48.942795 containerd[1625]: time="2025-11-05T00:13:48.942530504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:13:48.944205 containerd[1625]: time="2025-11-05T00:13:48.944140918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 5 00:13:48.945071 containerd[1625]: time="2025-11-05T00:13:48.945024851Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:13:48.948603 containerd[1625]: time="2025-11-05T00:13:48.948548541Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:13:48.950140 containerd[1625]: time="2025-11-05T00:13:48.949325133Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 4.168219954s" Nov 5 00:13:48.950140 containerd[1625]: time="2025-11-05T00:13:48.949366403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 5 00:13:48.952224 containerd[1625]: time="2025-11-05T00:13:48.952193351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 5 00:13:49.004315 containerd[1625]: time="2025-11-05T00:13:49.004254430Z" level=info msg="CreateContainer within sandbox \"19ca73b0cdde80022448e703ed7f551d00262ef530cd6db5fed722dfcf02474f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 5 00:13:49.020877 containerd[1625]: time="2025-11-05T00:13:49.018111207Z" level=info msg="Container 91689d11319c78736dd1611460254e68cb51c83066a731fba9dd730dde7c8104: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:13:49.026573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3522307681.mount: Deactivated successfully. 
Nov 5 00:13:49.032602 containerd[1625]: time="2025-11-05T00:13:49.032539866Z" level=info msg="CreateContainer within sandbox \"19ca73b0cdde80022448e703ed7f551d00262ef530cd6db5fed722dfcf02474f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"91689d11319c78736dd1611460254e68cb51c83066a731fba9dd730dde7c8104\"" Nov 5 00:13:49.035831 containerd[1625]: time="2025-11-05T00:13:49.035761815Z" level=info msg="StartContainer for \"91689d11319c78736dd1611460254e68cb51c83066a731fba9dd730dde7c8104\"" Nov 5 00:13:49.040246 containerd[1625]: time="2025-11-05T00:13:49.039587795Z" level=info msg="connecting to shim 91689d11319c78736dd1611460254e68cb51c83066a731fba9dd730dde7c8104" address="unix:///run/containerd/s/5a33544ba9f151d8219bc3973160b3ebeb55e8d916800f92176172e0cd6867c0" protocol=ttrpc version=3 Nov 5 00:13:49.138140 systemd[1]: Started cri-containerd-91689d11319c78736dd1611460254e68cb51c83066a731fba9dd730dde7c8104.scope - libcontainer container 91689d11319c78736dd1611460254e68cb51c83066a731fba9dd730dde7c8104. 
Nov 5 00:13:49.352785 containerd[1625]: time="2025-11-05T00:13:49.352529858Z" level=info msg="StartContainer for \"91689d11319c78736dd1611460254e68cb51c83066a731fba9dd730dde7c8104\" returns successfully" Nov 5 00:13:49.694873 kubelet[2883]: E1105 00:13:49.694251 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rhc65" podUID="14de6d4c-7243-4b75-9a89-9c47bcb946c9" Nov 5 00:13:49.876812 containerd[1625]: time="2025-11-05T00:13:49.876533200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:13:49.882592 containerd[1625]: time="2025-11-05T00:13:49.882553316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 5 00:13:49.883352 containerd[1625]: time="2025-11-05T00:13:49.883315708Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:13:49.894167 containerd[1625]: time="2025-11-05T00:13:49.894111227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:13:49.899437 containerd[1625]: time="2025-11-05T00:13:49.899385042Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 947.157851ms" Nov 5 00:13:49.899528 containerd[1625]: time="2025-11-05T00:13:49.899441312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 5 00:13:49.936380 containerd[1625]: time="2025-11-05T00:13:49.936154971Z" level=info msg="CreateContainer within sandbox \"04baba097131aa94ffa5c2a8a72a1659ed8776318f729464da6ab233ac65c88d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 5 00:13:50.002872 containerd[1625]: time="2025-11-05T00:13:50.000445414Z" level=info msg="Container 542e78988dceceb914d96842009105a594458ab28f94d3d7e64bd8d3a5a9defb: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:13:50.008321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1968078257.mount: Deactivated successfully. 
Nov 5 00:13:50.021260 containerd[1625]: time="2025-11-05T00:13:50.021194196Z" level=info msg="CreateContainer within sandbox \"04baba097131aa94ffa5c2a8a72a1659ed8776318f729464da6ab233ac65c88d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"542e78988dceceb914d96842009105a594458ab28f94d3d7e64bd8d3a5a9defb\"" Nov 5 00:13:50.022390 containerd[1625]: time="2025-11-05T00:13:50.022353769Z" level=info msg="StartContainer for \"542e78988dceceb914d96842009105a594458ab28f94d3d7e64bd8d3a5a9defb\"" Nov 5 00:13:50.029880 containerd[1625]: time="2025-11-05T00:13:50.029835368Z" level=info msg="connecting to shim 542e78988dceceb914d96842009105a594458ab28f94d3d7e64bd8d3a5a9defb" address="unix:///run/containerd/s/32e2f0e88903c3601cdc29ee42df16bca6d67912a46f3fbf373ac1712698205a" protocol=ttrpc version=3 Nov 5 00:13:50.239410 systemd[1]: Started cri-containerd-542e78988dceceb914d96842009105a594458ab28f94d3d7e64bd8d3a5a9defb.scope - libcontainer container 542e78988dceceb914d96842009105a594458ab28f94d3d7e64bd8d3a5a9defb. 
Nov 5 00:13:50.396802 kubelet[2883]: E1105 00:13:50.363806 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:13:50.396802 kubelet[2883]: I1105 00:13:50.389617 2883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b7fdb8795-nq7ch" podStartSLOduration=3.218104204 podStartE2EDuration="7.389538867s" podCreationTimestamp="2025-11-05 00:13:43 +0000 UTC" firstStartedPulling="2025-11-05 00:13:44.780412487 +0000 UTC m=+28.509755143" lastFinishedPulling="2025-11-05 00:13:48.95184715 +0000 UTC m=+32.681189806" observedRunningTime="2025-11-05 00:13:50.388948595 +0000 UTC m=+34.118291251" watchObservedRunningTime="2025-11-05 00:13:50.389538867 +0000 UTC m=+34.118881523" Nov 5 00:13:50.441142 kubelet[2883]: E1105 00:13:50.440595 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.441487 kubelet[2883]: W1105 00:13:50.440927 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.441975 kubelet[2883]: E1105 00:13:50.441823 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:50.442957 kubelet[2883]: E1105 00:13:50.442908 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.442957 kubelet[2883]: W1105 00:13:50.442925 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.443340 kubelet[2883]: E1105 00:13:50.443123 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:50.444407 kubelet[2883]: E1105 00:13:50.444348 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.444407 kubelet[2883]: W1105 00:13:50.444379 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.444635 kubelet[2883]: E1105 00:13:50.444408 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:50.445505 kubelet[2883]: E1105 00:13:50.445480 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.445505 kubelet[2883]: W1105 00:13:50.445499 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.445750 kubelet[2883]: E1105 00:13:50.445516 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:50.447269 kubelet[2883]: E1105 00:13:50.447200 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.447269 kubelet[2883]: W1105 00:13:50.447223 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.447563 kubelet[2883]: E1105 00:13:50.447282 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:50.447815 kubelet[2883]: E1105 00:13:50.447774 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.447815 kubelet[2883]: W1105 00:13:50.447803 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.447815 kubelet[2883]: E1105 00:13:50.447818 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:50.448308 kubelet[2883]: E1105 00:13:50.448283 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.448308 kubelet[2883]: W1105 00:13:50.448300 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.448440 kubelet[2883]: E1105 00:13:50.448313 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:50.449551 kubelet[2883]: E1105 00:13:50.449479 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.449551 kubelet[2883]: W1105 00:13:50.449499 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.449551 kubelet[2883]: E1105 00:13:50.449510 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:50.450071 kubelet[2883]: E1105 00:13:50.449815 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.450071 kubelet[2883]: W1105 00:13:50.449829 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.450071 kubelet[2883]: E1105 00:13:50.449840 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:50.450314 kubelet[2883]: E1105 00:13:50.450107 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.450314 kubelet[2883]: W1105 00:13:50.450118 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.450314 kubelet[2883]: E1105 00:13:50.450128 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:50.450851 kubelet[2883]: E1105 00:13:50.450829 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.450851 kubelet[2883]: W1105 00:13:50.450847 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.450927 kubelet[2883]: E1105 00:13:50.450858 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:50.453353 kubelet[2883]: E1105 00:13:50.453317 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.453353 kubelet[2883]: W1105 00:13:50.453352 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.453491 kubelet[2883]: E1105 00:13:50.453368 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:50.454401 kubelet[2883]: E1105 00:13:50.454322 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.454401 kubelet[2883]: W1105 00:13:50.454341 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.454401 kubelet[2883]: E1105 00:13:50.454354 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:50.454880 kubelet[2883]: E1105 00:13:50.454743 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.454880 kubelet[2883]: W1105 00:13:50.454754 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.454880 kubelet[2883]: E1105 00:13:50.454881 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:50.455389 kubelet[2883]: E1105 00:13:50.455331 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.455389 kubelet[2883]: W1105 00:13:50.455353 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.455389 kubelet[2883]: E1105 00:13:50.455367 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:50.458438 kubelet[2883]: E1105 00:13:50.458405 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.458438 kubelet[2883]: W1105 00:13:50.458425 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.458438 kubelet[2883]: E1105 00:13:50.458438 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:50.459266 kubelet[2883]: E1105 00:13:50.459204 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.459431 kubelet[2883]: W1105 00:13:50.459386 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.459581 kubelet[2883]: E1105 00:13:50.459551 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:50.463269 kubelet[2883]: E1105 00:13:50.461462 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.463269 kubelet[2883]: W1105 00:13:50.461483 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.463269 kubelet[2883]: E1105 00:13:50.461498 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:50.463269 kubelet[2883]: E1105 00:13:50.462605 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.463269 kubelet[2883]: W1105 00:13:50.462616 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.463269 kubelet[2883]: E1105 00:13:50.462627 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:50.463269 kubelet[2883]: E1105 00:13:50.463161 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.463269 kubelet[2883]: W1105 00:13:50.463171 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.463269 kubelet[2883]: E1105 00:13:50.463182 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:50.464353 kubelet[2883]: E1105 00:13:50.464320 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.464353 kubelet[2883]: W1105 00:13:50.464341 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.464353 kubelet[2883]: E1105 00:13:50.464357 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:50.464795 kubelet[2883]: E1105 00:13:50.464648 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.464795 kubelet[2883]: W1105 00:13:50.464672 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.464795 kubelet[2883]: E1105 00:13:50.464683 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:50.466829 kubelet[2883]: E1105 00:13:50.466649 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.466984 kubelet[2883]: W1105 00:13:50.466958 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.467114 kubelet[2883]: E1105 00:13:50.467094 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:50.467579 kubelet[2883]: E1105 00:13:50.467560 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.467699 kubelet[2883]: W1105 00:13:50.467678 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.467851 kubelet[2883]: E1105 00:13:50.467777 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:50.468332 kubelet[2883]: E1105 00:13:50.468315 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.468516 kubelet[2883]: W1105 00:13:50.468497 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.468612 kubelet[2883]: E1105 00:13:50.468593 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:50.469300 kubelet[2883]: E1105 00:13:50.469025 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.469300 kubelet[2883]: W1105 00:13:50.469042 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.469300 kubelet[2883]: E1105 00:13:50.469057 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:50.469641 kubelet[2883]: E1105 00:13:50.469617 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.469957 kubelet[2883]: W1105 00:13:50.469931 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.470083 kubelet[2883]: E1105 00:13:50.470062 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:50.470750 kubelet[2883]: E1105 00:13:50.470682 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.470750 kubelet[2883]: W1105 00:13:50.470698 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.471099 kubelet[2883]: E1105 00:13:50.470709 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:50.471536 kubelet[2883]: E1105 00:13:50.471515 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.471611 kubelet[2883]: W1105 00:13:50.471595 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.471745 kubelet[2883]: E1105 00:13:50.471701 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:50.472288 kubelet[2883]: E1105 00:13:50.472270 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.472432 kubelet[2883]: W1105 00:13:50.472388 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.472432 kubelet[2883]: E1105 00:13:50.472413 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:50.473394 kubelet[2883]: E1105 00:13:50.473353 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.473394 kubelet[2883]: W1105 00:13:50.473368 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.473394 kubelet[2883]: E1105 00:13:50.473379 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 00:13:50.473899 kubelet[2883]: E1105 00:13:50.473874 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.475702 kubelet[2883]: W1105 00:13:50.475593 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.475702 kubelet[2883]: E1105 00:13:50.475613 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:50.476247 kubelet[2883]: E1105 00:13:50.476159 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 00:13:50.476247 kubelet[2883]: W1105 00:13:50.476172 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 00:13:50.476247 kubelet[2883]: E1105 00:13:50.476183 2883 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 00:13:50.502777 containerd[1625]: time="2025-11-05T00:13:50.502730242Z" level=info msg="StartContainer for \"542e78988dceceb914d96842009105a594458ab28f94d3d7e64bd8d3a5a9defb\" returns successfully" Nov 5 00:13:50.565026 systemd[1]: cri-containerd-542e78988dceceb914d96842009105a594458ab28f94d3d7e64bd8d3a5a9defb.scope: Deactivated successfully. 
Nov 5 00:13:50.571826 containerd[1625]: time="2025-11-05T00:13:50.571767027Z" level=info msg="TaskExit event in podsandbox handler container_id:\"542e78988dceceb914d96842009105a594458ab28f94d3d7e64bd8d3a5a9defb\" id:\"542e78988dceceb914d96842009105a594458ab28f94d3d7e64bd8d3a5a9defb\" pid:3541 exited_at:{seconds:1762301630 nanos:569902922}" Nov 5 00:13:50.572223 containerd[1625]: time="2025-11-05T00:13:50.572151198Z" level=info msg="received exit event container_id:\"542e78988dceceb914d96842009105a594458ab28f94d3d7e64bd8d3a5a9defb\" id:\"542e78988dceceb914d96842009105a594458ab28f94d3d7e64bd8d3a5a9defb\" pid:3541 exited_at:{seconds:1762301630 nanos:569902922}" Nov 5 00:13:50.659016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-542e78988dceceb914d96842009105a594458ab28f94d3d7e64bd8d3a5a9defb-rootfs.mount: Deactivated successfully. Nov 5 00:13:51.369013 kubelet[2883]: E1105 00:13:51.368347 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:13:51.371871 containerd[1625]: time="2025-11-05T00:13:51.370777167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 5 00:13:51.690555 kubelet[2883]: E1105 00:13:51.689914 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rhc65" podUID="14de6d4c-7243-4b75-9a89-9c47bcb946c9" Nov 5 00:13:53.694386 kubelet[2883]: E1105 00:13:53.693288 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rhc65" 
podUID="14de6d4c-7243-4b75-9a89-9c47bcb946c9" Nov 5 00:13:55.932990 kubelet[2883]: E1105 00:13:55.931403 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rhc65" podUID="14de6d4c-7243-4b75-9a89-9c47bcb946c9" Nov 5 00:13:57.690388 kubelet[2883]: E1105 00:13:57.690210 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rhc65" podUID="14de6d4c-7243-4b75-9a89-9c47bcb946c9" Nov 5 00:13:58.603359 containerd[1625]: time="2025-11-05T00:13:58.603031360Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:13:58.607108 containerd[1625]: time="2025-11-05T00:13:58.605414833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 5 00:13:58.607507 containerd[1625]: time="2025-11-05T00:13:58.607471076Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:13:58.612317 containerd[1625]: time="2025-11-05T00:13:58.611762373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:13:58.613560 containerd[1625]: time="2025-11-05T00:13:58.613091325Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 7.242216278s" Nov 5 00:13:58.613560 containerd[1625]: time="2025-11-05T00:13:58.613207765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 5 00:13:58.625764 containerd[1625]: time="2025-11-05T00:13:58.625695704Z" level=info msg="CreateContainer within sandbox \"04baba097131aa94ffa5c2a8a72a1659ed8776318f729464da6ab233ac65c88d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 5 00:13:58.647907 containerd[1625]: time="2025-11-05T00:13:58.647857277Z" level=info msg="Container 636528fda0a2070efa4f82aab27f9e7feee995ed769ab9168dd5d8217384b98e: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:13:58.665315 containerd[1625]: time="2025-11-05T00:13:58.665189993Z" level=info msg="CreateContainer within sandbox \"04baba097131aa94ffa5c2a8a72a1659ed8776318f729464da6ab233ac65c88d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"636528fda0a2070efa4f82aab27f9e7feee995ed769ab9168dd5d8217384b98e\"" Nov 5 00:13:58.670705 containerd[1625]: time="2025-11-05T00:13:58.668395568Z" level=info msg="StartContainer for \"636528fda0a2070efa4f82aab27f9e7feee995ed769ab9168dd5d8217384b98e\"" Nov 5 00:13:58.679584 containerd[1625]: time="2025-11-05T00:13:58.679388205Z" level=info msg="connecting to shim 636528fda0a2070efa4f82aab27f9e7feee995ed769ab9168dd5d8217384b98e" address="unix:///run/containerd/s/32e2f0e88903c3601cdc29ee42df16bca6d67912a46f3fbf373ac1712698205a" protocol=ttrpc version=3 Nov 5 00:13:58.811752 systemd[1]: Started cri-containerd-636528fda0a2070efa4f82aab27f9e7feee995ed769ab9168dd5d8217384b98e.scope - libcontainer container 636528fda0a2070efa4f82aab27f9e7feee995ed769ab9168dd5d8217384b98e. 
Nov 5 00:13:59.214268 containerd[1625]: time="2025-11-05T00:13:59.214066550Z" level=info msg="StartContainer for \"636528fda0a2070efa4f82aab27f9e7feee995ed769ab9168dd5d8217384b98e\" returns successfully" Nov 5 00:13:59.440357 kubelet[2883]: E1105 00:13:59.439138 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:13:59.692470 kubelet[2883]: E1105 00:13:59.692059 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rhc65" podUID="14de6d4c-7243-4b75-9a89-9c47bcb946c9" Nov 5 00:14:00.442743 kubelet[2883]: E1105 00:14:00.442697 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:14:01.694199 kubelet[2883]: E1105 00:14:01.690184 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rhc65" podUID="14de6d4c-7243-4b75-9a89-9c47bcb946c9" Nov 5 00:14:02.251273 kubelet[2883]: I1105 00:14:02.251006 2883 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 00:14:02.257196 kubelet[2883]: E1105 00:14:02.254286 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:14:02.455017 kubelet[2883]: E1105 00:14:02.454966 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:14:02.748342 systemd[1]: cri-containerd-636528fda0a2070efa4f82aab27f9e7feee995ed769ab9168dd5d8217384b98e.scope: Deactivated successfully. Nov 5 00:14:02.751195 systemd[1]: cri-containerd-636528fda0a2070efa4f82aab27f9e7feee995ed769ab9168dd5d8217384b98e.scope: Consumed 3.804s CPU time, 192.8M memory peak, 171.3M written to disk. Nov 5 00:14:02.754120 containerd[1625]: time="2025-11-05T00:14:02.753930676Z" level=info msg="received exit event container_id:\"636528fda0a2070efa4f82aab27f9e7feee995ed769ab9168dd5d8217384b98e\" id:\"636528fda0a2070efa4f82aab27f9e7feee995ed769ab9168dd5d8217384b98e\" pid:3636 exited_at:{seconds:1762301642 nanos:750355822}" Nov 5 00:14:02.755892 containerd[1625]: time="2025-11-05T00:14:02.754098426Z" level=info msg="TaskExit event in podsandbox handler container_id:\"636528fda0a2070efa4f82aab27f9e7feee995ed769ab9168dd5d8217384b98e\" id:\"636528fda0a2070efa4f82aab27f9e7feee995ed769ab9168dd5d8217384b98e\" pid:3636 exited_at:{seconds:1762301642 nanos:750355822}" Nov 5 00:14:02.806922 kubelet[2883]: I1105 00:14:02.806675 2883 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 5 00:14:02.815268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-636528fda0a2070efa4f82aab27f9e7feee995ed769ab9168dd5d8217384b98e-rootfs.mount: Deactivated successfully. Nov 5 00:14:02.922286 systemd[1]: Created slice kubepods-burstable-pod0ceef31c_66d4_4e84_87fa_b07ef462b872.slice - libcontainer container kubepods-burstable-pod0ceef31c_66d4_4e84_87fa_b07ef462b872.slice. Nov 5 00:14:02.940539 systemd[1]: Created slice kubepods-burstable-pod39f0296c_eb04_4fb2_8eac_b5af134b840e.slice - libcontainer container kubepods-burstable-pod39f0296c_eb04_4fb2_8eac_b5af134b840e.slice. 
Nov 5 00:14:02.959286 systemd[1]: Created slice kubepods-besteffort-poddca6511f_77a2_4cca_9f19_2aca1b8d75e8.slice - libcontainer container kubepods-besteffort-poddca6511f_77a2_4cca_9f19_2aca1b8d75e8.slice. Nov 5 00:14:02.973873 systemd[1]: Created slice kubepods-besteffort-pod346c021e_f948_4f90_b480_e046118d7005.slice - libcontainer container kubepods-besteffort-pod346c021e_f948_4f90_b480_e046118d7005.slice. Nov 5 00:14:02.991001 systemd[1]: Created slice kubepods-besteffort-pod78cc6732_2ab7_4966_83c9_5b3b3e112a51.slice - libcontainer container kubepods-besteffort-pod78cc6732_2ab7_4966_83c9_5b3b3e112a51.slice. Nov 5 00:14:03.002535 systemd[1]: Created slice kubepods-besteffort-pod6ee5090e_a223_462e_845a_5c7f9446afa1.slice - libcontainer container kubepods-besteffort-pod6ee5090e_a223_462e_845a_5c7f9446afa1.slice. Nov 5 00:14:03.013841 systemd[1]: Created slice kubepods-besteffort-pod395087e7_e090_49ab_8705_bb6b55aa5776.slice - libcontainer container kubepods-besteffort-pod395087e7_e090_49ab_8705_bb6b55aa5776.slice. 
Nov 5 00:14:03.086703 kubelet[2883]: I1105 00:14:03.086633 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6ee5090e-a223-462e-845a-5c7f9446afa1-goldmane-key-pair\") pod \"goldmane-666569f655-kf7jk\" (UID: \"6ee5090e-a223-462e-845a-5c7f9446afa1\") " pod="calico-system/goldmane-666569f655-kf7jk" Nov 5 00:14:03.086971 kubelet[2883]: I1105 00:14:03.086711 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbcnd\" (UniqueName: \"kubernetes.io/projected/78cc6732-2ab7-4966-83c9-5b3b3e112a51-kube-api-access-lbcnd\") pod \"calico-apiserver-5b875bb7d7-9tw6k\" (UID: \"78cc6732-2ab7-4966-83c9-5b3b3e112a51\") " pod="calico-apiserver/calico-apiserver-5b875bb7d7-9tw6k" Nov 5 00:14:03.086971 kubelet[2883]: I1105 00:14:03.086766 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39f0296c-eb04-4fb2-8eac-b5af134b840e-config-volume\") pod \"coredns-674b8bbfcf-9d74h\" (UID: \"39f0296c-eb04-4fb2-8eac-b5af134b840e\") " pod="kube-system/coredns-674b8bbfcf-9d74h" Nov 5 00:14:03.086971 kubelet[2883]: I1105 00:14:03.086794 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/395087e7-e090-49ab-8705-bb6b55aa5776-whisker-backend-key-pair\") pod \"whisker-69bc57565d-dw7qq\" (UID: \"395087e7-e090-49ab-8705-bb6b55aa5776\") " pod="calico-system/whisker-69bc57565d-dw7qq" Nov 5 00:14:03.086971 kubelet[2883]: I1105 00:14:03.086812 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrxll\" (UniqueName: \"kubernetes.io/projected/395087e7-e090-49ab-8705-bb6b55aa5776-kube-api-access-hrxll\") pod \"whisker-69bc57565d-dw7qq\" (UID: 
\"395087e7-e090-49ab-8705-bb6b55aa5776\") " pod="calico-system/whisker-69bc57565d-dw7qq" Nov 5 00:14:03.086971 kubelet[2883]: I1105 00:14:03.086845 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2899\" (UniqueName: \"kubernetes.io/projected/dca6511f-77a2-4cca-9f19-2aca1b8d75e8-kube-api-access-l2899\") pod \"calico-kube-controllers-666f7c64f9-pjzbv\" (UID: \"dca6511f-77a2-4cca-9f19-2aca1b8d75e8\") " pod="calico-system/calico-kube-controllers-666f7c64f9-pjzbv" Nov 5 00:14:03.087324 kubelet[2883]: I1105 00:14:03.086877 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wd4p\" (UniqueName: \"kubernetes.io/projected/346c021e-f948-4f90-b480-e046118d7005-kube-api-access-9wd4p\") pod \"calico-apiserver-5b875bb7d7-78sgr\" (UID: \"346c021e-f948-4f90-b480-e046118d7005\") " pod="calico-apiserver/calico-apiserver-5b875bb7d7-78sgr" Nov 5 00:14:03.087324 kubelet[2883]: I1105 00:14:03.086915 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/395087e7-e090-49ab-8705-bb6b55aa5776-whisker-ca-bundle\") pod \"whisker-69bc57565d-dw7qq\" (UID: \"395087e7-e090-49ab-8705-bb6b55aa5776\") " pod="calico-system/whisker-69bc57565d-dw7qq" Nov 5 00:14:03.087324 kubelet[2883]: I1105 00:14:03.086936 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dca6511f-77a2-4cca-9f19-2aca1b8d75e8-tigera-ca-bundle\") pod \"calico-kube-controllers-666f7c64f9-pjzbv\" (UID: \"dca6511f-77a2-4cca-9f19-2aca1b8d75e8\") " pod="calico-system/calico-kube-controllers-666f7c64f9-pjzbv" Nov 5 00:14:03.087324 kubelet[2883]: I1105 00:14:03.086983 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/0ceef31c-66d4-4e84-87fa-b07ef462b872-config-volume\") pod \"coredns-674b8bbfcf-bstvr\" (UID: \"0ceef31c-66d4-4e84-87fa-b07ef462b872\") " pod="kube-system/coredns-674b8bbfcf-bstvr" Nov 5 00:14:03.087324 kubelet[2883]: I1105 00:14:03.087100 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5phnr\" (UniqueName: \"kubernetes.io/projected/6ee5090e-a223-462e-845a-5c7f9446afa1-kube-api-access-5phnr\") pod \"goldmane-666569f655-kf7jk\" (UID: \"6ee5090e-a223-462e-845a-5c7f9446afa1\") " pod="calico-system/goldmane-666569f655-kf7jk" Nov 5 00:14:03.087584 kubelet[2883]: I1105 00:14:03.087144 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkk8d\" (UniqueName: \"kubernetes.io/projected/39f0296c-eb04-4fb2-8eac-b5af134b840e-kube-api-access-wkk8d\") pod \"coredns-674b8bbfcf-9d74h\" (UID: \"39f0296c-eb04-4fb2-8eac-b5af134b840e\") " pod="kube-system/coredns-674b8bbfcf-9d74h" Nov 5 00:14:03.087584 kubelet[2883]: I1105 00:14:03.087212 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ee5090e-a223-462e-845a-5c7f9446afa1-config\") pod \"goldmane-666569f655-kf7jk\" (UID: \"6ee5090e-a223-462e-845a-5c7f9446afa1\") " pod="calico-system/goldmane-666569f655-kf7jk" Nov 5 00:14:03.088841 kubelet[2883]: I1105 00:14:03.088800 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crtl5\" (UniqueName: \"kubernetes.io/projected/0ceef31c-66d4-4e84-87fa-b07ef462b872-kube-api-access-crtl5\") pod \"coredns-674b8bbfcf-bstvr\" (UID: \"0ceef31c-66d4-4e84-87fa-b07ef462b872\") " pod="kube-system/coredns-674b8bbfcf-bstvr" Nov 5 00:14:03.089150 kubelet[2883]: I1105 00:14:03.089023 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/78cc6732-2ab7-4966-83c9-5b3b3e112a51-calico-apiserver-certs\") pod \"calico-apiserver-5b875bb7d7-9tw6k\" (UID: \"78cc6732-2ab7-4966-83c9-5b3b3e112a51\") " pod="calico-apiserver/calico-apiserver-5b875bb7d7-9tw6k" Nov 5 00:14:03.089212 kubelet[2883]: I1105 00:14:03.089196 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/346c021e-f948-4f90-b480-e046118d7005-calico-apiserver-certs\") pod \"calico-apiserver-5b875bb7d7-78sgr\" (UID: \"346c021e-f948-4f90-b480-e046118d7005\") " pod="calico-apiserver/calico-apiserver-5b875bb7d7-78sgr" Nov 5 00:14:03.089331 kubelet[2883]: I1105 00:14:03.089223 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ee5090e-a223-462e-845a-5c7f9446afa1-goldmane-ca-bundle\") pod \"goldmane-666569f655-kf7jk\" (UID: \"6ee5090e-a223-462e-845a-5c7f9446afa1\") " pod="calico-system/goldmane-666569f655-kf7jk" Nov 5 00:14:03.304366 containerd[1625]: time="2025-11-05T00:14:03.304122314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b875bb7d7-9tw6k,Uid:78cc6732-2ab7-4966-83c9-5b3b3e112a51,Namespace:calico-apiserver,Attempt:0,}" Nov 5 00:14:03.312273 containerd[1625]: time="2025-11-05T00:14:03.311662802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kf7jk,Uid:6ee5090e-a223-462e-845a-5c7f9446afa1,Namespace:calico-system,Attempt:0,}" Nov 5 00:14:03.338276 containerd[1625]: time="2025-11-05T00:14:03.336901950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69bc57565d-dw7qq,Uid:395087e7-e090-49ab-8705-bb6b55aa5776,Namespace:calico-system,Attempt:0,}" Nov 5 00:14:03.534282 kubelet[2883]: E1105 00:14:03.533511 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:14:03.534484 containerd[1625]: time="2025-11-05T00:14:03.534431206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bstvr,Uid:0ceef31c-66d4-4e84-87fa-b07ef462b872,Namespace:kube-system,Attempt:0,}" Nov 5 00:14:03.545498 kubelet[2883]: E1105 00:14:03.545223 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:14:03.549649 kubelet[2883]: E1105 00:14:03.549618 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:14:03.550551 containerd[1625]: time="2025-11-05T00:14:03.550504173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9d74h,Uid:39f0296c-eb04-4fb2-8eac-b5af134b840e,Namespace:kube-system,Attempt:0,}" Nov 5 00:14:03.563263 containerd[1625]: time="2025-11-05T00:14:03.562130876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 5 00:14:03.570497 containerd[1625]: time="2025-11-05T00:14:03.570446455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-666f7c64f9-pjzbv,Uid:dca6511f-77a2-4cca-9f19-2aca1b8d75e8,Namespace:calico-system,Attempt:0,}" Nov 5 00:14:03.582832 containerd[1625]: time="2025-11-05T00:14:03.582785508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b875bb7d7-78sgr,Uid:346c021e-f948-4f90-b480-e046118d7005,Namespace:calico-apiserver,Attempt:0,}" Nov 5 00:14:03.739313 systemd[1]: Created slice kubepods-besteffort-pod14de6d4c_7243_4b75_9a89_9c47bcb946c9.slice - libcontainer container kubepods-besteffort-pod14de6d4c_7243_4b75_9a89_9c47bcb946c9.slice. 
Nov 5 00:14:03.791724 containerd[1625]: time="2025-11-05T00:14:03.791638346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rhc65,Uid:14de6d4c-7243-4b75-9a89-9c47bcb946c9,Namespace:calico-system,Attempt:0,}" Nov 5 00:14:03.984268 containerd[1625]: time="2025-11-05T00:14:03.981742884Z" level=error msg="Failed to destroy network for sandbox \"5ef204541c52e7ace52e93cd8686f6b3ac23c6751d8d11454fcb4e4e5dbee7ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 00:14:03.987273 containerd[1625]: time="2025-11-05T00:14:03.986723809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kf7jk,Uid:6ee5090e-a223-462e-845a-5c7f9446afa1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ef204541c52e7ace52e93cd8686f6b3ac23c6751d8d11454fcb4e4e5dbee7ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 00:14:03.986995 systemd[1]: run-netns-cni\x2d02f64c79\x2d068f\x2d4a41\x2dd275\x2d86baffbe9586.mount: Deactivated successfully. 
Nov 5 00:14:03.991623 kubelet[2883]: E1105 00:14:03.988738 2883 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ef204541c52e7ace52e93cd8686f6b3ac23c6751d8d11454fcb4e4e5dbee7ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 00:14:03.991623 kubelet[2883]: E1105 00:14:03.988887 2883 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ef204541c52e7ace52e93cd8686f6b3ac23c6751d8d11454fcb4e4e5dbee7ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-kf7jk" Nov 5 00:14:03.991623 kubelet[2883]: E1105 00:14:03.988976 2883 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ef204541c52e7ace52e93cd8686f6b3ac23c6751d8d11454fcb4e4e5dbee7ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-kf7jk" Nov 5 00:14:03.993476 kubelet[2883]: E1105 00:14:03.989046 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-kf7jk_calico-system(6ee5090e-a223-462e-845a-5c7f9446afa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-kf7jk_calico-system(6ee5090e-a223-462e-845a-5c7f9446afa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ef204541c52e7ace52e93cd8686f6b3ac23c6751d8d11454fcb4e4e5dbee7ce\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-kf7jk" podUID="6ee5090e-a223-462e-845a-5c7f9446afa1" Nov 5 00:14:04.132739 containerd[1625]: time="2025-11-05T00:14:04.132663199Z" level=error msg="Failed to destroy network for sandbox \"6d56ac6b54eacc6eb2860fc504f86460fe805d5282a78b93299a86f870a8ab40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 00:14:04.140133 systemd[1]: run-netns-cni\x2dca084dd6\x2d0377\x2dae49\x2de19b\x2d316ffb6fa96e.mount: Deactivated successfully. Nov 5 00:14:04.143524 containerd[1625]: time="2025-11-05T00:14:04.143353490Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69bc57565d-dw7qq,Uid:395087e7-e090-49ab-8705-bb6b55aa5776,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d56ac6b54eacc6eb2860fc504f86460fe805d5282a78b93299a86f870a8ab40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 00:14:04.147617 kubelet[2883]: E1105 00:14:04.146836 2883 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d56ac6b54eacc6eb2860fc504f86460fe805d5282a78b93299a86f870a8ab40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 00:14:04.147617 kubelet[2883]: E1105 00:14:04.146908 2883 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6d56ac6b54eacc6eb2860fc504f86460fe805d5282a78b93299a86f870a8ab40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69bc57565d-dw7qq" Nov 5 00:14:04.147617 kubelet[2883]: E1105 00:14:04.146934 2883 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d56ac6b54eacc6eb2860fc504f86460fe805d5282a78b93299a86f870a8ab40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69bc57565d-dw7qq" Nov 5 00:14:04.147912 kubelet[2883]: E1105 00:14:04.146998 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-69bc57565d-dw7qq_calico-system(395087e7-e090-49ab-8705-bb6b55aa5776)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-69bc57565d-dw7qq_calico-system(395087e7-e090-49ab-8705-bb6b55aa5776)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d56ac6b54eacc6eb2860fc504f86460fe805d5282a78b93299a86f870a8ab40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69bc57565d-dw7qq" podUID="395087e7-e090-49ab-8705-bb6b55aa5776" Nov 5 00:14:04.149704 containerd[1625]: time="2025-11-05T00:14:04.149624447Z" level=error msg="Failed to destroy network for sandbox \"ba843c0391c23730fe7c66c710956c2d71a907125ff254b1df61cd75986756a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 00:14:04.156935 
systemd[1]: run-netns-cni\x2dba3ca28c\x2d0deb\x2de6b5\x2df38f\x2d6bde058125f4.mount: Deactivated successfully. Nov 5 00:14:04.163708 containerd[1625]: time="2025-11-05T00:14:04.163649861Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b875bb7d7-9tw6k,Uid:78cc6732-2ab7-4966-83c9-5b3b3e112a51,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba843c0391c23730fe7c66c710956c2d71a907125ff254b1df61cd75986756a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 00:14:04.164587 kubelet[2883]: E1105 00:14:04.164091 2883 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba843c0391c23730fe7c66c710956c2d71a907125ff254b1df61cd75986756a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 00:14:04.164587 kubelet[2883]: E1105 00:14:04.164185 2883 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba843c0391c23730fe7c66c710956c2d71a907125ff254b1df61cd75986756a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b875bb7d7-9tw6k" Nov 5 00:14:04.164587 kubelet[2883]: E1105 00:14:04.164296 2883 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba843c0391c23730fe7c66c710956c2d71a907125ff254b1df61cd75986756a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b875bb7d7-9tw6k" Nov 5 00:14:04.165081 kubelet[2883]: E1105 00:14:04.164547 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b875bb7d7-9tw6k_calico-apiserver(78cc6732-2ab7-4966-83c9-5b3b3e112a51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b875bb7d7-9tw6k_calico-apiserver(78cc6732-2ab7-4966-83c9-5b3b3e112a51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba843c0391c23730fe7c66c710956c2d71a907125ff254b1df61cd75986756a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-9tw6k" podUID="78cc6732-2ab7-4966-83c9-5b3b3e112a51" Nov 5 00:14:04.203857 containerd[1625]: time="2025-11-05T00:14:04.203790312Z" level=error msg="Failed to destroy network for sandbox \"15e29c2907041f6b6ea58f662e05bd2bc7812a4cd89d0a065328d44e5e2fad7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 00:14:04.205373 containerd[1625]: time="2025-11-05T00:14:04.205323604Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9d74h,Uid:39f0296c-eb04-4fb2-8eac-b5af134b840e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"15e29c2907041f6b6ea58f662e05bd2bc7812a4cd89d0a065328d44e5e2fad7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 00:14:04.205690 kubelet[2883]: E1105 
00:14:04.205652 2883 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15e29c2907041f6b6ea58f662e05bd2bc7812a4cd89d0a065328d44e5e2fad7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 00:14:04.205960 kubelet[2883]: E1105 00:14:04.205839 2883 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15e29c2907041f6b6ea58f662e05bd2bc7812a4cd89d0a065328d44e5e2fad7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-9d74h" Nov 5 00:14:04.205960 kubelet[2883]: E1105 00:14:04.205899 2883 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15e29c2907041f6b6ea58f662e05bd2bc7812a4cd89d0a065328d44e5e2fad7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-9d74h" Nov 5 00:14:04.207466 kubelet[2883]: E1105 00:14:04.207387 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-9d74h_kube-system(39f0296c-eb04-4fb2-8eac-b5af134b840e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-9d74h_kube-system(39f0296c-eb04-4fb2-8eac-b5af134b840e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15e29c2907041f6b6ea58f662e05bd2bc7812a4cd89d0a065328d44e5e2fad7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-9d74h" podUID="39f0296c-eb04-4fb2-8eac-b5af134b840e" Nov 5 00:14:04.231502 containerd[1625]: time="2025-11-05T00:14:04.231311220Z" level=error msg="Failed to destroy network for sandbox \"eae50f9db89e7f4aa6abbaa6915a1f2d0b65f97f3d29d48993f8a73512e001ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 00:14:04.233415 containerd[1625]: time="2025-11-05T00:14:04.232791302Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-666f7c64f9-pjzbv,Uid:dca6511f-77a2-4cca-9f19-2aca1b8d75e8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eae50f9db89e7f4aa6abbaa6915a1f2d0b65f97f3d29d48993f8a73512e001ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 00:14:04.233655 kubelet[2883]: E1105 00:14:04.233599 2883 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eae50f9db89e7f4aa6abbaa6915a1f2d0b65f97f3d29d48993f8a73512e001ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 00:14:04.233850 kubelet[2883]: E1105 00:14:04.233806 2883 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eae50f9db89e7f4aa6abbaa6915a1f2d0b65f97f3d29d48993f8a73512e001ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-666f7c64f9-pjzbv" Nov 5 00:14:04.233952 kubelet[2883]: E1105 00:14:04.233930 2883 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eae50f9db89e7f4aa6abbaa6915a1f2d0b65f97f3d29d48993f8a73512e001ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-666f7c64f9-pjzbv" Nov 5 00:14:04.237458 kubelet[2883]: E1105 00:14:04.234700 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-666f7c64f9-pjzbv_calico-system(dca6511f-77a2-4cca-9f19-2aca1b8d75e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-666f7c64f9-pjzbv_calico-system(dca6511f-77a2-4cca-9f19-2aca1b8d75e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eae50f9db89e7f4aa6abbaa6915a1f2d0b65f97f3d29d48993f8a73512e001ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-666f7c64f9-pjzbv" podUID="dca6511f-77a2-4cca-9f19-2aca1b8d75e8" Nov 5 00:14:04.240824 containerd[1625]: time="2025-11-05T00:14:04.240749380Z" level=error msg="Failed to destroy network for sandbox \"be843d700eff3aab80afb775847eaa10c7adfbcb5f20bb2dcfc3303a38216596\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 00:14:04.245519 containerd[1625]: time="2025-11-05T00:14:04.245450895Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5b875bb7d7-78sgr,Uid:346c021e-f948-4f90-b480-e046118d7005,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be843d700eff3aab80afb775847eaa10c7adfbcb5f20bb2dcfc3303a38216596\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 00:14:04.245850 kubelet[2883]: E1105 00:14:04.245764 2883 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be843d700eff3aab80afb775847eaa10c7adfbcb5f20bb2dcfc3303a38216596\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 00:14:04.246135 kubelet[2883]: E1105 00:14:04.245946 2883 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be843d700eff3aab80afb775847eaa10c7adfbcb5f20bb2dcfc3303a38216596\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b875bb7d7-78sgr" Nov 5 00:14:04.246135 kubelet[2883]: E1105 00:14:04.246042 2883 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be843d700eff3aab80afb775847eaa10c7adfbcb5f20bb2dcfc3303a38216596\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b875bb7d7-78sgr" Nov 5 00:14:04.247853 kubelet[2883]: E1105 00:14:04.247759 2883 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b875bb7d7-78sgr_calico-apiserver(346c021e-f948-4f90-b480-e046118d7005)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b875bb7d7-78sgr_calico-apiserver(346c021e-f948-4f90-b480-e046118d7005)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be843d700eff3aab80afb775847eaa10c7adfbcb5f20bb2dcfc3303a38216596\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-78sgr" podUID="346c021e-f948-4f90-b480-e046118d7005" Nov 5 00:14:04.257958 containerd[1625]: time="2025-11-05T00:14:04.257894747Z" level=error msg="Failed to destroy network for sandbox \"820b29cb2f6b14ccb3443099121c397797d4982b048c8ac3d988a66afb0a9149\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 00:14:04.259648 containerd[1625]: time="2025-11-05T00:14:04.259341449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bstvr,Uid:0ceef31c-66d4-4e84-87fa-b07ef462b872,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"820b29cb2f6b14ccb3443099121c397797d4982b048c8ac3d988a66afb0a9149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 00:14:04.259808 kubelet[2883]: E1105 00:14:04.259733 2883 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"820b29cb2f6b14ccb3443099121c397797d4982b048c8ac3d988a66afb0a9149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 00:14:04.259808 kubelet[2883]: E1105 00:14:04.259801 2883 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"820b29cb2f6b14ccb3443099121c397797d4982b048c8ac3d988a66afb0a9149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bstvr" Nov 5 00:14:04.259960 kubelet[2883]: E1105 00:14:04.259829 2883 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"820b29cb2f6b14ccb3443099121c397797d4982b048c8ac3d988a66afb0a9149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bstvr" Nov 5 00:14:04.259960 kubelet[2883]: E1105 00:14:04.259881 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bstvr_kube-system(0ceef31c-66d4-4e84-87fa-b07ef462b872)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bstvr_kube-system(0ceef31c-66d4-4e84-87fa-b07ef462b872)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"820b29cb2f6b14ccb3443099121c397797d4982b048c8ac3d988a66afb0a9149\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bstvr" 
podUID="0ceef31c-66d4-4e84-87fa-b07ef462b872" Nov 5 00:14:04.271896 containerd[1625]: time="2025-11-05T00:14:04.271782342Z" level=error msg="Failed to destroy network for sandbox \"e5fc6c4484a8305100f4f64c8672f0700422a4ee024e73241df9643bc99b3c5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 00:14:04.274526 containerd[1625]: time="2025-11-05T00:14:04.274203584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rhc65,Uid:14de6d4c-7243-4b75-9a89-9c47bcb946c9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5fc6c4484a8305100f4f64c8672f0700422a4ee024e73241df9643bc99b3c5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 00:14:04.274938 kubelet[2883]: E1105 00:14:04.274784 2883 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5fc6c4484a8305100f4f64c8672f0700422a4ee024e73241df9643bc99b3c5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 00:14:04.274938 kubelet[2883]: E1105 00:14:04.274888 2883 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5fc6c4484a8305100f4f64c8672f0700422a4ee024e73241df9643bc99b3c5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rhc65" Nov 5 00:14:04.275402 kubelet[2883]: E1105 00:14:04.274959 
2883 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5fc6c4484a8305100f4f64c8672f0700422a4ee024e73241df9643bc99b3c5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rhc65" Nov 5 00:14:04.275402 kubelet[2883]: E1105 00:14:04.275056 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rhc65_calico-system(14de6d4c-7243-4b75-9a89-9c47bcb946c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rhc65_calico-system(14de6d4c-7243-4b75-9a89-9c47bcb946c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5fc6c4484a8305100f4f64c8672f0700422a4ee024e73241df9643bc99b3c5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rhc65" podUID="14de6d4c-7243-4b75-9a89-9c47bcb946c9" Nov 5 00:14:04.883388 systemd[1]: run-netns-cni\x2d838077c0\x2dff8d\x2d1142\x2da7cb\x2d707757693d98.mount: Deactivated successfully. Nov 5 00:14:04.884466 systemd[1]: run-netns-cni\x2dd5b94499\x2d6265\x2da1b2\x2da9be\x2db784e656b02b.mount: Deactivated successfully. Nov 5 00:14:04.884600 systemd[1]: run-netns-cni\x2da4e557fd\x2d1411\x2da317\x2d5423\x2db0c2361d7cae.mount: Deactivated successfully. Nov 5 00:14:04.884738 systemd[1]: run-netns-cni\x2d41cf9623\x2dbc65\x2dc8f4\x2d50b6\x2d1b14559894f9.mount: Deactivated successfully. Nov 5 00:14:04.884871 systemd[1]: run-netns-cni\x2d14642c75\x2dd9a8\x2d166e\x2d3216\x2d826fd7110e92.mount: Deactivated successfully. 
Nov 5 00:14:13.442051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1132581421.mount: Deactivated successfully. Nov 5 00:14:13.476702 containerd[1625]: time="2025-11-05T00:14:13.476501115Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:14:13.478566 containerd[1625]: time="2025-11-05T00:14:13.478535218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 5 00:14:13.479396 containerd[1625]: time="2025-11-05T00:14:13.479326015Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:14:13.484119 containerd[1625]: time="2025-11-05T00:14:13.484067072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:14:13.485018 containerd[1625]: time="2025-11-05T00:14:13.484972006Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 9.922770009s" Nov 5 00:14:13.485079 containerd[1625]: time="2025-11-05T00:14:13.485030560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 5 00:14:13.528454 containerd[1625]: time="2025-11-05T00:14:13.528346786Z" level=info msg="CreateContainer within sandbox \"04baba097131aa94ffa5c2a8a72a1659ed8776318f729464da6ab233ac65c88d\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 00:14:13.548567 containerd[1625]: time="2025-11-05T00:14:13.548506753Z" level=info msg="Container 1c6c29fc2a340526bb8f3f24aa54b5baeec35f2ea323ccdf3abf8453185a6328: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:14:13.566573 containerd[1625]: time="2025-11-05T00:14:13.566508640Z" level=info msg="CreateContainer within sandbox \"04baba097131aa94ffa5c2a8a72a1659ed8776318f729464da6ab233ac65c88d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1c6c29fc2a340526bb8f3f24aa54b5baeec35f2ea323ccdf3abf8453185a6328\"" Nov 5 00:14:13.568276 containerd[1625]: time="2025-11-05T00:14:13.567799208Z" level=info msg="StartContainer for \"1c6c29fc2a340526bb8f3f24aa54b5baeec35f2ea323ccdf3abf8453185a6328\"" Nov 5 00:14:13.572412 containerd[1625]: time="2025-11-05T00:14:13.572373074Z" level=info msg="connecting to shim 1c6c29fc2a340526bb8f3f24aa54b5baeec35f2ea323ccdf3abf8453185a6328" address="unix:///run/containerd/s/32e2f0e88903c3601cdc29ee42df16bca6d67912a46f3fbf373ac1712698205a" protocol=ttrpc version=3 Nov 5 00:14:13.716523 systemd[1]: Started cri-containerd-1c6c29fc2a340526bb8f3f24aa54b5baeec35f2ea323ccdf3abf8453185a6328.scope - libcontainer container 1c6c29fc2a340526bb8f3f24aa54b5baeec35f2ea323ccdf3abf8453185a6328. Nov 5 00:14:13.824617 containerd[1625]: time="2025-11-05T00:14:13.824538042Z" level=info msg="StartContainer for \"1c6c29fc2a340526bb8f3f24aa54b5baeec35f2ea323ccdf3abf8453185a6328\" returns successfully" Nov 5 00:14:14.065772 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 00:14:14.066108 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 5 00:14:14.432132 kubelet[2883]: I1105 00:14:14.431993 2883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/395087e7-e090-49ab-8705-bb6b55aa5776-whisker-backend-key-pair\") pod \"395087e7-e090-49ab-8705-bb6b55aa5776\" (UID: \"395087e7-e090-49ab-8705-bb6b55aa5776\") " Nov 5 00:14:14.437783 kubelet[2883]: I1105 00:14:14.432304 2883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/395087e7-e090-49ab-8705-bb6b55aa5776-whisker-ca-bundle\") pod \"395087e7-e090-49ab-8705-bb6b55aa5776\" (UID: \"395087e7-e090-49ab-8705-bb6b55aa5776\") " Nov 5 00:14:14.437783 kubelet[2883]: I1105 00:14:14.432359 2883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrxll\" (UniqueName: \"kubernetes.io/projected/395087e7-e090-49ab-8705-bb6b55aa5776-kube-api-access-hrxll\") pod \"395087e7-e090-49ab-8705-bb6b55aa5776\" (UID: \"395087e7-e090-49ab-8705-bb6b55aa5776\") " Nov 5 00:14:14.451106 kubelet[2883]: I1105 00:14:14.451027 2883 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/395087e7-e090-49ab-8705-bb6b55aa5776-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "395087e7-e090-49ab-8705-bb6b55aa5776" (UID: "395087e7-e090-49ab-8705-bb6b55aa5776"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 00:14:14.451825 systemd[1]: var-lib-kubelet-pods-395087e7\x2de090\x2d49ab\x2d8705\x2dbb6b55aa5776-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhrxll.mount: Deactivated successfully. 
Nov 5 00:14:14.454813 kubelet[2883]: I1105 00:14:14.454778 2883 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/395087e7-e090-49ab-8705-bb6b55aa5776-kube-api-access-hrxll" (OuterVolumeSpecName: "kube-api-access-hrxll") pod "395087e7-e090-49ab-8705-bb6b55aa5776" (UID: "395087e7-e090-49ab-8705-bb6b55aa5776"). InnerVolumeSpecName "kube-api-access-hrxll". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 00:14:14.459063 systemd[1]: var-lib-kubelet-pods-395087e7\x2de090\x2d49ab\x2d8705\x2dbb6b55aa5776-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 5 00:14:14.461424 kubelet[2883]: I1105 00:14:14.460324 2883 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/395087e7-e090-49ab-8705-bb6b55aa5776-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "395087e7-e090-49ab-8705-bb6b55aa5776" (UID: "395087e7-e090-49ab-8705-bb6b55aa5776"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 00:14:14.539020 kubelet[2883]: I1105 00:14:14.538930 2883 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/395087e7-e090-49ab-8705-bb6b55aa5776-whisker-backend-key-pair\") on node \"172-232-14-37\" DevicePath \"\"" Nov 5 00:14:14.539302 kubelet[2883]: I1105 00:14:14.539026 2883 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/395087e7-e090-49ab-8705-bb6b55aa5776-whisker-ca-bundle\") on node \"172-232-14-37\" DevicePath \"\"" Nov 5 00:14:14.539302 kubelet[2883]: I1105 00:14:14.539097 2883 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hrxll\" (UniqueName: \"kubernetes.io/projected/395087e7-e090-49ab-8705-bb6b55aa5776-kube-api-access-hrxll\") on node \"172-232-14-37\" DevicePath \"\"" Nov 5 00:14:14.659278 kubelet[2883]: E1105 00:14:14.657945 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:14:14.675518 systemd[1]: Removed slice kubepods-besteffort-pod395087e7_e090_49ab_8705_bb6b55aa5776.slice - libcontainer container kubepods-besteffort-pod395087e7_e090_49ab_8705_bb6b55aa5776.slice. 
Nov 5 00:14:14.705554 kubelet[2883]: I1105 00:14:14.704216 2883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-w8l76" podStartSLOduration=3.049084901 podStartE2EDuration="31.704026109s" podCreationTimestamp="2025-11-05 00:13:43 +0000 UTC" firstStartedPulling="2025-11-05 00:13:44.831841408 +0000 UTC m=+28.561184054" lastFinishedPulling="2025-11-05 00:14:13.486782606 +0000 UTC m=+57.216125262" observedRunningTime="2025-11-05 00:14:14.700514533 +0000 UTC m=+58.429857189" watchObservedRunningTime="2025-11-05 00:14:14.704026109 +0000 UTC m=+58.433368765" Nov 5 00:14:14.827741 systemd[1]: Created slice kubepods-besteffort-pod2078fd5e_a067_4d0f_9d6a_ce64f9873547.slice - libcontainer container kubepods-besteffort-pod2078fd5e_a067_4d0f_9d6a_ce64f9873547.slice. Nov 5 00:14:14.943280 kubelet[2883]: I1105 00:14:14.942820 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2078fd5e-a067-4d0f-9d6a-ce64f9873547-whisker-backend-key-pair\") pod \"whisker-6c555f98cb-r98t2\" (UID: \"2078fd5e-a067-4d0f-9d6a-ce64f9873547\") " pod="calico-system/whisker-6c555f98cb-r98t2" Nov 5 00:14:14.943280 kubelet[2883]: I1105 00:14:14.943001 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2g88\" (UniqueName: \"kubernetes.io/projected/2078fd5e-a067-4d0f-9d6a-ce64f9873547-kube-api-access-k2g88\") pod \"whisker-6c555f98cb-r98t2\" (UID: \"2078fd5e-a067-4d0f-9d6a-ce64f9873547\") " pod="calico-system/whisker-6c555f98cb-r98t2" Nov 5 00:14:14.943280 kubelet[2883]: I1105 00:14:14.943123 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2078fd5e-a067-4d0f-9d6a-ce64f9873547-whisker-ca-bundle\") pod \"whisker-6c555f98cb-r98t2\" (UID: 
\"2078fd5e-a067-4d0f-9d6a-ce64f9873547\") " pod="calico-system/whisker-6c555f98cb-r98t2" Nov 5 00:14:15.138800 containerd[1625]: time="2025-11-05T00:14:15.138707070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c555f98cb-r98t2,Uid:2078fd5e-a067-4d0f-9d6a-ce64f9873547,Namespace:calico-system,Attempt:0,}" Nov 5 00:14:15.459592 systemd-networkd[1524]: cali4039f9633f4: Link UP Nov 5 00:14:15.461867 systemd-networkd[1524]: cali4039f9633f4: Gained carrier Nov 5 00:14:15.510348 containerd[1625]: 2025-11-05 00:14:15.231 [INFO][3964] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 00:14:15.510348 containerd[1625]: 2025-11-05 00:14:15.289 [INFO][3964] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--14--37-k8s-whisker--6c555f98cb--r98t2-eth0 whisker-6c555f98cb- calico-system 2078fd5e-a067-4d0f-9d6a-ce64f9873547 1002 0 2025-11-05 00:14:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c555f98cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-232-14-37 whisker-6c555f98cb-r98t2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4039f9633f4 [] [] }} ContainerID="63aaf86bff947b80edb41e052206ecbc294d81b14f70a36ca7a1e932bdb5e2c6" Namespace="calico-system" Pod="whisker-6c555f98cb-r98t2" WorkloadEndpoint="172--232--14--37-k8s-whisker--6c555f98cb--r98t2-" Nov 5 00:14:15.510348 containerd[1625]: 2025-11-05 00:14:15.289 [INFO][3964] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63aaf86bff947b80edb41e052206ecbc294d81b14f70a36ca7a1e932bdb5e2c6" Namespace="calico-system" Pod="whisker-6c555f98cb-r98t2" WorkloadEndpoint="172--232--14--37-k8s-whisker--6c555f98cb--r98t2-eth0" Nov 5 00:14:15.510348 containerd[1625]: 2025-11-05 00:14:15.355 [INFO][3976] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="63aaf86bff947b80edb41e052206ecbc294d81b14f70a36ca7a1e932bdb5e2c6" HandleID="k8s-pod-network.63aaf86bff947b80edb41e052206ecbc294d81b14f70a36ca7a1e932bdb5e2c6" Workload="172--232--14--37-k8s-whisker--6c555f98cb--r98t2-eth0" Nov 5 00:14:15.510717 containerd[1625]: 2025-11-05 00:14:15.356 [INFO][3976] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="63aaf86bff947b80edb41e052206ecbc294d81b14f70a36ca7a1e932bdb5e2c6" HandleID="k8s-pod-network.63aaf86bff947b80edb41e052206ecbc294d81b14f70a36ca7a1e932bdb5e2c6" Workload="172--232--14--37-k8s-whisker--6c555f98cb--r98t2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f740), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-14-37", "pod":"whisker-6c555f98cb-r98t2", "timestamp":"2025-11-05 00:14:15.355757303 +0000 UTC"}, Hostname:"172-232-14-37", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 00:14:15.510717 containerd[1625]: 2025-11-05 00:14:15.356 [INFO][3976] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 00:14:15.510717 containerd[1625]: 2025-11-05 00:14:15.357 [INFO][3976] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 00:14:15.510717 containerd[1625]: 2025-11-05 00:14:15.357 [INFO][3976] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-14-37' Nov 5 00:14:15.510717 containerd[1625]: 2025-11-05 00:14:15.369 [INFO][3976] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.63aaf86bff947b80edb41e052206ecbc294d81b14f70a36ca7a1e932bdb5e2c6" host="172-232-14-37" Nov 5 00:14:15.510717 containerd[1625]: 2025-11-05 00:14:15.378 [INFO][3976] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-14-37" Nov 5 00:14:15.510717 containerd[1625]: 2025-11-05 00:14:15.384 [INFO][3976] ipam/ipam.go 511: Trying affinity for 192.168.117.128/26 host="172-232-14-37" Nov 5 00:14:15.510717 containerd[1625]: 2025-11-05 00:14:15.386 [INFO][3976] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.128/26 host="172-232-14-37" Nov 5 00:14:15.510717 containerd[1625]: 2025-11-05 00:14:15.390 [INFO][3976] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.128/26 host="172-232-14-37" Nov 5 00:14:15.510717 containerd[1625]: 2025-11-05 00:14:15.390 [INFO][3976] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.128/26 handle="k8s-pod-network.63aaf86bff947b80edb41e052206ecbc294d81b14f70a36ca7a1e932bdb5e2c6" host="172-232-14-37" Nov 5 00:14:15.511180 containerd[1625]: 2025-11-05 00:14:15.392 [INFO][3976] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.63aaf86bff947b80edb41e052206ecbc294d81b14f70a36ca7a1e932bdb5e2c6 Nov 5 00:14:15.511180 containerd[1625]: 2025-11-05 00:14:15.398 [INFO][3976] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.128/26 handle="k8s-pod-network.63aaf86bff947b80edb41e052206ecbc294d81b14f70a36ca7a1e932bdb5e2c6" host="172-232-14-37" Nov 5 00:14:15.511180 containerd[1625]: 2025-11-05 00:14:15.416 [INFO][3976] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.129/26] block=192.168.117.128/26 
handle="k8s-pod-network.63aaf86bff947b80edb41e052206ecbc294d81b14f70a36ca7a1e932bdb5e2c6" host="172-232-14-37" Nov 5 00:14:15.511180 containerd[1625]: 2025-11-05 00:14:15.416 [INFO][3976] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.129/26] handle="k8s-pod-network.63aaf86bff947b80edb41e052206ecbc294d81b14f70a36ca7a1e932bdb5e2c6" host="172-232-14-37" Nov 5 00:14:15.511180 containerd[1625]: 2025-11-05 00:14:15.416 [INFO][3976] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 00:14:15.511180 containerd[1625]: 2025-11-05 00:14:15.416 [INFO][3976] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.129/26] IPv6=[] ContainerID="63aaf86bff947b80edb41e052206ecbc294d81b14f70a36ca7a1e932bdb5e2c6" HandleID="k8s-pod-network.63aaf86bff947b80edb41e052206ecbc294d81b14f70a36ca7a1e932bdb5e2c6" Workload="172--232--14--37-k8s-whisker--6c555f98cb--r98t2-eth0" Nov 5 00:14:15.511483 containerd[1625]: 2025-11-05 00:14:15.421 [INFO][3964] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63aaf86bff947b80edb41e052206ecbc294d81b14f70a36ca7a1e932bdb5e2c6" Namespace="calico-system" Pod="whisker-6c555f98cb-r98t2" WorkloadEndpoint="172--232--14--37-k8s-whisker--6c555f98cb--r98t2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--14--37-k8s-whisker--6c555f98cb--r98t2-eth0", GenerateName:"whisker-6c555f98cb-", Namespace:"calico-system", SelfLink:"", UID:"2078fd5e-a067-4d0f-9d6a-ce64f9873547", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 0, 14, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c555f98cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-14-37", ContainerID:"", Pod:"whisker-6c555f98cb-r98t2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.117.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4039f9633f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 00:14:15.511483 containerd[1625]: 2025-11-05 00:14:15.421 [INFO][3964] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.129/32] ContainerID="63aaf86bff947b80edb41e052206ecbc294d81b14f70a36ca7a1e932bdb5e2c6" Namespace="calico-system" Pod="whisker-6c555f98cb-r98t2" WorkloadEndpoint="172--232--14--37-k8s-whisker--6c555f98cb--r98t2-eth0" Nov 5 00:14:15.511636 containerd[1625]: 2025-11-05 00:14:15.421 [INFO][3964] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4039f9633f4 ContainerID="63aaf86bff947b80edb41e052206ecbc294d81b14f70a36ca7a1e932bdb5e2c6" Namespace="calico-system" Pod="whisker-6c555f98cb-r98t2" WorkloadEndpoint="172--232--14--37-k8s-whisker--6c555f98cb--r98t2-eth0" Nov 5 00:14:15.511636 containerd[1625]: 2025-11-05 00:14:15.453 [INFO][3964] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63aaf86bff947b80edb41e052206ecbc294d81b14f70a36ca7a1e932bdb5e2c6" Namespace="calico-system" Pod="whisker-6c555f98cb-r98t2" WorkloadEndpoint="172--232--14--37-k8s-whisker--6c555f98cb--r98t2-eth0" Nov 5 00:14:15.511752 containerd[1625]: 2025-11-05 00:14:15.465 [INFO][3964] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="63aaf86bff947b80edb41e052206ecbc294d81b14f70a36ca7a1e932bdb5e2c6" Namespace="calico-system" 
Pod="whisker-6c555f98cb-r98t2" WorkloadEndpoint="172--232--14--37-k8s-whisker--6c555f98cb--r98t2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--14--37-k8s-whisker--6c555f98cb--r98t2-eth0", GenerateName:"whisker-6c555f98cb-", Namespace:"calico-system", SelfLink:"", UID:"2078fd5e-a067-4d0f-9d6a-ce64f9873547", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 0, 14, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c555f98cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-14-37", ContainerID:"63aaf86bff947b80edb41e052206ecbc294d81b14f70a36ca7a1e932bdb5e2c6", Pod:"whisker-6c555f98cb-r98t2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.117.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4039f9633f4", MAC:"6a:3a:44:40:0d:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 00:14:15.511839 containerd[1625]: 2025-11-05 00:14:15.490 [INFO][3964] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="63aaf86bff947b80edb41e052206ecbc294d81b14f70a36ca7a1e932bdb5e2c6" Namespace="calico-system" Pod="whisker-6c555f98cb-r98t2" WorkloadEndpoint="172--232--14--37-k8s-whisker--6c555f98cb--r98t2-eth0" Nov 5 00:14:15.614276 containerd[1625]: 
time="2025-11-05T00:14:15.611773322Z" level=info msg="connecting to shim 63aaf86bff947b80edb41e052206ecbc294d81b14f70a36ca7a1e932bdb5e2c6" address="unix:///run/containerd/s/53a39eb024fc4394bcce8f250113428e4fb15a791cf6813da1d0e82fbfaaf884" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:14:15.679514 systemd[1]: Started cri-containerd-63aaf86bff947b80edb41e052206ecbc294d81b14f70a36ca7a1e932bdb5e2c6.scope - libcontainer container 63aaf86bff947b80edb41e052206ecbc294d81b14f70a36ca7a1e932bdb5e2c6. Nov 5 00:14:15.693009 kubelet[2883]: E1105 00:14:15.692953 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:14:15.693881 containerd[1625]: time="2025-11-05T00:14:15.693322858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b875bb7d7-9tw6k,Uid:78cc6732-2ab7-4966-83c9-5b3b3e112a51,Namespace:calico-apiserver,Attempt:0,}" Nov 5 00:14:15.695927 kubelet[2883]: E1105 00:14:15.695821 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:14:15.697911 containerd[1625]: time="2025-11-05T00:14:15.697540799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bstvr,Uid:0ceef31c-66d4-4e84-87fa-b07ef462b872,Namespace:kube-system,Attempt:0,}" Nov 5 00:14:15.697911 containerd[1625]: time="2025-11-05T00:14:15.697782312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9d74h,Uid:39f0296c-eb04-4fb2-8eac-b5af134b840e,Namespace:kube-system,Attempt:0,}" Nov 5 00:14:16.665786 systemd-networkd[1524]: cali4039f9633f4: Gained IPv6LL Nov 5 00:14:16.707531 containerd[1625]: time="2025-11-05T00:14:16.707279006Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-666f7c64f9-pjzbv,Uid:dca6511f-77a2-4cca-9f19-2aca1b8d75e8,Namespace:calico-system,Attempt:0,}" Nov 5 00:14:16.710170 containerd[1625]: time="2025-11-05T00:14:16.710133164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kf7jk,Uid:6ee5090e-a223-462e-845a-5c7f9446afa1,Namespace:calico-system,Attempt:0,}" Nov 5 00:14:16.850537 systemd-networkd[1524]: cali94230159f13: Link UP Nov 5 00:14:16.856825 systemd-networkd[1524]: cali94230159f13: Gained carrier Nov 5 00:14:16.886305 kubelet[2883]: I1105 00:14:16.885674 2883 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="395087e7-e090-49ab-8705-bb6b55aa5776" path="/var/lib/kubelet/pods/395087e7-e090-49ab-8705-bb6b55aa5776/volumes" Nov 5 00:14:16.997540 containerd[1625]: 2025-11-05 00:14:16.026 [INFO][4033] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 00:14:16.997540 containerd[1625]: 2025-11-05 00:14:16.138 [INFO][4033] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--14--37-k8s-calico--apiserver--5b875bb7d7--9tw6k-eth0 calico-apiserver-5b875bb7d7- calico-apiserver 78cc6732-2ab7-4966-83c9-5b3b3e112a51 923 0 2025-11-05 00:13:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b875bb7d7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-232-14-37 calico-apiserver-5b875bb7d7-9tw6k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali94230159f13 [] [] }} ContainerID="0a35fdd5f69c36ad7de358691ace68c16acba8d0ab1dd5a04e8b27772c464b80" Namespace="calico-apiserver" Pod="calico-apiserver-5b875bb7d7-9tw6k" WorkloadEndpoint="172--232--14--37-k8s-calico--apiserver--5b875bb7d7--9tw6k-" Nov 5 00:14:16.997540 containerd[1625]: 2025-11-05 
00:14:16.138 [INFO][4033] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a35fdd5f69c36ad7de358691ace68c16acba8d0ab1dd5a04e8b27772c464b80" Namespace="calico-apiserver" Pod="calico-apiserver-5b875bb7d7-9tw6k" WorkloadEndpoint="172--232--14--37-k8s-calico--apiserver--5b875bb7d7--9tw6k-eth0" Nov 5 00:14:16.997540 containerd[1625]: 2025-11-05 00:14:16.525 [INFO][4085] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a35fdd5f69c36ad7de358691ace68c16acba8d0ab1dd5a04e8b27772c464b80" HandleID="k8s-pod-network.0a35fdd5f69c36ad7de358691ace68c16acba8d0ab1dd5a04e8b27772c464b80" Workload="172--232--14--37-k8s-calico--apiserver--5b875bb7d7--9tw6k-eth0" Nov 5 00:14:17.000579 containerd[1625]: 2025-11-05 00:14:16.557 [INFO][4085] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0a35fdd5f69c36ad7de358691ace68c16acba8d0ab1dd5a04e8b27772c464b80" HandleID="k8s-pod-network.0a35fdd5f69c36ad7de358691ace68c16acba8d0ab1dd5a04e8b27772c464b80" Workload="172--232--14--37-k8s-calico--apiserver--5b875bb7d7--9tw6k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ab9d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-232-14-37", "pod":"calico-apiserver-5b875bb7d7-9tw6k", "timestamp":"2025-11-05 00:14:16.525219235 +0000 UTC"}, Hostname:"172-232-14-37", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 00:14:17.000579 containerd[1625]: 2025-11-05 00:14:16.557 [INFO][4085] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 00:14:17.000579 containerd[1625]: 2025-11-05 00:14:16.557 [INFO][4085] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 00:14:17.000579 containerd[1625]: 2025-11-05 00:14:16.558 [INFO][4085] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-14-37' Nov 5 00:14:17.000579 containerd[1625]: 2025-11-05 00:14:16.599 [INFO][4085] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0a35fdd5f69c36ad7de358691ace68c16acba8d0ab1dd5a04e8b27772c464b80" host="172-232-14-37" Nov 5 00:14:17.000579 containerd[1625]: 2025-11-05 00:14:16.618 [INFO][4085] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-14-37" Nov 5 00:14:17.000579 containerd[1625]: 2025-11-05 00:14:16.630 [INFO][4085] ipam/ipam.go 511: Trying affinity for 192.168.117.128/26 host="172-232-14-37" Nov 5 00:14:17.000579 containerd[1625]: 2025-11-05 00:14:16.635 [INFO][4085] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.128/26 host="172-232-14-37" Nov 5 00:14:17.000579 containerd[1625]: 2025-11-05 00:14:16.640 [INFO][4085] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.128/26 host="172-232-14-37" Nov 5 00:14:17.001054 containerd[1625]: 2025-11-05 00:14:16.662 [INFO][4085] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.128/26 handle="k8s-pod-network.0a35fdd5f69c36ad7de358691ace68c16acba8d0ab1dd5a04e8b27772c464b80" host="172-232-14-37" Nov 5 00:14:17.001054 containerd[1625]: 2025-11-05 00:14:16.672 [INFO][4085] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0a35fdd5f69c36ad7de358691ace68c16acba8d0ab1dd5a04e8b27772c464b80 Nov 5 00:14:17.001054 containerd[1625]: 2025-11-05 00:14:16.682 [INFO][4085] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.128/26 handle="k8s-pod-network.0a35fdd5f69c36ad7de358691ace68c16acba8d0ab1dd5a04e8b27772c464b80" host="172-232-14-37" Nov 5 00:14:17.001054 containerd[1625]: 2025-11-05 00:14:16.711 [INFO][4085] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.130/26] block=192.168.117.128/26 
handle="k8s-pod-network.0a35fdd5f69c36ad7de358691ace68c16acba8d0ab1dd5a04e8b27772c464b80" host="172-232-14-37" Nov 5 00:14:17.001054 containerd[1625]: 2025-11-05 00:14:16.711 [INFO][4085] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.130/26] handle="k8s-pod-network.0a35fdd5f69c36ad7de358691ace68c16acba8d0ab1dd5a04e8b27772c464b80" host="172-232-14-37" Nov 5 00:14:17.001054 containerd[1625]: 2025-11-05 00:14:16.711 [INFO][4085] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 00:14:17.001054 containerd[1625]: 2025-11-05 00:14:16.712 [INFO][4085] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.130/26] IPv6=[] ContainerID="0a35fdd5f69c36ad7de358691ace68c16acba8d0ab1dd5a04e8b27772c464b80" HandleID="k8s-pod-network.0a35fdd5f69c36ad7de358691ace68c16acba8d0ab1dd5a04e8b27772c464b80" Workload="172--232--14--37-k8s-calico--apiserver--5b875bb7d7--9tw6k-eth0" Nov 5 00:14:17.002086 containerd[1625]: 2025-11-05 00:14:16.807 [INFO][4033] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a35fdd5f69c36ad7de358691ace68c16acba8d0ab1dd5a04e8b27772c464b80" Namespace="calico-apiserver" Pod="calico-apiserver-5b875bb7d7-9tw6k" WorkloadEndpoint="172--232--14--37-k8s-calico--apiserver--5b875bb7d7--9tw6k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--14--37-k8s-calico--apiserver--5b875bb7d7--9tw6k-eth0", GenerateName:"calico-apiserver-5b875bb7d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"78cc6732-2ab7-4966-83c9-5b3b3e112a51", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 0, 13, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b875bb7d7", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-14-37", ContainerID:"", Pod:"calico-apiserver-5b875bb7d7-9tw6k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali94230159f13", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 00:14:17.002168 containerd[1625]: 2025-11-05 00:14:16.808 [INFO][4033] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.130/32] ContainerID="0a35fdd5f69c36ad7de358691ace68c16acba8d0ab1dd5a04e8b27772c464b80" Namespace="calico-apiserver" Pod="calico-apiserver-5b875bb7d7-9tw6k" WorkloadEndpoint="172--232--14--37-k8s-calico--apiserver--5b875bb7d7--9tw6k-eth0" Nov 5 00:14:17.002168 containerd[1625]: 2025-11-05 00:14:16.808 [INFO][4033] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94230159f13 ContainerID="0a35fdd5f69c36ad7de358691ace68c16acba8d0ab1dd5a04e8b27772c464b80" Namespace="calico-apiserver" Pod="calico-apiserver-5b875bb7d7-9tw6k" WorkloadEndpoint="172--232--14--37-k8s-calico--apiserver--5b875bb7d7--9tw6k-eth0" Nov 5 00:14:17.002168 containerd[1625]: 2025-11-05 00:14:16.852 [INFO][4033] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a35fdd5f69c36ad7de358691ace68c16acba8d0ab1dd5a04e8b27772c464b80" Namespace="calico-apiserver" Pod="calico-apiserver-5b875bb7d7-9tw6k" WorkloadEndpoint="172--232--14--37-k8s-calico--apiserver--5b875bb7d7--9tw6k-eth0" Nov 5 00:14:17.002364 containerd[1625]: 2025-11-05 00:14:16.871 [INFO][4033] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0a35fdd5f69c36ad7de358691ace68c16acba8d0ab1dd5a04e8b27772c464b80" Namespace="calico-apiserver" Pod="calico-apiserver-5b875bb7d7-9tw6k" WorkloadEndpoint="172--232--14--37-k8s-calico--apiserver--5b875bb7d7--9tw6k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--14--37-k8s-calico--apiserver--5b875bb7d7--9tw6k-eth0", GenerateName:"calico-apiserver-5b875bb7d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"78cc6732-2ab7-4966-83c9-5b3b3e112a51", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 0, 13, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b875bb7d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-14-37", ContainerID:"0a35fdd5f69c36ad7de358691ace68c16acba8d0ab1dd5a04e8b27772c464b80", Pod:"calico-apiserver-5b875bb7d7-9tw6k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali94230159f13", MAC:"92:67:e2:fa:9d:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 00:14:17.002450 containerd[1625]: 2025-11-05 00:14:16.977 [INFO][4033] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="0a35fdd5f69c36ad7de358691ace68c16acba8d0ab1dd5a04e8b27772c464b80" Namespace="calico-apiserver" Pod="calico-apiserver-5b875bb7d7-9tw6k" WorkloadEndpoint="172--232--14--37-k8s-calico--apiserver--5b875bb7d7--9tw6k-eth0" Nov 5 00:14:17.067795 containerd[1625]: time="2025-11-05T00:14:17.067156660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c555f98cb-r98t2,Uid:2078fd5e-a067-4d0f-9d6a-ce64f9873547,Namespace:calico-system,Attempt:0,} returns sandbox id \"63aaf86bff947b80edb41e052206ecbc294d81b14f70a36ca7a1e932bdb5e2c6\"" Nov 5 00:14:17.082490 containerd[1625]: time="2025-11-05T00:14:17.082452406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 00:14:17.172382 systemd-networkd[1524]: calie62cbdc697b: Link UP Nov 5 00:14:17.174022 systemd-networkd[1524]: calie62cbdc697b: Gained carrier Nov 5 00:14:17.273473 containerd[1625]: 2025-11-05 00:14:16.028 [INFO][4045] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 00:14:17.273473 containerd[1625]: 2025-11-05 00:14:16.140 [INFO][4045] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--14--37-k8s-coredns--674b8bbfcf--bstvr-eth0 coredns-674b8bbfcf- kube-system 0ceef31c-66d4-4e84-87fa-b07ef462b872 915 0 2025-11-05 00:13:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-232-14-37 coredns-674b8bbfcf-bstvr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie62cbdc697b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9" Namespace="kube-system" Pod="coredns-674b8bbfcf-bstvr" WorkloadEndpoint="172--232--14--37-k8s-coredns--674b8bbfcf--bstvr-" Nov 5 00:14:17.273473 containerd[1625]: 
2025-11-05 00:14:16.140 [INFO][4045] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9" Namespace="kube-system" Pod="coredns-674b8bbfcf-bstvr" WorkloadEndpoint="172--232--14--37-k8s-coredns--674b8bbfcf--bstvr-eth0" Nov 5 00:14:17.273473 containerd[1625]: 2025-11-05 00:14:16.629 [INFO][4082] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9" HandleID="k8s-pod-network.95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9" Workload="172--232--14--37-k8s-coredns--674b8bbfcf--bstvr-eth0" Nov 5 00:14:17.274023 containerd[1625]: 2025-11-05 00:14:16.643 [INFO][4082] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9" HandleID="k8s-pod-network.95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9" Workload="172--232--14--37-k8s-coredns--674b8bbfcf--bstvr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4980), Attrs:map[string]string{"namespace":"kube-system", "node":"172-232-14-37", "pod":"coredns-674b8bbfcf-bstvr", "timestamp":"2025-11-05 00:14:16.629153317 +0000 UTC"}, Hostname:"172-232-14-37", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 00:14:17.274023 containerd[1625]: 2025-11-05 00:14:16.651 [INFO][4082] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 00:14:17.274023 containerd[1625]: 2025-11-05 00:14:16.711 [INFO][4082] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 00:14:17.274023 containerd[1625]: 2025-11-05 00:14:16.711 [INFO][4082] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-14-37' Nov 5 00:14:17.274023 containerd[1625]: 2025-11-05 00:14:16.836 [INFO][4082] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9" host="172-232-14-37" Nov 5 00:14:17.274023 containerd[1625]: 2025-11-05 00:14:16.923 [INFO][4082] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-14-37" Nov 5 00:14:17.274023 containerd[1625]: 2025-11-05 00:14:16.991 [INFO][4082] ipam/ipam.go 511: Trying affinity for 192.168.117.128/26 host="172-232-14-37" Nov 5 00:14:17.274023 containerd[1625]: 2025-11-05 00:14:16.996 [INFO][4082] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.128/26 host="172-232-14-37" Nov 5 00:14:17.274023 containerd[1625]: 2025-11-05 00:14:17.004 [INFO][4082] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.128/26 host="172-232-14-37" Nov 5 00:14:17.274023 containerd[1625]: 2025-11-05 00:14:17.005 [INFO][4082] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.128/26 handle="k8s-pod-network.95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9" host="172-232-14-37" Nov 5 00:14:17.275262 containerd[1625]: 2025-11-05 00:14:17.009 [INFO][4082] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9 Nov 5 00:14:17.275262 containerd[1625]: 2025-11-05 00:14:17.022 [INFO][4082] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.128/26 handle="k8s-pod-network.95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9" host="172-232-14-37" Nov 5 00:14:17.275262 containerd[1625]: 2025-11-05 00:14:17.037 [INFO][4082] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.131/26] block=192.168.117.128/26 
handle="k8s-pod-network.95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9" host="172-232-14-37" Nov 5 00:14:17.275262 containerd[1625]: 2025-11-05 00:14:17.038 [INFO][4082] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.131/26] handle="k8s-pod-network.95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9" host="172-232-14-37" Nov 5 00:14:17.275262 containerd[1625]: 2025-11-05 00:14:17.039 [INFO][4082] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 00:14:17.275262 containerd[1625]: 2025-11-05 00:14:17.040 [INFO][4082] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.131/26] IPv6=[] ContainerID="95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9" HandleID="k8s-pod-network.95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9" Workload="172--232--14--37-k8s-coredns--674b8bbfcf--bstvr-eth0" Nov 5 00:14:17.275861 containerd[1625]: 2025-11-05 00:14:17.068 [INFO][4045] cni-plugin/k8s.go 418: Populated endpoint ContainerID="95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9" Namespace="kube-system" Pod="coredns-674b8bbfcf-bstvr" WorkloadEndpoint="172--232--14--37-k8s-coredns--674b8bbfcf--bstvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--14--37-k8s-coredns--674b8bbfcf--bstvr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0ceef31c-66d4-4e84-87fa-b07ef462b872", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 0, 13, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-14-37", ContainerID:"", Pod:"coredns-674b8bbfcf-bstvr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie62cbdc697b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 00:14:17.275861 containerd[1625]: 2025-11-05 00:14:17.097 [INFO][4045] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.131/32] ContainerID="95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9" Namespace="kube-system" Pod="coredns-674b8bbfcf-bstvr" WorkloadEndpoint="172--232--14--37-k8s-coredns--674b8bbfcf--bstvr-eth0" Nov 5 00:14:17.275861 containerd[1625]: 2025-11-05 00:14:17.110 [INFO][4045] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie62cbdc697b ContainerID="95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9" Namespace="kube-system" Pod="coredns-674b8bbfcf-bstvr" WorkloadEndpoint="172--232--14--37-k8s-coredns--674b8bbfcf--bstvr-eth0" Nov 5 00:14:17.275861 containerd[1625]: 2025-11-05 00:14:17.175 [INFO][4045] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-bstvr" WorkloadEndpoint="172--232--14--37-k8s-coredns--674b8bbfcf--bstvr-eth0" Nov 5 00:14:17.275861 containerd[1625]: 2025-11-05 00:14:17.179 [INFO][4045] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9" Namespace="kube-system" Pod="coredns-674b8bbfcf-bstvr" WorkloadEndpoint="172--232--14--37-k8s-coredns--674b8bbfcf--bstvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--14--37-k8s-coredns--674b8bbfcf--bstvr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0ceef31c-66d4-4e84-87fa-b07ef462b872", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 0, 13, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-14-37", ContainerID:"95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9", Pod:"coredns-674b8bbfcf-bstvr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie62cbdc697b", MAC:"06:04:9e:79:a1:30", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 00:14:17.275861 containerd[1625]: 2025-11-05 00:14:17.249 [INFO][4045] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9" Namespace="kube-system" Pod="coredns-674b8bbfcf-bstvr" WorkloadEndpoint="172--232--14--37-k8s-coredns--674b8bbfcf--bstvr-eth0" Nov 5 00:14:17.317870 systemd-networkd[1524]: calibf59fb6c349: Link UP Nov 5 00:14:17.321901 containerd[1625]: time="2025-11-05T00:14:17.320624590Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:14:17.322714 containerd[1625]: time="2025-11-05T00:14:17.322575205Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 00:14:17.329260 containerd[1625]: time="2025-11-05T00:14:17.324976275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 00:14:17.329405 kubelet[2883]: E1105 00:14:17.327575 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 00:14:17.329405 kubelet[2883]: E1105 00:14:17.328097 2883 kuberuntime_image.go:42] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 00:14:17.327411 systemd-networkd[1524]: calibf59fb6c349: Gained carrier Nov 5 00:14:17.351259 kubelet[2883]: E1105 00:14:17.351078 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1218690f0e2c4b6f966178251092a713,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k2g88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-6c555f98cb-r98t2_calico-system(2078fd5e-a067-4d0f-9d6a-ce64f9873547): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 00:14:17.358170 containerd[1625]: time="2025-11-05T00:14:17.358134706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 00:14:17.405883 containerd[1625]: 2025-11-05 00:14:16.054 [INFO][4041] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 00:14:17.405883 containerd[1625]: 2025-11-05 00:14:16.118 [INFO][4041] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--14--37-k8s-coredns--674b8bbfcf--9d74h-eth0 coredns-674b8bbfcf- kube-system 39f0296c-eb04-4fb2-8eac-b5af134b840e 922 0 2025-11-05 00:13:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-232-14-37 coredns-674b8bbfcf-9d74h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibf59fb6c349 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf" Namespace="kube-system" Pod="coredns-674b8bbfcf-9d74h" WorkloadEndpoint="172--232--14--37-k8s-coredns--674b8bbfcf--9d74h-" Nov 5 00:14:17.405883 containerd[1625]: 2025-11-05 00:14:16.118 [INFO][4041] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf" Namespace="kube-system" Pod="coredns-674b8bbfcf-9d74h" WorkloadEndpoint="172--232--14--37-k8s-coredns--674b8bbfcf--9d74h-eth0" Nov 5 00:14:17.405883 containerd[1625]: 2025-11-05 00:14:16.914 [INFO][4080] ipam/ipam_plugin.go 
227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf" HandleID="k8s-pod-network.e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf" Workload="172--232--14--37-k8s-coredns--674b8bbfcf--9d74h-eth0" Nov 5 00:14:17.405883 containerd[1625]: 2025-11-05 00:14:16.966 [INFO][4080] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf" HandleID="k8s-pod-network.e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf" Workload="172--232--14--37-k8s-coredns--674b8bbfcf--9d74h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000207280), Attrs:map[string]string{"namespace":"kube-system", "node":"172-232-14-37", "pod":"coredns-674b8bbfcf-9d74h", "timestamp":"2025-11-05 00:14:16.914875784 +0000 UTC"}, Hostname:"172-232-14-37", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 00:14:17.405883 containerd[1625]: 2025-11-05 00:14:16.969 [INFO][4080] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 00:14:17.405883 containerd[1625]: 2025-11-05 00:14:17.040 [INFO][4080] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 00:14:17.405883 containerd[1625]: 2025-11-05 00:14:17.041 [INFO][4080] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-14-37' Nov 5 00:14:17.405883 containerd[1625]: 2025-11-05 00:14:17.113 [INFO][4080] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf" host="172-232-14-37" Nov 5 00:14:17.405883 containerd[1625]: 2025-11-05 00:14:17.215 [INFO][4080] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-14-37" Nov 5 00:14:17.405883 containerd[1625]: 2025-11-05 00:14:17.231 [INFO][4080] ipam/ipam.go 511: Trying affinity for 192.168.117.128/26 host="172-232-14-37" Nov 5 00:14:17.405883 containerd[1625]: 2025-11-05 00:14:17.235 [INFO][4080] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.128/26 host="172-232-14-37" Nov 5 00:14:17.405883 containerd[1625]: 2025-11-05 00:14:17.238 [INFO][4080] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.128/26 host="172-232-14-37" Nov 5 00:14:17.405883 containerd[1625]: 2025-11-05 00:14:17.238 [INFO][4080] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.128/26 handle="k8s-pod-network.e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf" host="172-232-14-37" Nov 5 00:14:17.405883 containerd[1625]: 2025-11-05 00:14:17.243 [INFO][4080] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf Nov 5 00:14:17.405883 containerd[1625]: 2025-11-05 00:14:17.265 [INFO][4080] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.128/26 handle="k8s-pod-network.e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf" host="172-232-14-37" Nov 5 00:14:17.405883 containerd[1625]: 2025-11-05 00:14:17.278 [INFO][4080] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.132/26] block=192.168.117.128/26 
handle="k8s-pod-network.e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf" host="172-232-14-37" Nov 5 00:14:17.405883 containerd[1625]: 2025-11-05 00:14:17.279 [INFO][4080] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.132/26] handle="k8s-pod-network.e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf" host="172-232-14-37" Nov 5 00:14:17.405883 containerd[1625]: 2025-11-05 00:14:17.279 [INFO][4080] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 00:14:17.405883 containerd[1625]: 2025-11-05 00:14:17.279 [INFO][4080] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.132/26] IPv6=[] ContainerID="e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf" HandleID="k8s-pod-network.e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf" Workload="172--232--14--37-k8s-coredns--674b8bbfcf--9d74h-eth0" Nov 5 00:14:17.407905 containerd[1625]: 2025-11-05 00:14:17.295 [INFO][4041] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf" Namespace="kube-system" Pod="coredns-674b8bbfcf-9d74h" WorkloadEndpoint="172--232--14--37-k8s-coredns--674b8bbfcf--9d74h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--14--37-k8s-coredns--674b8bbfcf--9d74h-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"39f0296c-eb04-4fb2-8eac-b5af134b840e", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 0, 13, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-14-37", ContainerID:"", Pod:"coredns-674b8bbfcf-9d74h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibf59fb6c349", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 00:14:17.407905 containerd[1625]: 2025-11-05 00:14:17.295 [INFO][4041] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.132/32] ContainerID="e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf" Namespace="kube-system" Pod="coredns-674b8bbfcf-9d74h" WorkloadEndpoint="172--232--14--37-k8s-coredns--674b8bbfcf--9d74h-eth0" Nov 5 00:14:17.407905 containerd[1625]: 2025-11-05 00:14:17.295 [INFO][4041] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf59fb6c349 ContainerID="e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf" Namespace="kube-system" Pod="coredns-674b8bbfcf-9d74h" WorkloadEndpoint="172--232--14--37-k8s-coredns--674b8bbfcf--9d74h-eth0" Nov 5 00:14:17.407905 containerd[1625]: 2025-11-05 00:14:17.350 [INFO][4041] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-9d74h" WorkloadEndpoint="172--232--14--37-k8s-coredns--674b8bbfcf--9d74h-eth0" Nov 5 00:14:17.407905 containerd[1625]: 2025-11-05 00:14:17.356 [INFO][4041] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf" Namespace="kube-system" Pod="coredns-674b8bbfcf-9d74h" WorkloadEndpoint="172--232--14--37-k8s-coredns--674b8bbfcf--9d74h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--14--37-k8s-coredns--674b8bbfcf--9d74h-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"39f0296c-eb04-4fb2-8eac-b5af134b840e", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 0, 13, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-14-37", ContainerID:"e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf", Pod:"coredns-674b8bbfcf-9d74h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibf59fb6c349", MAC:"fa:4b:19:70:98:ee", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 00:14:17.407905 containerd[1625]: 2025-11-05 00:14:17.391 [INFO][4041] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf" Namespace="kube-system" Pod="coredns-674b8bbfcf-9d74h" WorkloadEndpoint="172--232--14--37-k8s-coredns--674b8bbfcf--9d74h-eth0" Nov 5 00:14:17.409115 containerd[1625]: time="2025-11-05T00:14:17.406016372Z" level=info msg="connecting to shim 0a35fdd5f69c36ad7de358691ace68c16acba8d0ab1dd5a04e8b27772c464b80" address="unix:///run/containerd/s/e327bff902ea949c6879dc1c2971863734fc000b9f889ae5f4c0f66e1bf317f4" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:14:17.520809 containerd[1625]: time="2025-11-05T00:14:17.520710536Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:14:17.523026 containerd[1625]: time="2025-11-05T00:14:17.522785458Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 00:14:17.524623 containerd[1625]: time="2025-11-05T00:14:17.524340622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 00:14:17.524837 kubelet[2883]: E1105 00:14:17.524573 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 00:14:17.526051 kubelet[2883]: E1105 00:14:17.525885 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 00:14:17.530796 kubelet[2883]: E1105 00:14:17.530652 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k2g88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/te
rmination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c555f98cb-r98t2_calico-system(2078fd5e-a067-4d0f-9d6a-ce64f9873547): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 00:14:17.533198 kubelet[2883]: E1105 00:14:17.533078 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c555f98cb-r98t2" podUID="2078fd5e-a067-4d0f-9d6a-ce64f9873547" Nov 5 00:14:17.711273 containerd[1625]: time="2025-11-05T00:14:17.710929270Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-rhc65,Uid:14de6d4c-7243-4b75-9a89-9c47bcb946c9,Namespace:calico-system,Attempt:0,}" Nov 5 00:14:17.786849 kubelet[2883]: E1105 00:14:17.785191 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c555f98cb-r98t2" podUID="2078fd5e-a067-4d0f-9d6a-ce64f9873547" Nov 5 00:14:17.804390 containerd[1625]: time="2025-11-05T00:14:17.804288773Z" level=info msg="connecting to shim 95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9" address="unix:///run/containerd/s/5b7c1d3c87e6cd4810e5613572231bf236d8795cf5b3347b95070aad3b239a52" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:14:17.927609 containerd[1625]: time="2025-11-05T00:14:17.927510668Z" level=info msg="connecting to shim e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf" address="unix:///run/containerd/s/5406d787177f61ce076831175a0ae51e79b16f50894b25b75d130cc555c680de" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:14:17.931838 systemd[1]: Started cri-containerd-0a35fdd5f69c36ad7de358691ace68c16acba8d0ab1dd5a04e8b27772c464b80.scope - libcontainer container 
0a35fdd5f69c36ad7de358691ace68c16acba8d0ab1dd5a04e8b27772c464b80. Nov 5 00:14:18.058559 systemd-networkd[1524]: cali94230159f13: Gained IPv6LL Nov 5 00:14:18.088525 systemd[1]: Started cri-containerd-95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9.scope - libcontainer container 95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9. Nov 5 00:14:18.130675 systemd[1]: Started cri-containerd-e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf.scope - libcontainer container e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf. Nov 5 00:14:18.218563 systemd-networkd[1524]: cali586e6ffc74f: Link UP Nov 5 00:14:18.222373 systemd-networkd[1524]: cali586e6ffc74f: Gained carrier Nov 5 00:14:18.316394 containerd[1625]: 2025-11-05 00:14:17.311 [INFO][4139] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 00:14:18.316394 containerd[1625]: 2025-11-05 00:14:17.411 [INFO][4139] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--14--37-k8s-goldmane--666569f655--kf7jk-eth0 goldmane-666569f655- calico-system 6ee5090e-a223-462e-845a-5c7f9446afa1 919 0 2025-11-05 00:13:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-232-14-37 goldmane-666569f655-kf7jk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali586e6ffc74f [] [] }} ContainerID="e697925504af6bed6ae0e1f10f0271f4aad1fdef50de37b3be4286d7be44ffb3" Namespace="calico-system" Pod="goldmane-666569f655-kf7jk" WorkloadEndpoint="172--232--14--37-k8s-goldmane--666569f655--kf7jk-" Nov 5 00:14:18.316394 containerd[1625]: 2025-11-05 00:14:17.412 [INFO][4139] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e697925504af6bed6ae0e1f10f0271f4aad1fdef50de37b3be4286d7be44ffb3" 
Namespace="calico-system" Pod="goldmane-666569f655-kf7jk" WorkloadEndpoint="172--232--14--37-k8s-goldmane--666569f655--kf7jk-eth0" Nov 5 00:14:18.316394 containerd[1625]: 2025-11-05 00:14:17.914 [INFO][4235] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e697925504af6bed6ae0e1f10f0271f4aad1fdef50de37b3be4286d7be44ffb3" HandleID="k8s-pod-network.e697925504af6bed6ae0e1f10f0271f4aad1fdef50de37b3be4286d7be44ffb3" Workload="172--232--14--37-k8s-goldmane--666569f655--kf7jk-eth0" Nov 5 00:14:18.316394 containerd[1625]: 2025-11-05 00:14:17.917 [INFO][4235] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e697925504af6bed6ae0e1f10f0271f4aad1fdef50de37b3be4286d7be44ffb3" HandleID="k8s-pod-network.e697925504af6bed6ae0e1f10f0271f4aad1fdef50de37b3be4286d7be44ffb3" Workload="172--232--14--37-k8s-goldmane--666569f655--kf7jk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000394200), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-14-37", "pod":"goldmane-666569f655-kf7jk", "timestamp":"2025-11-05 00:14:17.914639543 +0000 UTC"}, Hostname:"172-232-14-37", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 00:14:18.316394 containerd[1625]: 2025-11-05 00:14:17.920 [INFO][4235] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 00:14:18.316394 containerd[1625]: 2025-11-05 00:14:17.920 [INFO][4235] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 00:14:18.316394 containerd[1625]: 2025-11-05 00:14:17.920 [INFO][4235] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-14-37' Nov 5 00:14:18.316394 containerd[1625]: 2025-11-05 00:14:17.992 [INFO][4235] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e697925504af6bed6ae0e1f10f0271f4aad1fdef50de37b3be4286d7be44ffb3" host="172-232-14-37" Nov 5 00:14:18.316394 containerd[1625]: 2025-11-05 00:14:18.016 [INFO][4235] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-14-37" Nov 5 00:14:18.316394 containerd[1625]: 2025-11-05 00:14:18.055 [INFO][4235] ipam/ipam.go 511: Trying affinity for 192.168.117.128/26 host="172-232-14-37" Nov 5 00:14:18.316394 containerd[1625]: 2025-11-05 00:14:18.100 [INFO][4235] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.128/26 host="172-232-14-37" Nov 5 00:14:18.316394 containerd[1625]: 2025-11-05 00:14:18.108 [INFO][4235] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.128/26 host="172-232-14-37" Nov 5 00:14:18.316394 containerd[1625]: 2025-11-05 00:14:18.108 [INFO][4235] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.128/26 handle="k8s-pod-network.e697925504af6bed6ae0e1f10f0271f4aad1fdef50de37b3be4286d7be44ffb3" host="172-232-14-37" Nov 5 00:14:18.316394 containerd[1625]: 2025-11-05 00:14:18.116 [INFO][4235] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e697925504af6bed6ae0e1f10f0271f4aad1fdef50de37b3be4286d7be44ffb3 Nov 5 00:14:18.316394 containerd[1625]: 2025-11-05 00:14:18.157 [INFO][4235] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.128/26 handle="k8s-pod-network.e697925504af6bed6ae0e1f10f0271f4aad1fdef50de37b3be4286d7be44ffb3" host="172-232-14-37" Nov 5 00:14:18.316394 containerd[1625]: 2025-11-05 00:14:18.182 [INFO][4235] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.133/26] block=192.168.117.128/26 
handle="k8s-pod-network.e697925504af6bed6ae0e1f10f0271f4aad1fdef50de37b3be4286d7be44ffb3" host="172-232-14-37" Nov 5 00:14:18.316394 containerd[1625]: 2025-11-05 00:14:18.183 [INFO][4235] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.133/26] handle="k8s-pod-network.e697925504af6bed6ae0e1f10f0271f4aad1fdef50de37b3be4286d7be44ffb3" host="172-232-14-37" Nov 5 00:14:18.316394 containerd[1625]: 2025-11-05 00:14:18.183 [INFO][4235] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 00:14:18.316394 containerd[1625]: 2025-11-05 00:14:18.184 [INFO][4235] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.133/26] IPv6=[] ContainerID="e697925504af6bed6ae0e1f10f0271f4aad1fdef50de37b3be4286d7be44ffb3" HandleID="k8s-pod-network.e697925504af6bed6ae0e1f10f0271f4aad1fdef50de37b3be4286d7be44ffb3" Workload="172--232--14--37-k8s-goldmane--666569f655--kf7jk-eth0" Nov 5 00:14:18.320718 containerd[1625]: 2025-11-05 00:14:18.205 [INFO][4139] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e697925504af6bed6ae0e1f10f0271f4aad1fdef50de37b3be4286d7be44ffb3" Namespace="calico-system" Pod="goldmane-666569f655-kf7jk" WorkloadEndpoint="172--232--14--37-k8s-goldmane--666569f655--kf7jk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--14--37-k8s-goldmane--666569f655--kf7jk-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"6ee5090e-a223-462e-845a-5c7f9446afa1", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 0, 13, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-14-37", ContainerID:"", Pod:"goldmane-666569f655-kf7jk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.117.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali586e6ffc74f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 00:14:18.320718 containerd[1625]: 2025-11-05 00:14:18.206 [INFO][4139] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.133/32] ContainerID="e697925504af6bed6ae0e1f10f0271f4aad1fdef50de37b3be4286d7be44ffb3" Namespace="calico-system" Pod="goldmane-666569f655-kf7jk" WorkloadEndpoint="172--232--14--37-k8s-goldmane--666569f655--kf7jk-eth0" Nov 5 00:14:18.320718 containerd[1625]: 2025-11-05 00:14:18.206 [INFO][4139] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali586e6ffc74f ContainerID="e697925504af6bed6ae0e1f10f0271f4aad1fdef50de37b3be4286d7be44ffb3" Namespace="calico-system" Pod="goldmane-666569f655-kf7jk" WorkloadEndpoint="172--232--14--37-k8s-goldmane--666569f655--kf7jk-eth0" Nov 5 00:14:18.320718 containerd[1625]: 2025-11-05 00:14:18.224 [INFO][4139] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e697925504af6bed6ae0e1f10f0271f4aad1fdef50de37b3be4286d7be44ffb3" Namespace="calico-system" Pod="goldmane-666569f655-kf7jk" WorkloadEndpoint="172--232--14--37-k8s-goldmane--666569f655--kf7jk-eth0" Nov 5 00:14:18.320718 containerd[1625]: 2025-11-05 00:14:18.234 [INFO][4139] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e697925504af6bed6ae0e1f10f0271f4aad1fdef50de37b3be4286d7be44ffb3" Namespace="calico-system" 
Pod="goldmane-666569f655-kf7jk" WorkloadEndpoint="172--232--14--37-k8s-goldmane--666569f655--kf7jk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--14--37-k8s-goldmane--666569f655--kf7jk-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"6ee5090e-a223-462e-845a-5c7f9446afa1", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 0, 13, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-14-37", ContainerID:"e697925504af6bed6ae0e1f10f0271f4aad1fdef50de37b3be4286d7be44ffb3", Pod:"goldmane-666569f655-kf7jk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.117.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali586e6ffc74f", MAC:"8a:26:74:72:fb:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 00:14:18.320718 containerd[1625]: 2025-11-05 00:14:18.305 [INFO][4139] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e697925504af6bed6ae0e1f10f0271f4aad1fdef50de37b3be4286d7be44ffb3" Namespace="calico-system" Pod="goldmane-666569f655-kf7jk" WorkloadEndpoint="172--232--14--37-k8s-goldmane--666569f655--kf7jk-eth0" Nov 5 00:14:18.411298 systemd-networkd[1524]: 
calic1cf2a271f1: Link UP Nov 5 00:14:18.419251 systemd-networkd[1524]: calic1cf2a271f1: Gained carrier Nov 5 00:14:18.421374 containerd[1625]: time="2025-11-05T00:14:18.420411902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bstvr,Uid:0ceef31c-66d4-4e84-87fa-b07ef462b872,Namespace:kube-system,Attempt:0,} returns sandbox id \"95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9\"" Nov 5 00:14:18.426009 kubelet[2883]: E1105 00:14:18.425495 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:14:18.440766 containerd[1625]: time="2025-11-05T00:14:18.440472546Z" level=info msg="CreateContainer within sandbox \"95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 00:14:18.496277 containerd[1625]: time="2025-11-05T00:14:18.495363240Z" level=info msg="Container 44e2252499f8dd74f386fb43c22fe6afba3cd2d30b253edb38df447a89ac7116: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:14:18.517645 containerd[1625]: time="2025-11-05T00:14:18.517593008Z" level=info msg="connecting to shim e697925504af6bed6ae0e1f10f0271f4aad1fdef50de37b3be4286d7be44ffb3" address="unix:///run/containerd/s/e6d1442fc0f6c0c631f815454476f8ff0e739aeff5b03060b9e9dc4a1cee389b" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:14:18.527927 containerd[1625]: 2025-11-05 00:14:17.392 [INFO][4159] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 00:14:18.527927 containerd[1625]: 2025-11-05 00:14:17.493 [INFO][4159] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--14--37-k8s-calico--kube--controllers--666f7c64f9--pjzbv-eth0 calico-kube-controllers-666f7c64f9- calico-system dca6511f-77a2-4cca-9f19-2aca1b8d75e8 925 0 2025-11-05 00:13:44 +0000 UTC 
map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:666f7c64f9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-232-14-37 calico-kube-controllers-666f7c64f9-pjzbv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic1cf2a271f1 [] [] }} ContainerID="32fb2275e227760bcd2a778de071750f84d1c86c7fc924e69c4c408f2c1562c0" Namespace="calico-system" Pod="calico-kube-controllers-666f7c64f9-pjzbv" WorkloadEndpoint="172--232--14--37-k8s-calico--kube--controllers--666f7c64f9--pjzbv-" Nov 5 00:14:18.527927 containerd[1625]: 2025-11-05 00:14:17.496 [INFO][4159] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="32fb2275e227760bcd2a778de071750f84d1c86c7fc924e69c4c408f2c1562c0" Namespace="calico-system" Pod="calico-kube-controllers-666f7c64f9-pjzbv" WorkloadEndpoint="172--232--14--37-k8s-calico--kube--controllers--666f7c64f9--pjzbv-eth0" Nov 5 00:14:18.527927 containerd[1625]: 2025-11-05 00:14:18.038 [INFO][4254] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32fb2275e227760bcd2a778de071750f84d1c86c7fc924e69c4c408f2c1562c0" HandleID="k8s-pod-network.32fb2275e227760bcd2a778de071750f84d1c86c7fc924e69c4c408f2c1562c0" Workload="172--232--14--37-k8s-calico--kube--controllers--666f7c64f9--pjzbv-eth0" Nov 5 00:14:18.527927 containerd[1625]: 2025-11-05 00:14:18.052 [INFO][4254] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="32fb2275e227760bcd2a778de071750f84d1c86c7fc924e69c4c408f2c1562c0" HandleID="k8s-pod-network.32fb2275e227760bcd2a778de071750f84d1c86c7fc924e69c4c408f2c1562c0" Workload="172--232--14--37-k8s-calico--kube--controllers--666f7c64f9--pjzbv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000368a80), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-14-37", 
"pod":"calico-kube-controllers-666f7c64f9-pjzbv", "timestamp":"2025-11-05 00:14:18.038952791 +0000 UTC"}, Hostname:"172-232-14-37", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 00:14:18.527927 containerd[1625]: 2025-11-05 00:14:18.052 [INFO][4254] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 00:14:18.527927 containerd[1625]: 2025-11-05 00:14:18.183 [INFO][4254] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 00:14:18.527927 containerd[1625]: 2025-11-05 00:14:18.183 [INFO][4254] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-14-37' Nov 5 00:14:18.527927 containerd[1625]: 2025-11-05 00:14:18.201 [INFO][4254] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.32fb2275e227760bcd2a778de071750f84d1c86c7fc924e69c4c408f2c1562c0" host="172-232-14-37" Nov 5 00:14:18.527927 containerd[1625]: 2025-11-05 00:14:18.242 [INFO][4254] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-14-37" Nov 5 00:14:18.527927 containerd[1625]: 2025-11-05 00:14:18.273 [INFO][4254] ipam/ipam.go 511: Trying affinity for 192.168.117.128/26 host="172-232-14-37" Nov 5 00:14:18.527927 containerd[1625]: 2025-11-05 00:14:18.280 [INFO][4254] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.128/26 host="172-232-14-37" Nov 5 00:14:18.527927 containerd[1625]: 2025-11-05 00:14:18.296 [INFO][4254] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.128/26 host="172-232-14-37" Nov 5 00:14:18.527927 containerd[1625]: 2025-11-05 00:14:18.297 [INFO][4254] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.128/26 handle="k8s-pod-network.32fb2275e227760bcd2a778de071750f84d1c86c7fc924e69c4c408f2c1562c0" host="172-232-14-37" Nov 5 00:14:18.527927 containerd[1625]: 2025-11-05 
00:14:18.307 [INFO][4254] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.32fb2275e227760bcd2a778de071750f84d1c86c7fc924e69c4c408f2c1562c0 Nov 5 00:14:18.527927 containerd[1625]: 2025-11-05 00:14:18.324 [INFO][4254] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.128/26 handle="k8s-pod-network.32fb2275e227760bcd2a778de071750f84d1c86c7fc924e69c4c408f2c1562c0" host="172-232-14-37" Nov 5 00:14:18.527927 containerd[1625]: 2025-11-05 00:14:18.346 [INFO][4254] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.134/26] block=192.168.117.128/26 handle="k8s-pod-network.32fb2275e227760bcd2a778de071750f84d1c86c7fc924e69c4c408f2c1562c0" host="172-232-14-37" Nov 5 00:14:18.527927 containerd[1625]: 2025-11-05 00:14:18.348 [INFO][4254] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.134/26] handle="k8s-pod-network.32fb2275e227760bcd2a778de071750f84d1c86c7fc924e69c4c408f2c1562c0" host="172-232-14-37" Nov 5 00:14:18.527927 containerd[1625]: 2025-11-05 00:14:18.354 [INFO][4254] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 00:14:18.527927 containerd[1625]: 2025-11-05 00:14:18.354 [INFO][4254] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.134/26] IPv6=[] ContainerID="32fb2275e227760bcd2a778de071750f84d1c86c7fc924e69c4c408f2c1562c0" HandleID="k8s-pod-network.32fb2275e227760bcd2a778de071750f84d1c86c7fc924e69c4c408f2c1562c0" Workload="172--232--14--37-k8s-calico--kube--controllers--666f7c64f9--pjzbv-eth0" Nov 5 00:14:18.530348 containerd[1625]: 2025-11-05 00:14:18.379 [INFO][4159] cni-plugin/k8s.go 418: Populated endpoint ContainerID="32fb2275e227760bcd2a778de071750f84d1c86c7fc924e69c4c408f2c1562c0" Namespace="calico-system" Pod="calico-kube-controllers-666f7c64f9-pjzbv" WorkloadEndpoint="172--232--14--37-k8s-calico--kube--controllers--666f7c64f9--pjzbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--14--37-k8s-calico--kube--controllers--666f7c64f9--pjzbv-eth0", GenerateName:"calico-kube-controllers-666f7c64f9-", Namespace:"calico-system", SelfLink:"", UID:"dca6511f-77a2-4cca-9f19-2aca1b8d75e8", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 0, 13, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"666f7c64f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-14-37", ContainerID:"", Pod:"calico-kube-controllers-666f7c64f9-pjzbv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.117.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1cf2a271f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 00:14:18.530348 containerd[1625]: 2025-11-05 00:14:18.393 [INFO][4159] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.134/32] ContainerID="32fb2275e227760bcd2a778de071750f84d1c86c7fc924e69c4c408f2c1562c0" Namespace="calico-system" Pod="calico-kube-controllers-666f7c64f9-pjzbv" WorkloadEndpoint="172--232--14--37-k8s-calico--kube--controllers--666f7c64f9--pjzbv-eth0" Nov 5 00:14:18.530348 containerd[1625]: 2025-11-05 00:14:18.393 [INFO][4159] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1cf2a271f1 ContainerID="32fb2275e227760bcd2a778de071750f84d1c86c7fc924e69c4c408f2c1562c0" Namespace="calico-system" Pod="calico-kube-controllers-666f7c64f9-pjzbv" WorkloadEndpoint="172--232--14--37-k8s-calico--kube--controllers--666f7c64f9--pjzbv-eth0" Nov 5 00:14:18.530348 containerd[1625]: 2025-11-05 00:14:18.420 [INFO][4159] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32fb2275e227760bcd2a778de071750f84d1c86c7fc924e69c4c408f2c1562c0" Namespace="calico-system" Pod="calico-kube-controllers-666f7c64f9-pjzbv" WorkloadEndpoint="172--232--14--37-k8s-calico--kube--controllers--666f7c64f9--pjzbv-eth0" Nov 5 00:14:18.530348 containerd[1625]: 2025-11-05 00:14:18.426 [INFO][4159] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="32fb2275e227760bcd2a778de071750f84d1c86c7fc924e69c4c408f2c1562c0" Namespace="calico-system" Pod="calico-kube-controllers-666f7c64f9-pjzbv" WorkloadEndpoint="172--232--14--37-k8s-calico--kube--controllers--666f7c64f9--pjzbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--14--37-k8s-calico--kube--controllers--666f7c64f9--pjzbv-eth0", GenerateName:"calico-kube-controllers-666f7c64f9-", Namespace:"calico-system", SelfLink:"", UID:"dca6511f-77a2-4cca-9f19-2aca1b8d75e8", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 0, 13, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"666f7c64f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-14-37", ContainerID:"32fb2275e227760bcd2a778de071750f84d1c86c7fc924e69c4c408f2c1562c0", Pod:"calico-kube-controllers-666f7c64f9-pjzbv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.117.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1cf2a271f1", MAC:"12:6c:14:33:30:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 00:14:18.530348 containerd[1625]: 2025-11-05 00:14:18.470 [INFO][4159] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="32fb2275e227760bcd2a778de071750f84d1c86c7fc924e69c4c408f2c1562c0" Namespace="calico-system" Pod="calico-kube-controllers-666f7c64f9-pjzbv" WorkloadEndpoint="172--232--14--37-k8s-calico--kube--controllers--666f7c64f9--pjzbv-eth0" Nov 5 00:14:18.562491 containerd[1625]: time="2025-11-05T00:14:18.562006252Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9d74h,Uid:39f0296c-eb04-4fb2-8eac-b5af134b840e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf\"" Nov 5 00:14:18.564720 kubelet[2883]: E1105 00:14:18.564681 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:14:18.572118 containerd[1625]: time="2025-11-05T00:14:18.571776615Z" level=info msg="CreateContainer within sandbox \"95f5895db666eae63d4f2d8143e4fb19df6647fcbe9d85af41140e6b146543f9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"44e2252499f8dd74f386fb43c22fe6afba3cd2d30b253edb38df447a89ac7116\"" Nov 5 00:14:18.573260 containerd[1625]: time="2025-11-05T00:14:18.573097064Z" level=info msg="StartContainer for \"44e2252499f8dd74f386fb43c22fe6afba3cd2d30b253edb38df447a89ac7116\"" Nov 5 00:14:18.573601 containerd[1625]: time="2025-11-05T00:14:18.573574489Z" level=info msg="CreateContainer within sandbox \"e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 00:14:18.579681 containerd[1625]: time="2025-11-05T00:14:18.579385435Z" level=info msg="connecting to shim 44e2252499f8dd74f386fb43c22fe6afba3cd2d30b253edb38df447a89ac7116" address="unix:///run/containerd/s/5b7c1d3c87e6cd4810e5613572231bf236d8795cf5b3347b95070aad3b239a52" protocol=ttrpc version=3 Nov 5 00:14:18.602466 containerd[1625]: time="2025-11-05T00:14:18.601849435Z" level=info msg="Container e443c59d7bc137328bb963902da0ae9c821830a79ed0ae722d0e9fdf83fd621f: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:14:18.616009 containerd[1625]: time="2025-11-05T00:14:18.615971187Z" level=info msg="CreateContainer within sandbox \"e4e9ad001f0f5cf593cc477809d7598847d72e8105719ab259a76a196cc4a0bf\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e443c59d7bc137328bb963902da0ae9c821830a79ed0ae722d0e9fdf83fd621f\"" Nov 5 00:14:18.617966 containerd[1625]: time="2025-11-05T00:14:18.617933680Z" level=info msg="StartContainer for \"e443c59d7bc137328bb963902da0ae9c821830a79ed0ae722d0e9fdf83fd621f\"" Nov 5 00:14:18.620893 containerd[1625]: time="2025-11-05T00:14:18.620862004Z" level=info msg="connecting to shim e443c59d7bc137328bb963902da0ae9c821830a79ed0ae722d0e9fdf83fd621f" address="unix:///run/containerd/s/5406d787177f61ce076831175a0ae51e79b16f50894b25b75d130cc555c680de" protocol=ttrpc version=3 Nov 5 00:14:18.642109 containerd[1625]: time="2025-11-05T00:14:18.642072088Z" level=info msg="connecting to shim 32fb2275e227760bcd2a778de071750f84d1c86c7fc924e69c4c408f2c1562c0" address="unix:///run/containerd/s/07be20e46d9bc180a0235a6597bc619725b68ee422d0a9a4511d46f4cb6f8865" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:14:18.721595 systemd[1]: Started cri-containerd-44e2252499f8dd74f386fb43c22fe6afba3cd2d30b253edb38df447a89ac7116.scope - libcontainer container 44e2252499f8dd74f386fb43c22fe6afba3cd2d30b253edb38df447a89ac7116. Nov 5 00:14:18.761131 systemd-networkd[1524]: calie62cbdc697b: Gained IPv6LL Nov 5 00:14:18.807498 systemd[1]: Started cri-containerd-e443c59d7bc137328bb963902da0ae9c821830a79ed0ae722d0e9fdf83fd621f.scope - libcontainer container e443c59d7bc137328bb963902da0ae9c821830a79ed0ae722d0e9fdf83fd621f. Nov 5 00:14:18.815420 systemd[1]: Started cri-containerd-e697925504af6bed6ae0e1f10f0271f4aad1fdef50de37b3be4286d7be44ffb3.scope - libcontainer container e697925504af6bed6ae0e1f10f0271f4aad1fdef50de37b3be4286d7be44ffb3. 
Nov 5 00:14:18.857084 kubelet[2883]: E1105 00:14:18.856891 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c555f98cb-r98t2" podUID="2078fd5e-a067-4d0f-9d6a-ce64f9873547" Nov 5 00:14:18.875807 systemd-networkd[1524]: cali4f19ce0249b: Link UP Nov 5 00:14:18.882186 systemd-networkd[1524]: cali4f19ce0249b: Gained carrier Nov 5 00:14:18.889900 systemd-networkd[1524]: calibf59fb6c349: Gained IPv6LL Nov 5 00:14:18.947279 containerd[1625]: 2025-11-05 00:14:18.189 [INFO][4287] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 00:14:18.947279 containerd[1625]: 2025-11-05 00:14:18.282 [INFO][4287] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--14--37-k8s-csi--node--driver--rhc65-eth0 csi-node-driver- calico-system 14de6d4c-7243-4b75-9a89-9c47bcb946c9 791 0 2025-11-05 00:13:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-232-14-37 csi-node-driver-rhc65 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4f19ce0249b [] [] }} ContainerID="0dd2e15cc463a15c1dd7acdf8827afc0a8155f32a699c286b393869f96cd2f70" Namespace="calico-system" Pod="csi-node-driver-rhc65" WorkloadEndpoint="172--232--14--37-k8s-csi--node--driver--rhc65-" Nov 5 00:14:18.947279 containerd[1625]: 2025-11-05 00:14:18.283 [INFO][4287] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0dd2e15cc463a15c1dd7acdf8827afc0a8155f32a699c286b393869f96cd2f70" Namespace="calico-system" Pod="csi-node-driver-rhc65" WorkloadEndpoint="172--232--14--37-k8s-csi--node--driver--rhc65-eth0" Nov 5 00:14:18.947279 containerd[1625]: 2025-11-05 00:14:18.540 [INFO][4379] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0dd2e15cc463a15c1dd7acdf8827afc0a8155f32a699c286b393869f96cd2f70" HandleID="k8s-pod-network.0dd2e15cc463a15c1dd7acdf8827afc0a8155f32a699c286b393869f96cd2f70" Workload="172--232--14--37-k8s-csi--node--driver--rhc65-eth0" Nov 5 00:14:18.947279 containerd[1625]: 2025-11-05 00:14:18.540 [INFO][4379] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0dd2e15cc463a15c1dd7acdf8827afc0a8155f32a699c286b393869f96cd2f70" HandleID="k8s-pod-network.0dd2e15cc463a15c1dd7acdf8827afc0a8155f32a699c286b393869f96cd2f70" Workload="172--232--14--37-k8s-csi--node--driver--rhc65-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f950), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-14-37", "pod":"csi-node-driver-rhc65", "timestamp":"2025-11-05 00:14:18.538690057 +0000 UTC"}, Hostname:"172-232-14-37", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 00:14:18.947279 containerd[1625]: 
2025-11-05 00:14:18.540 [INFO][4379] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 00:14:18.947279 containerd[1625]: 2025-11-05 00:14:18.540 [INFO][4379] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 00:14:18.947279 containerd[1625]: 2025-11-05 00:14:18.540 [INFO][4379] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-14-37' Nov 5 00:14:18.947279 containerd[1625]: 2025-11-05 00:14:18.560 [INFO][4379] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0dd2e15cc463a15c1dd7acdf8827afc0a8155f32a699c286b393869f96cd2f70" host="172-232-14-37" Nov 5 00:14:18.947279 containerd[1625]: 2025-11-05 00:14:18.596 [INFO][4379] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-14-37" Nov 5 00:14:18.947279 containerd[1625]: 2025-11-05 00:14:18.676 [INFO][4379] ipam/ipam.go 511: Trying affinity for 192.168.117.128/26 host="172-232-14-37" Nov 5 00:14:18.947279 containerd[1625]: 2025-11-05 00:14:18.686 [INFO][4379] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.128/26 host="172-232-14-37" Nov 5 00:14:18.947279 containerd[1625]: 2025-11-05 00:14:18.704 [INFO][4379] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.128/26 host="172-232-14-37" Nov 5 00:14:18.947279 containerd[1625]: 2025-11-05 00:14:18.704 [INFO][4379] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.128/26 handle="k8s-pod-network.0dd2e15cc463a15c1dd7acdf8827afc0a8155f32a699c286b393869f96cd2f70" host="172-232-14-37" Nov 5 00:14:18.947279 containerd[1625]: 2025-11-05 00:14:18.738 [INFO][4379] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0dd2e15cc463a15c1dd7acdf8827afc0a8155f32a699c286b393869f96cd2f70 Nov 5 00:14:18.947279 containerd[1625]: 2025-11-05 00:14:18.769 [INFO][4379] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.128/26 
handle="k8s-pod-network.0dd2e15cc463a15c1dd7acdf8827afc0a8155f32a699c286b393869f96cd2f70" host="172-232-14-37" Nov 5 00:14:18.947279 containerd[1625]: 2025-11-05 00:14:18.819 [INFO][4379] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.135/26] block=192.168.117.128/26 handle="k8s-pod-network.0dd2e15cc463a15c1dd7acdf8827afc0a8155f32a699c286b393869f96cd2f70" host="172-232-14-37" Nov 5 00:14:18.947279 containerd[1625]: 2025-11-05 00:14:18.819 [INFO][4379] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.135/26] handle="k8s-pod-network.0dd2e15cc463a15c1dd7acdf8827afc0a8155f32a699c286b393869f96cd2f70" host="172-232-14-37" Nov 5 00:14:18.947279 containerd[1625]: 2025-11-05 00:14:18.820 [INFO][4379] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 00:14:18.947279 containerd[1625]: 2025-11-05 00:14:18.820 [INFO][4379] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.135/26] IPv6=[] ContainerID="0dd2e15cc463a15c1dd7acdf8827afc0a8155f32a699c286b393869f96cd2f70" HandleID="k8s-pod-network.0dd2e15cc463a15c1dd7acdf8827afc0a8155f32a699c286b393869f96cd2f70" Workload="172--232--14--37-k8s-csi--node--driver--rhc65-eth0" Nov 5 00:14:18.952372 containerd[1625]: 2025-11-05 00:14:18.854 [INFO][4287] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0dd2e15cc463a15c1dd7acdf8827afc0a8155f32a699c286b393869f96cd2f70" Namespace="calico-system" Pod="csi-node-driver-rhc65" WorkloadEndpoint="172--232--14--37-k8s-csi--node--driver--rhc65-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--14--37-k8s-csi--node--driver--rhc65-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"14de6d4c-7243-4b75-9a89-9c47bcb946c9", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 0, 13, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-14-37", ContainerID:"", Pod:"csi-node-driver-rhc65", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.117.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4f19ce0249b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 00:14:18.952372 containerd[1625]: 2025-11-05 00:14:18.858 [INFO][4287] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.135/32] ContainerID="0dd2e15cc463a15c1dd7acdf8827afc0a8155f32a699c286b393869f96cd2f70" Namespace="calico-system" Pod="csi-node-driver-rhc65" WorkloadEndpoint="172--232--14--37-k8s-csi--node--driver--rhc65-eth0" Nov 5 00:14:18.952372 containerd[1625]: 2025-11-05 00:14:18.858 [INFO][4287] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f19ce0249b ContainerID="0dd2e15cc463a15c1dd7acdf8827afc0a8155f32a699c286b393869f96cd2f70" Namespace="calico-system" Pod="csi-node-driver-rhc65" WorkloadEndpoint="172--232--14--37-k8s-csi--node--driver--rhc65-eth0" Nov 5 00:14:18.952372 containerd[1625]: 2025-11-05 00:14:18.887 [INFO][4287] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0dd2e15cc463a15c1dd7acdf8827afc0a8155f32a699c286b393869f96cd2f70" Namespace="calico-system" Pod="csi-node-driver-rhc65" 
WorkloadEndpoint="172--232--14--37-k8s-csi--node--driver--rhc65-eth0" Nov 5 00:14:18.952372 containerd[1625]: 2025-11-05 00:14:18.893 [INFO][4287] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0dd2e15cc463a15c1dd7acdf8827afc0a8155f32a699c286b393869f96cd2f70" Namespace="calico-system" Pod="csi-node-driver-rhc65" WorkloadEndpoint="172--232--14--37-k8s-csi--node--driver--rhc65-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--14--37-k8s-csi--node--driver--rhc65-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"14de6d4c-7243-4b75-9a89-9c47bcb946c9", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 0, 13, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-14-37", ContainerID:"0dd2e15cc463a15c1dd7acdf8827afc0a8155f32a699c286b393869f96cd2f70", Pod:"csi-node-driver-rhc65", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.117.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4f19ce0249b", MAC:"2e:30:b0:cc:e2:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Nov 5 00:14:18.952372 containerd[1625]: 2025-11-05 00:14:18.921 [INFO][4287] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0dd2e15cc463a15c1dd7acdf8827afc0a8155f32a699c286b393869f96cd2f70" Namespace="calico-system" Pod="csi-node-driver-rhc65" WorkloadEndpoint="172--232--14--37-k8s-csi--node--driver--rhc65-eth0" Nov 5 00:14:18.960558 systemd[1]: Started cri-containerd-32fb2275e227760bcd2a778de071750f84d1c86c7fc924e69c4c408f2c1562c0.scope - libcontainer container 32fb2275e227760bcd2a778de071750f84d1c86c7fc924e69c4c408f2c1562c0. Nov 5 00:14:19.052523 containerd[1625]: time="2025-11-05T00:14:19.052419924Z" level=info msg="connecting to shim 0dd2e15cc463a15c1dd7acdf8827afc0a8155f32a699c286b393869f96cd2f70" address="unix:///run/containerd/s/4422319d44a21f662eaa2abc29e54e6f3eca6f7e97db3ab847d2b27d35571d16" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:14:19.082181 containerd[1625]: time="2025-11-05T00:14:19.082033788Z" level=info msg="StartContainer for \"e443c59d7bc137328bb963902da0ae9c821830a79ed0ae722d0e9fdf83fd621f\" returns successfully" Nov 5 00:14:19.095141 containerd[1625]: time="2025-11-05T00:14:19.095083365Z" level=info msg="StartContainer for \"44e2252499f8dd74f386fb43c22fe6afba3cd2d30b253edb38df447a89ac7116\" returns successfully" Nov 5 00:14:19.184047 systemd[1]: Started cri-containerd-0dd2e15cc463a15c1dd7acdf8827afc0a8155f32a699c286b393869f96cd2f70.scope - libcontainer container 0dd2e15cc463a15c1dd7acdf8827afc0a8155f32a699c286b393869f96cd2f70. 
Nov 5 00:14:19.418739 containerd[1625]: time="2025-11-05T00:14:19.418284314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b875bb7d7-9tw6k,Uid:78cc6732-2ab7-4966-83c9-5b3b3e112a51,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0a35fdd5f69c36ad7de358691ace68c16acba8d0ab1dd5a04e8b27772c464b80\"" Nov 5 00:14:19.425478 containerd[1625]: time="2025-11-05T00:14:19.425135115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 00:14:19.465539 systemd-networkd[1524]: cali586e6ffc74f: Gained IPv6LL Nov 5 00:14:19.561267 containerd[1625]: time="2025-11-05T00:14:19.560971597Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:14:19.565149 containerd[1625]: time="2025-11-05T00:14:19.564977192Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 00:14:19.565149 containerd[1625]: time="2025-11-05T00:14:19.565116119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 00:14:19.565667 kubelet[2883]: E1105 00:14:19.565489 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 00:14:19.566909 kubelet[2883]: E1105 00:14:19.566415 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 00:14:19.567571 kubelet[2883]: E1105 00:14:19.567201 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lbcnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b875bb7d7-9tw6k_calico-apiserver(78cc6732-2ab7-4966-83c9-5b3b3e112a51): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 00:14:19.568416 kubelet[2883]: E1105 00:14:19.568361 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-9tw6k" podUID="78cc6732-2ab7-4966-83c9-5b3b3e112a51" Nov 5 00:14:19.593468 systemd-networkd[1524]: calic1cf2a271f1: Gained IPv6LL Nov 5 00:14:19.683644 containerd[1625]: time="2025-11-05T00:14:19.678704395Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-rhc65,Uid:14de6d4c-7243-4b75-9a89-9c47bcb946c9,Namespace:calico-system,Attempt:0,} returns sandbox id \"0dd2e15cc463a15c1dd7acdf8827afc0a8155f32a699c286b393869f96cd2f70\"" Nov 5 00:14:19.705835 containerd[1625]: time="2025-11-05T00:14:19.705725526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b875bb7d7-78sgr,Uid:346c021e-f948-4f90-b480-e046118d7005,Namespace:calico-apiserver,Attempt:0,}" Nov 5 00:14:19.725365 containerd[1625]: time="2025-11-05T00:14:19.724024062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 00:14:19.731131 containerd[1625]: time="2025-11-05T00:14:19.731078932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kf7jk,Uid:6ee5090e-a223-462e-845a-5c7f9446afa1,Namespace:calico-system,Attempt:0,} returns sandbox id \"e697925504af6bed6ae0e1f10f0271f4aad1fdef50de37b3be4286d7be44ffb3\"" Nov 5 00:14:19.786687 containerd[1625]: time="2025-11-05T00:14:19.786646782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-666f7c64f9-pjzbv,Uid:dca6511f-77a2-4cca-9f19-2aca1b8d75e8,Namespace:calico-system,Attempt:0,} returns sandbox id \"32fb2275e227760bcd2a778de071750f84d1c86c7fc924e69c4c408f2c1562c0\"" Nov 5 00:14:19.890098 kubelet[2883]: E1105 00:14:19.890040 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:14:19.892109 containerd[1625]: time="2025-11-05T00:14:19.891818648Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:14:19.896371 containerd[1625]: time="2025-11-05T00:14:19.896283716Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 00:14:19.896476 containerd[1625]: time="2025-11-05T00:14:19.896447674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 00:14:19.897522 kubelet[2883]: E1105 00:14:19.897468 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 00:14:19.897522 kubelet[2883]: E1105 00:14:19.897518 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 00:14:19.900029 kubelet[2883]: E1105 00:14:19.899834 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cpsls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rhc65_calico-system(14de6d4c-7243-4b75-9a89-9c47bcb946c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 00:14:19.905461 containerd[1625]: time="2025-11-05T00:14:19.905395452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 00:14:19.931596 kubelet[2883]: E1105 00:14:19.930900 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-9tw6k" podUID="78cc6732-2ab7-4966-83c9-5b3b3e112a51" Nov 5 00:14:19.944985 kubelet[2883]: E1105 00:14:19.944925 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:14:19.947800 kubelet[2883]: I1105 00:14:19.947616 2883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bstvr" podStartSLOduration=57.946807318 podStartE2EDuration="57.946807318s" podCreationTimestamp="2025-11-05 00:13:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 00:14:19.935153113 +0000 UTC m=+63.664495769" watchObservedRunningTime="2025-11-05 00:14:19.946807318 +0000 UTC m=+63.676149974" Nov 5 00:14:20.072588 containerd[1625]: time="2025-11-05T00:14:20.072501254Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:14:20.074369 containerd[1625]: time="2025-11-05T00:14:20.074311694Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 00:14:20.074470 containerd[1625]: time="2025-11-05T00:14:20.074418580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 00:14:20.075027 kubelet[2883]: E1105 00:14:20.074922 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 00:14:20.075613 kubelet[2883]: E1105 00:14:20.075099 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 00:14:20.076464 kubelet[2883]: E1105 00:14:20.076220 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5phnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kf7jk_calico-system(6ee5090e-a223-462e-845a-5c7f9446afa1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 00:14:20.078175 containerd[1625]: time="2025-11-05T00:14:20.077920414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 00:14:20.078902 kubelet[2883]: E1105 00:14:20.078861 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kf7jk" podUID="6ee5090e-a223-462e-845a-5c7f9446afa1" Nov 5 00:14:20.201263 systemd-networkd[1524]: caliece4651e67f: Link UP Nov 5 00:14:20.203722 systemd-networkd[1524]: 
caliece4651e67f: Gained carrier Nov 5 00:14:20.224408 kubelet[2883]: I1105 00:14:20.223712 2883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9d74h" podStartSLOduration=58.223682003 podStartE2EDuration="58.223682003s" podCreationTimestamp="2025-11-05 00:13:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 00:14:20.10499231 +0000 UTC m=+63.834334966" watchObservedRunningTime="2025-11-05 00:14:20.223682003 +0000 UTC m=+63.953024649" Nov 5 00:14:20.226543 containerd[1625]: time="2025-11-05T00:14:20.225924814Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:14:20.230030 containerd[1625]: time="2025-11-05T00:14:20.229533444Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 00:14:20.231811 containerd[1625]: 2025-11-05 00:14:19.826 [INFO][4629] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 00:14:20.231811 containerd[1625]: 2025-11-05 00:14:19.849 [INFO][4629] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--14--37-k8s-calico--apiserver--5b875bb7d7--78sgr-eth0 calico-apiserver-5b875bb7d7- calico-apiserver 346c021e-f948-4f90-b480-e046118d7005 926 0 2025-11-05 00:13:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b875bb7d7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-232-14-37 calico-apiserver-5b875bb7d7-78sgr eth0 
calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliece4651e67f [] [] }} ContainerID="96ae7e51a8ec289f5f77b996d8a430958aa0585a1f1ac03c993630601de924da" Namespace="calico-apiserver" Pod="calico-apiserver-5b875bb7d7-78sgr" WorkloadEndpoint="172--232--14--37-k8s-calico--apiserver--5b875bb7d7--78sgr-" Nov 5 00:14:20.231811 containerd[1625]: 2025-11-05 00:14:19.851 [INFO][4629] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96ae7e51a8ec289f5f77b996d8a430958aa0585a1f1ac03c993630601de924da" Namespace="calico-apiserver" Pod="calico-apiserver-5b875bb7d7-78sgr" WorkloadEndpoint="172--232--14--37-k8s-calico--apiserver--5b875bb7d7--78sgr-eth0" Nov 5 00:14:20.231811 containerd[1625]: 2025-11-05 00:14:20.023 [INFO][4645] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96ae7e51a8ec289f5f77b996d8a430958aa0585a1f1ac03c993630601de924da" HandleID="k8s-pod-network.96ae7e51a8ec289f5f77b996d8a430958aa0585a1f1ac03c993630601de924da" Workload="172--232--14--37-k8s-calico--apiserver--5b875bb7d7--78sgr-eth0" Nov 5 00:14:20.231811 containerd[1625]: 2025-11-05 00:14:20.024 [INFO][4645] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="96ae7e51a8ec289f5f77b996d8a430958aa0585a1f1ac03c993630601de924da" HandleID="k8s-pod-network.96ae7e51a8ec289f5f77b996d8a430958aa0585a1f1ac03c993630601de924da" Workload="172--232--14--37-k8s-calico--apiserver--5b875bb7d7--78sgr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032cc30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-232-14-37", "pod":"calico-apiserver-5b875bb7d7-78sgr", "timestamp":"2025-11-05 00:14:20.023472716 +0000 UTC"}, Hostname:"172-232-14-37", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 00:14:20.231811 containerd[1625]: 2025-11-05 
00:14:20.024 [INFO][4645] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 00:14:20.231811 containerd[1625]: 2025-11-05 00:14:20.025 [INFO][4645] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 00:14:20.231811 containerd[1625]: 2025-11-05 00:14:20.025 [INFO][4645] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-14-37' Nov 5 00:14:20.231811 containerd[1625]: 2025-11-05 00:14:20.058 [INFO][4645] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.96ae7e51a8ec289f5f77b996d8a430958aa0585a1f1ac03c993630601de924da" host="172-232-14-37" Nov 5 00:14:20.231811 containerd[1625]: 2025-11-05 00:14:20.111 [INFO][4645] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-14-37" Nov 5 00:14:20.231811 containerd[1625]: 2025-11-05 00:14:20.124 [INFO][4645] ipam/ipam.go 511: Trying affinity for 192.168.117.128/26 host="172-232-14-37" Nov 5 00:14:20.231811 containerd[1625]: 2025-11-05 00:14:20.129 [INFO][4645] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.128/26 host="172-232-14-37" Nov 5 00:14:20.231811 containerd[1625]: 2025-11-05 00:14:20.137 [INFO][4645] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.128/26 host="172-232-14-37" Nov 5 00:14:20.231811 containerd[1625]: 2025-11-05 00:14:20.138 [INFO][4645] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.128/26 handle="k8s-pod-network.96ae7e51a8ec289f5f77b996d8a430958aa0585a1f1ac03c993630601de924da" host="172-232-14-37" Nov 5 00:14:20.231811 containerd[1625]: 2025-11-05 00:14:20.146 [INFO][4645] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.96ae7e51a8ec289f5f77b996d8a430958aa0585a1f1ac03c993630601de924da Nov 5 00:14:20.231811 containerd[1625]: 2025-11-05 00:14:20.163 [INFO][4645] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.128/26 
handle="k8s-pod-network.96ae7e51a8ec289f5f77b996d8a430958aa0585a1f1ac03c993630601de924da" host="172-232-14-37" Nov 5 00:14:20.231811 containerd[1625]: 2025-11-05 00:14:20.187 [INFO][4645] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.136/26] block=192.168.117.128/26 handle="k8s-pod-network.96ae7e51a8ec289f5f77b996d8a430958aa0585a1f1ac03c993630601de924da" host="172-232-14-37" Nov 5 00:14:20.231811 containerd[1625]: 2025-11-05 00:14:20.187 [INFO][4645] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.136/26] handle="k8s-pod-network.96ae7e51a8ec289f5f77b996d8a430958aa0585a1f1ac03c993630601de924da" host="172-232-14-37" Nov 5 00:14:20.231811 containerd[1625]: 2025-11-05 00:14:20.187 [INFO][4645] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 00:14:20.231811 containerd[1625]: 2025-11-05 00:14:20.187 [INFO][4645] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.136/26] IPv6=[] ContainerID="96ae7e51a8ec289f5f77b996d8a430958aa0585a1f1ac03c993630601de924da" HandleID="k8s-pod-network.96ae7e51a8ec289f5f77b996d8a430958aa0585a1f1ac03c993630601de924da" Workload="172--232--14--37-k8s-calico--apiserver--5b875bb7d7--78sgr-eth0" Nov 5 00:14:20.236846 containerd[1625]: 2025-11-05 00:14:20.194 [INFO][4629] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96ae7e51a8ec289f5f77b996d8a430958aa0585a1f1ac03c993630601de924da" Namespace="calico-apiserver" Pod="calico-apiserver-5b875bb7d7-78sgr" WorkloadEndpoint="172--232--14--37-k8s-calico--apiserver--5b875bb7d7--78sgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--14--37-k8s-calico--apiserver--5b875bb7d7--78sgr-eth0", GenerateName:"calico-apiserver-5b875bb7d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"346c021e-f948-4f90-b480-e046118d7005", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 0, 13, 36, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b875bb7d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-14-37", ContainerID:"", Pod:"calico-apiserver-5b875bb7d7-78sgr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliece4651e67f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 00:14:20.236846 containerd[1625]: 2025-11-05 00:14:20.195 [INFO][4629] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.136/32] ContainerID="96ae7e51a8ec289f5f77b996d8a430958aa0585a1f1ac03c993630601de924da" Namespace="calico-apiserver" Pod="calico-apiserver-5b875bb7d7-78sgr" WorkloadEndpoint="172--232--14--37-k8s-calico--apiserver--5b875bb7d7--78sgr-eth0" Nov 5 00:14:20.236846 containerd[1625]: 2025-11-05 00:14:20.195 [INFO][4629] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliece4651e67f ContainerID="96ae7e51a8ec289f5f77b996d8a430958aa0585a1f1ac03c993630601de924da" Namespace="calico-apiserver" Pod="calico-apiserver-5b875bb7d7-78sgr" WorkloadEndpoint="172--232--14--37-k8s-calico--apiserver--5b875bb7d7--78sgr-eth0" Nov 5 00:14:20.236846 containerd[1625]: 2025-11-05 00:14:20.204 [INFO][4629] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="96ae7e51a8ec289f5f77b996d8a430958aa0585a1f1ac03c993630601de924da" Namespace="calico-apiserver" Pod="calico-apiserver-5b875bb7d7-78sgr" WorkloadEndpoint="172--232--14--37-k8s-calico--apiserver--5b875bb7d7--78sgr-eth0" Nov 5 00:14:20.236846 containerd[1625]: 2025-11-05 00:14:20.205 [INFO][4629] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96ae7e51a8ec289f5f77b996d8a430958aa0585a1f1ac03c993630601de924da" Namespace="calico-apiserver" Pod="calico-apiserver-5b875bb7d7-78sgr" WorkloadEndpoint="172--232--14--37-k8s-calico--apiserver--5b875bb7d7--78sgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--14--37-k8s-calico--apiserver--5b875bb7d7--78sgr-eth0", GenerateName:"calico-apiserver-5b875bb7d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"346c021e-f948-4f90-b480-e046118d7005", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 0, 13, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b875bb7d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-14-37", ContainerID:"96ae7e51a8ec289f5f77b996d8a430958aa0585a1f1ac03c993630601de924da", Pod:"calico-apiserver-5b875bb7d7-78sgr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliece4651e67f", MAC:"3a:6f:89:97:b6:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 00:14:20.236846 containerd[1625]: 2025-11-05 00:14:20.225 [INFO][4629] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96ae7e51a8ec289f5f77b996d8a430958aa0585a1f1ac03c993630601de924da" Namespace="calico-apiserver" Pod="calico-apiserver-5b875bb7d7-78sgr" WorkloadEndpoint="172--232--14--37-k8s-calico--apiserver--5b875bb7d7--78sgr-eth0" Nov 5 00:14:20.236846 containerd[1625]: time="2025-11-05T00:14:20.230740644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 00:14:20.236846 containerd[1625]: time="2025-11-05T00:14:20.235886900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 00:14:20.239312 kubelet[2883]: E1105 00:14:20.233719 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 00:14:20.239312 kubelet[2883]: E1105 00:14:20.233793 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 00:14:20.239312 kubelet[2883]: E1105 00:14:20.234069 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l2899,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-666f7c64f9-pjzbv_calico-system(dca6511f-77a2-4cca-9f19-2aca1b8d75e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 00:14:20.239312 kubelet[2883]: E1105 00:14:20.237618 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-666f7c64f9-pjzbv" podUID="dca6511f-77a2-4cca-9f19-2aca1b8d75e8" Nov 5 00:14:20.291258 containerd[1625]: time="2025-11-05T00:14:20.291140158Z" level=info msg="connecting to shim 96ae7e51a8ec289f5f77b996d8a430958aa0585a1f1ac03c993630601de924da" 
address="unix:///run/containerd/s/15399a10010bfe9e8bccd85d0fbf8a3c2bc9ad719b5ef98716f8d9ee84903fe2" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:14:20.358864 systemd[1]: Started cri-containerd-96ae7e51a8ec289f5f77b996d8a430958aa0585a1f1ac03c993630601de924da.scope - libcontainer container 96ae7e51a8ec289f5f77b996d8a430958aa0585a1f1ac03c993630601de924da. Nov 5 00:14:20.386900 containerd[1625]: time="2025-11-05T00:14:20.386473229Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:14:20.388202 containerd[1625]: time="2025-11-05T00:14:20.387430776Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 00:14:20.388399 containerd[1625]: time="2025-11-05T00:14:20.387605015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 00:14:20.388744 kubelet[2883]: E1105 00:14:20.388679 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 00:14:20.389359 kubelet[2883]: E1105 00:14:20.389330 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 00:14:20.391579 kubelet[2883]: E1105 00:14:20.391452 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cpsls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePoli
cy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rhc65_calico-system(14de6d4c-7243-4b75-9a89-9c47bcb946c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 00:14:20.393302 kubelet[2883]: E1105 00:14:20.392766 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhc65" podUID="14de6d4c-7243-4b75-9a89-9c47bcb946c9" Nov 5 00:14:20.538864 containerd[1625]: time="2025-11-05T00:14:20.538802524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b875bb7d7-78sgr,Uid:346c021e-f948-4f90-b480-e046118d7005,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"96ae7e51a8ec289f5f77b996d8a430958aa0585a1f1ac03c993630601de924da\"" Nov 5 00:14:20.546278 containerd[1625]: time="2025-11-05T00:14:20.546191681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 00:14:20.680918 systemd-networkd[1524]: cali4f19ce0249b: Gained IPv6LL Nov 5 00:14:20.702269 containerd[1625]: time="2025-11-05T00:14:20.701549288Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:14:20.711137 containerd[1625]: time="2025-11-05T00:14:20.710260891Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 00:14:20.711137 containerd[1625]: time="2025-11-05T00:14:20.710358396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 00:14:20.713495 kubelet[2883]: E1105 00:14:20.710789 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 00:14:20.713495 kubelet[2883]: E1105 00:14:20.713146 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 00:14:20.716768 kubelet[2883]: E1105 00:14:20.715877 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wd4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b875bb7d7-78sgr_calico-apiserver(346c021e-f948-4f90-b480-e046118d7005): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 00:14:20.717447 kubelet[2883]: E1105 00:14:20.717417 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-78sgr" podUID="346c021e-f948-4f90-b480-e046118d7005" Nov 5 00:14:20.955023 kubelet[2883]: E1105 00:14:20.954827 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:14:20.957949 kubelet[2883]: E1105 00:14:20.955728 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:14:20.959494 kubelet[2883]: E1105 00:14:20.958462 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhc65" podUID="14de6d4c-7243-4b75-9a89-9c47bcb946c9" Nov 5 00:14:20.959908 kubelet[2883]: E1105 00:14:20.959854 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-9tw6k" podUID="78cc6732-2ab7-4966-83c9-5b3b3e112a51" Nov 5 00:14:20.961737 kubelet[2883]: E1105 00:14:20.961293 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kf7jk" podUID="6ee5090e-a223-462e-845a-5c7f9446afa1" Nov 5 00:14:20.962069 kubelet[2883]: E1105 00:14:20.961853 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-78sgr" podUID="346c021e-f948-4f90-b480-e046118d7005" Nov 5 00:14:20.963067 kubelet[2883]: E1105 00:14:20.962888 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-666f7c64f9-pjzbv" podUID="dca6511f-77a2-4cca-9f19-2aca1b8d75e8" Nov 5 00:14:21.287942 systemd-networkd[1524]: vxlan.calico: Link UP Nov 5 00:14:21.291864 systemd-networkd[1524]: vxlan.calico: Gained carrier Nov 5 00:14:21.953922 kubelet[2883]: E1105 00:14:21.953856 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:14:21.960182 kubelet[2883]: E1105 00:14:21.955758 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-78sgr" podUID="346c021e-f948-4f90-b480-e046118d7005" Nov 
5 00:14:21.960182 kubelet[2883]: E1105 00:14:21.954072 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:14:22.153675 systemd-networkd[1524]: caliece4651e67f: Gained IPv6LL Nov 5 00:14:22.968275 kubelet[2883]: E1105 00:14:22.968033 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:14:23.240466 systemd-networkd[1524]: vxlan.calico: Gained IPv6LL Nov 5 00:14:29.695072 containerd[1625]: time="2025-11-05T00:14:29.694783250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 00:14:29.841145 containerd[1625]: time="2025-11-05T00:14:29.841061507Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:14:29.842866 containerd[1625]: time="2025-11-05T00:14:29.842838007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 00:14:29.843071 containerd[1625]: time="2025-11-05T00:14:29.842890839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 00:14:29.843551 kubelet[2883]: E1105 00:14:29.843424 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 00:14:29.844377 kubelet[2883]: E1105 00:14:29.843575 2883 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 00:14:29.844377 kubelet[2883]: E1105 00:14:29.843985 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1218690f0e2c4b6f966178251092a713,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k2g88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-6c555f98cb-r98t2_calico-system(2078fd5e-a067-4d0f-9d6a-ce64f9873547): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 00:14:29.846836 containerd[1625]: time="2025-11-05T00:14:29.846802622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 00:14:29.978370 containerd[1625]: time="2025-11-05T00:14:29.978063672Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:14:29.980561 containerd[1625]: time="2025-11-05T00:14:29.980452545Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 00:14:29.980740 containerd[1625]: time="2025-11-05T00:14:29.980607491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 00:14:29.981133 kubelet[2883]: E1105 00:14:29.980973 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 00:14:29.981432 kubelet[2883]: E1105 00:14:29.981076 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 00:14:29.982629 kubelet[2883]: E1105 00:14:29.982093 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k2g88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices
:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c555f98cb-r98t2_calico-system(2078fd5e-a067-4d0f-9d6a-ce64f9873547): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 00:14:29.984696 kubelet[2883]: E1105 00:14:29.984579 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c555f98cb-r98t2" podUID="2078fd5e-a067-4d0f-9d6a-ce64f9873547" Nov 5 00:14:31.692360 containerd[1625]: time="2025-11-05T00:14:31.691529091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 00:14:31.825570 containerd[1625]: time="2025-11-05T00:14:31.825476201Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:14:31.826949 containerd[1625]: time="2025-11-05T00:14:31.826873553Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 00:14:31.826949 containerd[1625]: time="2025-11-05T00:14:31.826919045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 00:14:31.829300 kubelet[2883]: E1105 00:14:31.827297 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 00:14:31.829300 kubelet[2883]: E1105 00:14:31.827358 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 00:14:31.829300 kubelet[2883]: E1105 00:14:31.827650 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lbcnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b875bb7d7-9tw6k_calico-apiserver(78cc6732-2ab7-4966-83c9-5b3b3e112a51): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 00:14:31.830185 kubelet[2883]: E1105 00:14:31.829308 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-9tw6k" podUID="78cc6732-2ab7-4966-83c9-5b3b3e112a51" Nov 5 00:14:32.696035 containerd[1625]: time="2025-11-05T00:14:32.695693696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 00:14:32.836673 containerd[1625]: time="2025-11-05T00:14:32.836587271Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:14:32.837592 containerd[1625]: time="2025-11-05T00:14:32.837542876Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 00:14:32.837694 containerd[1625]: time="2025-11-05T00:14:32.837637049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 00:14:32.837879 kubelet[2883]: E1105 00:14:32.837813 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 00:14:32.838675 kubelet[2883]: E1105 00:14:32.837877 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 00:14:32.838675 kubelet[2883]: E1105 00:14:32.838316 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5phnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubP
ath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kf7jk_calico-system(6ee5090e-a223-462e-845a-5c7f9446afa1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 00:14:32.839852 kubelet[2883]: E1105 00:14:32.839556 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kf7jk" podUID="6ee5090e-a223-462e-845a-5c7f9446afa1" Nov 5 00:14:33.691471 containerd[1625]: time="2025-11-05T00:14:33.691373291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 00:14:33.839583 containerd[1625]: time="2025-11-05T00:14:33.839518073Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:14:33.841311 containerd[1625]: time="2025-11-05T00:14:33.840979865Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 00:14:33.841311 containerd[1625]: time="2025-11-05T00:14:33.840977785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 00:14:33.841569 kubelet[2883]: E1105 00:14:33.841487 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 00:14:33.842089 kubelet[2883]: E1105 00:14:33.841541 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 00:14:33.843173 kubelet[2883]: E1105 00:14:33.842904 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wd4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b875bb7d7-78sgr_calico-apiserver(346c021e-f948-4f90-b480-e046118d7005): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 00:14:33.845278 kubelet[2883]: E1105 00:14:33.844464 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-78sgr" podUID="346c021e-f948-4f90-b480-e046118d7005" Nov 5 00:14:34.703641 containerd[1625]: time="2025-11-05T00:14:34.703456922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 00:14:34.846692 containerd[1625]: 
time="2025-11-05T00:14:34.846602541Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:14:34.848113 containerd[1625]: time="2025-11-05T00:14:34.848023609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 00:14:34.848628 containerd[1625]: time="2025-11-05T00:14:34.848044440Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 00:14:34.849174 kubelet[2883]: E1105 00:14:34.849049 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 00:14:34.849773 kubelet[2883]: E1105 00:14:34.849287 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 00:14:34.849773 kubelet[2883]: E1105 00:14:34.849575 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cpsls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rhc65_calico-system(14de6d4c-7243-4b75-9a89-9c47bcb946c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 00:14:34.853349 containerd[1625]: time="2025-11-05T00:14:34.853307861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 00:14:34.989888 containerd[1625]: time="2025-11-05T00:14:34.989614625Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:14:34.991257 containerd[1625]: time="2025-11-05T00:14:34.991063254Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 00:14:34.991257 containerd[1625]: time="2025-11-05T00:14:34.991157048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 00:14:34.991539 kubelet[2883]: E1105 00:14:34.991485 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 00:14:34.991676 kubelet[2883]: E1105 00:14:34.991561 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 00:14:34.991834 kubelet[2883]: E1105 
00:14:34.991745 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cpsls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-rhc65_calico-system(14de6d4c-7243-4b75-9a89-9c47bcb946c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 00:14:34.993313 kubelet[2883]: E1105 00:14:34.993166 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhc65" podUID="14de6d4c-7243-4b75-9a89-9c47bcb946c9" Nov 5 00:14:36.690592 kubelet[2883]: E1105 00:14:36.690482 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:14:36.694118 containerd[1625]: time="2025-11-05T00:14:36.694057424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 00:14:36.839812 containerd[1625]: time="2025-11-05T00:14:36.839674992Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:14:36.841167 containerd[1625]: time="2025-11-05T00:14:36.841079978Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 00:14:36.841895 containerd[1625]: time="2025-11-05T00:14:36.841213202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 00:14:36.841969 kubelet[2883]: E1105 00:14:36.841519 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 00:14:36.841969 kubelet[2883]: E1105 00:14:36.841584 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 00:14:36.841969 kubelet[2883]: E1105 00:14:36.841764 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l2899,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-666f7c64f9-pjzbv_calico-system(dca6511f-77a2-4cca-9f19-2aca1b8d75e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 00:14:36.843695 kubelet[2883]: E1105 00:14:36.843632 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-666f7c64f9-pjzbv" podUID="dca6511f-77a2-4cca-9f19-2aca1b8d75e8" Nov 5 00:14:40.695815 kubelet[2883]: E1105 00:14:40.695642 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c555f98cb-r98t2" podUID="2078fd5e-a067-4d0f-9d6a-ce64f9873547" Nov 5 00:14:43.692298 kubelet[2883]: E1105 00:14:43.691639 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kf7jk" podUID="6ee5090e-a223-462e-845a-5c7f9446afa1" Nov 5 00:14:44.657710 kubelet[2883]: E1105 00:14:44.657654 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:14:45.328641 containerd[1625]: time="2025-11-05T00:14:45.326885517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c6c29fc2a340526bb8f3f24aa54b5baeec35f2ea323ccdf3abf8453185a6328\" 
id:\"ce6e47383416c2677eef06fdb9181d101aacd7daad091a1b8dfd0aa5a539316f\" pid:4865 exited_at:{seconds:1762301685 nanos:324185176}" Nov 5 00:14:45.340481 kubelet[2883]: E1105 00:14:45.340416 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:14:45.695093 kubelet[2883]: E1105 00:14:45.693535 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-9tw6k" podUID="78cc6732-2ab7-4966-83c9-5b3b3e112a51" Nov 5 00:14:45.954361 containerd[1625]: time="2025-11-05T00:14:45.953785815Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c6c29fc2a340526bb8f3f24aa54b5baeec35f2ea323ccdf3abf8453185a6328\" id:\"e43d0b6bb0343e675b4e69356d07cb721ba439f390d809ec16f56b46b864d526\" pid:4890 exited_at:{seconds:1762301685 nanos:950414846}" Nov 5 00:14:47.700891 kubelet[2883]: E1105 00:14:47.700343 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-78sgr" podUID="346c021e-f948-4f90-b480-e046118d7005" 
Nov 5 00:14:48.691752 kubelet[2883]: E1105 00:14:48.690964 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:14:49.695150 kubelet[2883]: E1105 00:14:49.695011 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhc65" podUID="14de6d4c-7243-4b75-9a89-9c47bcb946c9" Nov 5 00:14:52.696821 kubelet[2883]: E1105 00:14:52.696077 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-666f7c64f9-pjzbv" podUID="dca6511f-77a2-4cca-9f19-2aca1b8d75e8" Nov 
5 00:14:53.694760 kubelet[2883]: E1105 00:14:53.692435 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:14:53.695921 containerd[1625]: time="2025-11-05T00:14:53.695701176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 00:14:54.038710 containerd[1625]: time="2025-11-05T00:14:54.038192687Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:14:54.039895 containerd[1625]: time="2025-11-05T00:14:54.039826272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 00:14:54.040296 containerd[1625]: time="2025-11-05T00:14:54.039846793Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 00:14:54.041252 kubelet[2883]: E1105 00:14:54.041006 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 00:14:54.041252 kubelet[2883]: E1105 00:14:54.041159 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 00:14:54.043490 kubelet[2883]: E1105 00:14:54.043272 2883 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1218690f0e2c4b6f966178251092a713,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k2g88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c555f98cb-r98t2_calico-system(2078fd5e-a067-4d0f-9d6a-ce64f9873547): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 00:14:54.046578 containerd[1625]: time="2025-11-05T00:14:54.046497256Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 00:14:54.192797 containerd[1625]: time="2025-11-05T00:14:54.192715743Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:14:54.193842 containerd[1625]: time="2025-11-05T00:14:54.193754055Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 00:14:54.194358 containerd[1625]: time="2025-11-05T00:14:54.193867777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 00:14:54.194732 kubelet[2883]: E1105 00:14:54.194603 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 00:14:54.195185 kubelet[2883]: E1105 00:14:54.195118 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 00:14:54.195670 kubelet[2883]: E1105 00:14:54.195507 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k2g88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c555f98cb-r98t2_calico-system(2078fd5e-a067-4d0f-9d6a-ce64f9873547): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 00:14:54.198365 kubelet[2883]: E1105 00:14:54.197944 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c555f98cb-r98t2" podUID="2078fd5e-a067-4d0f-9d6a-ce64f9873547" Nov 5 00:14:56.700656 kubelet[2883]: E1105 00:14:56.699161 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:14:57.695668 containerd[1625]: time="2025-11-05T00:14:57.695474046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 00:14:57.849043 containerd[1625]: time="2025-11-05T00:14:57.848751942Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:14:57.850356 containerd[1625]: time="2025-11-05T00:14:57.850318663Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 00:14:57.850602 containerd[1625]: time="2025-11-05T00:14:57.850428005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 00:14:57.850698 kubelet[2883]: E1105 00:14:57.850588 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 00:14:57.850698 kubelet[2883]: E1105 00:14:57.850661 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 00:14:57.851900 kubelet[2883]: E1105 00:14:57.850928 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lbcnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b875bb7d7-9tw6k_calico-apiserver(78cc6732-2ab7-4966-83c9-5b3b3e112a51): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 00:14:57.852559 kubelet[2883]: E1105 00:14:57.852038 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-9tw6k" podUID="78cc6732-2ab7-4966-83c9-5b3b3e112a51" Nov 5 00:14:58.701390 containerd[1625]: time="2025-11-05T00:14:58.701113423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 00:14:58.840796 containerd[1625]: time="2025-11-05T00:14:58.840592813Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:14:58.843219 containerd[1625]: time="2025-11-05T00:14:58.843159094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 00:14:58.844758 containerd[1625]: time="2025-11-05T00:14:58.843372948Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 00:14:58.845197 kubelet[2883]: E1105 00:14:58.844958 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 00:14:58.845840 kubelet[2883]: E1105 00:14:58.845688 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 00:14:58.849651 kubelet[2883]: E1105 00:14:58.849526 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5phnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubP
ath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kf7jk_calico-system(6ee5090e-a223-462e-845a-5c7f9446afa1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 00:14:58.851416 kubelet[2883]: E1105 00:14:58.851297 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kf7jk" podUID="6ee5090e-a223-462e-845a-5c7f9446afa1" Nov 5 00:15:00.698281 containerd[1625]: time="2025-11-05T00:15:00.698186643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 00:15:00.846506 containerd[1625]: time="2025-11-05T00:15:00.846422349Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:15:00.847883 containerd[1625]: time="2025-11-05T00:15:00.847716354Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 00:15:00.848334 containerd[1625]: time="2025-11-05T00:15:00.848189333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 00:15:00.848548 kubelet[2883]: E1105 00:15:00.848491 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 00:15:00.849487 kubelet[2883]: E1105 00:15:00.848561 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 00:15:00.849487 kubelet[2883]: E1105 00:15:00.848721 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wd4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b875bb7d7-78sgr_calico-apiserver(346c021e-f948-4f90-b480-e046118d7005): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 00:15:00.850411 kubelet[2883]: E1105 00:15:00.850316 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-78sgr" podUID="346c021e-f948-4f90-b480-e046118d7005" Nov 5 00:15:01.693883 containerd[1625]: time="2025-11-05T00:15:01.693734799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 00:15:01.825175 containerd[1625]: 
time="2025-11-05T00:15:01.824691829Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:15:01.828550 containerd[1625]: time="2025-11-05T00:15:01.827661084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 00:15:01.828550 containerd[1625]: time="2025-11-05T00:15:01.828402528Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 00:15:01.829046 kubelet[2883]: E1105 00:15:01.829003 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 00:15:01.829180 kubelet[2883]: E1105 00:15:01.829158 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 00:15:01.830436 kubelet[2883]: E1105 00:15:01.829464 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cpsls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rhc65_calico-system(14de6d4c-7243-4b75-9a89-9c47bcb946c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 00:15:01.832308 containerd[1625]: time="2025-11-05T00:15:01.831979744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 00:15:01.974832 containerd[1625]: time="2025-11-05T00:15:01.974140482Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:15:01.977910 containerd[1625]: time="2025-11-05T00:15:01.976763941Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 00:15:01.977910 containerd[1625]: time="2025-11-05T00:15:01.976821912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 00:15:01.978121 kubelet[2883]: E1105 00:15:01.977745 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 00:15:01.978121 kubelet[2883]: E1105 00:15:01.977918 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 00:15:01.978976 kubelet[2883]: E1105 
00:15:01.978701 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cpsls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-rhc65_calico-system(14de6d4c-7243-4b75-9a89-9c47bcb946c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 00:15:01.980399 kubelet[2883]: E1105 00:15:01.980335 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhc65" podUID="14de6d4c-7243-4b75-9a89-9c47bcb946c9" Nov 5 00:15:06.693282 containerd[1625]: time="2025-11-05T00:15:06.692359436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 00:15:06.837259 containerd[1625]: time="2025-11-05T00:15:06.837109074Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:15:06.839002 containerd[1625]: time="2025-11-05T00:15:06.838815173Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found" Nov 5 00:15:06.839002 containerd[1625]: time="2025-11-05T00:15:06.838918804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 00:15:06.840841 kubelet[2883]: E1105 00:15:06.840637 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 00:15:06.842068 kubelet[2883]: E1105 00:15:06.841689 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 00:15:06.845258 kubelet[2883]: E1105 00:15:06.843395 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l2899,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-666f7c64f9-pjzbv_calico-system(dca6511f-77a2-4cca-9f19-2aca1b8d75e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 00:15:06.845258 kubelet[2883]: E1105 00:15:06.844695 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-666f7c64f9-pjzbv" podUID="dca6511f-77a2-4cca-9f19-2aca1b8d75e8" Nov 5 00:15:08.695802 kubelet[2883]: E1105 00:15:08.695684 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c555f98cb-r98t2" podUID="2078fd5e-a067-4d0f-9d6a-ce64f9873547" Nov 5 00:15:10.703308 kubelet[2883]: E1105 00:15:10.702835 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-9tw6k" podUID="78cc6732-2ab7-4966-83c9-5b3b3e112a51" Nov 5 00:15:11.691913 kubelet[2883]: E1105 00:15:11.691858 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-78sgr" podUID="346c021e-f948-4f90-b480-e046118d7005" Nov 5 00:15:13.690293 kubelet[2883]: E1105 00:15:13.690205 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:15:13.696460 kubelet[2883]: E1105 00:15:13.696105 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kf7jk" podUID="6ee5090e-a223-462e-845a-5c7f9446afa1" Nov 5 00:15:14.694319 kubelet[2883]: E1105 00:15:14.694206 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-rhc65" podUID="14de6d4c-7243-4b75-9a89-9c47bcb946c9" Nov 5 00:15:15.829163 containerd[1625]: time="2025-11-05T00:15:15.829093194Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c6c29fc2a340526bb8f3f24aa54b5baeec35f2ea323ccdf3abf8453185a6328\" id:\"406ba15062d5533604751b76a35a5272846fc817b25ac6a2da74c6dcfdc1fb09\" pid:4928 exited_at:{seconds:1762301715 nanos:827681064}" Nov 5 00:15:19.692021 kubelet[2883]: E1105 00:15:19.691800 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-666f7c64f9-pjzbv" podUID="dca6511f-77a2-4cca-9f19-2aca1b8d75e8" Nov 5 00:15:21.694439 kubelet[2883]: E1105 00:15:21.693447 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-9tw6k" podUID="78cc6732-2ab7-4966-83c9-5b3b3e112a51" Nov 5 00:15:22.698012 kubelet[2883]: E1105 00:15:22.697881 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-78sgr" podUID="346c021e-f948-4f90-b480-e046118d7005" Nov 5 00:15:22.702432 kubelet[2883]: E1105 00:15:22.699124 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c555f98cb-r98t2" podUID="2078fd5e-a067-4d0f-9d6a-ce64f9873547" Nov 5 00:15:24.696636 kubelet[2883]: E1105 00:15:24.695433 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:15:24.697860 kubelet[2883]: E1105 00:15:24.697768 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kf7jk" podUID="6ee5090e-a223-462e-845a-5c7f9446afa1" Nov 5 00:15:26.701349 kubelet[2883]: E1105 00:15:26.701056 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhc65" podUID="14de6d4c-7243-4b75-9a89-9c47bcb946c9" Nov 5 00:15:34.697141 kubelet[2883]: E1105 00:15:34.696737 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-78sgr" 
podUID="346c021e-f948-4f90-b480-e046118d7005" Nov 5 00:15:34.700566 kubelet[2883]: E1105 00:15:34.696694 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-666f7c64f9-pjzbv" podUID="dca6511f-77a2-4cca-9f19-2aca1b8d75e8" Nov 5 00:15:35.694535 kubelet[2883]: E1105 00:15:35.694472 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-9tw6k" podUID="78cc6732-2ab7-4966-83c9-5b3b3e112a51" Nov 5 00:15:35.697174 containerd[1625]: time="2025-11-05T00:15:35.696492184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 00:15:35.846519 containerd[1625]: time="2025-11-05T00:15:35.846451013Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:15:35.848309 containerd[1625]: time="2025-11-05T00:15:35.848271393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 00:15:35.848537 containerd[1625]: time="2025-11-05T00:15:35.848450555Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 00:15:35.849250 kubelet[2883]: E1105 00:15:35.849106 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 00:15:35.849725 kubelet[2883]: E1105 00:15:35.849315 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 00:15:35.850258 kubelet[2883]: E1105 00:15:35.849909 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1218690f0e2c4b6f966178251092a713,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k2g88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c555f98cb-r98t2_calico-system(2078fd5e-a067-4d0f-9d6a-ce64f9873547): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 00:15:35.853067 containerd[1625]: time="2025-11-05T00:15:35.853025815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 
00:15:35.993343 containerd[1625]: time="2025-11-05T00:15:35.993151367Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:15:35.995078 containerd[1625]: time="2025-11-05T00:15:35.995015907Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 00:15:35.995213 containerd[1625]: time="2025-11-05T00:15:35.995089068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 00:15:35.995630 kubelet[2883]: E1105 00:15:35.995534 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 00:15:35.995707 kubelet[2883]: E1105 00:15:35.995673 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 00:15:35.997137 kubelet[2883]: E1105 00:15:35.997068 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k2g88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c555f98cb-r98t2_calico-system(2078fd5e-a067-4d0f-9d6a-ce64f9873547): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 00:15:35.998796 kubelet[2883]: E1105 00:15:35.998725 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c555f98cb-r98t2" podUID="2078fd5e-a067-4d0f-9d6a-ce64f9873547" Nov 5 00:15:38.716868 kubelet[2883]: E1105 00:15:38.715093 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kf7jk" podUID="6ee5090e-a223-462e-845a-5c7f9446afa1" Nov 5 00:15:41.695364 kubelet[2883]: E1105 00:15:41.694616 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:15:41.696750 kubelet[2883]: E1105 
00:15:41.696547 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhc65" podUID="14de6d4c-7243-4b75-9a89-9c47bcb946c9" Nov 5 00:15:45.543776 containerd[1625]: time="2025-11-05T00:15:45.543002400Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c6c29fc2a340526bb8f3f24aa54b5baeec35f2ea323ccdf3abf8453185a6328\" id:\"2a7b8cb11f37683632dbf8e374fa59505c9c314dbaee43521604a1f9847bb9cf\" pid:4969 exited_at:{seconds:1762301745 nanos:538564236}" Nov 5 00:15:45.693522 containerd[1625]: time="2025-11-05T00:15:45.692518173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 00:15:45.834841 containerd[1625]: time="2025-11-05T00:15:45.834609553Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:15:45.838254 containerd[1625]: time="2025-11-05T00:15:45.837591902Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 00:15:45.838460 containerd[1625]: time="2025-11-05T00:15:45.838435610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 00:15:45.839203 kubelet[2883]: E1105 00:15:45.838835 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 00:15:45.839701 kubelet[2883]: E1105 00:15:45.839281 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 00:15:45.839785 kubelet[2883]: E1105 00:15:45.839706 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wd4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b875bb7d7-78sgr_calico-apiserver(346c021e-f948-4f90-b480-e046118d7005): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 00:15:45.841817 kubelet[2883]: E1105 00:15:45.841721 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-78sgr" podUID="346c021e-f948-4f90-b480-e046118d7005" Nov 5 00:15:47.694292 containerd[1625]: time="2025-11-05T00:15:47.693985723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 00:15:47.838007 containerd[1625]: time="2025-11-05T00:15:47.837903495Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:15:47.838912 containerd[1625]: time="2025-11-05T00:15:47.838866295Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 00:15:47.839020 containerd[1625]: time="2025-11-05T00:15:47.838956616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 00:15:47.839356 kubelet[2883]: E1105 00:15:47.839186 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 00:15:47.840958 kubelet[2883]: E1105 00:15:47.839345 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 00:15:47.840958 kubelet[2883]: E1105 00:15:47.840385 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lbcnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b875bb7d7-9tw6k_calico-apiserver(78cc6732-2ab7-4966-83c9-5b3b3e112a51): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 00:15:47.841825 kubelet[2883]: E1105 00:15:47.841639 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-9tw6k" podUID="78cc6732-2ab7-4966-83c9-5b3b3e112a51" Nov 5 00:15:48.703245 kubelet[2883]: E1105 00:15:48.703090 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c555f98cb-r98t2" podUID="2078fd5e-a067-4d0f-9d6a-ce64f9873547" Nov 5 00:15:49.694922 containerd[1625]: time="2025-11-05T00:15:49.694827025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 00:15:49.832049 containerd[1625]: time="2025-11-05T00:15:49.831709937Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:15:49.833203 containerd[1625]: time="2025-11-05T00:15:49.833146741Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 00:15:49.833416 containerd[1625]: time="2025-11-05T00:15:49.833282502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 00:15:49.834987 kubelet[2883]: E1105 00:15:49.834923 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 00:15:49.838980 kubelet[2883]: E1105 00:15:49.836405 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 00:15:49.838980 kubelet[2883]: E1105 00:15:49.837826 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l2899,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/ser
viceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-666f7c64f9-pjzbv_calico-system(dca6511f-77a2-4cca-9f19-2aca1b8d75e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 00:15:49.840171 kubelet[2883]: E1105 00:15:49.839982 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-666f7c64f9-pjzbv" podUID="dca6511f-77a2-4cca-9f19-2aca1b8d75e8" Nov 5 00:15:50.613083 systemd[1]: Started sshd@9-172.232.14.37:22-139.178.68.195:43288.service - OpenSSH per-connection server daemon (139.178.68.195:43288). Nov 5 00:15:50.695168 kubelet[2883]: E1105 00:15:50.694931 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:15:50.698303 kubelet[2883]: E1105 00:15:50.698276 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Nov 5 00:15:51.048630 sshd[4984]: Accepted publickey for core from 139.178.68.195 port 43288 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA Nov 5 00:15:51.052661 sshd-session[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:15:51.066064 systemd-logind[1595]: New session 10 of user core. Nov 5 00:15:51.074881 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 5 00:15:51.581960 sshd[4987]: Connection closed by 139.178.68.195 port 43288 Nov 5 00:15:51.583183 sshd-session[4984]: pam_unix(sshd:session): session closed for user core Nov 5 00:15:51.594972 systemd[1]: sshd@9-172.232.14.37:22-139.178.68.195:43288.service: Deactivated successfully. Nov 5 00:15:51.602964 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 00:15:51.606182 systemd-logind[1595]: Session 10 logged out. Waiting for processes to exit. Nov 5 00:15:51.609796 systemd-logind[1595]: Removed session 10. 
Nov 5 00:15:53.693462 containerd[1625]: time="2025-11-05T00:15:53.693110095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 00:15:53.829552 containerd[1625]: time="2025-11-05T00:15:53.829092968Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 00:15:53.832168 containerd[1625]: time="2025-11-05T00:15:53.831505890Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 00:15:53.832168 containerd[1625]: time="2025-11-05T00:15:53.831624321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 00:15:53.832807 kubelet[2883]: E1105 00:15:53.832413 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 00:15:53.832807 kubelet[2883]: E1105 00:15:53.832469 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 00:15:53.835933 kubelet[2883]: E1105 00:15:53.835207 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5phnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kf7jk_calico-system(6ee5090e-a223-462e-845a-5c7f9446afa1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 00:15:53.836711 kubelet[2883]: E1105 00:15:53.836592 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kf7jk" podUID="6ee5090e-a223-462e-845a-5c7f9446afa1" Nov 5 00:15:54.702702 containerd[1625]: time="2025-11-05T00:15:54.702557970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 00:15:54.853727 containerd[1625]: time="2025-11-05T00:15:54.852920164Z" level=info msg="fetch failed after status: 404 Not 
Found" host=ghcr.io
Nov 5 00:15:54.854431 containerd[1625]: time="2025-11-05T00:15:54.854370447Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 5 00:15:54.854532 containerd[1625]: time="2025-11-05T00:15:54.854505688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 5 00:15:54.854961 kubelet[2883]: E1105 00:15:54.854870 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 5 00:15:54.855582 kubelet[2883]: E1105 00:15:54.854975 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 5 00:15:54.855582 kubelet[2883]: E1105 00:15:54.855209 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cpsls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rhc65_calico-system(14de6d4c-7243-4b75-9a89-9c47bcb946c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 5 00:15:54.860529 containerd[1625]: time="2025-11-05T00:15:54.860497303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 5 00:15:54.994663 containerd[1625]: time="2025-11-05T00:15:54.993026114Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 00:15:54.995827 containerd[1625]: time="2025-11-05T00:15:54.995641028Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 5 00:15:54.996912 containerd[1625]: time="2025-11-05T00:15:54.996591747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 5 00:15:54.997487 kubelet[2883]: E1105 00:15:54.997165 2883 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 5 00:15:54.997602 kubelet[2883]: E1105 00:15:54.997493 2883 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 5 00:15:54.998268 kubelet[2883]: E1105 00:15:54.998037 2883 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cpsls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rhc65_calico-system(14de6d4c-7243-4b75-9a89-9c47bcb946c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 5 00:15:54.999931 kubelet[2883]: E1105 00:15:54.999405 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhc65" podUID="14de6d4c-7243-4b75-9a89-9c47bcb946c9"
Nov 5 00:15:56.668704 systemd[1]: Started sshd@10-172.232.14.37:22-139.178.68.195:41166.service - OpenSSH per-connection server daemon (139.178.68.195:41166).
Nov 5 00:15:57.085369 sshd[5002]: Accepted publickey for core from 139.178.68.195 port 41166 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:15:57.090257 sshd-session[5002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:15:57.114580 systemd-logind[1595]: New session 11 of user core.
Nov 5 00:15:57.119470 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 5 00:15:57.474312 sshd[5005]: Connection closed by 139.178.68.195 port 41166
Nov 5 00:15:57.475932 sshd-session[5002]: pam_unix(sshd:session): session closed for user core
Nov 5 00:15:57.485734 systemd-logind[1595]: Session 11 logged out. Waiting for processes to exit.
Nov 5 00:15:57.488898 systemd[1]: sshd@10-172.232.14.37:22-139.178.68.195:41166.service: Deactivated successfully.
Nov 5 00:15:57.496625 systemd[1]: session-11.scope: Deactivated successfully.
Nov 5 00:15:57.499850 systemd-logind[1595]: Removed session 11.
Nov 5 00:15:57.692946 kubelet[2883]: E1105 00:15:57.692842 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Nov 5 00:15:59.693726 kubelet[2883]: E1105 00:15:59.692562 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-9tw6k" podUID="78cc6732-2ab7-4966-83c9-5b3b3e112a51"
Nov 5 00:15:59.696170 kubelet[2883]: E1105 00:15:59.694839 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-78sgr" podUID="346c021e-f948-4f90-b480-e046118d7005"
Nov 5 00:16:01.690827 kubelet[2883]: E1105 00:16:01.690316 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Nov 5 00:16:02.545933 systemd[1]: Started sshd@11-172.232.14.37:22-139.178.68.195:41182.service - OpenSSH per-connection server daemon (139.178.68.195:41182).
Nov 5 00:16:02.922893 sshd[5042]: Accepted publickey for core from 139.178.68.195 port 41182 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:16:02.925007 sshd-session[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:16:02.935248 systemd-logind[1595]: New session 12 of user core.
Nov 5 00:16:02.944487 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 5 00:16:03.328458 sshd[5045]: Connection closed by 139.178.68.195 port 41182
Nov 5 00:16:03.331519 sshd-session[5042]: pam_unix(sshd:session): session closed for user core
Nov 5 00:16:03.338721 systemd[1]: sshd@11-172.232.14.37:22-139.178.68.195:41182.service: Deactivated successfully.
Nov 5 00:16:03.342686 systemd[1]: session-12.scope: Deactivated successfully.
Nov 5 00:16:03.344835 systemd-logind[1595]: Session 12 logged out. Waiting for processes to exit.
Nov 5 00:16:03.348790 systemd-logind[1595]: Removed session 12.
Nov 5 00:16:03.394915 systemd[1]: Started sshd@12-172.232.14.37:22-139.178.68.195:45132.service - OpenSSH per-connection server daemon (139.178.68.195:45132).
Nov 5 00:16:03.699404 kubelet[2883]: E1105 00:16:03.696464 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c555f98cb-r98t2" podUID="2078fd5e-a067-4d0f-9d6a-ce64f9873547"
Nov 5 00:16:03.761286 sshd[5058]: Accepted publickey for core from 139.178.68.195 port 45132 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:16:03.762367 sshd-session[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:16:03.773524 systemd-logind[1595]: New session 13 of user core.
Nov 5 00:16:03.778443 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 5 00:16:04.195144 sshd[5061]: Connection closed by 139.178.68.195 port 45132
Nov 5 00:16:04.198675 sshd-session[5058]: pam_unix(sshd:session): session closed for user core
Nov 5 00:16:04.207495 systemd-logind[1595]: Session 13 logged out. Waiting for processes to exit.
Nov 5 00:16:04.208196 systemd[1]: sshd@12-172.232.14.37:22-139.178.68.195:45132.service: Deactivated successfully.
Nov 5 00:16:04.213791 systemd[1]: session-13.scope: Deactivated successfully.
Nov 5 00:16:04.220177 systemd-logind[1595]: Removed session 13.
Nov 5 00:16:04.261502 systemd[1]: Started sshd@13-172.232.14.37:22-139.178.68.195:45136.service - OpenSSH per-connection server daemon (139.178.68.195:45136).
Nov 5 00:16:04.620162 sshd[5071]: Accepted publickey for core from 139.178.68.195 port 45136 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:16:04.623974 sshd-session[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:16:04.635352 systemd-logind[1595]: New session 14 of user core.
Nov 5 00:16:04.644216 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 5 00:16:04.693356 kubelet[2883]: E1105 00:16:04.693276 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-666f7c64f9-pjzbv" podUID="dca6511f-77a2-4cca-9f19-2aca1b8d75e8"
Nov 5 00:16:04.696213 kubelet[2883]: E1105 00:16:04.695381 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kf7jk" podUID="6ee5090e-a223-462e-845a-5c7f9446afa1"
Nov 5 00:16:05.018401 sshd[5074]: Connection closed by 139.178.68.195 port 45136
Nov 5 00:16:05.019619 sshd-session[5071]: pam_unix(sshd:session): session closed for user core
Nov 5 00:16:05.026315 systemd[1]: sshd@13-172.232.14.37:22-139.178.68.195:45136.service: Deactivated successfully.
Nov 5 00:16:05.034755 systemd[1]: session-14.scope: Deactivated successfully.
Nov 5 00:16:05.040609 systemd-logind[1595]: Session 14 logged out. Waiting for processes to exit.
Nov 5 00:16:05.042980 systemd-logind[1595]: Removed session 14.
Nov 5 00:16:05.694256 kubelet[2883]: E1105 00:16:05.694154 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhc65" podUID="14de6d4c-7243-4b75-9a89-9c47bcb946c9"
Nov 5 00:16:10.085518 systemd[1]: Started sshd@14-172.232.14.37:22-139.178.68.195:45144.service - OpenSSH per-connection server daemon (139.178.68.195:45144).
Nov 5 00:16:10.437572 sshd[5086]: Accepted publickey for core from 139.178.68.195 port 45144 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:16:10.439884 sshd-session[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:16:10.448138 systemd-logind[1595]: New session 15 of user core.
Nov 5 00:16:10.452524 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 5 00:16:10.798409 sshd[5089]: Connection closed by 139.178.68.195 port 45144
Nov 5 00:16:10.802085 sshd-session[5086]: pam_unix(sshd:session): session closed for user core
Nov 5 00:16:10.811475 systemd[1]: sshd@14-172.232.14.37:22-139.178.68.195:45144.service: Deactivated successfully.
Nov 5 00:16:10.816634 systemd[1]: session-15.scope: Deactivated successfully.
Nov 5 00:16:10.818692 systemd-logind[1595]: Session 15 logged out. Waiting for processes to exit.
Nov 5 00:16:10.820073 systemd-logind[1595]: Removed session 15.
Nov 5 00:16:10.867496 systemd[1]: Started sshd@15-172.232.14.37:22-139.178.68.195:45152.service - OpenSSH per-connection server daemon (139.178.68.195:45152).
Nov 5 00:16:11.241801 sshd[5102]: Accepted publickey for core from 139.178.68.195 port 45152 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:16:11.244436 sshd-session[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:16:11.252372 systemd-logind[1595]: New session 16 of user core.
Nov 5 00:16:11.259574 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 5 00:16:11.692824 kubelet[2883]: E1105 00:16:11.692689 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-78sgr" podUID="346c021e-f948-4f90-b480-e046118d7005"
Nov 5 00:16:11.699444 kubelet[2883]: E1105 00:16:11.698865 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-9tw6k" podUID="78cc6732-2ab7-4966-83c9-5b3b3e112a51"
Nov 5 00:16:11.838293 sshd[5105]: Connection closed by 139.178.68.195 port 45152
Nov 5 00:16:11.839465 sshd-session[5102]: pam_unix(sshd:session): session closed for user core
Nov 5 00:16:11.846416 systemd[1]: sshd@15-172.232.14.37:22-139.178.68.195:45152.service: Deactivated successfully.
Nov 5 00:16:11.850040 systemd[1]: session-16.scope: Deactivated successfully.
Nov 5 00:16:11.853012 systemd-logind[1595]: Session 16 logged out. Waiting for processes to exit.
Nov 5 00:16:11.855908 systemd-logind[1595]: Removed session 16.
Nov 5 00:16:11.898397 systemd[1]: Started sshd@16-172.232.14.37:22-139.178.68.195:45162.service - OpenSSH per-connection server daemon (139.178.68.195:45162).
Nov 5 00:16:12.257510 sshd[5115]: Accepted publickey for core from 139.178.68.195 port 45162 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:16:12.262100 sshd-session[5115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:16:12.269885 systemd-logind[1595]: New session 17 of user core.
Nov 5 00:16:12.278428 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 5 00:16:13.511384 sshd[5118]: Connection closed by 139.178.68.195 port 45162
Nov 5 00:16:13.513763 sshd-session[5115]: pam_unix(sshd:session): session closed for user core
Nov 5 00:16:13.524826 systemd[1]: sshd@16-172.232.14.37:22-139.178.68.195:45162.service: Deactivated successfully.
Nov 5 00:16:13.530613 systemd[1]: session-17.scope: Deactivated successfully.
Nov 5 00:16:13.533289 systemd-logind[1595]: Session 17 logged out. Waiting for processes to exit.
Nov 5 00:16:13.537365 systemd-logind[1595]: Removed session 17.
Nov 5 00:16:13.578543 systemd[1]: Started sshd@17-172.232.14.37:22-139.178.68.195:50530.service - OpenSSH per-connection server daemon (139.178.68.195:50530).
Nov 5 00:16:13.957362 sshd[5136]: Accepted publickey for core from 139.178.68.195 port 50530 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:16:13.961627 sshd-session[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:16:13.970807 systemd-logind[1595]: New session 18 of user core.
Nov 5 00:16:13.977844 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 5 00:16:14.515276 sshd[5140]: Connection closed by 139.178.68.195 port 50530
Nov 5 00:16:14.517475 sshd-session[5136]: pam_unix(sshd:session): session closed for user core
Nov 5 00:16:14.528023 systemd[1]: sshd@17-172.232.14.37:22-139.178.68.195:50530.service: Deactivated successfully.
Nov 5 00:16:14.536582 systemd[1]: session-18.scope: Deactivated successfully.
Nov 5 00:16:14.541321 systemd-logind[1595]: Session 18 logged out. Waiting for processes to exit.
Nov 5 00:16:14.545360 systemd-logind[1595]: Removed session 18.
Nov 5 00:16:14.579127 systemd[1]: Started sshd@18-172.232.14.37:22-139.178.68.195:50544.service - OpenSSH per-connection server daemon (139.178.68.195:50544).
Nov 5 00:16:14.951954 sshd[5150]: Accepted publickey for core from 139.178.68.195 port 50544 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:16:14.953487 sshd-session[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:16:14.965743 systemd-logind[1595]: New session 19 of user core.
Nov 5 00:16:14.971045 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 5 00:16:15.365422 sshd[5153]: Connection closed by 139.178.68.195 port 50544
Nov 5 00:16:15.370091 sshd-session[5150]: pam_unix(sshd:session): session closed for user core
Nov 5 00:16:15.385693 systemd-logind[1595]: Session 19 logged out. Waiting for processes to exit.
Nov 5 00:16:15.386880 systemd[1]: sshd@18-172.232.14.37:22-139.178.68.195:50544.service: Deactivated successfully.
Nov 5 00:16:15.396256 systemd[1]: session-19.scope: Deactivated successfully.
Nov 5 00:16:15.400956 systemd-logind[1595]: Removed session 19.
Nov 5 00:16:15.579077 containerd[1625]: time="2025-11-05T00:16:15.576171369Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c6c29fc2a340526bb8f3f24aa54b5baeec35f2ea323ccdf3abf8453185a6328\" id:\"b048a52708498ecfc3065e0dffd226e571a5b4f01a70afde7621d0d3544e9113\" pid:5175 exited_at:{seconds:1762301775 nanos:573223305}"
Nov 5 00:16:15.690992 kubelet[2883]: E1105 00:16:15.690823 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Nov 5 00:16:16.698913 kubelet[2883]: E1105 00:16:16.697798 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-666f7c64f9-pjzbv" podUID="dca6511f-77a2-4cca-9f19-2aca1b8d75e8"
Nov 5 00:16:18.694681 kubelet[2883]: E1105 00:16:18.694546 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kf7jk" podUID="6ee5090e-a223-462e-845a-5c7f9446afa1"
Nov 5 00:16:18.699528 kubelet[2883]: E1105 00:16:18.698937 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c555f98cb-r98t2" podUID="2078fd5e-a067-4d0f-9d6a-ce64f9873547"
Nov 5 00:16:19.694678 kubelet[2883]: E1105 00:16:19.694516 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rhc65" podUID="14de6d4c-7243-4b75-9a89-9c47bcb946c9"
Nov 5 00:16:20.435651 systemd[1]: Started sshd@19-172.232.14.37:22-139.178.68.195:50560.service - OpenSSH per-connection server daemon (139.178.68.195:50560).
Nov 5 00:16:20.802681 sshd[5197]: Accepted publickey for core from 139.178.68.195 port 50560 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:16:20.805118 sshd-session[5197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:16:20.817649 systemd-logind[1595]: New session 20 of user core.
Nov 5 00:16:20.825544 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 5 00:16:21.222577 sshd[5200]: Connection closed by 139.178.68.195 port 50560
Nov 5 00:16:21.222575 sshd-session[5197]: pam_unix(sshd:session): session closed for user core
Nov 5 00:16:21.229913 systemd[1]: sshd@19-172.232.14.37:22-139.178.68.195:50560.service: Deactivated successfully.
Nov 5 00:16:21.233980 systemd[1]: session-20.scope: Deactivated successfully.
Nov 5 00:16:21.238759 systemd-logind[1595]: Session 20 logged out. Waiting for processes to exit.
Nov 5 00:16:21.241995 systemd-logind[1595]: Removed session 20.
Nov 5 00:16:23.690452 kubelet[2883]: E1105 00:16:23.690398 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Nov 5 00:16:23.695653 kubelet[2883]: E1105 00:16:23.695592 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-78sgr" podUID="346c021e-f948-4f90-b480-e046118d7005"
Nov 5 00:16:24.694995 kubelet[2883]: E1105 00:16:24.694413 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b875bb7d7-9tw6k" podUID="78cc6732-2ab7-4966-83c9-5b3b3e112a51"
Nov 5 00:16:26.297526 systemd[1]: Started sshd@20-172.232.14.37:22-139.178.68.195:45210.service - OpenSSH per-connection server daemon (139.178.68.195:45210).
Nov 5 00:16:26.656453 sshd[5214]: Accepted publickey for core from 139.178.68.195 port 45210 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:16:26.658325 sshd-session[5214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:16:26.665297 systemd-logind[1595]: New session 21 of user core.
Nov 5 00:16:26.675613 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 5 00:16:27.003374 sshd[5217]: Connection closed by 139.178.68.195 port 45210
Nov 5 00:16:27.005645 sshd-session[5214]: pam_unix(sshd:session): session closed for user core
Nov 5 00:16:27.012425 systemd[1]: sshd@20-172.232.14.37:22-139.178.68.195:45210.service: Deactivated successfully.
Nov 5 00:16:27.016211 systemd[1]: session-21.scope: Deactivated successfully.
Nov 5 00:16:27.018788 systemd-logind[1595]: Session 21 logged out. Waiting for processes to exit.
Nov 5 00:16:27.022786 systemd-logind[1595]: Removed session 21.