Nov 24 00:08:48.902208 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Nov 23 20:49:05 -00 2025 Nov 24 00:08:48.902230 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14 Nov 24 00:08:48.902239 kernel: BIOS-provided physical RAM map: Nov 24 00:08:48.902245 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable Nov 24 00:08:48.902251 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved Nov 24 00:08:48.902256 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 24 00:08:48.902265 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Nov 24 00:08:48.902271 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Nov 24 00:08:48.902277 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Nov 24 00:08:48.902283 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Nov 24 00:08:48.902289 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 24 00:08:48.902295 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 24 00:08:48.902301 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Nov 24 00:08:48.902307 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 24 00:08:48.902316 kernel: NX (Execute Disable) protection: active Nov 24 00:08:48.902322 kernel: APIC: Static calls initialized Nov 24 00:08:48.902329 kernel: SMBIOS 2.8 present. 
Nov 24 00:08:48.902335 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified Nov 24 00:08:48.902341 kernel: DMI: Memory slots populated: 1/1 Nov 24 00:08:48.902347 kernel: Hypervisor detected: KVM Nov 24 00:08:48.902356 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Nov 24 00:08:48.902362 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 24 00:08:48.902368 kernel: kvm-clock: using sched offset of 7084774750 cycles Nov 24 00:08:48.902374 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 24 00:08:48.902381 kernel: tsc: Detected 2000.000 MHz processor Nov 24 00:08:48.902388 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 24 00:08:48.902395 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 24 00:08:48.902401 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 Nov 24 00:08:48.902408 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Nov 24 00:08:48.902415 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 24 00:08:48.902423 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Nov 24 00:08:48.902430 kernel: Using GB pages for direct mapping Nov 24 00:08:48.902436 kernel: ACPI: Early table checksum verification disabled Nov 24 00:08:48.902442 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS ) Nov 24 00:08:48.902449 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 24 00:08:48.902455 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 24 00:08:48.902488 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 24 00:08:48.902495 kernel: ACPI: FACS 0x000000007FFE0000 000040 Nov 24 00:08:48.902502 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 24 00:08:48.902511 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 24 00:08:48.902521 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 24 00:08:48.902527 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 24 00:08:48.902534 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] Nov 24 00:08:48.902541 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] Nov 24 00:08:48.902550 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Nov 24 00:08:48.902557 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] Nov 24 00:08:48.902563 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] Nov 24 00:08:48.902570 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] Nov 24 00:08:48.902577 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] Nov 24 00:08:48.902583 kernel: No NUMA configuration found Nov 24 00:08:48.902590 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Nov 24 00:08:48.902597 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff] Nov 24 00:08:48.902603 kernel: Zone ranges: Nov 24 00:08:48.902612 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 24 00:08:48.902619 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 24 00:08:48.902625 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Nov 24 00:08:48.902632 kernel: Device empty Nov 24 00:08:48.902639 kernel: Movable zone start for each node Nov 24 
00:08:48.902646 kernel: Early memory node ranges Nov 24 00:08:48.902652 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 24 00:08:48.902659 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Nov 24 00:08:48.902665 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Nov 24 00:08:48.902672 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Nov 24 00:08:48.902681 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 24 00:08:48.902687 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 24 00:08:48.902694 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Nov 24 00:08:48.902701 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 24 00:08:48.902708 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 24 00:08:48.902714 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 24 00:08:48.902721 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 24 00:08:48.902728 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 24 00:08:48.902735 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 24 00:08:48.902743 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 24 00:08:48.902750 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 24 00:08:48.902757 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 24 00:08:48.902763 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 24 00:08:48.902770 kernel: TSC deadline timer available Nov 24 00:08:48.902777 kernel: CPU topo: Max. logical packages: 1 Nov 24 00:08:48.902783 kernel: CPU topo: Max. logical dies: 1 Nov 24 00:08:48.902790 kernel: CPU topo: Max. dies per package: 1 Nov 24 00:08:48.902796 kernel: CPU topo: Max. threads per core: 1 Nov 24 00:08:48.902805 kernel: CPU topo: Num. cores per package: 2 Nov 24 00:08:48.902812 kernel: CPU topo: Num. 
threads per package: 2 Nov 24 00:08:48.902818 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Nov 24 00:08:48.902825 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 24 00:08:48.902832 kernel: kvm-guest: KVM setup pv remote TLB flush Nov 24 00:08:48.902838 kernel: kvm-guest: setup PV sched yield Nov 24 00:08:48.902845 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Nov 24 00:08:48.902852 kernel: Booting paravirtualized kernel on KVM Nov 24 00:08:48.902858 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 24 00:08:48.902867 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 24 00:08:48.902874 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Nov 24 00:08:48.902880 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Nov 24 00:08:48.902887 kernel: pcpu-alloc: [0] 0 1 Nov 24 00:08:48.902894 kernel: kvm-guest: PV spinlocks enabled Nov 24 00:08:48.902900 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 24 00:08:48.902908 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14 Nov 24 00:08:48.902915 kernel: random: crng init done Nov 24 00:08:48.902924 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 24 00:08:48.902931 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 24 00:08:48.902937 kernel: Fallback order for Node 0: 0 Nov 24 00:08:48.902944 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443 Nov 24 00:08:48.902951 kernel: Policy zone: Normal Nov 24 00:08:48.902957 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 24 00:08:48.902964 kernel: software IO TLB: area num 2. Nov 24 00:08:48.902971 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 24 00:08:48.902977 kernel: ftrace: allocating 40103 entries in 157 pages Nov 24 00:08:48.902986 kernel: ftrace: allocated 157 pages with 5 groups Nov 24 00:08:48.902993 kernel: Dynamic Preempt: voluntary Nov 24 00:08:48.902999 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 24 00:08:48.903007 kernel: rcu: RCU event tracing is enabled. Nov 24 00:08:48.903014 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 24 00:08:48.903021 kernel: Trampoline variant of Tasks RCU enabled. Nov 24 00:08:48.903028 kernel: Rude variant of Tasks RCU enabled. Nov 24 00:08:48.903034 kernel: Tracing variant of Tasks RCU enabled. Nov 24 00:08:48.903041 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 24 00:08:48.903048 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 24 00:08:48.903057 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 24 00:08:48.903070 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 24 00:08:48.903079 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Nov 24 00:08:48.903086 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 24 00:08:48.903093 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 24 00:08:48.903100 kernel: Console: colour VGA+ 80x25 Nov 24 00:08:48.903107 kernel: printk: legacy console [tty0] enabled Nov 24 00:08:48.903114 kernel: printk: legacy console [ttyS0] enabled Nov 24 00:08:48.903121 kernel: ACPI: Core revision 20240827 Nov 24 00:08:48.903130 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 24 00:08:48.903137 kernel: APIC: Switch to symmetric I/O mode setup Nov 24 00:08:48.903144 kernel: x2apic enabled Nov 24 00:08:48.903151 kernel: APIC: Switched APIC routing to: physical x2apic Nov 24 00:08:48.903158 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Nov 24 00:08:48.903165 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Nov 24 00:08:48.903172 kernel: kvm-guest: setup PV IPIs Nov 24 00:08:48.903181 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 24 00:08:48.903188 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns Nov 24 00:08:48.903195 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000) Nov 24 00:08:48.903202 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 24 00:08:48.903209 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 24 00:08:48.903216 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 24 00:08:48.903223 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 24 00:08:48.903230 kernel: Spectre V2 : Mitigation: Retpolines Nov 24 00:08:48.903237 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 24 00:08:48.903246 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Nov 24 00:08:48.903253 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 24 00:08:48.903260 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 24 00:08:48.903267 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Nov 24 00:08:48.903275 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Nov 24 00:08:48.903282 kernel: active return thunk: srso_alias_return_thunk Nov 24 00:08:48.903289 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Nov 24 00:08:48.903296 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Nov 24 00:08:48.903305 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Nov 24 00:08:48.903312 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 24 00:08:48.903319 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 24 00:08:48.903326 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 24 00:08:48.903333 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Nov 24 00:08:48.903340 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 24 00:08:48.903347 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Nov 24 00:08:48.903354 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. 
Nov 24 00:08:48.903361 kernel: Freeing SMP alternatives memory: 32K Nov 24 00:08:48.903370 kernel: pid_max: default: 32768 minimum: 301 Nov 24 00:08:48.903377 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 24 00:08:48.903383 kernel: landlock: Up and running. Nov 24 00:08:48.903390 kernel: SELinux: Initializing. Nov 24 00:08:48.903397 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 24 00:08:48.903404 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 24 00:08:48.903411 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Nov 24 00:08:48.903418 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 24 00:08:48.903425 kernel: ... version: 0 Nov 24 00:08:48.903434 kernel: ... bit width: 48 Nov 24 00:08:48.903441 kernel: ... generic registers: 6 Nov 24 00:08:48.903448 kernel: ... value mask: 0000ffffffffffff Nov 24 00:08:48.903455 kernel: ... max period: 00007fffffffffff Nov 24 00:08:48.903716 kernel: ... fixed-purpose events: 0 Nov 24 00:08:48.903727 kernel: ... event mask: 000000000000003f Nov 24 00:08:48.903734 kernel: signal: max sigframe size: 3376 Nov 24 00:08:48.903741 kernel: rcu: Hierarchical SRCU implementation. Nov 24 00:08:48.903749 kernel: rcu: Max phase no-delay instances is 400. Nov 24 00:08:48.903760 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 24 00:08:48.903767 kernel: smp: Bringing up secondary CPUs ... Nov 24 00:08:48.903774 kernel: smpboot: x86: Booting SMP configuration: Nov 24 00:08:48.903781 kernel: .... node #0, CPUs: #1 Nov 24 00:08:48.903788 kernel: smp: Brought up 1 node, 2 CPUs Nov 24 00:08:48.903795 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) Nov 24 00:08:48.903802 kernel: Memory: 3953616K/4193772K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46200K init, 2560K bss, 235480K reserved, 0K cma-reserved) Nov 24 00:08:48.903809 kernel: devtmpfs: initialized Nov 24 00:08:48.903816 kernel: x86/mm: Memory block size: 128MB Nov 24 00:08:48.903826 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 24 00:08:48.903833 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 24 00:08:48.903840 kernel: pinctrl core: initialized pinctrl subsystem Nov 24 00:08:48.903847 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 24 00:08:48.903854 kernel: audit: initializing netlink subsys (disabled) Nov 24 00:08:48.903861 kernel: audit: type=2000 audit(1763942925.280:1): state=initialized audit_enabled=0 res=1 Nov 24 00:08:48.903868 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 24 00:08:48.903875 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 24 00:08:48.903882 kernel: cpuidle: using governor menu Nov 24 00:08:48.903891 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 24 00:08:48.903898 kernel: dca service started, version 1.12.1 Nov 24 00:08:48.903905 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Nov 24 00:08:48.903912 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Nov 24 00:08:48.903919 kernel: PCI: Using configuration type 1 for base access Nov 24 00:08:48.903926 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 24 00:08:48.903933 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 24 00:08:48.903940 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 24 00:08:48.903947 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 24 00:08:48.903956 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 24 00:08:48.903963 kernel: ACPI: Added _OSI(Module Device) Nov 24 00:08:48.903970 kernel: ACPI: Added _OSI(Processor Device) Nov 24 00:08:48.903977 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 24 00:08:48.903985 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 24 00:08:48.903992 kernel: ACPI: Interpreter enabled Nov 24 00:08:48.903999 kernel: ACPI: PM: (supports S0 S3 S5) Nov 24 00:08:48.904006 kernel: ACPI: Using IOAPIC for interrupt routing Nov 24 00:08:48.904013 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 24 00:08:48.904022 kernel: PCI: Using E820 reservations for host bridge windows Nov 24 00:08:48.904029 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 24 00:08:48.904036 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 24 00:08:48.904208 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 24 00:08:48.904338 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 24 00:08:48.904831 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 24 00:08:48.904845 kernel: PCI host bridge to bus 0000:00 Nov 24 00:08:48.904988 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 24 00:08:48.905120 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 24 00:08:48.905233 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 24 00:08:48.905344 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Nov 24 00:08:48.905454 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 24 00:08:48.906275 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] Nov 24 00:08:48.906393 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 24 00:08:48.906567 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Nov 24 00:08:48.906706 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Nov 24 00:08:48.906833 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Nov 24 00:08:48.906953 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Nov 24 00:08:48.907073 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Nov 24 00:08:48.907193 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 24 00:08:48.907323 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Nov 24 00:08:48.907451 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f] Nov 24 00:08:48.911179 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Nov 24 00:08:48.911309 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Nov 24 00:08:48.911440 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Nov 24 00:08:48.911623 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Nov 24 00:08:48.911753 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Nov 24 00:08:48.911880 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Nov 24 00:08:48.912001 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref] Nov 24 00:08:48.912129 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Nov 24 00:08:48.912251 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 24 00:08:48.912384 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Nov 24 00:08:48.912562 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df] Nov 24 00:08:48.912685 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff] Nov 24 00:08:48.912818 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Nov 24 00:08:48.912938 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Nov 24 00:08:48.912948 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 24 00:08:48.912955 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 24 00:08:48.912962 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 24 00:08:48.912969 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 24 00:08:48.912976 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 24 00:08:48.912983 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 24 00:08:48.912994 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 24 00:08:48.913001 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 24 00:08:48.913007 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 24 00:08:48.913014 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 24 00:08:48.913021 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 24 00:08:48.913028 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 24 00:08:48.913035 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 24 00:08:48.913042 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 24 00:08:48.913049 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 24 00:08:48.913058 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 24 00:08:48.913065 kernel: iommu: Default domain type: Translated Nov 24 00:08:48.913072 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 24 00:08:48.913079 kernel: PCI: Using ACPI for IRQ routing Nov 24 00:08:48.913086 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 24 00:08:48.913093 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] Nov 24 00:08:48.913100 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Nov 24 00:08:48.913220 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 24 00:08:48.913343 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 24 00:08:48.914182 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 24 00:08:48.914197 kernel: vgaarb: loaded Nov 24 00:08:48.914205 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 24 00:08:48.914212 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 24 00:08:48.914219 kernel: clocksource: Switched to clocksource kvm-clock Nov 24 00:08:48.914226 kernel: VFS: Disk quotas dquot_6.6.0 Nov 24 00:08:48.914234 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 24 00:08:48.914241 kernel: pnp: PnP ACPI init Nov 24 00:08:48.914387 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 24 00:08:48.914398 kernel: pnp: PnP ACPI: found 5 devices Nov 24 00:08:48.914406 kernel: clocksource: acpi_pm: mask: 0xffffff 
max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 24 00:08:48.914413 kernel: NET: Registered PF_INET protocol family Nov 24 00:08:48.914420 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 24 00:08:48.914427 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 24 00:08:48.914434 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 24 00:08:48.914441 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 24 00:08:48.914451 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 24 00:08:48.914484 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 24 00:08:48.914492 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 24 00:08:48.914499 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 24 00:08:48.914506 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 24 00:08:48.914513 kernel: NET: Registered PF_XDP protocol family Nov 24 00:08:48.914633 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 24 00:08:48.914746 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 24 00:08:48.914857 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 24 00:08:48.914973 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Nov 24 00:08:48.915111 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 24 00:08:48.915224 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] Nov 24 00:08:48.915233 kernel: PCI: CLS 0 bytes, default 64 Nov 24 00:08:48.915240 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 24 00:08:48.915248 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) Nov 24 00:08:48.915255 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns Nov 24 00:08:48.915262 kernel: Initialise system trusted keyrings Nov 24 00:08:48.915272 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 24 00:08:48.915279 kernel: Key type asymmetric registered Nov 24 00:08:48.915286 kernel: Asymmetric key parser 'x509' registered Nov 24 00:08:48.915293 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 24 00:08:48.915300 kernel: io scheduler mq-deadline registered Nov 24 00:08:48.915307 kernel: io scheduler kyber registered Nov 24 00:08:48.915314 kernel: io scheduler bfq registered Nov 24 00:08:48.915321 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 24 00:08:48.915329 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 24 00:08:48.915338 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 24 00:08:48.915345 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 24 00:08:48.915352 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 24 00:08:48.915359 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 24 00:08:48.915366 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 24 00:08:48.915373 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 24 00:08:48.915758 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 24 00:08:48.915773 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 24 00:08:48.915894 kernel: rtc_cmos 00:03: registered as rtc0 Nov 24 00:08:48.916017 kernel: rtc_cmos 00:03: setting system clock to 
2025-11-24T00:08:48 UTC (1763942928) Nov 24 00:08:48.916131 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 24 00:08:48.916140 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 24 00:08:48.916147 kernel: NET: Registered PF_INET6 protocol family Nov 24 00:08:48.916154 kernel: Segment Routing with IPv6 Nov 24 00:08:48.916161 kernel: In-situ OAM (IOAM) with IPv6 Nov 24 00:08:48.916168 kernel: NET: Registered PF_PACKET protocol family Nov 24 00:08:48.916175 kernel: Key type dns_resolver registered Nov 24 00:08:48.916185 kernel: IPI shorthand broadcast: enabled Nov 24 00:08:48.916192 kernel: sched_clock: Marking stable (2778003880, 343416700)->(3217064080, -95643500) Nov 24 00:08:48.916199 kernel: registered taskstats version 1 Nov 24 00:08:48.916206 kernel: Loading compiled-in X.509 certificates Nov 24 00:08:48.916213 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 960cbe7f2b1ea74b5c881d6d42eea4d1ac19a607' Nov 24 00:08:48.916220 kernel: Demotion targets for Node 0: null Nov 24 00:08:48.916227 kernel: Key type .fscrypt registered Nov 24 00:08:48.916234 kernel: Key type fscrypt-provisioning registered Nov 24 00:08:48.916241 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 24 00:08:48.916250 kernel: ima: Allocated hash algorithm: sha1 Nov 24 00:08:48.916257 kernel: ima: No architecture policies found Nov 24 00:08:48.916264 kernel: clk: Disabling unused clocks Nov 24 00:08:48.916271 kernel: Warning: unable to open an initial console. Nov 24 00:08:48.916279 kernel: Freeing unused kernel image (initmem) memory: 46200K Nov 24 00:08:48.916286 kernel: Write protecting the kernel read-only data: 40960k Nov 24 00:08:48.916293 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Nov 24 00:08:48.916300 kernel: Run /init as init process Nov 24 00:08:48.916307 kernel: with arguments: Nov 24 00:08:48.916315 kernel: /init Nov 24 00:08:48.916323 kernel: with environment: Nov 24 00:08:48.916343 kernel: HOME=/ Nov 24 00:08:48.916352 kernel: TERM=linux Nov 24 00:08:48.916361 systemd[1]: Successfully made /usr/ read-only. Nov 24 00:08:48.916371 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 24 00:08:48.916379 systemd[1]: Detected virtualization kvm. Nov 24 00:08:48.916388 systemd[1]: Detected architecture x86-64. Nov 24 00:08:48.916395 systemd[1]: Running in initrd. Nov 24 00:08:48.916403 systemd[1]: No hostname configured, using default hostname. Nov 24 00:08:48.916411 systemd[1]: Hostname set to . Nov 24 00:08:48.916418 systemd[1]: Initializing machine ID from random generator. Nov 24 00:08:48.916426 systemd[1]: Queued start job for default target initrd.target. Nov 24 00:08:48.916433 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 00:08:48.916441 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 00:08:48.916451 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 24 00:08:48.916459 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Nov 24 00:08:48.916484 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 24 00:08:48.916492 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 24 00:08:48.916501 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 24 00:08:48.916509 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 24 00:08:48.916516 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 00:08:48.916526 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 24 00:08:48.916534 systemd[1]: Reached target paths.target - Path Units. Nov 24 00:08:48.916541 systemd[1]: Reached target slices.target - Slice Units. Nov 24 00:08:48.916549 systemd[1]: Reached target swap.target - Swaps. Nov 24 00:08:48.916556 systemd[1]: Reached target timers.target - Timer Units. Nov 24 00:08:48.916564 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 24 00:08:48.916572 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 24 00:08:48.916579 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 24 00:08:48.916587 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 24 00:08:48.916596 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 24 00:08:48.916604 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 24 00:08:48.916614 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 00:08:48.916621 systemd[1]: Reached target sockets.target - Socket Units. Nov 24 00:08:48.916629 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 24 00:08:48.916639 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 24 00:08:48.916646 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 24 00:08:48.916654 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 24 00:08:48.916664 systemd[1]: Starting systemd-fsck-usr.service... Nov 24 00:08:48.916671 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 24 00:08:48.916679 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 24 00:08:48.916687 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:08:48.916717 systemd-journald[187]: Collecting audit messages is disabled. Nov 24 00:08:48.916737 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 24 00:08:48.916746 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 00:08:48.916756 systemd[1]: Finished systemd-fsck-usr.service. Nov 24 00:08:48.916764 systemd-journald[187]: Journal started Nov 24 00:08:48.916780 systemd-journald[187]: Runtime Journal (/run/log/journal/488ae144c86c4db292208028ddbca24a) is 8M, max 78.2M, 70.2M free. Nov 24 00:08:48.895436 systemd-modules-load[188]: Inserted module 'overlay' Nov 24 00:08:48.929316 systemd[1]: Started systemd-journald.service - Journal Service. Nov 24 00:08:48.929341 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Nov 24 00:08:48.931489 kernel: Bridge firewalling registered Nov 24 00:08:48.931299 systemd-modules-load[188]: Inserted module 'br_netfilter' Nov 24 00:08:49.014828 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 24 00:08:49.037083 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:08:49.041065 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 24 00:08:49.043567 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 24 00:08:49.050670 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 24 00:08:49.057293 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 24 00:08:49.062675 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 24 00:08:49.067235 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 24 00:08:49.071579 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 24 00:08:49.076161 systemd-tmpfiles[206]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 24 00:08:49.077826 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 24 00:08:49.080586 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 24 00:08:49.088636 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 00:08:49.089669 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 00:08:49.094572 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 24 00:08:49.107819 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14 Nov 24 00:08:49.137642 systemd-resolved[226]: Positive Trust Anchors: Nov 24 00:08:49.137657 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 24 00:08:49.137684 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 24 00:08:49.140840 systemd-resolved[226]: Defaulting to hostname 'linux'. Nov 24 00:08:49.142530 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 24 00:08:49.146280 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 24 00:08:49.219517 kernel: SCSI subsystem initialized Nov 24 00:08:49.231484 kernel: Loading iSCSI transport class v2.0-870. 
Nov 24 00:08:49.243493 kernel: iscsi: registered transport (tcp) Nov 24 00:08:49.270073 kernel: iscsi: registered transport (qla4xxx) Nov 24 00:08:49.270134 kernel: QLogic iSCSI HBA Driver Nov 24 00:08:49.288544 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 24 00:08:49.306443 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 00:08:49.309195 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 24 00:08:49.353505 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 24 00:08:49.355703 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 24 00:08:49.410510 kernel: raid6: avx2x4 gen() 32440 MB/s Nov 24 00:08:49.428506 kernel: raid6: avx2x2 gen() 31747 MB/s Nov 24 00:08:49.446736 kernel: raid6: avx2x1 gen() 22706 MB/s Nov 24 00:08:49.446796 kernel: raid6: using algorithm avx2x4 gen() 32440 MB/s Nov 24 00:08:49.469505 kernel: raid6: .... xor() 5073 MB/s, rmw enabled Nov 24 00:08:49.469561 kernel: raid6: using avx2x2 recovery algorithm Nov 24 00:08:49.494494 kernel: xor: automatically using best checksumming function avx Nov 24 00:08:49.634493 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 24 00:08:49.641348 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 24 00:08:49.644433 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 00:08:49.672130 systemd-udevd[436]: Using default interface naming scheme 'v255'. Nov 24 00:08:49.677924 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 00:08:49.682621 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 24 00:08:49.700315 dracut-pre-trigger[445]: rd.md=0: removing MD RAID activation Nov 24 00:08:49.726560 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 24 00:08:49.729423 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 24 00:08:49.802147 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 00:08:49.805720 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 24 00:08:49.877513 kernel: cryptd: max_cpu_qlen set to 1000 Nov 24 00:08:49.882497 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues Nov 24 00:08:50.006606 kernel: libata version 3.00 loaded. Nov 24 00:08:50.012490 kernel: scsi host0: Virtio SCSI HBA Nov 24 00:08:50.015483 kernel: ahci 0000:00:1f.2: version 3.0 Nov 24 00:08:50.034058 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 24 00:08:50.034078 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Nov 24 00:08:50.034233 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Nov 24 00:08:50.034259 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Nov 24 00:08:50.034407 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 24 00:08:50.051455 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 00:08:50.077072 kernel: scsi host1: ahci Nov 24 00:08:50.077278 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 24 00:08:50.077291 kernel: scsi host2: ahci Nov 24 00:08:50.051602 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 24 00:08:50.075192 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:08:50.079634 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:08:50.094316 kernel: AES CTR mode by8 optimization enabled Nov 24 00:08:50.094334 kernel: scsi host3: ahci Nov 24 00:08:50.094540 kernel: scsi host4: ahci Nov 24 00:08:50.080909 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 24 00:08:50.102025 kernel: scsi host5: ahci Nov 24 00:08:50.109504 kernel: scsi host6: ahci Nov 24 00:08:50.109698 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 29 lpm-pol 1 Nov 24 00:08:50.114433 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 29 lpm-pol 1 Nov 24 00:08:50.114474 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 29 lpm-pol 1 Nov 24 00:08:50.120498 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 29 lpm-pol 1 Nov 24 00:08:50.120522 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 29 lpm-pol 1 Nov 24 00:08:50.125128 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 29 lpm-pol 1 Nov 24 00:08:50.136444 kernel: sd 0:0:0:0: Power-on or device reset occurred Nov 24 00:08:50.136662 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Nov 24 00:08:50.141477 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 24 00:08:50.141657 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Nov 24 00:08:50.141816 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 24 00:08:50.148621 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 24 00:08:50.148641 kernel: GPT:9289727 != 167739391 Nov 24 00:08:50.148652 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 24 00:08:50.148662 kernel: GPT:9289727 != 167739391 Nov 24 00:08:50.148672 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 24 00:08:50.148681 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 24 00:08:50.148691 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 24 00:08:50.264357 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:08:50.438663 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 24 00:08:50.438731 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 24 00:08:50.438743 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 24 00:08:50.439478 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 24 00:08:50.441488 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 24 00:08:50.446483 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 24 00:08:50.493495 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Nov 24 00:08:50.503208 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 24 00:08:50.523036 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Nov 24 00:08:50.524022 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 24 00:08:50.538095 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Nov 24 00:08:50.538905 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Nov 24 00:08:50.541494 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Nov 24 00:08:50.542336 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 00:08:50.544095 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 24 00:08:50.547594 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 24 00:08:50.549858 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 24 00:08:50.569178 disk-uuid[614]: Primary Header is updated. Nov 24 00:08:50.569178 disk-uuid[614]: Secondary Entries is updated. Nov 24 00:08:50.569178 disk-uuid[614]: Secondary Header is updated. Nov 24 00:08:50.576031 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 24 00:08:50.580642 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 24 00:08:50.589488 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 24 00:08:51.597499 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 24 00:08:51.597555 disk-uuid[616]: The operation has completed successfully. Nov 24 00:08:51.647111 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 24 00:08:51.647229 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 24 00:08:51.671379 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 24 00:08:51.685542 sh[636]: Success Nov 24 00:08:51.705549 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 24 00:08:51.705588 kernel: device-mapper: uevent: version 1.0.3 Nov 24 00:08:51.708832 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 24 00:08:51.718486 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Nov 24 00:08:51.757156 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 24 00:08:51.758640 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 24 00:08:51.768341 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 24 00:08:51.779501 kernel: BTRFS: device fsid 3af95a3e-5df6-49e0-91e3-ddf2109f68c7 devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (648) Nov 24 00:08:51.783609 kernel: BTRFS info (device dm-0): first mount of filesystem 3af95a3e-5df6-49e0-91e3-ddf2109f68c7 Nov 24 00:08:51.783642 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:08:51.793668 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 24 00:08:51.793693 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 24 00:08:51.797774 kernel: BTRFS info (device dm-0): enabling free space tree Nov 24 00:08:51.799414 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 24 00:08:51.800421 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 24 00:08:51.801535 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 24 00:08:51.802170 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 24 00:08:51.806094 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Nov 24 00:08:51.831645 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (677) Nov 24 00:08:51.831669 kernel: BTRFS info (device sda6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 00:08:51.834565 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:08:51.841079 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 24 00:08:51.841102 kernel: BTRFS info (device sda6): turning on async discard Nov 24 00:08:51.841113 kernel: BTRFS info (device sda6): enabling free space tree Nov 24 00:08:51.850535 kernel: BTRFS info (device sda6): last unmount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 00:08:51.850988 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 24 00:08:51.854576 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 24 00:08:51.934369 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 24 00:08:51.938958 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 24 00:08:51.962641 ignition[743]: Ignition 2.22.0 Nov 24 00:08:51.963521 ignition[743]: Stage: fetch-offline Nov 24 00:08:51.963556 ignition[743]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:08:51.963565 ignition[743]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 24 00:08:51.963639 ignition[743]: parsed url from cmdline: "" Nov 24 00:08:51.963644 ignition[743]: no config URL provided Nov 24 00:08:51.963649 ignition[743]: reading system config file "/usr/lib/ignition/user.ign" Nov 24 00:08:51.969446 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 24 00:08:51.963657 ignition[743]: no config at "/usr/lib/ignition/user.ign" Nov 24 00:08:51.963662 ignition[743]: failed to fetch config: resource requires networking Nov 24 00:08:51.963800 ignition[743]: Ignition finished successfully Nov 24 00:08:51.985985 systemd-networkd[822]: lo: Link UP Nov 24 00:08:51.985998 systemd-networkd[822]: lo: Gained carrier Nov 24 00:08:51.987560 systemd-networkd[822]: Enumeration completed Nov 24 00:08:51.987954 systemd-networkd[822]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:08:51.987959 systemd-networkd[822]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 24 00:08:51.988085 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 24 00:08:51.989698 systemd-networkd[822]: eth0: Link UP Nov 24 00:08:51.990192 systemd-networkd[822]: eth0: Gained carrier Nov 24 00:08:51.990201 systemd-networkd[822]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:08:51.990606 systemd[1]: Reached target network.target - Network. Nov 24 00:08:51.995577 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 24 00:08:52.026650 ignition[827]: Ignition 2.22.0 Nov 24 00:08:52.026666 ignition[827]: Stage: fetch Nov 24 00:08:52.026781 ignition[827]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:08:52.026792 ignition[827]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 24 00:08:52.026865 ignition[827]: parsed url from cmdline: "" Nov 24 00:08:52.026869 ignition[827]: no config URL provided Nov 24 00:08:52.026874 ignition[827]: reading system config file "/usr/lib/ignition/user.ign" Nov 24 00:08:52.026883 ignition[827]: no config at "/usr/lib/ignition/user.ign" Nov 24 00:08:52.026905 ignition[827]: PUT http://169.254.169.254/v1/token: attempt #1 Nov 24 00:08:52.027039 ignition[827]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Nov 24 00:08:52.227325 ignition[827]: PUT http://169.254.169.254/v1/token: attempt #2 Nov 24 00:08:52.227870 ignition[827]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Nov 24 00:08:52.629006 ignition[827]: PUT http://169.254.169.254/v1/token: attempt #3 Nov 24 00:08:52.630074 ignition[827]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Nov 24 00:08:52.804543 systemd-networkd[822]: eth0: DHCPv4 address 172.237.134.153/24, gateway 172.237.134.1 acquired from 23.205.167.181 Nov 24 00:08:53.430433 ignition[827]: PUT http://169.254.169.254/v1/token: attempt #4 Nov 24 00:08:53.525742 ignition[827]: PUT result: OK Nov 24 00:08:53.525815 ignition[827]: GET http://169.254.169.254/v1/user-data: attempt #1 Nov 24 00:08:53.633727 systemd-networkd[822]: eth0: Gained IPv6LL Nov 24 00:08:53.638326 ignition[827]: GET result: OK Nov 24 00:08:53.638433 ignition[827]: parsing config with SHA512: e776533f3cd72ea249b9849e4b77a980c18f8678d5a980e7679b5b075e72eb5b791f00e4d6f837fba2368ecc76ddddedfc420554270a99e960ba18fb14d23b7d Nov 24 00:08:53.641762 unknown[827]: fetched base config from "system" Nov 24 00:08:53.641776 unknown[827]: fetched base config from "system" Nov 24 00:08:53.643824 ignition[827]: fetch: fetch complete Nov 24 00:08:53.643502 unknown[827]: fetched user config from "akamai" Nov 24 00:08:53.643830 ignition[827]: fetch: fetch passed Nov 24 00:08:53.646442 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 24 00:08:53.643876 ignition[827]: Ignition finished successfully Nov 24 00:08:53.667573 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 24 00:08:53.703170 ignition[834]: Ignition 2.22.0 Nov 24 00:08:53.703185 ignition[834]: Stage: kargs Nov 24 00:08:53.703311 ignition[834]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:08:53.703321 ignition[834]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 24 00:08:53.703963 ignition[834]: kargs: kargs passed Nov 24 00:08:53.704004 ignition[834]: Ignition finished successfully Nov 24 00:08:53.706727 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 24 00:08:53.709593 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 24 00:08:53.734131 ignition[841]: Ignition 2.22.0 Nov 24 00:08:53.734146 ignition[841]: Stage: disks Nov 24 00:08:53.734259 ignition[841]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:08:53.734269 ignition[841]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 24 00:08:53.736488 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Nov 24 00:08:53.734831 ignition[841]: disks: disks passed Nov 24 00:08:53.737698 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 24 00:08:53.734868 ignition[841]: Ignition finished successfully Nov 24 00:08:53.739210 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 24 00:08:53.740717 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 24 00:08:53.742004 systemd[1]: Reached target sysinit.target - System Initialization. Nov 24 00:08:53.743575 systemd[1]: Reached target basic.target - Basic System. Nov 24 00:08:53.745782 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 24 00:08:53.780860 systemd-fsck[849]: ROOT: clean, 15/553520 files, 52789/553472 blocks Nov 24 00:08:53.784628 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 24 00:08:53.787600 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 24 00:08:53.893488 kernel: EXT4-fs (sda9): mounted filesystem f89e2a65-2a4a-426b-9659-02844cc29a2a r/w with ordered data mode. Quota mode: none. Nov 24 00:08:53.893873 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 24 00:08:53.894929 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 24 00:08:53.897105 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 24 00:08:53.900529 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 24 00:08:53.902216 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 24 00:08:53.902927 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 24 00:08:53.902950 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 24 00:08:53.908393 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 24 00:08:53.911542 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 24 00:08:53.918490 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (857) Nov 24 00:08:53.926267 kernel: BTRFS info (device sda6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 00:08:53.926294 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:08:53.933157 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 24 00:08:53.933179 kernel: BTRFS info (device sda6): turning on async discard Nov 24 00:08:53.933191 kernel: BTRFS info (device sda6): enabling free space tree Nov 24 00:08:53.937656 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 24 00:08:53.967512 initrd-setup-root[881]: cut: /sysroot/etc/passwd: No such file or directory Nov 24 00:08:53.973214 initrd-setup-root[888]: cut: /sysroot/etc/group: No such file or directory Nov 24 00:08:53.977588 initrd-setup-root[895]: cut: /sysroot/etc/shadow: No such file or directory Nov 24 00:08:53.981889 initrd-setup-root[902]: cut: /sysroot/etc/gshadow: No such file or directory Nov 24 00:08:54.067832 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 24 00:08:54.070688 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 24 00:08:54.072802 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 24 00:08:54.085738 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Nov 24 00:08:54.090503 kernel: BTRFS info (device sda6): last unmount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 00:08:54.105148 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 24 00:08:54.117326 ignition[971]: INFO : Ignition 2.22.0 Nov 24 00:08:54.117326 ignition[971]: INFO : Stage: mount Nov 24 00:08:54.120274 ignition[971]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 00:08:54.120274 ignition[971]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 24 00:08:54.120274 ignition[971]: INFO : mount: mount passed Nov 24 00:08:54.120274 ignition[971]: INFO : Ignition finished successfully Nov 24 00:08:54.120882 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 24 00:08:54.124543 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 24 00:08:54.895825 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 24 00:08:54.920501 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (982) Nov 24 00:08:54.920533 kernel: BTRFS info (device sda6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 00:08:54.925843 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:08:54.930896 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 24 00:08:54.930926 kernel: BTRFS info (device sda6): turning on async discard Nov 24 00:08:54.934747 kernel: BTRFS info (device sda6): enabling free space tree Nov 24 00:08:54.936573 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 24 00:08:54.967122 ignition[998]: INFO : Ignition 2.22.0 Nov 24 00:08:54.967122 ignition[998]: INFO : Stage: files Nov 24 00:08:54.968853 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 00:08:54.968853 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 24 00:08:54.968853 ignition[998]: DEBUG : files: compiled without relabeling support, skipping Nov 24 00:08:54.968853 ignition[998]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 24 00:08:54.968853 ignition[998]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 24 00:08:54.974073 ignition[998]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 24 00:08:54.974073 ignition[998]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 24 00:08:54.974073 ignition[998]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 24 00:08:54.972101 unknown[998]: wrote ssh authorized keys file for user: core Nov 24 00:08:54.977835 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 24 00:08:54.977835 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 24 00:08:55.176145 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 24 00:08:55.422290 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 24 00:08:55.423662 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 24 00:08:55.423662 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/home/core/install.sh" Nov 24 00:08:55.423662 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 24 00:08:55.423662 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 24 00:08:55.423662 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 24 00:08:55.423662 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 24 00:08:55.423662 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 24 00:08:55.423662 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 24 00:08:55.432018 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 24 00:08:55.432018 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 24 00:08:55.432018 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 00:08:55.432018 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 00:08:55.432018 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 00:08:55.432018 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 24 00:08:55.926402 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 24 00:08:56.449416 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 00:08:56.451144 ignition[998]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 24 00:08:56.451144 ignition[998]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 24 00:08:56.453899 ignition[998]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 24 00:08:56.453899 ignition[998]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 24 00:08:56.453899 ignition[998]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 24 00:08:56.458878 ignition[998]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 24 00:08:56.458878 ignition[998]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 24 00:08:56.458878 ignition[998]: INFO : files: op(d): [finished] processing unit 
"coreos-metadata.service" Nov 24 00:08:56.458878 ignition[998]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Nov 24 00:08:56.458878 ignition[998]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Nov 24 00:08:56.458878 ignition[998]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 24 00:08:56.458878 ignition[998]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 24 00:08:56.458878 ignition[998]: INFO : files: files passed Nov 24 00:08:56.458878 ignition[998]: INFO : Ignition finished successfully Nov 24 00:08:56.459756 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 24 00:08:56.462597 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 24 00:08:56.465372 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 24 00:08:56.474654 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 24 00:08:56.474756 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 24 00:08:56.482839 initrd-setup-root-after-ignition[1029]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:08:56.483970 initrd-setup-root-after-ignition[1029]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:08:56.485651 initrd-setup-root-after-ignition[1033]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:08:56.487582 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 24 00:08:56.489586 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 24 00:08:56.491958 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 24 00:08:56.537383 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 24 00:08:56.537521 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 24 00:08:56.539344 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 24 00:08:56.540764 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 24 00:08:56.542338 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 24 00:08:56.543071 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 24 00:08:56.558835 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 24 00:08:56.561809 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 24 00:08:56.581642 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 24 00:08:56.583337 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 00:08:56.584159 systemd[1]: Stopped target timers.target - Timer Units. Nov 24 00:08:56.584953 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 24 00:08:56.585051 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 24 00:08:56.587027 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 24 00:08:56.588054 systemd[1]: Stopped target basic.target - Basic System. Nov 24 00:08:56.589438 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Nov 24 00:08:56.591082 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 24 00:08:56.592546 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 24 00:08:56.593972 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 24 00:08:56.595606 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 24 00:08:56.597192 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 24 00:08:56.598833 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 24 00:08:56.600389 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 24 00:08:56.601952 systemd[1]: Stopped target swap.target - Swaps. Nov 24 00:08:56.603490 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 24 00:08:56.603591 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 24 00:08:56.605509 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 24 00:08:56.606603 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 00:08:56.607916 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 24 00:08:56.608563 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 00:08:56.609430 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 24 00:08:56.609594 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 24 00:08:56.611566 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 24 00:08:56.611718 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 24 00:08:56.612714 systemd[1]: ignition-files.service: Deactivated successfully. Nov 24 00:08:56.612844 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 24 00:08:56.616546 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 24 00:08:56.618281 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 24 00:08:56.620045 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 24 00:08:56.620196 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 00:08:56.621570 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 24 00:08:56.621666 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 24 00:08:56.629358 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 24 00:08:56.629491 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 24 00:08:56.649223 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 24 00:08:56.657712 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 24 00:08:56.677284 ignition[1053]: INFO : Ignition 2.22.0 Nov 24 00:08:56.677284 ignition[1053]: INFO : Stage: umount Nov 24 00:08:56.677284 ignition[1053]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 00:08:56.677284 ignition[1053]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 24 00:08:56.677284 ignition[1053]: INFO : umount: umount passed Nov 24 00:08:56.677284 ignition[1053]: INFO : Ignition finished successfully Nov 24 00:08:56.657825 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 24 00:08:56.675571 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 24 00:08:56.675679 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Nov 24 00:08:56.677286 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 24 00:08:56.677359 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 24 00:08:56.678315 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 24 00:08:56.678368 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 24 00:08:56.679680 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 24 00:08:56.679730 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 24 00:08:56.681031 systemd[1]: Stopped target network.target - Network. Nov 24 00:08:56.682539 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 24 00:08:56.682595 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 24 00:08:56.684178 systemd[1]: Stopped target paths.target - Path Units. Nov 24 00:08:56.685549 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 24 00:08:56.687638 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 00:08:56.688492 systemd[1]: Stopped target slices.target - Slice Units. Nov 24 00:08:56.689883 systemd[1]: Stopped target sockets.target - Socket Units. Nov 24 00:08:56.691288 systemd[1]: iscsid.socket: Deactivated successfully. Nov 24 00:08:56.691333 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 24 00:08:56.692652 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 24 00:08:56.692695 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 24 00:08:56.693990 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 24 00:08:56.694044 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 24 00:08:56.695369 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 24 00:08:56.695418 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 24 00:08:56.696766 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 24 00:08:56.696817 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 24 00:08:56.698308 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 24 00:08:56.699776 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 24 00:08:56.704632 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 24 00:08:56.704766 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 24 00:08:56.707580 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 24 00:08:56.707810 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 24 00:08:56.707934 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 24 00:08:56.711911 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 24 00:08:56.712386 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 24 00:08:56.713742 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 24 00:08:56.713787 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 24 00:08:56.715899 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 24 00:08:56.717911 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 24 00:08:56.717966 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Nov 24 00:08:56.720272 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 24 00:08:56.720331 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 24 00:08:56.723381 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 24 00:08:56.723432 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 24 00:08:56.724506 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 24 00:08:56.724555 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 00:08:56.727256 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 00:08:56.731434 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 24 00:08:56.731528 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 24 00:08:56.740210 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 24 00:08:56.740328 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 24 00:08:56.745813 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 24 00:08:56.745988 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 00:08:56.747552 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 24 00:08:56.747621 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 24 00:08:56.748944 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 24 00:08:56.748985 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 00:08:56.750538 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 24 00:08:56.750589 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 24 00:08:56.752795 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 24 00:08:56.752844 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 24 00:08:56.754243 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 24 00:08:56.754291 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 24 00:08:56.757592 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 24 00:08:56.758419 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 24 00:08:56.758491 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 00:08:56.761554 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 24 00:08:56.761603 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 00:08:56.763060 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 24 00:08:56.763114 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 24 00:08:56.764591 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 24 00:08:56.764637 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 00:08:56.765814 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 00:08:56.765863 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:08:56.769769 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. 
Nov 24 00:08:56.769827 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Nov 24 00:08:56.769871 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 24 00:08:56.769914 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 24 00:08:56.775925 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 24 00:08:56.776034 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 24 00:08:56.777076 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 24 00:08:56.779166 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 24 00:08:56.794187 systemd[1]: Switching root. Nov 24 00:08:56.822559 systemd-journald[187]: Journal stopped Nov 24 00:08:57.993300 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Nov 24 00:08:57.993326 kernel: SELinux: policy capability network_peer_controls=1 Nov 24 00:08:57.993338 kernel: SELinux: policy capability open_perms=1 Nov 24 00:08:57.993347 kernel: SELinux: policy capability extended_socket_class=1 Nov 24 00:08:57.993356 kernel: SELinux: policy capability always_check_network=0 Nov 24 00:08:57.993367 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 24 00:08:57.993377 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 24 00:08:57.993386 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 24 00:08:57.993395 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 24 00:08:57.993404 kernel: SELinux: policy capability userspace_initial_context=0 Nov 24 00:08:57.993414 kernel: audit: type=1403 audit(1763942936.980:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 24 00:08:57.993424 systemd[1]: Successfully loaded SELinux policy in 68.396ms. Nov 24 00:08:57.993437 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.460ms. Nov 24 00:08:57.993448 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 24 00:08:57.995478 systemd[1]: Detected virtualization kvm. Nov 24 00:08:57.995497 systemd[1]: Detected architecture x86-64. Nov 24 00:08:57.995511 systemd[1]: Detected first boot. Nov 24 00:08:57.995522 systemd[1]: Initializing machine ID from random generator. Nov 24 00:08:57.995532 zram_generator::config[1096]: No configuration found. Nov 24 00:08:57.995543 kernel: Guest personality initialized and is inactive Nov 24 00:08:57.995553 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Nov 24 00:08:57.995562 kernel: Initialized host personality Nov 24 00:08:57.995571 kernel: NET: Registered PF_VSOCK protocol family Nov 24 00:08:57.995581 systemd[1]: Populated /etc with preset unit settings. Nov 24 00:08:57.995595 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 24 00:08:57.995605 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 24 00:08:57.995615 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 24 00:08:57.995625 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Nov 24 00:08:57.995635 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 24 00:08:57.995645 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 24 00:08:57.995655 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 24 00:08:57.995668 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 24 00:08:57.995678 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 24 00:08:57.995689 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 24 00:08:57.995699 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 24 00:08:57.995709 systemd[1]: Created slice user.slice - User and Session Slice. Nov 24 00:08:57.995719 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 00:08:57.995729 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 00:08:57.995739 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 24 00:08:57.995752 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 24 00:08:57.995765 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 24 00:08:57.995776 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 24 00:08:57.995786 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 24 00:08:57.995797 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 00:08:57.995809 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 24 00:08:57.995819 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 24 00:08:57.995831 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 24 00:08:57.995842 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 24 00:08:57.995852 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 24 00:08:57.995862 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 00:08:57.995873 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 24 00:08:57.995883 systemd[1]: Reached target slices.target - Slice Units. Nov 24 00:08:57.995893 systemd[1]: Reached target swap.target - Swaps. Nov 24 00:08:57.995903 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 24 00:08:57.995913 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 24 00:08:57.995926 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 24 00:08:57.995937 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 24 00:08:57.995947 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 24 00:08:57.995957 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 00:08:57.995970 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 24 00:08:57.995980 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 24 00:08:57.995990 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Nov 24 00:08:57.996001 systemd[1]: Mounting media.mount - External Media Directory... Nov 24 00:08:57.996011 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:08:57.996022 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 24 00:08:57.996033 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 24 00:08:57.996043 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 24 00:08:57.996056 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 24 00:08:57.996067 systemd[1]: Reached target machines.target - Containers. Nov 24 00:08:57.996077 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 24 00:08:57.996087 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:08:57.996098 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 24 00:08:57.996108 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 24 00:08:57.996118 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 00:08:57.996128 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 24 00:08:57.996139 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 00:08:57.996151 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 24 00:08:57.996161 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 00:08:57.996172 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 24 00:08:57.996182 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 24 00:08:57.996192 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 24 00:08:57.996203 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 24 00:08:57.996213 systemd[1]: Stopped systemd-fsck-usr.service. Nov 24 00:08:57.996223 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:08:57.996236 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 24 00:08:57.996246 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 24 00:08:57.996257 kernel: fuse: init (API version 7.41) Nov 24 00:08:57.996267 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 24 00:08:57.996277 kernel: ACPI: bus type drm_connector registered Nov 24 00:08:57.996287 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 24 00:08:57.996297 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 24 00:08:57.996307 kernel: loop: module loaded Nov 24 00:08:57.996319 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 24 00:08:57.996330 systemd[1]: verity-setup.service: Deactivated successfully. Nov 24 00:08:57.996340 systemd[1]: Stopped verity-setup.service. 
Nov 24 00:08:57.996351 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:08:57.996361 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 24 00:08:57.996393 systemd-journald[1184]: Collecting audit messages is disabled. Nov 24 00:08:57.996416 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 24 00:08:57.996427 systemd[1]: Mounted media.mount - External Media Directory. Nov 24 00:08:57.996437 systemd-journald[1184]: Journal started Nov 24 00:08:57.996456 systemd-journald[1184]: Runtime Journal (/run/log/journal/58113527c7884ee4a59e929dd6ccf56d) is 8M, max 78.2M, 70.2M free. Nov 24 00:08:57.602683 systemd[1]: Queued start job for default target multi-user.target. Nov 24 00:08:57.616295 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 24 00:08:57.616861 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 24 00:08:57.999509 systemd[1]: Started systemd-journald.service - Journal Service. Nov 24 00:08:57.999765 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 24 00:08:58.000932 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 24 00:08:58.002323 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 24 00:08:58.003754 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 24 00:08:58.004967 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 00:08:58.006168 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 24 00:08:58.006499 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 24 00:08:58.007707 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 00:08:58.008186 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 00:08:58.009272 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 24 00:08:58.009757 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 24 00:08:58.010819 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 00:08:58.011133 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 00:08:58.012311 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 24 00:08:58.012618 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 24 00:08:58.013826 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 00:08:58.014101 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 00:08:58.015251 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 24 00:08:58.016497 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 00:08:58.017785 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 24 00:08:58.018913 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 24 00:08:58.034191 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 24 00:08:58.037593 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 24 00:08:58.039408 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Nov 24 00:08:58.040534 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 24 00:08:58.041518 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 24 00:08:58.043128 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 24 00:08:58.047576 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 24 00:08:58.049424 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 00:08:58.054313 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 24 00:08:58.062641 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 24 00:08:58.063968 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 24 00:08:58.070645 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 24 00:08:58.072267 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 24 00:08:58.073709 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 24 00:08:58.080867 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 24 00:08:58.086603 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 24 00:08:58.090249 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 24 00:08:58.092528 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 24 00:08:58.106811 systemd-journald[1184]: Time spent on flushing to /var/log/journal/58113527c7884ee4a59e929dd6ccf56d is 35.422ms for 1010 entries. Nov 24 00:08:58.106811 systemd-journald[1184]: System Journal (/var/log/journal/58113527c7884ee4a59e929dd6ccf56d) is 8M, max 195.6M, 187.6M free. Nov 24 00:08:58.160650 systemd-journald[1184]: Received client request to flush runtime journal. Nov 24 00:08:58.160683 kernel: loop0: detected capacity change from 0 to 128560 Nov 24 00:08:58.111783 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 24 00:08:58.113172 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 24 00:08:58.126394 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 24 00:08:58.164306 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 24 00:08:58.171141 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 24 00:08:58.178044 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 00:08:58.182233 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 24 00:08:58.190694 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Nov 24 00:08:58.191262 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Nov 24 00:08:58.193252 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 24 00:08:58.196793 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Nov 24 00:08:58.198882 kernel: loop1: detected capacity change from 0 to 8 Nov 24 00:08:58.208832 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 24 00:08:58.219498 kernel: loop2: detected capacity change from 0 to 229808 Nov 24 00:08:58.265502 kernel: loop3: detected capacity change from 0 to 110984 Nov 24 00:08:58.265714 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 24 00:08:58.270713 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 24 00:08:58.310341 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Nov 24 00:08:58.310364 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Nov 24 00:08:58.317310 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 00:08:58.320592 kernel: loop4: detected capacity change from 0 to 128560 Nov 24 00:08:58.340486 kernel: loop5: detected capacity change from 0 to 8 Nov 24 00:08:58.349490 kernel: loop6: detected capacity change from 0 to 229808 Nov 24 00:08:58.372629 kernel: loop7: detected capacity change from 0 to 110984 Nov 24 00:08:58.389353 (sd-merge)[1247]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Nov 24 00:08:58.391066 (sd-merge)[1247]: Merged extensions into '/usr'. Nov 24 00:08:58.398096 systemd[1]: Reload requested from client PID 1221 ('systemd-sysext') (unit systemd-sysext.service)... Nov 24 00:08:58.398201 systemd[1]: Reloading... Nov 24 00:08:58.518517 zram_generator::config[1277]: No configuration found. Nov 24 00:08:58.607518 ldconfig[1216]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 24 00:08:58.739139 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 24 00:08:58.739518 systemd[1]: Reloading finished in 340 ms. Nov 24 00:08:58.773105 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 24 00:08:58.774350 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 24 00:08:58.775513 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 24 00:08:58.788769 systemd[1]: Starting ensure-sysext.service... Nov 24 00:08:58.791571 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 24 00:08:58.805414 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 00:08:58.823256 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 24 00:08:58.823581 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 24 00:08:58.824043 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 24 00:08:58.824789 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 24 00:08:58.825082 systemd[1]: Reload requested from client PID 1318 ('systemctl') (unit ensure-sysext.service)... Nov 24 00:08:58.825099 systemd[1]: Reloading... Nov 24 00:08:58.827309 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 24 00:08:58.827713 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Nov 24 00:08:58.828200 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. 
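The sd-merge lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-akamai' extension images onto /usr (the loop0-loop7 capacity changes are those images being attached), followed by a reload. The canonical way to inspect this is `systemd-sysext list` or `systemd-sysext status`; the snippet below is only a rough stand-in that walks what I assume to be the usual extension search directories.

```python
# Rough stand-in for `systemd-sysext list`: show candidate extension images (assumed search path).
from pathlib import Path

for directory in ("/etc/extensions", "/run/extensions", "/var/lib/extensions", "/usr/lib/extensions"):
    base = Path(directory)
    if not base.is_dir():
        continue
    for image in sorted(base.iterdir()):
        target = f" -> {image.readlink()}" if image.is_symlink() else ""
        print(f"{image}{target}")
```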
Nov 24 00:08:58.838760 systemd-tmpfiles[1319]: Detected autofs mount point /boot during canonicalization of boot. Nov 24 00:08:58.838791 systemd-tmpfiles[1319]: Skipping /boot Nov 24 00:08:58.857223 systemd-tmpfiles[1319]: Detected autofs mount point /boot during canonicalization of boot. Nov 24 00:08:58.861511 systemd-tmpfiles[1319]: Skipping /boot Nov 24 00:08:58.868480 systemd-udevd[1320]: Using default interface naming scheme 'v255'. Nov 24 00:08:58.892795 zram_generator::config[1346]: No configuration found. Nov 24 00:08:59.122489 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 24 00:08:59.142560 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 24 00:08:59.142945 systemd[1]: Reloading finished in 317 ms. Nov 24 00:08:59.144529 kernel: mousedev: PS/2 mouse device common for all mice Nov 24 00:08:59.151367 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 00:08:59.152657 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 00:08:59.166536 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 24 00:08:59.166773 kernel: ACPI: button: Power Button [PWRF] Nov 24 00:08:59.166789 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 24 00:08:59.182210 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 00:08:59.186576 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 24 00:08:59.189632 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 24 00:08:59.194077 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 24 00:08:59.208588 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 24 00:08:59.210586 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 24 00:08:59.219510 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 24 00:08:59.222928 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:08:59.223676 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:08:59.229020 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 00:08:59.236712 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 00:08:59.238689 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 00:08:59.240341 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 00:08:59.240442 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:08:59.240539 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:08:59.246948 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:08:59.247398 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Nov 24 00:08:59.248003 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 00:08:59.248124 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:08:59.248236 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:08:59.254627 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:08:59.254873 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:08:59.263521 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 24 00:08:59.264363 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 00:08:59.264451 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:08:59.264586 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:08:59.265291 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 24 00:08:59.277893 systemd[1]: Finished ensure-sysext.service. Nov 24 00:08:59.283849 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 24 00:08:59.303300 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 24 00:08:59.307425 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 24 00:08:59.325108 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 00:08:59.325334 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 00:08:59.335205 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 24 00:08:59.336404 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 24 00:08:59.337895 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 24 00:08:59.350313 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 24 00:08:59.352406 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 00:08:59.352643 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 00:08:59.353816 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 00:08:59.354302 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 00:08:59.359143 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 24 00:08:59.359210 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Nov 24 00:08:59.359230 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 24 00:08:59.367815 augenrules[1480]: No rules Nov 24 00:08:59.369706 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 00:08:59.370494 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 00:08:59.392657 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 24 00:08:59.408482 kernel: EDAC MC: Ver: 3.0.0 Nov 24 00:08:59.450564 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 24 00:08:59.460719 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 24 00:08:59.465455 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:08:59.491969 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 24 00:08:59.650386 systemd-networkd[1430]: lo: Link UP Nov 24 00:08:59.650707 systemd-networkd[1430]: lo: Gained carrier Nov 24 00:08:59.652528 systemd-networkd[1430]: Enumeration completed Nov 24 00:08:59.652661 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 24 00:08:59.653114 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:08:59.653176 systemd-networkd[1430]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 24 00:08:59.653862 systemd-networkd[1430]: eth0: Link UP Nov 24 00:08:59.654127 systemd-networkd[1430]: eth0: Gained carrier Nov 24 00:08:59.654186 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:08:59.688972 systemd-resolved[1432]: Positive Trust Anchors: Nov 24 00:08:59.690552 systemd-resolved[1432]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 24 00:08:59.690583 systemd-resolved[1432]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 24 00:08:59.695532 systemd-resolved[1432]: Defaulting to hostname 'linux'. Nov 24 00:08:59.725754 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 24 00:08:59.726607 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 24 00:08:59.727719 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:08:59.729036 systemd[1]: Reached target network.target - Network. Nov 24 00:08:59.729861 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 24 00:08:59.730722 systemd[1]: Reached target sysinit.target - System Initialization. Nov 24 00:08:59.731642 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Nov 24 00:08:59.732454 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 24 00:08:59.733362 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 24 00:08:59.734124 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 24 00:08:59.735090 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 24 00:08:59.735130 systemd[1]: Reached target paths.target - Path Units. Nov 24 00:08:59.735816 systemd[1]: Reached target time-set.target - System Time Set. Nov 24 00:08:59.736715 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 24 00:08:59.737816 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 24 00:08:59.738580 systemd[1]: Reached target timers.target - Timer Units. Nov 24 00:08:59.740662 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 24 00:08:59.742738 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 24 00:08:59.745335 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 24 00:08:59.746231 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 24 00:08:59.747212 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 24 00:08:59.750029 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 24 00:08:59.751095 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 24 00:08:59.753110 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 24 00:08:59.756569 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 24 00:08:59.757865 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 24 00:08:59.781242 systemd[1]: Reached target sockets.target - Socket Units. Nov 24 00:08:59.781938 systemd[1]: Reached target basic.target - Basic System. Nov 24 00:08:59.782776 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 24 00:08:59.782809 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 24 00:08:59.785480 systemd[1]: Starting containerd.service - containerd container runtime... Nov 24 00:08:59.790591 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 24 00:08:59.792749 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 24 00:08:59.797013 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 24 00:08:59.803115 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 24 00:08:59.805701 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 24 00:08:59.806697 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 24 00:08:59.808150 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 24 00:08:59.813663 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Nov 24 00:08:59.819987 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 24 00:08:59.821253 jq[1516]: false Nov 24 00:08:59.832383 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 24 00:08:59.834685 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 24 00:08:59.841263 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Refreshing passwd entry cache Nov 24 00:08:59.841520 oslogin_cache_refresh[1518]: Refreshing passwd entry cache Nov 24 00:08:59.845537 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Failure getting users, quitting Nov 24 00:08:59.845599 oslogin_cache_refresh[1518]: Failure getting users, quitting Nov 24 00:08:59.845661 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 24 00:08:59.845703 oslogin_cache_refresh[1518]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 24 00:08:59.845763 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 24 00:08:59.845848 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Refreshing group entry cache Nov 24 00:08:59.845882 oslogin_cache_refresh[1518]: Refreshing group entry cache Nov 24 00:08:59.848283 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Failure getting groups, quitting Nov 24 00:08:59.848283 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 24 00:08:59.847518 oslogin_cache_refresh[1518]: Failure getting groups, quitting Nov 24 00:08:59.848307 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 24 00:08:59.847529 oslogin_cache_refresh[1518]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 24 00:08:59.848766 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 24 00:08:59.851692 systemd[1]: Starting update-engine.service - Update Engine... Nov 24 00:08:59.854750 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 24 00:08:59.859606 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 24 00:08:59.860996 extend-filesystems[1517]: Found /dev/sda6 Nov 24 00:08:59.861649 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 24 00:08:59.866063 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 24 00:08:59.867538 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 24 00:08:59.868188 jq[1536]: true Nov 24 00:08:59.867902 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 24 00:08:59.868117 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 24 00:08:59.869182 systemd[1]: motdgen.service: Deactivated successfully. Nov 24 00:08:59.869886 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 24 00:08:59.882615 extend-filesystems[1517]: Found /dev/sda9 Nov 24 00:08:59.884389 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 24 00:08:59.885713 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Nov 24 00:08:59.901514 extend-filesystems[1517]: Checking size of /dev/sda9 Nov 24 00:08:59.913042 jq[1540]: true Nov 24 00:08:59.921700 update_engine[1535]: I20251124 00:08:59.920918 1535 main.cc:92] Flatcar Update Engine starting Nov 24 00:08:59.933901 extend-filesystems[1517]: Resized partition /dev/sda9 Nov 24 00:08:59.935460 (ntainerd)[1554]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 24 00:08:59.942794 tar[1539]: linux-amd64/LICENSE Nov 24 00:08:59.942794 tar[1539]: linux-amd64/helm Nov 24 00:08:59.950837 extend-filesystems[1563]: resize2fs 1.47.3 (8-Jul-2025) Nov 24 00:08:59.971736 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks Nov 24 00:08:59.971819 coreos-metadata[1513]: Nov 24 00:08:59.971 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Nov 24 00:08:59.969604 systemd-logind[1530]: Watching system buttons on /dev/input/event2 (Power Button) Nov 24 00:08:59.969629 systemd-logind[1530]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 24 00:08:59.970632 systemd-logind[1530]: New seat seat0. Nov 24 00:08:59.972556 systemd[1]: Started systemd-logind.service - User Login Management. Nov 24 00:08:59.995599 dbus-daemon[1514]: [system] SELinux support is enabled Nov 24 00:08:59.996453 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 24 00:09:00.000358 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 24 00:09:00.000391 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 24 00:09:00.002093 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 24 00:09:00.002116 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 24 00:09:00.012188 dbus-daemon[1514]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 24 00:09:00.018062 systemd[1]: Started update-engine.service - Update Engine. Nov 24 00:09:00.020905 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 24 00:09:00.030299 update_engine[1535]: I20251124 00:09:00.020733 1535 update_check_scheduler.cc:74] Next update check in 8m12s Nov 24 00:09:00.054180 bash[1579]: Updated "/home/core/.ssh/authorized_keys" Nov 24 00:09:00.055681 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 24 00:09:00.061653 systemd[1]: Starting sshkeys.service... Nov 24 00:09:00.104237 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 24 00:09:00.107784 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 24 00:09:00.278319 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Nov 24 00:09:00.289749 extend-filesystems[1563]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 24 00:09:00.289749 extend-filesystems[1563]: old_desc_blocks = 1, new_desc_blocks = 10 Nov 24 00:09:00.289749 extend-filesystems[1563]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. 
Nov 24 00:09:00.296439 extend-filesystems[1517]: Resized filesystem in /dev/sda9 Nov 24 00:09:00.291957 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 24 00:09:00.293414 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 24 00:09:00.296187 locksmithd[1581]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 24 00:09:00.328048 coreos-metadata[1584]: Nov 24 00:09:00.327 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Nov 24 00:09:00.354267 containerd[1554]: time="2025-11-24T00:09:00Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 24 00:09:00.357478 containerd[1554]: time="2025-11-24T00:09:00.356499200Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 24 00:09:00.373280 containerd[1554]: time="2025-11-24T00:09:00.373258040Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.12µs" Nov 24 00:09:00.374778 containerd[1554]: time="2025-11-24T00:09:00.374760270Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 24 00:09:00.376399 containerd[1554]: time="2025-11-24T00:09:00.375492030Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 24 00:09:00.376399 containerd[1554]: time="2025-11-24T00:09:00.375643620Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 24 00:09:00.376399 containerd[1554]: time="2025-11-24T00:09:00.375657470Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 24 00:09:00.376399 containerd[1554]: time="2025-11-24T00:09:00.375714450Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 24 00:09:00.376399 containerd[1554]: time="2025-11-24T00:09:00.375780930Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 24 00:09:00.376399 containerd[1554]: time="2025-11-24T00:09:00.375791590Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 24 00:09:00.376399 containerd[1554]: time="2025-11-24T00:09:00.376002690Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 24 00:09:00.376399 containerd[1554]: time="2025-11-24T00:09:00.376015310Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 24 00:09:00.376399 containerd[1554]: time="2025-11-24T00:09:00.376024790Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 24 00:09:00.376399 containerd[1554]: time="2025-11-24T00:09:00.376031820Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 24 00:09:00.376399 containerd[1554]: time="2025-11-24T00:09:00.376114130Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 24 00:09:00.376399 containerd[1554]: time="2025-11-24T00:09:00.376332950Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 24 00:09:00.376653 containerd[1554]: time="2025-11-24T00:09:00.376362260Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 24 00:09:00.376653 containerd[1554]: time="2025-11-24T00:09:00.376370470Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 24 00:09:00.378698 containerd[1554]: time="2025-11-24T00:09:00.378494560Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 24 00:09:00.379049 containerd[1554]: time="2025-11-24T00:09:00.378806760Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 24 00:09:00.379049 containerd[1554]: time="2025-11-24T00:09:00.378877720Z" level=info msg="metadata content store policy set" policy=shared Nov 24 00:09:00.381725 containerd[1554]: time="2025-11-24T00:09:00.381707280Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 24 00:09:00.381806 containerd[1554]: time="2025-11-24T00:09:00.381793360Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 24 00:09:00.381896 containerd[1554]: time="2025-11-24T00:09:00.381882660Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 24 00:09:00.383848 containerd[1554]: time="2025-11-24T00:09:00.383481820Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 24 00:09:00.383848 containerd[1554]: time="2025-11-24T00:09:00.383500990Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 24 00:09:00.383848 containerd[1554]: time="2025-11-24T00:09:00.383511770Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 24 00:09:00.383848 containerd[1554]: time="2025-11-24T00:09:00.383528700Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 24 00:09:00.383848 containerd[1554]: time="2025-11-24T00:09:00.383539330Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 24 00:09:00.383848 containerd[1554]: time="2025-11-24T00:09:00.383547670Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 24 00:09:00.383848 containerd[1554]: time="2025-11-24T00:09:00.383556170Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 24 00:09:00.383848 containerd[1554]: time="2025-11-24T00:09:00.383563630Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 24 00:09:00.383848 containerd[1554]: time="2025-11-24T00:09:00.383573740Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 24 00:09:00.383848 containerd[1554]: time="2025-11-24T00:09:00.383671000Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service 
type=io.containerd.service.v1 Nov 24 00:09:00.383848 containerd[1554]: time="2025-11-24T00:09:00.383687040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 24 00:09:00.383848 containerd[1554]: time="2025-11-24T00:09:00.383705530Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 24 00:09:00.383848 containerd[1554]: time="2025-11-24T00:09:00.383716200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 24 00:09:00.383848 containerd[1554]: time="2025-11-24T00:09:00.383725170Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 24 00:09:00.384079 containerd[1554]: time="2025-11-24T00:09:00.383735060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 24 00:09:00.384079 containerd[1554]: time="2025-11-24T00:09:00.383744860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 24 00:09:00.384079 containerd[1554]: time="2025-11-24T00:09:00.383754380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 24 00:09:00.384079 containerd[1554]: time="2025-11-24T00:09:00.383763520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 24 00:09:00.384079 containerd[1554]: time="2025-11-24T00:09:00.383772640Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 24 00:09:00.384079 containerd[1554]: time="2025-11-24T00:09:00.383781600Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 24 00:09:00.384079 containerd[1554]: time="2025-11-24T00:09:00.383815530Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 24 00:09:00.384079 containerd[1554]: time="2025-11-24T00:09:00.383826190Z" level=info msg="Start snapshots syncer" Nov 24 00:09:00.384698 containerd[1554]: time="2025-11-24T00:09:00.384236290Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 24 00:09:00.384698 containerd[1554]: time="2025-11-24T00:09:00.384515510Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 24 00:09:00.384833 containerd[1554]: time="2025-11-24T00:09:00.384556310Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 24 00:09:00.384833 containerd[1554]: time="2025-11-24T00:09:00.384612780Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 24 00:09:00.385122 containerd[1554]: time="2025-11-24T00:09:00.384939350Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 24 00:09:00.385122 containerd[1554]: time="2025-11-24T00:09:00.384963310Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 24 00:09:00.385122 containerd[1554]: time="2025-11-24T00:09:00.384973040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 24 00:09:00.385122 containerd[1554]: time="2025-11-24T00:09:00.384981340Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 24 00:09:00.385122 containerd[1554]: time="2025-11-24T00:09:00.384991550Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 24 00:09:00.385122 containerd[1554]: time="2025-11-24T00:09:00.385001190Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 24 00:09:00.385122 containerd[1554]: time="2025-11-24T00:09:00.385016740Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 24 00:09:00.385122 containerd[1554]: time="2025-11-24T00:09:00.385034350Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 24 00:09:00.385122 containerd[1554]: 
time="2025-11-24T00:09:00.385043440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 24 00:09:00.385122 containerd[1554]: time="2025-11-24T00:09:00.385052310Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 24 00:09:00.385370 containerd[1554]: time="2025-11-24T00:09:00.385354330Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 00:09:00.385483 containerd[1554]: time="2025-11-24T00:09:00.385454190Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 00:09:00.385529 containerd[1554]: time="2025-11-24T00:09:00.385517700Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 00:09:00.385571 containerd[1554]: time="2025-11-24T00:09:00.385559810Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 00:09:00.385607 containerd[1554]: time="2025-11-24T00:09:00.385597790Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 24 00:09:00.385660 containerd[1554]: time="2025-11-24T00:09:00.385647510Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 24 00:09:00.385709 containerd[1554]: time="2025-11-24T00:09:00.385697960Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 24 00:09:00.385756 containerd[1554]: time="2025-11-24T00:09:00.385746330Z" level=info msg="runtime interface created" Nov 24 00:09:00.385791 containerd[1554]: time="2025-11-24T00:09:00.385782190Z" level=info msg="created NRI interface" Nov 24 00:09:00.385828 containerd[1554]: time="2025-11-24T00:09:00.385817860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 24 00:09:00.385866 containerd[1554]: time="2025-11-24T00:09:00.385857200Z" level=info msg="Connect containerd service" Nov 24 00:09:00.385912 containerd[1554]: time="2025-11-24T00:09:00.385902750Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 24 00:09:00.389293 containerd[1554]: time="2025-11-24T00:09:00.388919570Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 24 00:09:00.451840 systemd-networkd[1430]: eth0: DHCPv4 address 172.237.134.153/24, gateway 172.237.134.1 acquired from 23.205.167.181 Nov 24 00:09:00.453247 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection. Nov 24 00:09:00.454625 dbus-daemon[1514]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1430 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 24 00:09:00.463581 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Nov 24 00:09:00.531221 containerd[1554]: time="2025-11-24T00:09:00.531177000Z" level=info msg="Start subscribing containerd event" Nov 24 00:09:00.531482 containerd[1554]: time="2025-11-24T00:09:00.531429540Z" level=info msg="Start recovering state" Nov 24 00:09:00.534483 containerd[1554]: time="2025-11-24T00:09:00.532066760Z" level=info msg="Start event monitor" Nov 24 00:09:00.534483 containerd[1554]: time="2025-11-24T00:09:00.532524840Z" level=info msg="Start cni network conf syncer for default" Nov 24 00:09:00.534483 containerd[1554]: time="2025-11-24T00:09:00.532533960Z" level=info msg="Start streaming server" Nov 24 00:09:00.534483 containerd[1554]: time="2025-11-24T00:09:00.532543190Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 24 00:09:00.534483 containerd[1554]: time="2025-11-24T00:09:00.532550210Z" level=info msg="runtime interface starting up..." Nov 24 00:09:00.534483 containerd[1554]: time="2025-11-24T00:09:00.532555950Z" level=info msg="starting plugins..." Nov 24 00:09:00.534483 containerd[1554]: time="2025-11-24T00:09:00.533650920Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 24 00:09:00.534483 containerd[1554]: time="2025-11-24T00:09:00.534073750Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 24 00:09:00.534715 containerd[1554]: time="2025-11-24T00:09:00.534606890Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 24 00:09:00.535399 systemd[1]: Started containerd.service - containerd container runtime. Nov 24 00:09:00.536234 containerd[1554]: time="2025-11-24T00:09:00.536207580Z" level=info msg="containerd successfully booted in 0.185102s" Nov 24 00:09:00.555503 tar[1539]: linux-amd64/README.md Nov 24 00:09:00.575856 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 24 00:09:00.590677 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 24 00:09:00.591872 dbus-daemon[1514]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 24 00:09:00.593202 dbus-daemon[1514]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1611 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 24 00:09:00.600784 systemd[1]: Starting polkit.service - Authorization Manager... Nov 24 00:09:00.670389 polkitd[1621]: Started polkitd version 126 Nov 24 00:09:00.674174 polkitd[1621]: Loading rules from directory /etc/polkit-1/rules.d Nov 24 00:09:00.674425 polkitd[1621]: Loading rules from directory /run/polkit-1/rules.d Nov 24 00:09:00.674492 polkitd[1621]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 24 00:09:00.674696 polkitd[1621]: Loading rules from directory /usr/local/share/polkit-1/rules.d Nov 24 00:09:00.674722 polkitd[1621]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 24 00:09:00.674756 polkitd[1621]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 24 00:09:00.675652 polkitd[1621]: Finished loading, compiling and executing 2 rules Nov 24 00:09:00.675868 systemd[1]: Started polkit.service - Authorization Manager. 
Nov 24 00:09:00.676182 dbus-daemon[1514]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 24 00:09:00.678134 polkitd[1621]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 24 00:09:00.688058 systemd-hostnamed[1611]: Hostname set to <172-237-134-153> (transient) Nov 24 00:09:00.688163 systemd-resolved[1432]: System hostname changed to '172-237-134-153'. Nov 24 00:09:00.747689 systemd-timesyncd[1454]: Contacted time server 74.6.168.72:123 (0.flatcar.pool.ntp.org). Nov 24 00:09:00.747963 systemd-timesyncd[1454]: Initial clock synchronization to Mon 2025-11-24 00:09:00.559601 UTC. Nov 24 00:09:00.833451 sshd_keygen[1562]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 24 00:09:00.856351 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 24 00:09:00.859446 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 24 00:09:00.874909 systemd[1]: issuegen.service: Deactivated successfully. Nov 24 00:09:00.875196 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 24 00:09:00.877931 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 24 00:09:00.899428 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 24 00:09:00.902522 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 24 00:09:00.905760 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 24 00:09:00.906683 systemd[1]: Reached target getty.target - Login Prompts. Nov 24 00:09:00.981266 coreos-metadata[1513]: Nov 24 00:09:00.981 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Nov 24 00:09:01.072081 coreos-metadata[1513]: Nov 24 00:09:01.072 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Nov 24 00:09:01.260326 coreos-metadata[1513]: Nov 24 00:09:01.260 INFO Fetch successful Nov 24 00:09:01.260326 coreos-metadata[1513]: Nov 24 00:09:01.260 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Nov 24 00:09:01.336567 coreos-metadata[1584]: Nov 24 00:09:01.336 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Nov 24 00:09:01.422653 coreos-metadata[1584]: Nov 24 00:09:01.422 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Nov 24 00:09:01.512346 coreos-metadata[1513]: Nov 24 00:09:01.512 INFO Fetch successful Nov 24 00:09:01.555227 coreos-metadata[1584]: Nov 24 00:09:01.555 INFO Fetch successful Nov 24 00:09:01.571625 update-ssh-keys[1658]: Updated "/home/core/.ssh/authorized_keys" Nov 24 00:09:01.572848 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 24 00:09:01.579934 systemd[1]: Finished sshkeys.service. Nov 24 00:09:01.619058 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 24 00:09:01.620428 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 24 00:09:01.633577 systemd-networkd[1430]: eth0: Gained IPv6LL Nov 24 00:09:01.635785 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 24 00:09:01.636928 systemd[1]: Reached target network-online.target - Network is Online. Nov 24 00:09:01.639210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:09:01.641765 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 24 00:09:01.665519 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Nov 24 00:09:02.502239 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:09:02.503612 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 24 00:09:02.506589 systemd[1]: Startup finished in 2.845s (kernel) + 8.320s (initrd) + 5.593s (userspace) = 16.760s. Nov 24 00:09:02.509984 (kubelet)[1690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:09:03.008441 kubelet[1690]: E1124 00:09:03.008382 1690 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:09:03.011700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:09:03.011889 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:09:03.012237 systemd[1]: kubelet.service: Consumed 883ms CPU time, 266.2M memory peak. Nov 24 00:09:03.624620 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 24 00:09:03.625944 systemd[1]: Started sshd@0-172.237.134.153:22-147.75.109.163:59370.service - OpenSSH per-connection server daemon (147.75.109.163:59370). Nov 24 00:09:03.972152 sshd[1703]: Accepted publickey for core from 147.75.109.163 port 59370 ssh2: RSA SHA256:Pchp35vWTs9Zdpru8qSkjaQDXNtKYOQijK11UPKUQY8 Nov 24 00:09:03.978668 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:09:03.985564 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 24 00:09:03.986905 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 24 00:09:03.995563 systemd-logind[1530]: New session 1 of user core. Nov 24 00:09:04.006928 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 24 00:09:04.010186 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 24 00:09:04.022999 (systemd)[1708]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 24 00:09:04.025734 systemd-logind[1530]: New session c1 of user core. Nov 24 00:09:04.174376 systemd[1708]: Queued start job for default target default.target. Nov 24 00:09:04.186788 systemd[1708]: Created slice app.slice - User Application Slice. Nov 24 00:09:04.186817 systemd[1708]: Reached target paths.target - Paths. Nov 24 00:09:04.186862 systemd[1708]: Reached target timers.target - Timers. Nov 24 00:09:04.188327 systemd[1708]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 24 00:09:04.200051 systemd[1708]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 24 00:09:04.200110 systemd[1708]: Reached target sockets.target - Sockets. Nov 24 00:09:04.200152 systemd[1708]: Reached target basic.target - Basic System. Nov 24 00:09:04.200221 systemd[1708]: Reached target default.target - Main User Target. Nov 24 00:09:04.200259 systemd[1708]: Startup finished in 167ms. Nov 24 00:09:04.200549 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 24 00:09:04.209580 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 24 00:09:04.473614 systemd[1]: Started sshd@1-172.237.134.153:22-147.75.109.163:59374.service - OpenSSH per-connection server daemon (147.75.109.163:59374). 
Nov 24 00:09:04.823061 sshd[1719]: Accepted publickey for core from 147.75.109.163 port 59374 ssh2: RSA SHA256:Pchp35vWTs9Zdpru8qSkjaQDXNtKYOQijK11UPKUQY8 Nov 24 00:09:04.824968 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:09:04.830805 systemd-logind[1530]: New session 2 of user core. Nov 24 00:09:04.835764 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 24 00:09:05.067483 sshd[1722]: Connection closed by 147.75.109.163 port 59374 Nov 24 00:09:05.068103 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Nov 24 00:09:05.072856 systemd-logind[1530]: Session 2 logged out. Waiting for processes to exit. Nov 24 00:09:05.073680 systemd[1]: sshd@1-172.237.134.153:22-147.75.109.163:59374.service: Deactivated successfully. Nov 24 00:09:05.075806 systemd[1]: session-2.scope: Deactivated successfully. Nov 24 00:09:05.077827 systemd-logind[1530]: Removed session 2. Nov 24 00:09:05.128102 systemd[1]: Started sshd@2-172.237.134.153:22-147.75.109.163:59388.service - OpenSSH per-connection server daemon (147.75.109.163:59388). Nov 24 00:09:05.455157 sshd[1728]: Accepted publickey for core from 147.75.109.163 port 59388 ssh2: RSA SHA256:Pchp35vWTs9Zdpru8qSkjaQDXNtKYOQijK11UPKUQY8 Nov 24 00:09:05.456985 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:09:05.463223 systemd-logind[1530]: New session 3 of user core. Nov 24 00:09:05.467632 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 24 00:09:05.685803 sshd[1731]: Connection closed by 147.75.109.163 port 59388 Nov 24 00:09:05.686654 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Nov 24 00:09:05.692677 systemd[1]: sshd@2-172.237.134.153:22-147.75.109.163:59388.service: Deactivated successfully. Nov 24 00:09:05.696029 systemd[1]: session-3.scope: Deactivated successfully. Nov 24 00:09:05.697206 systemd-logind[1530]: Session 3 logged out. Waiting for processes to exit. Nov 24 00:09:05.699399 systemd-logind[1530]: Removed session 3. Nov 24 00:09:05.748299 systemd[1]: Started sshd@3-172.237.134.153:22-147.75.109.163:59400.service - OpenSSH per-connection server daemon (147.75.109.163:59400). Nov 24 00:09:06.081773 sshd[1737]: Accepted publickey for core from 147.75.109.163 port 59400 ssh2: RSA SHA256:Pchp35vWTs9Zdpru8qSkjaQDXNtKYOQijK11UPKUQY8 Nov 24 00:09:06.083347 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:09:06.088364 systemd-logind[1530]: New session 4 of user core. Nov 24 00:09:06.094566 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 24 00:09:06.323641 sshd[1740]: Connection closed by 147.75.109.163 port 59400 Nov 24 00:09:06.324191 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Nov 24 00:09:06.328254 systemd-logind[1530]: Session 4 logged out. Waiting for processes to exit. Nov 24 00:09:06.328880 systemd[1]: sshd@3-172.237.134.153:22-147.75.109.163:59400.service: Deactivated successfully. Nov 24 00:09:06.330793 systemd[1]: session-4.scope: Deactivated successfully. Nov 24 00:09:06.332315 systemd-logind[1530]: Removed session 4. Nov 24 00:09:06.379377 systemd[1]: Started sshd@4-172.237.134.153:22-147.75.109.163:59406.service - OpenSSH per-connection server daemon (147.75.109.163:59406). 
Nov 24 00:09:06.712671 sshd[1746]: Accepted publickey for core from 147.75.109.163 port 59406 ssh2: RSA SHA256:Pchp35vWTs9Zdpru8qSkjaQDXNtKYOQijK11UPKUQY8 Nov 24 00:09:06.714186 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:09:06.719668 systemd-logind[1530]: New session 5 of user core. Nov 24 00:09:06.726590 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 24 00:09:06.916098 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 24 00:09:06.916497 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:09:06.933641 sudo[1750]: pam_unix(sudo:session): session closed for user root Nov 24 00:09:06.983149 sshd[1749]: Connection closed by 147.75.109.163 port 59406 Nov 24 00:09:06.984396 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Nov 24 00:09:06.989596 systemd-logind[1530]: Session 5 logged out. Waiting for processes to exit. Nov 24 00:09:06.989845 systemd[1]: sshd@4-172.237.134.153:22-147.75.109.163:59406.service: Deactivated successfully. Nov 24 00:09:06.992002 systemd[1]: session-5.scope: Deactivated successfully. Nov 24 00:09:06.993354 systemd-logind[1530]: Removed session 5. Nov 24 00:09:07.038098 systemd[1]: Started sshd@5-172.237.134.153:22-147.75.109.163:59414.service - OpenSSH per-connection server daemon (147.75.109.163:59414). Nov 24 00:09:07.370391 sshd[1756]: Accepted publickey for core from 147.75.109.163 port 59414 ssh2: RSA SHA256:Pchp35vWTs9Zdpru8qSkjaQDXNtKYOQijK11UPKUQY8 Nov 24 00:09:07.372006 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:09:07.378574 systemd-logind[1530]: New session 6 of user core. Nov 24 00:09:07.384611 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 24 00:09:07.557763 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 24 00:09:07.558115 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:09:07.568484 sudo[1761]: pam_unix(sudo:session): session closed for user root Nov 24 00:09:07.575587 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 24 00:09:07.575905 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:09:07.586347 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 00:09:07.626123 augenrules[1783]: No rules Nov 24 00:09:07.627058 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 00:09:07.627588 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 00:09:07.629512 sudo[1760]: pam_unix(sudo:session): session closed for user root Nov 24 00:09:07.676826 sshd[1759]: Connection closed by 147.75.109.163 port 59414 Nov 24 00:09:07.677240 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Nov 24 00:09:07.681899 systemd[1]: sshd@5-172.237.134.153:22-147.75.109.163:59414.service: Deactivated successfully. Nov 24 00:09:07.684609 systemd[1]: session-6.scope: Deactivated successfully. Nov 24 00:09:07.687255 systemd-logind[1530]: Session 6 logged out. Waiting for processes to exit. Nov 24 00:09:07.689003 systemd-logind[1530]: Removed session 6. Nov 24 00:09:07.746936 systemd[1]: Started sshd@6-172.237.134.153:22-147.75.109.163:59422.service - OpenSSH per-connection server daemon (147.75.109.163:59422). 
Nov 24 00:09:08.071339 sshd[1792]: Accepted publickey for core from 147.75.109.163 port 59422 ssh2: RSA SHA256:Pchp35vWTs9Zdpru8qSkjaQDXNtKYOQijK11UPKUQY8 Nov 24 00:09:08.073448 sshd-session[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:09:08.080515 systemd-logind[1530]: New session 7 of user core. Nov 24 00:09:08.085612 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 24 00:09:08.262097 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 24 00:09:08.262537 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:09:08.550895 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 24 00:09:08.569825 (dockerd)[1814]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 24 00:09:08.772254 dockerd[1814]: time="2025-11-24T00:09:08.772189297Z" level=info msg="Starting up" Nov 24 00:09:08.772970 dockerd[1814]: time="2025-11-24T00:09:08.772933873Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 24 00:09:08.785496 dockerd[1814]: time="2025-11-24T00:09:08.785427772Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 24 00:09:08.813333 systemd[1]: var-lib-docker-metacopy\x2dcheck2157448536-merged.mount: Deactivated successfully. Nov 24 00:09:08.834334 dockerd[1814]: time="2025-11-24T00:09:08.834293108Z" level=info msg="Loading containers: start." Nov 24 00:09:08.845496 kernel: Initializing XFRM netlink socket Nov 24 00:09:09.105564 systemd-networkd[1430]: docker0: Link UP Nov 24 00:09:09.108918 dockerd[1814]: time="2025-11-24T00:09:09.108409557Z" level=info msg="Loading containers: done." Nov 24 00:09:09.123492 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4127881137-merged.mount: Deactivated successfully. Nov 24 00:09:09.126877 dockerd[1814]: time="2025-11-24T00:09:09.126844868Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 24 00:09:09.126995 dockerd[1814]: time="2025-11-24T00:09:09.126905723Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 24 00:09:09.126995 dockerd[1814]: time="2025-11-24T00:09:09.126985821Z" level=info msg="Initializing buildkit" Nov 24 00:09:09.146701 dockerd[1814]: time="2025-11-24T00:09:09.146676612Z" level=info msg="Completed buildkit initialization" Nov 24 00:09:09.153582 dockerd[1814]: time="2025-11-24T00:09:09.153546781Z" level=info msg="Daemon has completed initialization" Nov 24 00:09:09.153881 dockerd[1814]: time="2025-11-24T00:09:09.153854377Z" level=info msg="API listen on /run/docker.sock" Nov 24 00:09:09.154055 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 24 00:09:09.978101 containerd[1554]: time="2025-11-24T00:09:09.978064683Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\"" Nov 24 00:09:10.738041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3977701097.mount: Deactivated successfully. 
Nov 24 00:09:11.789648 containerd[1554]: time="2025-11-24T00:09:11.789587434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:11.790664 containerd[1554]: time="2025-11-24T00:09:11.790475629Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.6: active requests=0, bytes read=30113213" Nov 24 00:09:11.791173 containerd[1554]: time="2025-11-24T00:09:11.791148415Z" level=info msg="ImageCreate event name:\"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:11.793104 containerd[1554]: time="2025-11-24T00:09:11.793084015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:11.795317 containerd[1554]: time="2025-11-24T00:09:11.795285780Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.6\" with image id \"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\", size \"30109812\" in 1.817188679s" Nov 24 00:09:11.795367 containerd[1554]: time="2025-11-24T00:09:11.795321925Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\" returns image reference \"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\"" Nov 24 00:09:11.796671 containerd[1554]: time="2025-11-24T00:09:11.796651485Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\"" Nov 24 00:09:13.194747 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 24 00:09:13.197722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 24 00:09:13.219945 containerd[1554]: time="2025-11-24T00:09:13.219904561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:13.221017 containerd[1554]: time="2025-11-24T00:09:13.220992916Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.6: active requests=0, bytes read=26018107" Nov 24 00:09:13.221790 containerd[1554]: time="2025-11-24T00:09:13.221762685Z" level=info msg="ImageCreate event name:\"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:13.224106 containerd[1554]: time="2025-11-24T00:09:13.224070003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:13.225922 containerd[1554]: time="2025-11-24T00:09:13.225823136Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.6\" with image id \"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\", size \"27675143\" in 1.429146201s" Nov 24 00:09:13.225922 containerd[1554]: time="2025-11-24T00:09:13.225848585Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\" returns image reference \"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\"" Nov 24 00:09:13.226964 containerd[1554]: time="2025-11-24T00:09:13.226921911Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\"" Nov 24 00:09:13.401428 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:09:13.413801 (kubelet)[2094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:09:13.452412 kubelet[2094]: E1124 00:09:13.452245 2094 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:09:13.457867 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:09:13.458091 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:09:13.459149 systemd[1]: kubelet.service: Consumed 202ms CPU time, 110.9M memory peak. 
Nov 24 00:09:14.428037 containerd[1554]: time="2025-11-24T00:09:14.427972042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:14.429335 containerd[1554]: time="2025-11-24T00:09:14.429113585Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.6: active requests=0, bytes read=20156482" Nov 24 00:09:14.429890 containerd[1554]: time="2025-11-24T00:09:14.429863417Z" level=info msg="ImageCreate event name:\"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:14.432020 containerd[1554]: time="2025-11-24T00:09:14.431995073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:14.433113 containerd[1554]: time="2025-11-24T00:09:14.433089302Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.6\" with image id \"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\", size \"21813536\" in 1.205973767s" Nov 24 00:09:14.433177 containerd[1554]: time="2025-11-24T00:09:14.433164320Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\" returns image reference \"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\"" Nov 24 00:09:14.433996 containerd[1554]: time="2025-11-24T00:09:14.433945681Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\"" Nov 24 00:09:15.531923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1567458718.mount: Deactivated successfully. 
Nov 24 00:09:15.875254 containerd[1554]: time="2025-11-24T00:09:15.874989470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:15.875991 containerd[1554]: time="2025-11-24T00:09:15.875682270Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.6: active requests=0, bytes read=31929138" Nov 24 00:09:15.876321 containerd[1554]: time="2025-11-24T00:09:15.876289382Z" level=info msg="ImageCreate event name:\"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:15.877518 containerd[1554]: time="2025-11-24T00:09:15.877481696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:15.878298 containerd[1554]: time="2025-11-24T00:09:15.877995796Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.6\" with image id \"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\", size \"31928157\" in 1.444024797s" Nov 24 00:09:15.878298 containerd[1554]: time="2025-11-24T00:09:15.878028328Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\" returns image reference \"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\"" Nov 24 00:09:15.878581 containerd[1554]: time="2025-11-24T00:09:15.878554953Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 24 00:09:16.479331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2309305796.mount: Deactivated successfully. 
Nov 24 00:09:17.141795 containerd[1554]: time="2025-11-24T00:09:17.141741796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:17.142662 containerd[1554]: time="2025-11-24T00:09:17.142601414Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Nov 24 00:09:17.143331 containerd[1554]: time="2025-11-24T00:09:17.143308009Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:17.145269 containerd[1554]: time="2025-11-24T00:09:17.145236194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:17.146360 containerd[1554]: time="2025-11-24T00:09:17.146083547Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.267499802s" Nov 24 00:09:17.146360 containerd[1554]: time="2025-11-24T00:09:17.146110811Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 24 00:09:17.146762 containerd[1554]: time="2025-11-24T00:09:17.146733489Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 24 00:09:17.745503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2346027051.mount: Deactivated successfully. 
Nov 24 00:09:17.749836 containerd[1554]: time="2025-11-24T00:09:17.749766331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:09:17.750536 containerd[1554]: time="2025-11-24T00:09:17.750498195Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 24 00:09:17.751102 containerd[1554]: time="2025-11-24T00:09:17.751034722Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:09:17.752643 containerd[1554]: time="2025-11-24T00:09:17.752602042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:09:17.753367 containerd[1554]: time="2025-11-24T00:09:17.753186037Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 606.336342ms" Nov 24 00:09:17.753367 containerd[1554]: time="2025-11-24T00:09:17.753212603Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 24 00:09:17.754048 containerd[1554]: time="2025-11-24T00:09:17.754025302Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 24 00:09:18.381665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3621550278.mount: Deactivated successfully. 
Nov 24 00:09:19.800868 containerd[1554]: time="2025-11-24T00:09:19.800802495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:19.801857 containerd[1554]: time="2025-11-24T00:09:19.801613379Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Nov 24 00:09:19.802327 containerd[1554]: time="2025-11-24T00:09:19.802301444Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:19.804515 containerd[1554]: time="2025-11-24T00:09:19.804491393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:19.805434 containerd[1554]: time="2025-11-24T00:09:19.805410736Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.051361397s" Nov 24 00:09:19.805530 containerd[1554]: time="2025-11-24T00:09:19.805513577Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 24 00:09:22.281115 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:09:22.281259 systemd[1]: kubelet.service: Consumed 202ms CPU time, 110.9M memory peak. Nov 24 00:09:22.283455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:09:22.316254 systemd[1]: Reload requested from client PID 2253 ('systemctl') (unit session-7.scope)... Nov 24 00:09:22.316271 systemd[1]: Reloading... Nov 24 00:09:22.427503 zram_generator::config[2293]: No configuration found. Nov 24 00:09:22.656832 systemd[1]: Reloading finished in 340 ms. Nov 24 00:09:22.724086 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 24 00:09:22.724196 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 24 00:09:22.724514 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:09:22.724569 systemd[1]: kubelet.service: Consumed 144ms CPU time, 98.3M memory peak. Nov 24 00:09:22.726220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:09:22.910908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:09:22.920812 (kubelet)[2351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 00:09:22.966869 kubelet[2351]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:09:22.966869 kubelet[2351]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 24 00:09:22.966869 kubelet[2351]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:09:22.967216 kubelet[2351]: I1124 00:09:22.966933 2351 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 00:09:23.655762 kubelet[2351]: I1124 00:09:23.655714 2351 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 24 00:09:23.655762 kubelet[2351]: I1124 00:09:23.655742 2351 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 00:09:23.655968 kubelet[2351]: I1124 00:09:23.655951 2351 server.go:956] "Client rotation is on, will bootstrap in background" Nov 24 00:09:23.683440 kubelet[2351]: E1124 00:09:23.683001 2351 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.237.134.153:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.237.134.153:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 24 00:09:23.683440 kubelet[2351]: I1124 00:09:23.683170 2351 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 00:09:23.695168 kubelet[2351]: I1124 00:09:23.695138 2351 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 00:09:23.699472 kubelet[2351]: I1124 00:09:23.699445 2351 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 24 00:09:23.699732 kubelet[2351]: I1124 00:09:23.699692 2351 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 00:09:23.699878 kubelet[2351]: I1124 00:09:23.699724 2351 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-134-153","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 00:09:23.699878 kubelet[2351]: I1124 00:09:23.699875 2351 topology_manager.go:138] 
"Creating topology manager with none policy" Nov 24 00:09:23.700024 kubelet[2351]: I1124 00:09:23.699884 2351 container_manager_linux.go:303] "Creating device plugin manager" Nov 24 00:09:23.700024 kubelet[2351]: I1124 00:09:23.700000 2351 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:09:23.703127 kubelet[2351]: I1124 00:09:23.702804 2351 kubelet.go:480] "Attempting to sync node with API server" Nov 24 00:09:23.703127 kubelet[2351]: I1124 00:09:23.702826 2351 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 00:09:23.703127 kubelet[2351]: I1124 00:09:23.702850 2351 kubelet.go:386] "Adding apiserver pod source" Nov 24 00:09:23.703127 kubelet[2351]: I1124 00:09:23.702866 2351 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 00:09:23.709873 kubelet[2351]: E1124 00:09:23.709845 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.237.134.153:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-134-153&limit=500&resourceVersion=0\": dial tcp 172.237.134.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 24 00:09:23.710195 kubelet[2351]: I1124 00:09:23.710152 2351 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 00:09:23.710623 kubelet[2351]: I1124 00:09:23.710596 2351 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 24 00:09:23.711578 kubelet[2351]: W1124 00:09:23.711544 2351 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 24 00:09:23.715061 kubelet[2351]: I1124 00:09:23.714945 2351 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 24 00:09:23.715061 kubelet[2351]: I1124 00:09:23.714990 2351 server.go:1289] "Started kubelet" Nov 24 00:09:23.716118 kubelet[2351]: E1124 00:09:23.715690 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.237.134.153:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.237.134.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 24 00:09:23.716381 kubelet[2351]: I1124 00:09:23.716337 2351 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 00:09:23.718670 kubelet[2351]: I1124 00:09:23.717754 2351 server.go:317] "Adding debug handlers to kubelet server" Nov 24 00:09:23.718670 kubelet[2351]: I1124 00:09:23.718095 2351 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 00:09:23.718670 kubelet[2351]: I1124 00:09:23.718387 2351 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 00:09:23.719885 kubelet[2351]: E1124 00:09:23.718537 2351 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.237.134.153:6443/api/v1/namespaces/default/events\": dial tcp 172.237.134.153:6443: connect: connection refused" event="&Event{ObjectMeta:{172-237-134-153.187ac8c79c47b191 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-237-134-153,UID:172-237-134-153,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-237-134-153,},FirstTimestamp:2025-11-24 00:09:23.714961809 +0000 UTC m=+0.789720781,LastTimestamp:2025-11-24 00:09:23.714961809 +0000 UTC m=+0.789720781,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-237-134-153,}" Nov 24 00:09:23.721591 kubelet[2351]: E1124 00:09:23.721574 2351 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 00:09:23.721734 kubelet[2351]: I1124 00:09:23.721693 2351 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 00:09:23.721905 kubelet[2351]: I1124 00:09:23.721891 2351 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 00:09:23.724621 kubelet[2351]: E1124 00:09:23.724599 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-134-153\" not found" Nov 24 00:09:23.724688 kubelet[2351]: I1124 00:09:23.724679 2351 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 24 00:09:23.724908 kubelet[2351]: I1124 00:09:23.724894 2351 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 24 00:09:23.725367 kubelet[2351]: I1124 00:09:23.725355 2351 reconciler.go:26] "Reconciler: start to sync state" Nov 24 00:09:23.725777 kubelet[2351]: E1124 00:09:23.725759 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.237.134.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.237.134.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 24 00:09:23.726728 kubelet[2351]: I1124 00:09:23.726708 2351 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 00:09:23.727578 kubelet[2351]: E1124 00:09:23.727556 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.134.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-134-153?timeout=10s\": dial tcp 172.237.134.153:6443: connect: connection refused" interval="200ms" Nov 24 00:09:23.728582 kubelet[2351]: I1124 00:09:23.728568 2351 factory.go:223] Registration of the containerd container factory successfully Nov 24 00:09:23.728650 kubelet[2351]: I1124 00:09:23.728641 2351 factory.go:223] Registration of the systemd container factory successfully Nov 24 00:09:23.751318 kubelet[2351]: I1124 00:09:23.751303 2351 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 00:09:23.751398 kubelet[2351]: I1124 00:09:23.751388 2351 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 00:09:23.751481 kubelet[2351]: I1124 00:09:23.751456 2351 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:09:23.753005 kubelet[2351]: I1124 00:09:23.752992 2351 policy_none.go:49] "None policy: Start" Nov 24 00:09:23.753181 kubelet[2351]: I1124 00:09:23.753171 2351 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 24 00:09:23.753243 kubelet[2351]: I1124 00:09:23.753235 2351 state_mem.go:35] "Initializing new in-memory state store" Nov 24 00:09:23.758905 kubelet[2351]: I1124 00:09:23.758887 2351 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 24 00:09:23.759080 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 24 00:09:23.760502 kubelet[2351]: I1124 00:09:23.760264 2351 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Nov 24 00:09:23.760502 kubelet[2351]: I1124 00:09:23.760279 2351 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 24 00:09:23.760502 kubelet[2351]: I1124 00:09:23.760295 2351 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 24 00:09:23.760502 kubelet[2351]: I1124 00:09:23.760301 2351 kubelet.go:2436] "Starting kubelet main sync loop" Nov 24 00:09:23.760502 kubelet[2351]: E1124 00:09:23.760341 2351 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 00:09:23.765363 kubelet[2351]: E1124 00:09:23.765341 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.237.134.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.237.134.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 24 00:09:23.773949 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 24 00:09:23.778293 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 24 00:09:23.791548 kubelet[2351]: E1124 00:09:23.791533 2351 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 24 00:09:23.792069 kubelet[2351]: I1124 00:09:23.792057 2351 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 00:09:23.792140 kubelet[2351]: I1124 00:09:23.792114 2351 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 00:09:23.792360 kubelet[2351]: I1124 00:09:23.792348 2351 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 00:09:23.793776 kubelet[2351]: E1124 00:09:23.793758 2351 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 24 00:09:23.793874 kubelet[2351]: E1124 00:09:23.793862 2351 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-237-134-153\" not found" Nov 24 00:09:23.871487 systemd[1]: Created slice kubepods-burstable-podbb494a28f03642571dbd68855402c11a.slice - libcontainer container kubepods-burstable-podbb494a28f03642571dbd68855402c11a.slice. Nov 24 00:09:23.883671 kubelet[2351]: E1124 00:09:23.883639 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-134-153\" not found" node="172-237-134-153" Nov 24 00:09:23.887281 systemd[1]: Created slice kubepods-burstable-podb55a2e54d47d8bbc89f03d7690a517e8.slice - libcontainer container kubepods-burstable-podb55a2e54d47d8bbc89f03d7690a517e8.slice. Nov 24 00:09:23.889645 kubelet[2351]: E1124 00:09:23.889615 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-134-153\" not found" node="172-237-134-153" Nov 24 00:09:23.893001 systemd[1]: Created slice kubepods-burstable-pod25de9067c6cc9e66ffd5ecc70e8ace21.slice - libcontainer container kubepods-burstable-pod25de9067c6cc9e66ffd5ecc70e8ace21.slice. 
Nov 24 00:09:23.894144 kubelet[2351]: I1124 00:09:23.894118 2351 kubelet_node_status.go:75] "Attempting to register node" node="172-237-134-153" Nov 24 00:09:23.894498 kubelet[2351]: E1124 00:09:23.894433 2351 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.134.153:6443/api/v1/nodes\": dial tcp 172.237.134.153:6443: connect: connection refused" node="172-237-134-153" Nov 24 00:09:23.895063 kubelet[2351]: E1124 00:09:23.895036 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-134-153\" not found" node="172-237-134-153" Nov 24 00:09:23.926369 kubelet[2351]: I1124 00:09:23.926256 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bb494a28f03642571dbd68855402c11a-flexvolume-dir\") pod \"kube-controller-manager-172-237-134-153\" (UID: \"bb494a28f03642571dbd68855402c11a\") " pod="kube-system/kube-controller-manager-172-237-134-153" Nov 24 00:09:23.926369 kubelet[2351]: I1124 00:09:23.926283 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bb494a28f03642571dbd68855402c11a-k8s-certs\") pod \"kube-controller-manager-172-237-134-153\" (UID: \"bb494a28f03642571dbd68855402c11a\") " pod="kube-system/kube-controller-manager-172-237-134-153" Nov 24 00:09:23.926369 kubelet[2351]: I1124 00:09:23.926302 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bb494a28f03642571dbd68855402c11a-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-134-153\" (UID: \"bb494a28f03642571dbd68855402c11a\") " pod="kube-system/kube-controller-manager-172-237-134-153" Nov 24 00:09:23.926369 kubelet[2351]: I1124 00:09:23.926320 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b55a2e54d47d8bbc89f03d7690a517e8-kubeconfig\") pod \"kube-scheduler-172-237-134-153\" (UID: \"b55a2e54d47d8bbc89f03d7690a517e8\") " pod="kube-system/kube-scheduler-172-237-134-153" Nov 24 00:09:23.926369 kubelet[2351]: I1124 00:09:23.926334 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/25de9067c6cc9e66ffd5ecc70e8ace21-ca-certs\") pod \"kube-apiserver-172-237-134-153\" (UID: \"25de9067c6cc9e66ffd5ecc70e8ace21\") " pod="kube-system/kube-apiserver-172-237-134-153" Nov 24 00:09:23.926527 kubelet[2351]: I1124 00:09:23.926348 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/25de9067c6cc9e66ffd5ecc70e8ace21-k8s-certs\") pod \"kube-apiserver-172-237-134-153\" (UID: \"25de9067c6cc9e66ffd5ecc70e8ace21\") " pod="kube-system/kube-apiserver-172-237-134-153" Nov 24 00:09:23.926527 kubelet[2351]: I1124 00:09:23.926361 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bb494a28f03642571dbd68855402c11a-ca-certs\") pod \"kube-controller-manager-172-237-134-153\" (UID: \"bb494a28f03642571dbd68855402c11a\") " pod="kube-system/kube-controller-manager-172-237-134-153" Nov 24 00:09:23.926527 kubelet[2351]: I1124 
00:09:23.926378 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bb494a28f03642571dbd68855402c11a-kubeconfig\") pod \"kube-controller-manager-172-237-134-153\" (UID: \"bb494a28f03642571dbd68855402c11a\") " pod="kube-system/kube-controller-manager-172-237-134-153" Nov 24 00:09:23.926527 kubelet[2351]: I1124 00:09:23.926393 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25de9067c6cc9e66ffd5ecc70e8ace21-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-134-153\" (UID: \"25de9067c6cc9e66ffd5ecc70e8ace21\") " pod="kube-system/kube-apiserver-172-237-134-153" Nov 24 00:09:23.928639 kubelet[2351]: E1124 00:09:23.928610 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.134.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-134-153?timeout=10s\": dial tcp 172.237.134.153:6443: connect: connection refused" interval="400ms" Nov 24 00:09:24.096200 kubelet[2351]: I1124 00:09:24.096151 2351 kubelet_node_status.go:75] "Attempting to register node" node="172-237-134-153" Nov 24 00:09:24.096756 kubelet[2351]: E1124 00:09:24.096439 2351 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.134.153:6443/api/v1/nodes\": dial tcp 172.237.134.153:6443: connect: connection refused" node="172-237-134-153" Nov 24 00:09:24.184825 kubelet[2351]: E1124 00:09:24.184714 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:24.186185 containerd[1554]: time="2025-11-24T00:09:24.185364946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-134-153,Uid:bb494a28f03642571dbd68855402c11a,Namespace:kube-system,Attempt:0,}" Nov 24 00:09:24.190998 kubelet[2351]: E1124 00:09:24.190840 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:24.191976 containerd[1554]: time="2025-11-24T00:09:24.191945060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-134-153,Uid:b55a2e54d47d8bbc89f03d7690a517e8,Namespace:kube-system,Attempt:0,}" Nov 24 00:09:24.196891 kubelet[2351]: E1124 00:09:24.196871 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:24.198399 containerd[1554]: time="2025-11-24T00:09:24.198253570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-134-153,Uid:25de9067c6cc9e66ffd5ecc70e8ace21,Namespace:kube-system,Attempt:0,}" Nov 24 00:09:24.218713 containerd[1554]: time="2025-11-24T00:09:24.218659634Z" level=info msg="connecting to shim e8b1a93a5518124d82e2d0e85f4634853b7cb0422007271efb4640f0b29d324b" address="unix:///run/containerd/s/49295de68d369e1e5eb9f9ff19eb091ff08d1c896a31e4efb8d74ce3d2b11fd9" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:09:24.234968 containerd[1554]: time="2025-11-24T00:09:24.234933815Z" level=info msg="connecting to shim 3886b8adb81bf57edbbbfa7a2ee91fa95d3f70361e9b5f36a144c3c0d72462ca" 
address="unix:///run/containerd/s/e2083c7ac2c4943b0759e3b874ad751fd8a933c377f3ca57a2ea3e59fbfe2c92" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:09:24.245884 containerd[1554]: time="2025-11-24T00:09:24.245809223Z" level=info msg="connecting to shim 66732086d7d95cfa1286092d07d50df803b14385fd539cf318735e8042cd4a92" address="unix:///run/containerd/s/fb4b3568fdc8d067039833998033fb60114f6128b37c53b8cda41eacca6a4cb5" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:09:24.268585 systemd[1]: Started cri-containerd-e8b1a93a5518124d82e2d0e85f4634853b7cb0422007271efb4640f0b29d324b.scope - libcontainer container e8b1a93a5518124d82e2d0e85f4634853b7cb0422007271efb4640f0b29d324b. Nov 24 00:09:24.285573 systemd[1]: Started cri-containerd-3886b8adb81bf57edbbbfa7a2ee91fa95d3f70361e9b5f36a144c3c0d72462ca.scope - libcontainer container 3886b8adb81bf57edbbbfa7a2ee91fa95d3f70361e9b5f36a144c3c0d72462ca. Nov 24 00:09:24.287517 systemd[1]: Started cri-containerd-66732086d7d95cfa1286092d07d50df803b14385fd539cf318735e8042cd4a92.scope - libcontainer container 66732086d7d95cfa1286092d07d50df803b14385fd539cf318735e8042cd4a92. Nov 24 00:09:24.330231 kubelet[2351]: E1124 00:09:24.330177 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.134.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-134-153?timeout=10s\": dial tcp 172.237.134.153:6443: connect: connection refused" interval="800ms" Nov 24 00:09:24.352574 containerd[1554]: time="2025-11-24T00:09:24.352523856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-134-153,Uid:b55a2e54d47d8bbc89f03d7690a517e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8b1a93a5518124d82e2d0e85f4634853b7cb0422007271efb4640f0b29d324b\"" Nov 24 00:09:24.356080 kubelet[2351]: E1124 00:09:24.356050 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:24.361831 containerd[1554]: time="2025-11-24T00:09:24.361798890Z" level=info msg="CreateContainer within sandbox \"e8b1a93a5518124d82e2d0e85f4634853b7cb0422007271efb4640f0b29d324b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 24 00:09:24.362941 containerd[1554]: time="2025-11-24T00:09:24.362918089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-134-153,Uid:25de9067c6cc9e66ffd5ecc70e8ace21,Namespace:kube-system,Attempt:0,} returns sandbox id \"66732086d7d95cfa1286092d07d50df803b14385fd539cf318735e8042cd4a92\"" Nov 24 00:09:24.364058 kubelet[2351]: E1124 00:09:24.363978 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:24.368367 containerd[1554]: time="2025-11-24T00:09:24.368340715Z" level=info msg="CreateContainer within sandbox \"66732086d7d95cfa1286092d07d50df803b14385fd539cf318735e8042cd4a92\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 24 00:09:24.371482 containerd[1554]: time="2025-11-24T00:09:24.371116468Z" level=info msg="Container 5c033999b27d30da2f4624509ad87f6a6b59367e8fa34300efeea5d00d5c15dc: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:09:24.381766 containerd[1554]: time="2025-11-24T00:09:24.381735756Z" level=info msg="CreateContainer within sandbox \"e8b1a93a5518124d82e2d0e85f4634853b7cb0422007271efb4640f0b29d324b\" 
for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5c033999b27d30da2f4624509ad87f6a6b59367e8fa34300efeea5d00d5c15dc\"" Nov 24 00:09:24.382663 containerd[1554]: time="2025-11-24T00:09:24.382642837Z" level=info msg="StartContainer for \"5c033999b27d30da2f4624509ad87f6a6b59367e8fa34300efeea5d00d5c15dc\"" Nov 24 00:09:24.384323 containerd[1554]: time="2025-11-24T00:09:24.384290410Z" level=info msg="connecting to shim 5c033999b27d30da2f4624509ad87f6a6b59367e8fa34300efeea5d00d5c15dc" address="unix:///run/containerd/s/49295de68d369e1e5eb9f9ff19eb091ff08d1c896a31e4efb8d74ce3d2b11fd9" protocol=ttrpc version=3 Nov 24 00:09:24.384671 containerd[1554]: time="2025-11-24T00:09:24.384653903Z" level=info msg="Container 7bac5c5316be9e7a2b5263c54d958bff7b9febec8f4109df44e4190de5e3cfac: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:09:24.389210 containerd[1554]: time="2025-11-24T00:09:24.389190975Z" level=info msg="CreateContainer within sandbox \"66732086d7d95cfa1286092d07d50df803b14385fd539cf318735e8042cd4a92\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7bac5c5316be9e7a2b5263c54d958bff7b9febec8f4109df44e4190de5e3cfac\"" Nov 24 00:09:24.389605 containerd[1554]: time="2025-11-24T00:09:24.389545119Z" level=info msg="StartContainer for \"7bac5c5316be9e7a2b5263c54d958bff7b9febec8f4109df44e4190de5e3cfac\"" Nov 24 00:09:24.390865 containerd[1554]: time="2025-11-24T00:09:24.390844541Z" level=info msg="connecting to shim 7bac5c5316be9e7a2b5263c54d958bff7b9febec8f4109df44e4190de5e3cfac" address="unix:///run/containerd/s/fb4b3568fdc8d067039833998033fb60114f6128b37c53b8cda41eacca6a4cb5" protocol=ttrpc version=3 Nov 24 00:09:24.399858 containerd[1554]: time="2025-11-24T00:09:24.399822190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-134-153,Uid:bb494a28f03642571dbd68855402c11a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3886b8adb81bf57edbbbfa7a2ee91fa95d3f70361e9b5f36a144c3c0d72462ca\"" Nov 24 00:09:24.400901 kubelet[2351]: E1124 00:09:24.400873 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:24.405792 containerd[1554]: time="2025-11-24T00:09:24.405772940Z" level=info msg="CreateContainer within sandbox \"3886b8adb81bf57edbbbfa7a2ee91fa95d3f70361e9b5f36a144c3c0d72462ca\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 24 00:09:24.407601 systemd[1]: Started cri-containerd-5c033999b27d30da2f4624509ad87f6a6b59367e8fa34300efeea5d00d5c15dc.scope - libcontainer container 5c033999b27d30da2f4624509ad87f6a6b59367e8fa34300efeea5d00d5c15dc. 
Nov 24 00:09:24.419821 containerd[1554]: time="2025-11-24T00:09:24.419800680Z" level=info msg="Container f863fc5f45438d359ff4b3c622b45165e29c896619450ebca93139f2e584c7a4: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:09:24.425902 containerd[1554]: time="2025-11-24T00:09:24.425870131Z" level=info msg="CreateContainer within sandbox \"3886b8adb81bf57edbbbfa7a2ee91fa95d3f70361e9b5f36a144c3c0d72462ca\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f863fc5f45438d359ff4b3c622b45165e29c896619450ebca93139f2e584c7a4\"" Nov 24 00:09:24.426664 containerd[1554]: time="2025-11-24T00:09:24.426511321Z" level=info msg="StartContainer for \"f863fc5f45438d359ff4b3c622b45165e29c896619450ebca93139f2e584c7a4\"" Nov 24 00:09:24.428787 containerd[1554]: time="2025-11-24T00:09:24.428750000Z" level=info msg="connecting to shim f863fc5f45438d359ff4b3c622b45165e29c896619450ebca93139f2e584c7a4" address="unix:///run/containerd/s/e2083c7ac2c4943b0759e3b874ad751fd8a933c377f3ca57a2ea3e59fbfe2c92" protocol=ttrpc version=3 Nov 24 00:09:24.431601 systemd[1]: Started cri-containerd-7bac5c5316be9e7a2b5263c54d958bff7b9febec8f4109df44e4190de5e3cfac.scope - libcontainer container 7bac5c5316be9e7a2b5263c54d958bff7b9febec8f4109df44e4190de5e3cfac. Nov 24 00:09:24.450698 systemd[1]: Started cri-containerd-f863fc5f45438d359ff4b3c622b45165e29c896619450ebca93139f2e584c7a4.scope - libcontainer container f863fc5f45438d359ff4b3c622b45165e29c896619450ebca93139f2e584c7a4. Nov 24 00:09:24.499645 kubelet[2351]: I1124 00:09:24.499373 2351 kubelet_node_status.go:75] "Attempting to register node" node="172-237-134-153" Nov 24 00:09:24.500094 kubelet[2351]: E1124 00:09:24.500071 2351 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.134.153:6443/api/v1/nodes\": dial tcp 172.237.134.153:6443: connect: connection refused" node="172-237-134-153" Nov 24 00:09:24.510365 containerd[1554]: time="2025-11-24T00:09:24.510318437Z" level=info msg="StartContainer for \"5c033999b27d30da2f4624509ad87f6a6b59367e8fa34300efeea5d00d5c15dc\" returns successfully" Nov 24 00:09:24.516580 containerd[1554]: time="2025-11-24T00:09:24.516364683Z" level=info msg="StartContainer for \"7bac5c5316be9e7a2b5263c54d958bff7b9febec8f4109df44e4190de5e3cfac\" returns successfully" Nov 24 00:09:24.529658 kubelet[2351]: E1124 00:09:24.529621 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.237.134.153:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.237.134.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 24 00:09:24.557655 containerd[1554]: time="2025-11-24T00:09:24.557607342Z" level=info msg="StartContainer for \"f863fc5f45438d359ff4b3c622b45165e29c896619450ebca93139f2e584c7a4\" returns successfully" Nov 24 00:09:24.775793 kubelet[2351]: E1124 00:09:24.775568 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-134-153\" not found" node="172-237-134-153" Nov 24 00:09:24.775793 kubelet[2351]: E1124 00:09:24.775678 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:24.779033 kubelet[2351]: E1124 00:09:24.778876 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"172-237-134-153\" not found" node="172-237-134-153" Nov 24 00:09:24.779033 kubelet[2351]: E1124 00:09:24.778960 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:24.781120 kubelet[2351]: E1124 00:09:24.781082 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-134-153\" not found" node="172-237-134-153" Nov 24 00:09:24.781442 kubelet[2351]: E1124 00:09:24.781389 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:25.303940 kubelet[2351]: I1124 00:09:25.303217 2351 kubelet_node_status.go:75] "Attempting to register node" node="172-237-134-153" Nov 24 00:09:25.787490 kubelet[2351]: E1124 00:09:25.786749 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-134-153\" not found" node="172-237-134-153" Nov 24 00:09:25.787490 kubelet[2351]: E1124 00:09:25.786891 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:25.788960 kubelet[2351]: E1124 00:09:25.788945 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-134-153\" not found" node="172-237-134-153" Nov 24 00:09:25.789132 kubelet[2351]: E1124 00:09:25.789119 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:25.789439 kubelet[2351]: E1124 00:09:25.789425 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-134-153\" not found" node="172-237-134-153" Nov 24 00:09:25.789643 kubelet[2351]: E1124 00:09:25.789629 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:26.262331 kubelet[2351]: E1124 00:09:26.262290 2351 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-237-134-153\" not found" node="172-237-134-153" Nov 24 00:09:26.314351 kubelet[2351]: I1124 00:09:26.314299 2351 kubelet_node_status.go:78] "Successfully registered node" node="172-237-134-153" Nov 24 00:09:26.314351 kubelet[2351]: E1124 00:09:26.314325 2351 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-237-134-153\": node \"172-237-134-153\" not found" Nov 24 00:09:26.327339 kubelet[2351]: I1124 00:09:26.327151 2351 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-134-153" Nov 24 00:09:26.333520 kubelet[2351]: E1124 00:09:26.333504 2351 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-134-153\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-237-134-153" Nov 24 00:09:26.333605 kubelet[2351]: I1124 00:09:26.333595 2351 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-172-237-134-153" Nov 24 00:09:26.335176 kubelet[2351]: E1124 00:09:26.335159 2351 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-237-134-153\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-237-134-153" Nov 24 00:09:26.335313 kubelet[2351]: I1124 00:09:26.335252 2351 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-134-153" Nov 24 00:09:26.336654 kubelet[2351]: E1124 00:09:26.336624 2351 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-134-153\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-237-134-153" Nov 24 00:09:26.717305 kubelet[2351]: I1124 00:09:26.717277 2351 apiserver.go:52] "Watching apiserver" Nov 24 00:09:26.726052 kubelet[2351]: I1124 00:09:26.726027 2351 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 24 00:09:26.782724 kubelet[2351]: I1124 00:09:26.782706 2351 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-134-153" Nov 24 00:09:26.784066 kubelet[2351]: E1124 00:09:26.784047 2351 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-134-153\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-237-134-153" Nov 24 00:09:26.784182 kubelet[2351]: E1124 00:09:26.784168 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:28.361147 systemd[1]: Reload requested from client PID 2632 ('systemctl') (unit session-7.scope)... Nov 24 00:09:28.361166 systemd[1]: Reloading... Nov 24 00:09:28.465803 zram_generator::config[2682]: No configuration found. Nov 24 00:09:28.683039 systemd[1]: Reloading finished in 321 ms. Nov 24 00:09:28.715276 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:09:28.737860 systemd[1]: kubelet.service: Deactivated successfully. Nov 24 00:09:28.738161 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:09:28.738225 systemd[1]: kubelet.service: Consumed 1.174s CPU time, 130.9M memory peak. Nov 24 00:09:28.740383 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:09:28.920813 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:09:28.932812 (kubelet)[2727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 00:09:28.978755 kubelet[2727]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:09:28.978755 kubelet[2727]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 24 00:09:28.978755 kubelet[2727]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 24 00:09:28.980342 kubelet[2727]: I1124 00:09:28.979300 2727 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 00:09:28.987274 kubelet[2727]: I1124 00:09:28.987256 2727 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 24 00:09:28.987357 kubelet[2727]: I1124 00:09:28.987347 2727 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 00:09:28.987599 kubelet[2727]: I1124 00:09:28.987585 2727 server.go:956] "Client rotation is on, will bootstrap in background" Nov 24 00:09:28.988614 kubelet[2727]: I1124 00:09:28.988599 2727 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 24 00:09:28.992656 kubelet[2727]: I1124 00:09:28.992643 2727 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 00:09:28.995362 kubelet[2727]: I1124 00:09:28.995348 2727 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 00:09:28.999723 kubelet[2727]: I1124 00:09:28.999708 2727 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 24 00:09:29.000003 kubelet[2727]: I1124 00:09:28.999983 2727 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 00:09:29.000235 kubelet[2727]: I1124 00:09:29.000060 2727 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-134-153","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 00:09:29.000336 kubelet[2727]: I1124 00:09:29.000326 2727 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 00:09:29.000384 kubelet[2727]: I1124 00:09:29.000376 2727 container_manager_linux.go:303] "Creating device plugin manager" Nov 24 00:09:29.000482 kubelet[2727]: I1124 00:09:29.000454 2727 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:09:29.000687 kubelet[2727]: I1124 
00:09:29.000676 2727 kubelet.go:480] "Attempting to sync node with API server" Nov 24 00:09:29.000760 kubelet[2727]: I1124 00:09:29.000750 2727 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 00:09:29.000827 kubelet[2727]: I1124 00:09:29.000819 2727 kubelet.go:386] "Adding apiserver pod source" Nov 24 00:09:29.000881 kubelet[2727]: I1124 00:09:29.000873 2727 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 00:09:29.003955 kubelet[2727]: I1124 00:09:29.003940 2727 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 00:09:29.004373 kubelet[2727]: I1124 00:09:29.004360 2727 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 24 00:09:29.007364 kubelet[2727]: I1124 00:09:29.007351 2727 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 24 00:09:29.007440 kubelet[2727]: I1124 00:09:29.007431 2727 server.go:1289] "Started kubelet" Nov 24 00:09:29.009060 kubelet[2727]: I1124 00:09:29.009045 2727 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 00:09:29.018056 kubelet[2727]: I1124 00:09:29.018037 2727 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 00:09:29.018819 kubelet[2727]: I1124 00:09:29.018805 2727 server.go:317] "Adding debug handlers to kubelet server" Nov 24 00:09:29.020508 kubelet[2727]: I1124 00:09:29.020184 2727 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 00:09:29.020667 kubelet[2727]: I1124 00:09:29.020641 2727 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 00:09:29.020810 kubelet[2727]: I1124 00:09:29.020786 2727 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 00:09:29.023298 kubelet[2727]: I1124 00:09:29.022886 2727 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 24 00:09:29.023298 kubelet[2727]: E1124 00:09:29.023023 2727 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-134-153\" not found" Nov 24 00:09:29.026441 kubelet[2727]: I1124 00:09:29.026427 2727 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 24 00:09:29.026637 kubelet[2727]: I1124 00:09:29.026626 2727 reconciler.go:26] "Reconciler: start to sync state" Nov 24 00:09:29.030432 kubelet[2727]: I1124 00:09:29.030399 2727 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 24 00:09:29.032820 kubelet[2727]: I1124 00:09:29.032806 2727 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 24 00:09:29.032883 kubelet[2727]: I1124 00:09:29.032874 2727 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 24 00:09:29.032949 kubelet[2727]: I1124 00:09:29.032939 2727 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 24 00:09:29.032995 kubelet[2727]: I1124 00:09:29.032987 2727 kubelet.go:2436] "Starting kubelet main sync loop" Nov 24 00:09:29.033079 kubelet[2727]: E1124 00:09:29.033064 2727 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 00:09:29.034496 kubelet[2727]: I1124 00:09:29.034439 2727 factory.go:223] Registration of the systemd container factory successfully Nov 24 00:09:29.035073 kubelet[2727]: I1124 00:09:29.035042 2727 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 00:09:29.037214 kubelet[2727]: E1124 00:09:29.037046 2727 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 00:09:29.038090 kubelet[2727]: I1124 00:09:29.037938 2727 factory.go:223] Registration of the containerd container factory successfully Nov 24 00:09:29.098968 kubelet[2727]: I1124 00:09:29.098938 2727 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 00:09:29.098968 kubelet[2727]: I1124 00:09:29.098955 2727 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 00:09:29.098968 kubelet[2727]: I1124 00:09:29.098971 2727 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:09:29.099126 kubelet[2727]: I1124 00:09:29.099083 2727 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 24 00:09:29.099126 kubelet[2727]: I1124 00:09:29.099092 2727 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 24 00:09:29.099126 kubelet[2727]: I1124 00:09:29.099107 2727 policy_none.go:49] "None policy: Start" Nov 24 00:09:29.099126 kubelet[2727]: I1124 00:09:29.099116 2727 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 24 00:09:29.099126 kubelet[2727]: I1124 00:09:29.099126 2727 state_mem.go:35] "Initializing new in-memory state store" Nov 24 00:09:29.099228 kubelet[2727]: I1124 00:09:29.099217 2727 state_mem.go:75] "Updated machine memory state" Nov 24 00:09:29.103791 kubelet[2727]: E1124 00:09:29.103773 2727 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 24 00:09:29.105082 kubelet[2727]: I1124 00:09:29.104752 2727 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 00:09:29.105082 kubelet[2727]: I1124 00:09:29.104766 2727 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 00:09:29.105233 kubelet[2727]: I1124 00:09:29.105222 2727 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 00:09:29.106180 kubelet[2727]: E1124 00:09:29.106116 2727 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 24 00:09:29.134499 kubelet[2727]: I1124 00:09:29.134370 2727 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-134-153" Nov 24 00:09:29.134768 kubelet[2727]: I1124 00:09:29.134695 2727 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-134-153" Nov 24 00:09:29.135108 kubelet[2727]: I1124 00:09:29.135078 2727 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-134-153" Nov 24 00:09:29.207218 kubelet[2727]: I1124 00:09:29.207182 2727 kubelet_node_status.go:75] "Attempting to register node" node="172-237-134-153" Nov 24 00:09:29.218201 kubelet[2727]: I1124 00:09:29.218143 2727 kubelet_node_status.go:124] "Node was previously registered" node="172-237-134-153" Nov 24 00:09:29.218308 kubelet[2727]: I1124 00:09:29.218268 2727 kubelet_node_status.go:78] "Successfully registered node" node="172-237-134-153" Nov 24 00:09:29.329757 kubelet[2727]: I1124 00:09:29.328855 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/25de9067c6cc9e66ffd5ecc70e8ace21-k8s-certs\") pod \"kube-apiserver-172-237-134-153\" (UID: \"25de9067c6cc9e66ffd5ecc70e8ace21\") " pod="kube-system/kube-apiserver-172-237-134-153" Nov 24 00:09:29.329757 kubelet[2727]: I1124 00:09:29.328917 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bb494a28f03642571dbd68855402c11a-flexvolume-dir\") pod \"kube-controller-manager-172-237-134-153\" (UID: \"bb494a28f03642571dbd68855402c11a\") " pod="kube-system/kube-controller-manager-172-237-134-153" Nov 24 00:09:29.329757 kubelet[2727]: I1124 00:09:29.328959 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bb494a28f03642571dbd68855402c11a-kubeconfig\") pod \"kube-controller-manager-172-237-134-153\" (UID: \"bb494a28f03642571dbd68855402c11a\") " pod="kube-system/kube-controller-manager-172-237-134-153" Nov 24 00:09:29.329757 kubelet[2727]: I1124 00:09:29.328976 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/25de9067c6cc9e66ffd5ecc70e8ace21-ca-certs\") pod \"kube-apiserver-172-237-134-153\" (UID: \"25de9067c6cc9e66ffd5ecc70e8ace21\") " pod="kube-system/kube-apiserver-172-237-134-153" Nov 24 00:09:29.329757 kubelet[2727]: I1124 00:09:29.328994 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25de9067c6cc9e66ffd5ecc70e8ace21-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-134-153\" (UID: \"25de9067c6cc9e66ffd5ecc70e8ace21\") " pod="kube-system/kube-apiserver-172-237-134-153" Nov 24 00:09:29.329960 kubelet[2727]: I1124 00:09:29.329012 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bb494a28f03642571dbd68855402c11a-ca-certs\") pod \"kube-controller-manager-172-237-134-153\" (UID: \"bb494a28f03642571dbd68855402c11a\") " pod="kube-system/kube-controller-manager-172-237-134-153" Nov 24 00:09:29.329960 kubelet[2727]: I1124 00:09:29.329028 2727 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bb494a28f03642571dbd68855402c11a-k8s-certs\") pod \"kube-controller-manager-172-237-134-153\" (UID: \"bb494a28f03642571dbd68855402c11a\") " pod="kube-system/kube-controller-manager-172-237-134-153" Nov 24 00:09:29.329960 kubelet[2727]: I1124 00:09:29.329045 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bb494a28f03642571dbd68855402c11a-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-134-153\" (UID: \"bb494a28f03642571dbd68855402c11a\") " pod="kube-system/kube-controller-manager-172-237-134-153" Nov 24 00:09:29.329960 kubelet[2727]: I1124 00:09:29.329061 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b55a2e54d47d8bbc89f03d7690a517e8-kubeconfig\") pod \"kube-scheduler-172-237-134-153\" (UID: \"b55a2e54d47d8bbc89f03d7690a517e8\") " pod="kube-system/kube-scheduler-172-237-134-153" Nov 24 00:09:29.442662 kubelet[2727]: E1124 00:09:29.442627 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:29.443319 kubelet[2727]: E1124 00:09:29.443299 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:29.443408 kubelet[2727]: E1124 00:09:29.443391 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:30.003505 kubelet[2727]: I1124 00:09:30.003449 2727 apiserver.go:52] "Watching apiserver" Nov 24 00:09:30.026811 kubelet[2727]: I1124 00:09:30.026777 2727 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 24 00:09:30.076813 kubelet[2727]: I1124 00:09:30.076778 2727 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-134-153" Nov 24 00:09:30.078924 kubelet[2727]: E1124 00:09:30.077684 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:30.079920 kubelet[2727]: E1124 00:09:30.079885 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:30.084069 kubelet[2727]: E1124 00:09:30.084003 2727 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-134-153\" already exists" pod="kube-system/kube-apiserver-172-237-134-153" Nov 24 00:09:30.084694 kubelet[2727]: E1124 00:09:30.084554 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:30.103446 kubelet[2727]: I1124 00:09:30.102910 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-237-134-153" podStartSLOduration=1.102900236 
podStartE2EDuration="1.102900236s" podCreationTimestamp="2025-11-24 00:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:09:30.102684071 +0000 UTC m=+1.164554815" watchObservedRunningTime="2025-11-24 00:09:30.102900236 +0000 UTC m=+1.164770980" Nov 24 00:09:30.114294 kubelet[2727]: I1124 00:09:30.114246 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-237-134-153" podStartSLOduration=1.114235711 podStartE2EDuration="1.114235711s" podCreationTimestamp="2025-11-24 00:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:09:30.109009597 +0000 UTC m=+1.170880341" watchObservedRunningTime="2025-11-24 00:09:30.114235711 +0000 UTC m=+1.176106455" Nov 24 00:09:30.121453 kubelet[2727]: I1124 00:09:30.121338 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-237-134-153" podStartSLOduration=1.121327951 podStartE2EDuration="1.121327951s" podCreationTimestamp="2025-11-24 00:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:09:30.11442034 +0000 UTC m=+1.176291084" watchObservedRunningTime="2025-11-24 00:09:30.121327951 +0000 UTC m=+1.183198695" Nov 24 00:09:30.720031 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 24 00:09:31.078823 kubelet[2727]: E1124 00:09:31.078335 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:31.078823 kubelet[2727]: E1124 00:09:31.078610 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:32.079892 kubelet[2727]: E1124 00:09:32.079860 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:33.068932 kubelet[2727]: I1124 00:09:33.068855 2727 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 24 00:09:33.069254 containerd[1554]: time="2025-11-24T00:09:33.069219435Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 24 00:09:33.069843 kubelet[2727]: I1124 00:09:33.069371 2727 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 24 00:09:33.897065 systemd[1]: Created slice kubepods-besteffort-podfb7416bc_668a_47f6_b66f_c6065bbe30bf.slice - libcontainer container kubepods-besteffort-podfb7416bc_668a_47f6_b66f_c6065bbe30bf.slice. 
Nov 24 00:09:33.955995 kubelet[2727]: I1124 00:09:33.955929 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fb7416bc-668a-47f6-b66f-c6065bbe30bf-kube-proxy\") pod \"kube-proxy-blnsn\" (UID: \"fb7416bc-668a-47f6-b66f-c6065bbe30bf\") " pod="kube-system/kube-proxy-blnsn" Nov 24 00:09:33.955995 kubelet[2727]: I1124 00:09:33.955989 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb7416bc-668a-47f6-b66f-c6065bbe30bf-xtables-lock\") pod \"kube-proxy-blnsn\" (UID: \"fb7416bc-668a-47f6-b66f-c6065bbe30bf\") " pod="kube-system/kube-proxy-blnsn" Nov 24 00:09:33.956412 kubelet[2727]: I1124 00:09:33.956036 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb7416bc-668a-47f6-b66f-c6065bbe30bf-lib-modules\") pod \"kube-proxy-blnsn\" (UID: \"fb7416bc-668a-47f6-b66f-c6065bbe30bf\") " pod="kube-system/kube-proxy-blnsn" Nov 24 00:09:33.956412 kubelet[2727]: I1124 00:09:33.956068 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg4lf\" (UniqueName: \"kubernetes.io/projected/fb7416bc-668a-47f6-b66f-c6065bbe30bf-kube-api-access-wg4lf\") pod \"kube-proxy-blnsn\" (UID: \"fb7416bc-668a-47f6-b66f-c6065bbe30bf\") " pod="kube-system/kube-proxy-blnsn" Nov 24 00:09:34.210821 kubelet[2727]: E1124 00:09:34.210741 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:34.211306 containerd[1554]: time="2025-11-24T00:09:34.211208456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-blnsn,Uid:fb7416bc-668a-47f6-b66f-c6065bbe30bf,Namespace:kube-system,Attempt:0,}" Nov 24 00:09:34.235349 containerd[1554]: time="2025-11-24T00:09:34.235266287Z" level=info msg="connecting to shim 7c1e716a2054ed0f5b185aef23265ab8933937e3d1b52242dfe9181428d5dd05" address="unix:///run/containerd/s/20e79dd85907921746b19846c5549f6d1ff99a74eaa1342ec01f45e320667e35" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:09:34.267888 systemd[1]: Started cri-containerd-7c1e716a2054ed0f5b185aef23265ab8933937e3d1b52242dfe9181428d5dd05.scope - libcontainer container 7c1e716a2054ed0f5b185aef23265ab8933937e3d1b52242dfe9181428d5dd05. Nov 24 00:09:34.296041 systemd[1]: Created slice kubepods-besteffort-pod0c3b5570_6e8e_4dde_a8cc_68d3764a8e55.slice - libcontainer container kubepods-besteffort-pod0c3b5570_6e8e_4dde_a8cc_68d3764a8e55.slice. 
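[editor's note] The systemd units created above follow a naming pattern that can be read straight off the log: kubepods-<qosClass>-pod<uid>.slice, with dashes in the pod UID replaced by underscores (fb7416bc-668a-47f6-b66f-c6065bbe30bf becomes kubepods-besteffort-podfb7416bc_668a_47f6_b66f_c6065bbe30bf.slice). A sketch of that mapping derived only from the logged unit names; the guaranteed-QoS branch is an assumption, since only burstable and besteffort slices appear in this log.

```go
// pod_slice_name.go - reproduce the kubepods slice names seen in the log from a
// pod UID and QoS class; the mapping (dashes -> underscores) is read off the
// logged unit names, not taken from kubelet source.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_")
	if qosClass == "guaranteed" {
		// Assumption: guaranteed pods sit directly under kubepods.slice.
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
}

func main() {
	// Both examples appear verbatim in the log above.
	fmt.Println(podSliceName("besteffort", "fb7416bc-668a-47f6-b66f-c6065bbe30bf"))
	fmt.Println(podSliceName("burstable", "bb494a28f03642571dbd68855402c11a"))
}
```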
Nov 24 00:09:34.360425 containerd[1554]: time="2025-11-24T00:09:34.360312056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-blnsn,Uid:fb7416bc-668a-47f6-b66f-c6065bbe30bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c1e716a2054ed0f5b185aef23265ab8933937e3d1b52242dfe9181428d5dd05\"" Nov 24 00:09:34.361627 kubelet[2727]: E1124 00:09:34.361594 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:34.365953 containerd[1554]: time="2025-11-24T00:09:34.365916242Z" level=info msg="CreateContainer within sandbox \"7c1e716a2054ed0f5b185aef23265ab8933937e3d1b52242dfe9181428d5dd05\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 24 00:09:34.377106 containerd[1554]: time="2025-11-24T00:09:34.376030742Z" level=info msg="Container 615d3dbd43ca475efd78b8c220bc828f31e40b1cc508a808e9cea4b0803c2028: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:09:34.383756 containerd[1554]: time="2025-11-24T00:09:34.383719589Z" level=info msg="CreateContainer within sandbox \"7c1e716a2054ed0f5b185aef23265ab8933937e3d1b52242dfe9181428d5dd05\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"615d3dbd43ca475efd78b8c220bc828f31e40b1cc508a808e9cea4b0803c2028\"" Nov 24 00:09:34.384405 containerd[1554]: time="2025-11-24T00:09:34.384347949Z" level=info msg="StartContainer for \"615d3dbd43ca475efd78b8c220bc828f31e40b1cc508a808e9cea4b0803c2028\"" Nov 24 00:09:34.385744 containerd[1554]: time="2025-11-24T00:09:34.385692600Z" level=info msg="connecting to shim 615d3dbd43ca475efd78b8c220bc828f31e40b1cc508a808e9cea4b0803c2028" address="unix:///run/containerd/s/20e79dd85907921746b19846c5549f6d1ff99a74eaa1342ec01f45e320667e35" protocol=ttrpc version=3 Nov 24 00:09:34.405585 systemd[1]: Started cri-containerd-615d3dbd43ca475efd78b8c220bc828f31e40b1cc508a808e9cea4b0803c2028.scope - libcontainer container 615d3dbd43ca475efd78b8c220bc828f31e40b1cc508a808e9cea4b0803c2028. 
Nov 24 00:09:34.460705 kubelet[2727]: I1124 00:09:34.460682 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0c3b5570-6e8e-4dde-a8cc-68d3764a8e55-var-lib-calico\") pod \"tigera-operator-7dcd859c48-kvnd7\" (UID: \"0c3b5570-6e8e-4dde-a8cc-68d3764a8e55\") " pod="tigera-operator/tigera-operator-7dcd859c48-kvnd7" Nov 24 00:09:34.461182 kubelet[2727]: I1124 00:09:34.460881 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx9pz\" (UniqueName: \"kubernetes.io/projected/0c3b5570-6e8e-4dde-a8cc-68d3764a8e55-kube-api-access-wx9pz\") pod \"tigera-operator-7dcd859c48-kvnd7\" (UID: \"0c3b5570-6e8e-4dde-a8cc-68d3764a8e55\") " pod="tigera-operator/tigera-operator-7dcd859c48-kvnd7" Nov 24 00:09:34.470418 containerd[1554]: time="2025-11-24T00:09:34.470223401Z" level=info msg="StartContainer for \"615d3dbd43ca475efd78b8c220bc828f31e40b1cc508a808e9cea4b0803c2028\" returns successfully" Nov 24 00:09:34.603879 containerd[1554]: time="2025-11-24T00:09:34.603818517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-kvnd7,Uid:0c3b5570-6e8e-4dde-a8cc-68d3764a8e55,Namespace:tigera-operator,Attempt:0,}" Nov 24 00:09:34.619450 containerd[1554]: time="2025-11-24T00:09:34.619404714Z" level=info msg="connecting to shim 9693884ed241adb89aa07feb038e5e402c6def92fbaf50dfa402d6f8adaa912b" address="unix:///run/containerd/s/ac8b9edafb70fbf52d86e972e3bc8dac20ec38c2e2f29d142d412bbde9cb40d6" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:09:34.644609 systemd[1]: Started cri-containerd-9693884ed241adb89aa07feb038e5e402c6def92fbaf50dfa402d6f8adaa912b.scope - libcontainer container 9693884ed241adb89aa07feb038e5e402c6def92fbaf50dfa402d6f8adaa912b. Nov 24 00:09:34.692056 containerd[1554]: time="2025-11-24T00:09:34.691975204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-kvnd7,Uid:0c3b5570-6e8e-4dde-a8cc-68d3764a8e55,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9693884ed241adb89aa07feb038e5e402c6def92fbaf50dfa402d6f8adaa912b\"" Nov 24 00:09:34.693658 containerd[1554]: time="2025-11-24T00:09:34.693600682Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 24 00:09:35.088227 kubelet[2727]: E1124 00:09:35.088172 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:35.537653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3334613162.mount: Deactivated successfully. 
Nov 24 00:09:36.124817 containerd[1554]: time="2025-11-24T00:09:36.124763492Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:36.125747 containerd[1554]: time="2025-11-24T00:09:36.125555855Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 24 00:09:36.126332 containerd[1554]: time="2025-11-24T00:09:36.126304371Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:36.128063 containerd[1554]: time="2025-11-24T00:09:36.128035915Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:36.128672 containerd[1554]: time="2025-11-24T00:09:36.128649601Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.434884791s" Nov 24 00:09:36.128738 containerd[1554]: time="2025-11-24T00:09:36.128724126Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 24 00:09:36.132734 containerd[1554]: time="2025-11-24T00:09:36.132704628Z" level=info msg="CreateContainer within sandbox \"9693884ed241adb89aa07feb038e5e402c6def92fbaf50dfa402d6f8adaa912b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 24 00:09:36.138499 containerd[1554]: time="2025-11-24T00:09:36.137239219Z" level=info msg="Container 7a316dfb03eb3e5c1ae8442c535bf1219b274e2cd13e735b3d6ca739187cad7f: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:09:36.155509 containerd[1554]: time="2025-11-24T00:09:36.155457601Z" level=info msg="CreateContainer within sandbox \"9693884ed241adb89aa07feb038e5e402c6def92fbaf50dfa402d6f8adaa912b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7a316dfb03eb3e5c1ae8442c535bf1219b274e2cd13e735b3d6ca739187cad7f\"" Nov 24 00:09:36.156130 containerd[1554]: time="2025-11-24T00:09:36.156008061Z" level=info msg="StartContainer for \"7a316dfb03eb3e5c1ae8442c535bf1219b274e2cd13e735b3d6ca739187cad7f\"" Nov 24 00:09:36.158266 containerd[1554]: time="2025-11-24T00:09:36.158063532Z" level=info msg="connecting to shim 7a316dfb03eb3e5c1ae8442c535bf1219b274e2cd13e735b3d6ca739187cad7f" address="unix:///run/containerd/s/ac8b9edafb70fbf52d86e972e3bc8dac20ec38c2e2f29d142d412bbde9cb40d6" protocol=ttrpc version=3 Nov 24 00:09:36.185607 systemd[1]: Started cri-containerd-7a316dfb03eb3e5c1ae8442c535bf1219b274e2cd13e735b3d6ca739187cad7f.scope - libcontainer container 7a316dfb03eb3e5c1ae8442c535bf1219b274e2cd13e735b3d6ca739187cad7f. 
Nov 24 00:09:36.218148 containerd[1554]: time="2025-11-24T00:09:36.218062771Z" level=info msg="StartContainer for \"7a316dfb03eb3e5c1ae8442c535bf1219b274e2cd13e735b3d6ca739187cad7f\" returns successfully" Nov 24 00:09:37.098498 kubelet[2727]: I1124 00:09:37.098429 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-blnsn" podStartSLOduration=4.09841306 podStartE2EDuration="4.09841306s" podCreationTimestamp="2025-11-24 00:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:09:35.096789085 +0000 UTC m=+6.158659829" watchObservedRunningTime="2025-11-24 00:09:37.09841306 +0000 UTC m=+8.160283804" Nov 24 00:09:37.099529 kubelet[2727]: I1124 00:09:37.099502 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-kvnd7" podStartSLOduration=1.663304094 podStartE2EDuration="3.099497046s" podCreationTimestamp="2025-11-24 00:09:34 +0000 UTC" firstStartedPulling="2025-11-24 00:09:34.693240022 +0000 UTC m=+5.755110766" lastFinishedPulling="2025-11-24 00:09:36.129432964 +0000 UTC m=+7.191303718" observedRunningTime="2025-11-24 00:09:37.099488166 +0000 UTC m=+8.161358910" watchObservedRunningTime="2025-11-24 00:09:37.099497046 +0000 UTC m=+8.161367790" Nov 24 00:09:38.700593 containerd[1554]: time="2025-11-24T00:09:38.699745155Z" level=info msg="received container exit event container_id:\"7a316dfb03eb3e5c1ae8442c535bf1219b274e2cd13e735b3d6ca739187cad7f\" id:\"7a316dfb03eb3e5c1ae8442c535bf1219b274e2cd13e735b3d6ca739187cad7f\" pid:3052 exit_status:1 exited_at:{seconds:1763942978 nanos:699392187}" Nov 24 00:09:38.699887 systemd[1]: cri-containerd-7a316dfb03eb3e5c1ae8442c535bf1219b274e2cd13e735b3d6ca739187cad7f.scope: Deactivated successfully. Nov 24 00:09:38.740454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a316dfb03eb3e5c1ae8442c535bf1219b274e2cd13e735b3d6ca739187cad7f-rootfs.mount: Deactivated successfully. 
Nov 24 00:09:39.100475 kubelet[2727]: I1124 00:09:39.099320 2727 scope.go:117] "RemoveContainer" containerID="7a316dfb03eb3e5c1ae8442c535bf1219b274e2cd13e735b3d6ca739187cad7f" Nov 24 00:09:39.105966 containerd[1554]: time="2025-11-24T00:09:39.104176204Z" level=info msg="CreateContainer within sandbox \"9693884ed241adb89aa07feb038e5e402c6def92fbaf50dfa402d6f8adaa912b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 24 00:09:39.130607 containerd[1554]: time="2025-11-24T00:09:39.125633279Z" level=info msg="Container 0759e88f197c9baf466bcdac96765a71a5d9f5a8b98f06640d636c47b1d59154: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:09:39.140698 containerd[1554]: time="2025-11-24T00:09:39.140643999Z" level=info msg="CreateContainer within sandbox \"9693884ed241adb89aa07feb038e5e402c6def92fbaf50dfa402d6f8adaa912b\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"0759e88f197c9baf466bcdac96765a71a5d9f5a8b98f06640d636c47b1d59154\"" Nov 24 00:09:39.142246 containerd[1554]: time="2025-11-24T00:09:39.141349635Z" level=info msg="StartContainer for \"0759e88f197c9baf466bcdac96765a71a5d9f5a8b98f06640d636c47b1d59154\"" Nov 24 00:09:39.144757 containerd[1554]: time="2025-11-24T00:09:39.143447497Z" level=info msg="connecting to shim 0759e88f197c9baf466bcdac96765a71a5d9f5a8b98f06640d636c47b1d59154" address="unix:///run/containerd/s/ac8b9edafb70fbf52d86e972e3bc8dac20ec38c2e2f29d142d412bbde9cb40d6" protocol=ttrpc version=3 Nov 24 00:09:39.188766 systemd[1]: Started cri-containerd-0759e88f197c9baf466bcdac96765a71a5d9f5a8b98f06640d636c47b1d59154.scope - libcontainer container 0759e88f197c9baf466bcdac96765a71a5d9f5a8b98f06640d636c47b1d59154. Nov 24 00:09:39.206919 kubelet[2727]: E1124 00:09:39.206878 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:39.243389 containerd[1554]: time="2025-11-24T00:09:39.243351632Z" level=info msg="StartContainer for \"0759e88f197c9baf466bcdac96765a71a5d9f5a8b98f06640d636c47b1d59154\" returns successfully" Nov 24 00:09:39.562992 kubelet[2727]: E1124 00:09:39.562699 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:40.102854 kubelet[2727]: E1124 00:09:40.102603 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:41.339502 kubelet[2727]: E1124 00:09:41.337775 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:41.627601 sudo[1796]: pam_unix(sudo:session): session closed for user root Nov 24 00:09:41.675109 sshd[1795]: Connection closed by 147.75.109.163 port 59422 Nov 24 00:09:41.675599 sshd-session[1792]: pam_unix(sshd:session): session closed for user core Nov 24 00:09:41.679826 systemd[1]: sshd@6-172.237.134.153:22-147.75.109.163:59422.service: Deactivated successfully. Nov 24 00:09:41.682068 systemd[1]: session-7.scope: Deactivated successfully. Nov 24 00:09:41.682382 systemd[1]: session-7.scope: Consumed 4.432s CPU time, 231.7M memory peak. Nov 24 00:09:41.684076 systemd-logind[1530]: Session 7 logged out. 
Waiting for processes to exit. Nov 24 00:09:41.685511 systemd-logind[1530]: Removed session 7. Nov 24 00:09:42.106230 kubelet[2727]: E1124 00:09:42.106196 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:45.751162 update_engine[1535]: I20251124 00:09:45.750525 1535 update_attempter.cc:509] Updating boot flags... Nov 24 00:09:46.790517 systemd[1]: Created slice kubepods-besteffort-pod6716c98a_5789_4669_9ee9_e69b49ca84c4.slice - libcontainer container kubepods-besteffort-pod6716c98a_5789_4669_9ee9_e69b49ca84c4.slice. Nov 24 00:09:46.840411 kubelet[2727]: I1124 00:09:46.840380 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6716c98a-5789-4669-9ee9-e69b49ca84c4-typha-certs\") pod \"calico-typha-54fbcff868-spkt4\" (UID: \"6716c98a-5789-4669-9ee9-e69b49ca84c4\") " pod="calico-system/calico-typha-54fbcff868-spkt4" Nov 24 00:09:46.841191 kubelet[2727]: I1124 00:09:46.841106 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k6dw\" (UniqueName: \"kubernetes.io/projected/6716c98a-5789-4669-9ee9-e69b49ca84c4-kube-api-access-7k6dw\") pod \"calico-typha-54fbcff868-spkt4\" (UID: \"6716c98a-5789-4669-9ee9-e69b49ca84c4\") " pod="calico-system/calico-typha-54fbcff868-spkt4" Nov 24 00:09:46.841191 kubelet[2727]: I1124 00:09:46.841132 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6716c98a-5789-4669-9ee9-e69b49ca84c4-tigera-ca-bundle\") pod \"calico-typha-54fbcff868-spkt4\" (UID: \"6716c98a-5789-4669-9ee9-e69b49ca84c4\") " pod="calico-system/calico-typha-54fbcff868-spkt4" Nov 24 00:09:46.982111 systemd[1]: Created slice kubepods-besteffort-pod8df332b7_49f1_4fbb_9ca6_a321d297cd35.slice - libcontainer container kubepods-besteffort-pod8df332b7_49f1_4fbb_9ca6_a321d297cd35.slice. 
Nov 24 00:09:47.043196 kubelet[2727]: I1124 00:09:47.042946 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8df332b7-49f1-4fbb-9ca6-a321d297cd35-cni-log-dir\") pod \"calico-node-xhw89\" (UID: \"8df332b7-49f1-4fbb-9ca6-a321d297cd35\") " pod="calico-system/calico-node-xhw89" Nov 24 00:09:47.043196 kubelet[2727]: I1124 00:09:47.043040 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8df332b7-49f1-4fbb-9ca6-a321d297cd35-flexvol-driver-host\") pod \"calico-node-xhw89\" (UID: \"8df332b7-49f1-4fbb-9ca6-a321d297cd35\") " pod="calico-system/calico-node-xhw89" Nov 24 00:09:47.043196 kubelet[2727]: I1124 00:09:47.043057 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8df332b7-49f1-4fbb-9ca6-a321d297cd35-tigera-ca-bundle\") pod \"calico-node-xhw89\" (UID: \"8df332b7-49f1-4fbb-9ca6-a321d297cd35\") " pod="calico-system/calico-node-xhw89" Nov 24 00:09:47.043196 kubelet[2727]: I1124 00:09:47.043073 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8df332b7-49f1-4fbb-9ca6-a321d297cd35-cni-net-dir\") pod \"calico-node-xhw89\" (UID: \"8df332b7-49f1-4fbb-9ca6-a321d297cd35\") " pod="calico-system/calico-node-xhw89" Nov 24 00:09:47.043196 kubelet[2727]: I1124 00:09:47.043128 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8df332b7-49f1-4fbb-9ca6-a321d297cd35-lib-modules\") pod \"calico-node-xhw89\" (UID: \"8df332b7-49f1-4fbb-9ca6-a321d297cd35\") " pod="calico-system/calico-node-xhw89" Nov 24 00:09:47.044639 kubelet[2727]: I1124 00:09:47.043144 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8df332b7-49f1-4fbb-9ca6-a321d297cd35-policysync\") pod \"calico-node-xhw89\" (UID: \"8df332b7-49f1-4fbb-9ca6-a321d297cd35\") " pod="calico-system/calico-node-xhw89" Nov 24 00:09:47.044639 kubelet[2727]: I1124 00:09:47.043160 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8df332b7-49f1-4fbb-9ca6-a321d297cd35-cni-bin-dir\") pod \"calico-node-xhw89\" (UID: \"8df332b7-49f1-4fbb-9ca6-a321d297cd35\") " pod="calico-system/calico-node-xhw89" Nov 24 00:09:47.044639 kubelet[2727]: I1124 00:09:47.043419 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqstw\" (UniqueName: \"kubernetes.io/projected/8df332b7-49f1-4fbb-9ca6-a321d297cd35-kube-api-access-fqstw\") pod \"calico-node-xhw89\" (UID: \"8df332b7-49f1-4fbb-9ca6-a321d297cd35\") " pod="calico-system/calico-node-xhw89" Nov 24 00:09:47.044639 kubelet[2727]: I1124 00:09:47.043440 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8df332b7-49f1-4fbb-9ca6-a321d297cd35-var-run-calico\") pod \"calico-node-xhw89\" (UID: \"8df332b7-49f1-4fbb-9ca6-a321d297cd35\") " pod="calico-system/calico-node-xhw89" Nov 24 00:09:47.044639 kubelet[2727]: I1124 00:09:47.043515 2727 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8df332b7-49f1-4fbb-9ca6-a321d297cd35-node-certs\") pod \"calico-node-xhw89\" (UID: \"8df332b7-49f1-4fbb-9ca6-a321d297cd35\") " pod="calico-system/calico-node-xhw89" Nov 24 00:09:47.044751 kubelet[2727]: I1124 00:09:47.043543 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8df332b7-49f1-4fbb-9ca6-a321d297cd35-var-lib-calico\") pod \"calico-node-xhw89\" (UID: \"8df332b7-49f1-4fbb-9ca6-a321d297cd35\") " pod="calico-system/calico-node-xhw89" Nov 24 00:09:47.044751 kubelet[2727]: I1124 00:09:47.043584 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8df332b7-49f1-4fbb-9ca6-a321d297cd35-xtables-lock\") pod \"calico-node-xhw89\" (UID: \"8df332b7-49f1-4fbb-9ca6-a321d297cd35\") " pod="calico-system/calico-node-xhw89" Nov 24 00:09:47.096295 kubelet[2727]: E1124 00:09:47.096244 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:47.097006 containerd[1554]: time="2025-11-24T00:09:47.096911334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54fbcff868-spkt4,Uid:6716c98a-5789-4669-9ee9-e69b49ca84c4,Namespace:calico-system,Attempt:0,}" Nov 24 00:09:47.116140 containerd[1554]: time="2025-11-24T00:09:47.115987943Z" level=info msg="connecting to shim 3e7b5047b6783863bb14d4d567d2d462c34592290e8e7d923abc02d5f7ed0034" address="unix:///run/containerd/s/3a928b71d5ed9f59c5d28a3519374b034ef8b55aa88dcb810d6dcb8b91cb9b2e" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:09:47.149437 kubelet[2727]: E1124 00:09:47.149103 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.149437 kubelet[2727]: W1124 00:09:47.149147 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.149437 kubelet[2727]: E1124 00:09:47.149167 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.149624 systemd[1]: Started cri-containerd-3e7b5047b6783863bb14d4d567d2d462c34592290e8e7d923abc02d5f7ed0034.scope - libcontainer container 3e7b5047b6783863bb14d4d567d2d462c34592290e8e7d923abc02d5f7ed0034. Nov 24 00:09:47.150790 kubelet[2727]: E1124 00:09:47.150744 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.153639 kubelet[2727]: W1124 00:09:47.153624 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.154093 kubelet[2727]: E1124 00:09:47.154080 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:09:47.155182 kubelet[2727]: E1124 00:09:47.155148 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.155768 kubelet[2727]: W1124 00:09:47.155696 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.155768 kubelet[2727]: E1124 00:09:47.155713 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.157060 kubelet[2727]: E1124 00:09:47.156951 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.157998 kubelet[2727]: W1124 00:09:47.157924 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.157998 kubelet[2727]: E1124 00:09:47.157941 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.159382 kubelet[2727]: E1124 00:09:47.159370 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.159560 kubelet[2727]: W1124 00:09:47.159536 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.159649 kubelet[2727]: E1124 00:09:47.159638 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.160393 kubelet[2727]: E1124 00:09:47.160343 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.160802 kubelet[2727]: W1124 00:09:47.160649 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.160940 kubelet[2727]: E1124 00:09:47.160883 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.162056 kubelet[2727]: E1124 00:09:47.161994 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.162056 kubelet[2727]: W1124 00:09:47.162013 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.162056 kubelet[2727]: E1124 00:09:47.162022 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:09:47.163687 kubelet[2727]: E1124 00:09:47.163648 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.163687 kubelet[2727]: W1124 00:09:47.163660 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.164479 kubelet[2727]: E1124 00:09:47.163669 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.165061 kubelet[2727]: E1124 00:09:47.164974 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.165061 kubelet[2727]: W1124 00:09:47.165003 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.165061 kubelet[2727]: E1124 00:09:47.165028 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.166154 kubelet[2727]: E1124 00:09:47.166122 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.166154 kubelet[2727]: W1124 00:09:47.166136 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.166154 kubelet[2727]: E1124 00:09:47.166146 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.166711 kubelet[2727]: E1124 00:09:47.166689 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.166711 kubelet[2727]: W1124 00:09:47.166704 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.166711 kubelet[2727]: E1124 00:09:47.166714 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.167263 kubelet[2727]: E1124 00:09:47.167242 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.167263 kubelet[2727]: W1124 00:09:47.167258 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.167575 kubelet[2727]: E1124 00:09:47.167268 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:09:47.174019 kubelet[2727]: E1124 00:09:47.174004 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.174107 kubelet[2727]: W1124 00:09:47.174096 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.174205 kubelet[2727]: E1124 00:09:47.174152 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.175207 kubelet[2727]: E1124 00:09:47.175196 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.175288 kubelet[2727]: W1124 00:09:47.175277 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.175358 kubelet[2727]: E1124 00:09:47.175331 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.187053 kubelet[2727]: E1124 00:09:47.187016 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r4dwf" podUID="63922d09-5f16-43ef-bdc3-f819f707f5b0" Nov 24 00:09:47.236735 kubelet[2727]: E1124 00:09:47.236671 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.236735 kubelet[2727]: W1124 00:09:47.236690 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.236735 kubelet[2727]: E1124 00:09:47.236706 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.237915 kubelet[2727]: E1124 00:09:47.237786 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.237915 kubelet[2727]: W1124 00:09:47.237799 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.237915 kubelet[2727]: E1124 00:09:47.237809 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:09:47.238533 kubelet[2727]: E1124 00:09:47.238388 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.238533 kubelet[2727]: W1124 00:09:47.238400 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.238533 kubelet[2727]: E1124 00:09:47.238409 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.239069 kubelet[2727]: E1124 00:09:47.238962 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.239069 kubelet[2727]: W1124 00:09:47.238973 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.239069 kubelet[2727]: E1124 00:09:47.238993 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.239721 kubelet[2727]: E1124 00:09:47.239605 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.239721 kubelet[2727]: W1124 00:09:47.239650 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.239721 kubelet[2727]: E1124 00:09:47.239659 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.240165 kubelet[2727]: E1124 00:09:47.240134 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.241723 kubelet[2727]: W1124 00:09:47.240145 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.241723 kubelet[2727]: E1124 00:09:47.240214 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.241723 kubelet[2727]: E1124 00:09:47.241598 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.241723 kubelet[2727]: W1124 00:09:47.241606 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.241723 kubelet[2727]: E1124 00:09:47.241629 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:09:47.242376 kubelet[2727]: E1124 00:09:47.242083 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.242376 kubelet[2727]: W1124 00:09:47.242098 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.242376 kubelet[2727]: E1124 00:09:47.242107 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.242376 kubelet[2727]: E1124 00:09:47.242325 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.242376 kubelet[2727]: W1124 00:09:47.242332 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.242376 kubelet[2727]: E1124 00:09:47.242340 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.242882 kubelet[2727]: E1124 00:09:47.242870 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.242941 kubelet[2727]: W1124 00:09:47.242931 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.242984 kubelet[2727]: E1124 00:09:47.242975 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.243233 kubelet[2727]: E1124 00:09:47.243222 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.243288 kubelet[2727]: W1124 00:09:47.243279 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.243342 kubelet[2727]: E1124 00:09:47.243332 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.243635 kubelet[2727]: E1124 00:09:47.243624 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.243685 kubelet[2727]: W1124 00:09:47.243675 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.243723 kubelet[2727]: E1124 00:09:47.243715 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:09:47.245246 kubelet[2727]: E1124 00:09:47.245135 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.245246 kubelet[2727]: W1124 00:09:47.245151 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.245246 kubelet[2727]: E1124 00:09:47.245161 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.245567 kubelet[2727]: E1124 00:09:47.245387 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.245567 kubelet[2727]: W1124 00:09:47.245398 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.245567 kubelet[2727]: E1124 00:09:47.245407 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.245709 kubelet[2727]: E1124 00:09:47.245699 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.245754 kubelet[2727]: W1124 00:09:47.245745 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.245813 kubelet[2727]: E1124 00:09:47.245803 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.246101 kubelet[2727]: E1124 00:09:47.246091 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.246163 kubelet[2727]: W1124 00:09:47.246152 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.246210 kubelet[2727]: E1124 00:09:47.246199 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.246525 kubelet[2727]: E1124 00:09:47.246515 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.247094 kubelet[2727]: W1124 00:09:47.246945 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.247094 kubelet[2727]: E1124 00:09:47.246960 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:09:47.247488 kubelet[2727]: E1124 00:09:47.247421 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.247561 kubelet[2727]: W1124 00:09:47.247550 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.247618 kubelet[2727]: E1124 00:09:47.247609 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.247990 kubelet[2727]: E1124 00:09:47.247978 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.248144 kubelet[2727]: W1124 00:09:47.248131 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.248249 kubelet[2727]: E1124 00:09:47.248238 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.248667 kubelet[2727]: E1124 00:09:47.248643 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.248805 kubelet[2727]: W1124 00:09:47.248779 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.248859 kubelet[2727]: E1124 00:09:47.248850 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.249662 kubelet[2727]: E1124 00:09:47.249606 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.249814 kubelet[2727]: W1124 00:09:47.249752 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.249814 kubelet[2727]: E1124 00:09:47.249768 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:09:47.250109 kubelet[2727]: I1124 00:09:47.249907 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/63922d09-5f16-43ef-bdc3-f819f707f5b0-socket-dir\") pod \"csi-node-driver-r4dwf\" (UID: \"63922d09-5f16-43ef-bdc3-f819f707f5b0\") " pod="calico-system/csi-node-driver-r4dwf" Nov 24 00:09:47.250752 kubelet[2727]: E1124 00:09:47.250718 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.250752 kubelet[2727]: W1124 00:09:47.250730 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.250752 kubelet[2727]: E1124 00:09:47.250739 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.251222 kubelet[2727]: I1124 00:09:47.251018 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/63922d09-5f16-43ef-bdc3-f819f707f5b0-kubelet-dir\") pod \"csi-node-driver-r4dwf\" (UID: \"63922d09-5f16-43ef-bdc3-f819f707f5b0\") " pod="calico-system/csi-node-driver-r4dwf" Nov 24 00:09:47.251916 kubelet[2727]: E1124 00:09:47.251858 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.251916 kubelet[2727]: W1124 00:09:47.251892 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.251916 kubelet[2727]: E1124 00:09:47.251903 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.252203 kubelet[2727]: I1124 00:09:47.252176 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/63922d09-5f16-43ef-bdc3-f819f707f5b0-varrun\") pod \"csi-node-driver-r4dwf\" (UID: \"63922d09-5f16-43ef-bdc3-f819f707f5b0\") " pod="calico-system/csi-node-driver-r4dwf" Nov 24 00:09:47.252702 kubelet[2727]: E1124 00:09:47.252641 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.252702 kubelet[2727]: W1124 00:09:47.252652 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.252702 kubelet[2727]: E1124 00:09:47.252660 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:09:47.253100 kubelet[2727]: E1124 00:09:47.253081 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.253209 kubelet[2727]: W1124 00:09:47.253177 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.253209 kubelet[2727]: E1124 00:09:47.253190 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.253531 containerd[1554]: time="2025-11-24T00:09:47.253371790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54fbcff868-spkt4,Uid:6716c98a-5789-4669-9ee9-e69b49ca84c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"3e7b5047b6783863bb14d4d567d2d462c34592290e8e7d923abc02d5f7ed0034\"" Nov 24 00:09:47.254024 kubelet[2727]: E1124 00:09:47.254011 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.254228 kubelet[2727]: W1124 00:09:47.254115 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.254228 kubelet[2727]: E1124 00:09:47.254144 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.254701 kubelet[2727]: E1124 00:09:47.254603 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.254701 kubelet[2727]: W1124 00:09:47.254613 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.254701 kubelet[2727]: E1124 00:09:47.254622 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.254823 kubelet[2727]: E1124 00:09:47.254810 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:47.255205 kubelet[2727]: E1124 00:09:47.255094 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.255205 kubelet[2727]: W1124 00:09:47.255105 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.255205 kubelet[2727]: E1124 00:09:47.255113 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:09:47.255205 kubelet[2727]: I1124 00:09:47.255131 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/63922d09-5f16-43ef-bdc3-f819f707f5b0-registration-dir\") pod \"csi-node-driver-r4dwf\" (UID: \"63922d09-5f16-43ef-bdc3-f819f707f5b0\") " pod="calico-system/csi-node-driver-r4dwf" Nov 24 00:09:47.255669 kubelet[2727]: E1124 00:09:47.255554 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.255669 kubelet[2727]: W1124 00:09:47.255564 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.255669 kubelet[2727]: E1124 00:09:47.255573 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.255669 kubelet[2727]: I1124 00:09:47.255586 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjcrg\" (UniqueName: \"kubernetes.io/projected/63922d09-5f16-43ef-bdc3-f819f707f5b0-kube-api-access-xjcrg\") pod \"csi-node-driver-r4dwf\" (UID: \"63922d09-5f16-43ef-bdc3-f819f707f5b0\") " pod="calico-system/csi-node-driver-r4dwf" Nov 24 00:09:47.255966 kubelet[2727]: E1124 00:09:47.255935 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.255966 kubelet[2727]: W1124 00:09:47.255946 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.255966 kubelet[2727]: E1124 00:09:47.255954 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.256722 kubelet[2727]: E1124 00:09:47.256711 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.256882 kubelet[2727]: W1124 00:09:47.256855 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.256882 kubelet[2727]: E1124 00:09:47.256869 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:09:47.257255 containerd[1554]: time="2025-11-24T00:09:47.257074400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 24 00:09:47.257651 kubelet[2727]: E1124 00:09:47.257618 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.257722 kubelet[2727]: W1124 00:09:47.257697 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.257722 kubelet[2727]: E1124 00:09:47.257711 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.258789 kubelet[2727]: E1124 00:09:47.258299 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.258789 kubelet[2727]: W1124 00:09:47.258310 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.258789 kubelet[2727]: E1124 00:09:47.258318 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.260022 kubelet[2727]: E1124 00:09:47.259868 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.260022 kubelet[2727]: W1124 00:09:47.259879 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.260022 kubelet[2727]: E1124 00:09:47.259888 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.260150 kubelet[2727]: E1124 00:09:47.260138 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.260225 kubelet[2727]: W1124 00:09:47.260216 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.260691 kubelet[2727]: E1124 00:09:47.260534 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:09:47.287002 kubelet[2727]: E1124 00:09:47.286985 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:47.287742 containerd[1554]: time="2025-11-24T00:09:47.287716251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xhw89,Uid:8df332b7-49f1-4fbb-9ca6-a321d297cd35,Namespace:calico-system,Attempt:0,}" Nov 24 00:09:47.317287 containerd[1554]: time="2025-11-24T00:09:47.306906205Z" level=info msg="connecting to shim c904333a07acbef76db49f21d1cfc2269eea5d72877612b607d073aea5e91064" address="unix:///run/containerd/s/1b7f655eb3f4880d88b0bdbb2c1f4a052d25d3b743b758b523099da5fc90d08f" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:09:47.356973 kubelet[2727]: E1124 00:09:47.356953 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.357087 kubelet[2727]: W1124 00:09:47.357073 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.357189 kubelet[2727]: E1124 00:09:47.357176 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.357708 kubelet[2727]: E1124 00:09:47.357697 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.357792 kubelet[2727]: W1124 00:09:47.357782 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.357857 kubelet[2727]: E1124 00:09:47.357829 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.358178 kubelet[2727]: E1124 00:09:47.358131 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.358178 kubelet[2727]: W1124 00:09:47.358141 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.358178 kubelet[2727]: E1124 00:09:47.358150 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.358542 kubelet[2727]: E1124 00:09:47.358519 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.358542 kubelet[2727]: W1124 00:09:47.358540 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.358669 kubelet[2727]: E1124 00:09:47.358560 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:09:47.358817 kubelet[2727]: E1124 00:09:47.358799 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.358863 kubelet[2727]: W1124 00:09:47.358853 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.358940 kubelet[2727]: E1124 00:09:47.358906 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.359250 kubelet[2727]: E1124 00:09:47.359239 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.359314 kubelet[2727]: W1124 00:09:47.359303 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.359364 kubelet[2727]: E1124 00:09:47.359355 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.359618 systemd[1]: Started cri-containerd-c904333a07acbef76db49f21d1cfc2269eea5d72877612b607d073aea5e91064.scope - libcontainer container c904333a07acbef76db49f21d1cfc2269eea5d72877612b607d073aea5e91064. Nov 24 00:09:47.360817 kubelet[2727]: E1124 00:09:47.360784 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.360817 kubelet[2727]: W1124 00:09:47.360795 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.360817 kubelet[2727]: E1124 00:09:47.360804 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.361236 kubelet[2727]: E1124 00:09:47.361224 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.361442 kubelet[2727]: W1124 00:09:47.361429 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.361548 kubelet[2727]: E1124 00:09:47.361537 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:09:47.362414 kubelet[2727]: E1124 00:09:47.362350 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.362414 kubelet[2727]: W1124 00:09:47.362361 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.362414 kubelet[2727]: E1124 00:09:47.362372 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.363343 kubelet[2727]: E1124 00:09:47.363283 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.363500 kubelet[2727]: W1124 00:09:47.363444 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.364705 kubelet[2727]: E1124 00:09:47.364617 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.365800 kubelet[2727]: E1124 00:09:47.365676 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.365800 kubelet[2727]: W1124 00:09:47.365692 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.365800 kubelet[2727]: E1124 00:09:47.365703 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.365890 kubelet[2727]: E1124 00:09:47.365875 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.365890 kubelet[2727]: W1124 00:09:47.365883 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.365929 kubelet[2727]: E1124 00:09:47.365891 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.366597 kubelet[2727]: E1124 00:09:47.366551 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.366597 kubelet[2727]: W1124 00:09:47.366567 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.366597 kubelet[2727]: E1124 00:09:47.366576 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:09:47.366965 kubelet[2727]: E1124 00:09:47.366946 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.366965 kubelet[2727]: W1124 00:09:47.366963 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.367019 kubelet[2727]: E1124 00:09:47.366973 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.367552 kubelet[2727]: E1124 00:09:47.367516 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.367552 kubelet[2727]: W1124 00:09:47.367532 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.367552 kubelet[2727]: E1124 00:09:47.367541 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.367836 kubelet[2727]: E1124 00:09:47.367778 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.367836 kubelet[2727]: W1124 00:09:47.367831 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.367995 kubelet[2727]: E1124 00:09:47.367845 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.368300 kubelet[2727]: E1124 00:09:47.368265 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.368300 kubelet[2727]: W1124 00:09:47.368282 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.368300 kubelet[2727]: E1124 00:09:47.368291 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.368567 kubelet[2727]: E1124 00:09:47.368545 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.368567 kubelet[2727]: W1124 00:09:47.368559 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.368617 kubelet[2727]: E1124 00:09:47.368575 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:09:47.368900 kubelet[2727]: E1124 00:09:47.368854 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.368900 kubelet[2727]: W1124 00:09:47.368870 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.368900 kubelet[2727]: E1124 00:09:47.368889 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.369130 kubelet[2727]: E1124 00:09:47.369101 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.369130 kubelet[2727]: W1124 00:09:47.369108 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.369130 kubelet[2727]: E1124 00:09:47.369116 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.369407 kubelet[2727]: E1124 00:09:47.369323 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.369407 kubelet[2727]: W1124 00:09:47.369331 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.369407 kubelet[2727]: E1124 00:09:47.369339 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.369624 kubelet[2727]: E1124 00:09:47.369600 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.369624 kubelet[2727]: W1124 00:09:47.369615 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.369624 kubelet[2727]: E1124 00:09:47.369623 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.370405 kubelet[2727]: E1124 00:09:47.370378 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.370405 kubelet[2727]: W1124 00:09:47.370401 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.370522 kubelet[2727]: E1124 00:09:47.370415 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:09:47.371166 kubelet[2727]: E1124 00:09:47.371141 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.371166 kubelet[2727]: W1124 00:09:47.371159 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.371166 kubelet[2727]: E1124 00:09:47.371168 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.371567 kubelet[2727]: E1124 00:09:47.371547 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.371567 kubelet[2727]: W1124 00:09:47.371563 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.371751 kubelet[2727]: E1124 00:09:47.371572 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.378702 kubelet[2727]: E1124 00:09:47.378659 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:47.378816 kubelet[2727]: W1124 00:09:47.378783 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:47.378816 kubelet[2727]: E1124 00:09:47.378801 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:47.397274 containerd[1554]: time="2025-11-24T00:09:47.397243034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xhw89,Uid:8df332b7-49f1-4fbb-9ca6-a321d297cd35,Namespace:calico-system,Attempt:0,} returns sandbox id \"c904333a07acbef76db49f21d1cfc2269eea5d72877612b607d073aea5e91064\"" Nov 24 00:09:47.397877 kubelet[2727]: E1124 00:09:47.397850 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:48.108769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount221603474.mount: Deactivated successfully. 
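The repeated kubelet error triplets above (driver-call.go:262, driver-call.go:149, plugins.go:703) all describe the same condition: while probing its FlexVolume plugin directory, the kubelet tries to execute /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument `init`, the binary is not present yet (it is the driver that the calico-node flexvol-driver init container, pulled further down in this log, is meant to install), and the resulting empty output cannot be parsed as JSON — hence "unexpected end of JSON input". For orientation, here is a minimal, illustrative sketch of what a FlexVolume-style executable is expected to print back to the kubelet; it is a stand-in for explanation only, not Calico's pod2daemon driver.

```go
// flexvol_stub.go - illustrative sketch of a FlexVolume driver entry point
// that answers the kubelet's "init" probe with well-formed JSON, which is
// exactly what the "unexpected end of JSON input" errors above say is missing.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the response shape the kubelet unmarshals after every
// FlexVolume driver invocation.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println(`{"status":"Failure","message":"no command given"}`)
		os.Exit(1)
	}

	switch os.Args[1] {
	case "init":
		// Report success and declare that attach/detach is not implemented,
		// so the kubelet only ever calls mount/unmount on this driver.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		// Unhandled operations are reported as "Not supported", which the
		// kubelet treats as a soft failure rather than a parse error.
		out, _ := json.Marshal(driverStatus{
			Status:  "Not supported",
			Message: "operation " + os.Args[1] + " is not implemented",
		})
		fmt.Println(string(out))
	}
}
```

Once a binary of this shape exists at the probed path (or the stale plugin directory is removed), the dynamic plugin probe stops emitting these errors.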
Nov 24 00:09:48.853081 containerd[1554]: time="2025-11-24T00:09:48.853020890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:48.853845 containerd[1554]: time="2025-11-24T00:09:48.853818399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 24 00:09:48.854703 containerd[1554]: time="2025-11-24T00:09:48.854674216Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:48.856184 containerd[1554]: time="2025-11-24T00:09:48.856153709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:48.856825 containerd[1554]: time="2025-11-24T00:09:48.856797894Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.599702896s" Nov 24 00:09:48.856924 containerd[1554]: time="2025-11-24T00:09:48.856909200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 24 00:09:48.857780 containerd[1554]: time="2025-11-24T00:09:48.857740928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 24 00:09:48.872960 containerd[1554]: time="2025-11-24T00:09:48.872928393Z" level=info msg="CreateContainer within sandbox \"3e7b5047b6783863bb14d4d567d2d462c34592290e8e7d923abc02d5f7ed0034\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 24 00:09:48.880046 containerd[1554]: time="2025-11-24T00:09:48.879754701Z" level=info msg="Container 578b19f4eb73bcb60036a170507eab7e9c176d4381c174c65d7d6e8423cad0d5: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:09:48.885511 containerd[1554]: time="2025-11-24T00:09:48.885451691Z" level=info msg="CreateContainer within sandbox \"3e7b5047b6783863bb14d4d567d2d462c34592290e8e7d923abc02d5f7ed0034\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"578b19f4eb73bcb60036a170507eab7e9c176d4381c174c65d7d6e8423cad0d5\"" Nov 24 00:09:48.886383 containerd[1554]: time="2025-11-24T00:09:48.886352797Z" level=info msg="StartContainer for \"578b19f4eb73bcb60036a170507eab7e9c176d4381c174c65d7d6e8423cad0d5\"" Nov 24 00:09:48.888241 containerd[1554]: time="2025-11-24T00:09:48.888211725Z" level=info msg="connecting to shim 578b19f4eb73bcb60036a170507eab7e9c176d4381c174c65d7d6e8423cad0d5" address="unix:///run/containerd/s/3a928b71d5ed9f59c5d28a3519374b034ef8b55aa88dcb810d6dcb8b91cb9b2e" protocol=ttrpc version=3 Nov 24 00:09:48.914599 systemd[1]: Started cri-containerd-578b19f4eb73bcb60036a170507eab7e9c176d4381c174c65d7d6e8423cad0d5.scope - libcontainer container 578b19f4eb73bcb60036a170507eab7e9c176d4381c174c65d7d6e8423cad0d5. 
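The containerd entries above trace the CRI flow for calico-typha: the image pull completes ("Pulled image ... in 1.599702896s"), a container is created inside the existing pod sandbox, and a start request is sent to a per-container shim over ttrpc. As a rough sketch of that same pull-then-run sequence, the snippet below uses containerd's public Go client (v1.x import paths; containerd 2.x moved the client under a /v2/client package). The socket path, the k8s.io namespace, and the image reference come from the log; the container ID and snapshot name are made up for the example, and the real kubelet/CRI path goes through the CRI gRPC API rather than this client.

```go
// pull_typha.go - illustrative only: the pull/create/start sequence the
// containerd log lines above describe, written against the containerd Go client.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" containerd namespace,
	// matching the namespace=k8s.io field in the shim connection log line.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Equivalent of the PullImage "ghcr.io/flatcar/calico/typha:v3.30.4" entry.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.4", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of CreateContainer: containerd records container metadata
	// and prepares a snapshot from the pulled image.
	container, err := client.NewContainer(ctx, "typha-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("typha-demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// Equivalent of StartContainer: a task (the running process) is created
	// via the shim and then started.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Printf("started container %s as pid %d", container.ID(), task.Pid())
}
```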
Nov 24 00:09:48.980061 containerd[1554]: time="2025-11-24T00:09:48.980018191Z" level=info msg="StartContainer for \"578b19f4eb73bcb60036a170507eab7e9c176d4381c174c65d7d6e8423cad0d5\" returns successfully" Nov 24 00:09:49.035408 kubelet[2727]: E1124 00:09:49.035359 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r4dwf" podUID="63922d09-5f16-43ef-bdc3-f819f707f5b0" Nov 24 00:09:49.127544 kubelet[2727]: E1124 00:09:49.127413 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:49.136145 kubelet[2727]: I1124 00:09:49.136076 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-54fbcff868-spkt4" podStartSLOduration=1.535233797 podStartE2EDuration="3.136065868s" podCreationTimestamp="2025-11-24 00:09:46 +0000 UTC" firstStartedPulling="2025-11-24 00:09:47.25656825 +0000 UTC m=+18.318438994" lastFinishedPulling="2025-11-24 00:09:48.857400321 +0000 UTC m=+19.919271065" observedRunningTime="2025-11-24 00:09:49.13601258 +0000 UTC m=+20.197883324" watchObservedRunningTime="2025-11-24 00:09:49.136065868 +0000 UTC m=+20.197936612" Nov 24 00:09:49.175115 kubelet[2727]: E1124 00:09:49.174972 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.175182 kubelet[2727]: W1124 00:09:49.175122 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.175841 kubelet[2727]: E1124 00:09:49.175141 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.177211 kubelet[2727]: E1124 00:09:49.177184 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.177524 kubelet[2727]: W1124 00:09:49.177439 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.177524 kubelet[2727]: E1124 00:09:49.177478 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.177832 kubelet[2727]: E1124 00:09:49.177681 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.177870 kubelet[2727]: W1124 00:09:49.177824 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.177870 kubelet[2727]: E1124 00:09:49.177850 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:09:49.179575 kubelet[2727]: E1124 00:09:49.179550 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.179575 kubelet[2727]: W1124 00:09:49.179567 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.179575 kubelet[2727]: E1124 00:09:49.179577 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.179995 kubelet[2727]: E1124 00:09:49.179972 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.179995 kubelet[2727]: W1124 00:09:49.179987 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.179995 kubelet[2727]: E1124 00:09:49.179996 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.181820 kubelet[2727]: E1124 00:09:49.181794 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.181820 kubelet[2727]: W1124 00:09:49.181812 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.181820 kubelet[2727]: E1124 00:09:49.181821 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.183951 kubelet[2727]: E1124 00:09:49.183878 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.183951 kubelet[2727]: W1124 00:09:49.183891 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.184142 kubelet[2727]: E1124 00:09:49.183901 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.184713 kubelet[2727]: E1124 00:09:49.184689 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.184713 kubelet[2727]: W1124 00:09:49.184705 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.184713 kubelet[2727]: E1124 00:09:49.184714 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:09:49.185250 kubelet[2727]: E1124 00:09:49.185228 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.185628 kubelet[2727]: W1124 00:09:49.185263 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.185628 kubelet[2727]: E1124 00:09:49.185273 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.185628 kubelet[2727]: E1124 00:09:49.185497 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.185628 kubelet[2727]: W1124 00:09:49.185505 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.185628 kubelet[2727]: E1124 00:09:49.185513 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.186316 kubelet[2727]: E1124 00:09:49.185709 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.186316 kubelet[2727]: W1124 00:09:49.185717 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.186316 kubelet[2727]: E1124 00:09:49.185724 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.186316 kubelet[2727]: E1124 00:09:49.186264 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.186316 kubelet[2727]: W1124 00:09:49.186273 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.186316 kubelet[2727]: E1124 00:09:49.186282 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.187658 kubelet[2727]: E1124 00:09:49.186546 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.187658 kubelet[2727]: W1124 00:09:49.186557 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.187658 kubelet[2727]: E1124 00:09:49.186584 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:09:49.187658 kubelet[2727]: E1124 00:09:49.186791 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.187658 kubelet[2727]: W1124 00:09:49.186798 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.187658 kubelet[2727]: E1124 00:09:49.186824 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.187658 kubelet[2727]: E1124 00:09:49.187043 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.187658 kubelet[2727]: W1124 00:09:49.187051 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.187658 kubelet[2727]: E1124 00:09:49.187058 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.187658 kubelet[2727]: E1124 00:09:49.187341 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.187907 kubelet[2727]: W1124 00:09:49.187349 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.187907 kubelet[2727]: E1124 00:09:49.187357 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.187907 kubelet[2727]: E1124 00:09:49.187635 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.187907 kubelet[2727]: W1124 00:09:49.187643 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.187907 kubelet[2727]: E1124 00:09:49.187651 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.188263 kubelet[2727]: E1124 00:09:49.188239 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.188297 kubelet[2727]: W1124 00:09:49.188273 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.188297 kubelet[2727]: E1124 00:09:49.188282 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:09:49.188586 kubelet[2727]: E1124 00:09:49.188551 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.188586 kubelet[2727]: W1124 00:09:49.188562 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.188586 kubelet[2727]: E1124 00:09:49.188571 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.189613 kubelet[2727]: E1124 00:09:49.189570 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.189613 kubelet[2727]: W1124 00:09:49.189583 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.189613 kubelet[2727]: E1124 00:09:49.189612 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.189992 kubelet[2727]: E1124 00:09:49.189863 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.189992 kubelet[2727]: W1124 00:09:49.189909 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.189992 kubelet[2727]: E1124 00:09:49.189917 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.190378 kubelet[2727]: E1124 00:09:49.190125 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.190378 kubelet[2727]: W1124 00:09:49.190133 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.190378 kubelet[2727]: E1124 00:09:49.190142 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.191258 kubelet[2727]: E1124 00:09:49.190591 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.191258 kubelet[2727]: W1124 00:09:49.190602 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.191258 kubelet[2727]: E1124 00:09:49.190610 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:09:49.191258 kubelet[2727]: E1124 00:09:49.190876 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.191258 kubelet[2727]: W1124 00:09:49.190883 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.191258 kubelet[2727]: E1124 00:09:49.190891 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.191405 kubelet[2727]: E1124 00:09:49.191298 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.191405 kubelet[2727]: W1124 00:09:49.191308 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.191405 kubelet[2727]: E1124 00:09:49.191316 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.192061 kubelet[2727]: E1124 00:09:49.191931 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.192061 kubelet[2727]: W1124 00:09:49.191949 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.192061 kubelet[2727]: E1124 00:09:49.191957 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.192385 kubelet[2727]: E1124 00:09:49.192146 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.192385 kubelet[2727]: W1124 00:09:49.192161 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.192385 kubelet[2727]: E1124 00:09:49.192169 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.192535 kubelet[2727]: E1124 00:09:49.192489 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.192535 kubelet[2727]: W1124 00:09:49.192498 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.192535 kubelet[2727]: E1124 00:09:49.192506 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:09:49.193029 kubelet[2727]: E1124 00:09:49.193006 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.193029 kubelet[2727]: W1124 00:09:49.193023 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.193100 kubelet[2727]: E1124 00:09:49.193033 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.194363 kubelet[2727]: E1124 00:09:49.194339 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.194363 kubelet[2727]: W1124 00:09:49.194356 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.194363 kubelet[2727]: E1124 00:09:49.194365 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.195758 kubelet[2727]: E1124 00:09:49.195708 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.195758 kubelet[2727]: W1124 00:09:49.195722 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.195758 kubelet[2727]: E1124 00:09:49.195732 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.197029 kubelet[2727]: E1124 00:09:49.196277 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.197029 kubelet[2727]: W1124 00:09:49.196384 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.197029 kubelet[2727]: E1124 00:09:49.196412 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:09:49.197535 kubelet[2727]: E1124 00:09:49.197523 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:09:49.197734 kubelet[2727]: W1124 00:09:49.197720 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:09:49.197870 kubelet[2727]: E1124 00:09:49.197859 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:09:49.453985 containerd[1554]: time="2025-11-24T00:09:49.453936558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:49.455753 containerd[1554]: time="2025-11-24T00:09:49.455725462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 24 00:09:49.455802 containerd[1554]: time="2025-11-24T00:09:49.455787510Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:49.457122 containerd[1554]: time="2025-11-24T00:09:49.457074643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:49.457891 containerd[1554]: time="2025-11-24T00:09:49.457597514Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 599.830277ms" Nov 24 00:09:49.457891 containerd[1554]: time="2025-11-24T00:09:49.457626742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 24 00:09:49.460971 containerd[1554]: time="2025-11-24T00:09:49.460824975Z" level=info msg="CreateContainer within sandbox \"c904333a07acbef76db49f21d1cfc2269eea5d72877612b607d073aea5e91064\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 24 00:09:49.468728 containerd[1554]: time="2025-11-24T00:09:49.468633379Z" level=info msg="Container e4cbf960c3f5436c4d44c9dc1e3e066b93ba92522070cf30ae6a25835bbd8429: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:09:49.478898 containerd[1554]: time="2025-11-24T00:09:49.478862293Z" level=info msg="CreateContainer within sandbox \"c904333a07acbef76db49f21d1cfc2269eea5d72877612b607d073aea5e91064\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e4cbf960c3f5436c4d44c9dc1e3e066b93ba92522070cf30ae6a25835bbd8429\"" Nov 24 00:09:49.479539 containerd[1554]: time="2025-11-24T00:09:49.479512110Z" level=info msg="StartContainer for \"e4cbf960c3f5436c4d44c9dc1e3e066b93ba92522070cf30ae6a25835bbd8429\"" Nov 24 00:09:49.480807 containerd[1554]: time="2025-11-24T00:09:49.480768314Z" level=info msg="connecting to shim e4cbf960c3f5436c4d44c9dc1e3e066b93ba92522070cf30ae6a25835bbd8429" address="unix:///run/containerd/s/1b7f655eb3f4880d88b0bdbb2c1f4a052d25d3b743b758b523099da5fc90d08f" protocol=ttrpc version=3 Nov 24 00:09:49.500599 systemd[1]: Started cri-containerd-e4cbf960c3f5436c4d44c9dc1e3e066b93ba92522070cf30ae6a25835bbd8429.scope - libcontainer container e4cbf960c3f5436c4d44c9dc1e3e066b93ba92522070cf30ae6a25835bbd8429. 
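The kubelet dns.go:153 warnings that recur through this stretch ("Nameserver limits exceeded ... the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9") mean the node's resolv.conf lists more nameservers than a pod's resolv.conf may carry: the kubelet keeps the first three and drops the rest, since glibc-style resolvers only consult three. A small, self-contained sketch of that truncation follows; it reads a resolv.conf-style file given on the command line and is illustrative of the behaviour, not the kubelet's own code.

```go
// resolv_limit.go - illustrative only: mimics the truncation behind the
// "Nameserver limits exceeded" kubelet warning by keeping at most three
// nameserver entries from a resolv.conf-style file.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc resolvers only use the first three entries

func main() {
	path := "/etc/resolv.conf"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}

	f, err := os.Open(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var nameservers []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}

	applied := nameservers
	if len(nameservers) > maxNameservers {
		applied = nameservers[:maxNameservers]
		log.Printf("nameserver limits exceeded: %d found, keeping the first %d", len(nameservers), maxNameservers)
	}
	fmt.Println("applied nameserver line:", strings.Join(applied, " "))
}
```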
Nov 24 00:09:49.569274 containerd[1554]: time="2025-11-24T00:09:49.569203929Z" level=info msg="StartContainer for \"e4cbf960c3f5436c4d44c9dc1e3e066b93ba92522070cf30ae6a25835bbd8429\" returns successfully" Nov 24 00:09:49.584759 systemd[1]: cri-containerd-e4cbf960c3f5436c4d44c9dc1e3e066b93ba92522070cf30ae6a25835bbd8429.scope: Deactivated successfully. Nov 24 00:09:49.589581 containerd[1554]: time="2025-11-24T00:09:49.589549033Z" level=info msg="received container exit event container_id:\"e4cbf960c3f5436c4d44c9dc1e3e066b93ba92522070cf30ae6a25835bbd8429\" id:\"e4cbf960c3f5436c4d44c9dc1e3e066b93ba92522070cf30ae6a25835bbd8429\" pid:3484 exited_at:{seconds:1763942989 nanos:589240654}" Nov 24 00:09:49.616903 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4cbf960c3f5436c4d44c9dc1e3e066b93ba92522070cf30ae6a25835bbd8429-rootfs.mount: Deactivated successfully. Nov 24 00:09:50.130363 kubelet[2727]: I1124 00:09:50.130301 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 00:09:50.132947 kubelet[2727]: E1124 00:09:50.130546 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:50.133108 containerd[1554]: time="2025-11-24T00:09:50.131836621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 24 00:09:50.134184 kubelet[2727]: E1124 00:09:50.133030 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:51.037957 kubelet[2727]: E1124 00:09:51.037775 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r4dwf" podUID="63922d09-5f16-43ef-bdc3-f819f707f5b0" Nov 24 00:09:51.702017 containerd[1554]: time="2025-11-24T00:09:51.701350643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:51.702017 containerd[1554]: time="2025-11-24T00:09:51.701983802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 24 00:09:51.702582 containerd[1554]: time="2025-11-24T00:09:51.702539934Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:51.707004 containerd[1554]: time="2025-11-24T00:09:51.706968966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:51.708822 containerd[1554]: time="2025-11-24T00:09:51.708788465Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 1.576895396s" Nov 24 00:09:51.708869 containerd[1554]: time="2025-11-24T00:09:51.708824574Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 24 00:09:51.712687 containerd[1554]: time="2025-11-24T00:09:51.712631107Z" level=info msg="CreateContainer within sandbox \"c904333a07acbef76db49f21d1cfc2269eea5d72877612b607d073aea5e91064\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 24 00:09:51.721493 containerd[1554]: time="2025-11-24T00:09:51.720708117Z" level=info msg="Container af45dfe88ba00dd4d2c841443d890511eec958385141def7c3d105fcfae90fca: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:09:51.729451 containerd[1554]: time="2025-11-24T00:09:51.729396837Z" level=info msg="CreateContainer within sandbox \"c904333a07acbef76db49f21d1cfc2269eea5d72877612b607d073aea5e91064\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"af45dfe88ba00dd4d2c841443d890511eec958385141def7c3d105fcfae90fca\"" Nov 24 00:09:51.730633 containerd[1554]: time="2025-11-24T00:09:51.730582417Z" level=info msg="StartContainer for \"af45dfe88ba00dd4d2c841443d890511eec958385141def7c3d105fcfae90fca\"" Nov 24 00:09:51.732620 containerd[1554]: time="2025-11-24T00:09:51.732574631Z" level=info msg="connecting to shim af45dfe88ba00dd4d2c841443d890511eec958385141def7c3d105fcfae90fca" address="unix:///run/containerd/s/1b7f655eb3f4880d88b0bdbb2c1f4a052d25d3b743b758b523099da5fc90d08f" protocol=ttrpc version=3 Nov 24 00:09:51.760787 systemd[1]: Started cri-containerd-af45dfe88ba00dd4d2c841443d890511eec958385141def7c3d105fcfae90fca.scope - libcontainer container af45dfe88ba00dd4d2c841443d890511eec958385141def7c3d105fcfae90fca. Nov 24 00:09:51.838638 containerd[1554]: time="2025-11-24T00:09:51.838539272Z" level=info msg="StartContainer for \"af45dfe88ba00dd4d2c841443d890511eec958385141def7c3d105fcfae90fca\" returns successfully" Nov 24 00:09:52.138722 kubelet[2727]: E1124 00:09:52.138615 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:52.433779 containerd[1554]: time="2025-11-24T00:09:52.433637404Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 24 00:09:52.437040 systemd[1]: cri-containerd-af45dfe88ba00dd4d2c841443d890511eec958385141def7c3d105fcfae90fca.scope: Deactivated successfully. Nov 24 00:09:52.438437 systemd[1]: cri-containerd-af45dfe88ba00dd4d2c841443d890511eec958385141def7c3d105fcfae90fca.scope: Consumed 515ms CPU time, 192.9M memory peak, 171.3M written to disk. Nov 24 00:09:52.439673 containerd[1554]: time="2025-11-24T00:09:52.439435229Z" level=info msg="received container exit event container_id:\"af45dfe88ba00dd4d2c841443d890511eec958385141def7c3d105fcfae90fca\" id:\"af45dfe88ba00dd4d2c841443d890511eec958385141def7c3d105fcfae90fca\" pid:3542 exited_at:{seconds:1763942992 nanos:438336784}" Nov 24 00:09:52.462756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af45dfe88ba00dd4d2c841443d890511eec958385141def7c3d105fcfae90fca-rootfs.mount: Deactivated successfully. 
Nov 24 00:09:52.477137 kubelet[2727]: I1124 00:09:52.476986 2727 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 24 00:09:52.526978 systemd[1]: Created slice kubepods-burstable-pod0b32a61f_382f_4e7f_bc9f_f8456926fdc1.slice - libcontainer container kubepods-burstable-pod0b32a61f_382f_4e7f_bc9f_f8456926fdc1.slice. Nov 24 00:09:52.545294 systemd[1]: Created slice kubepods-besteffort-pod918b0245_1c27_4194_ac35_a7e394dba332.slice - libcontainer container kubepods-besteffort-pod918b0245_1c27_4194_ac35_a7e394dba332.slice. Nov 24 00:09:52.554337 systemd[1]: Created slice kubepods-besteffort-pod84972b9a_587c_4cc3_993d_8f4d81fe7493.slice - libcontainer container kubepods-besteffort-pod84972b9a_587c_4cc3_993d_8f4d81fe7493.slice. Nov 24 00:09:52.564648 systemd[1]: Created slice kubepods-besteffort-podb92dcaad_cbde_40da_94a7_6e0bac08ac02.slice - libcontainer container kubepods-besteffort-podb92dcaad_cbde_40da_94a7_6e0bac08ac02.slice. Nov 24 00:09:52.572611 systemd[1]: Created slice kubepods-besteffort-pod79029465_26e2_4032_b64d_59a7fac9f008.slice - libcontainer container kubepods-besteffort-pod79029465_26e2_4032_b64d_59a7fac9f008.slice. Nov 24 00:09:52.580379 systemd[1]: Created slice kubepods-besteffort-pod6278fcff_f964_4650_8bd0_1fe609bb44a0.slice - libcontainer container kubepods-besteffort-pod6278fcff_f964_4650_8bd0_1fe609bb44a0.slice. Nov 24 00:09:52.588773 systemd[1]: Created slice kubepods-burstable-podd8a094f4_f693_4b7a_a5c9_e53b2fb52dcb.slice - libcontainer container kubepods-burstable-podd8a094f4_f693_4b7a_a5c9_e53b2fb52dcb.slice. Nov 24 00:09:52.614326 kubelet[2727]: I1124 00:09:52.613844 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdmls\" (UniqueName: \"kubernetes.io/projected/0b32a61f-382f-4e7f-bc9f-f8456926fdc1-kube-api-access-sdmls\") pod \"coredns-674b8bbfcf-f2zgd\" (UID: \"0b32a61f-382f-4e7f-bc9f-f8456926fdc1\") " pod="kube-system/coredns-674b8bbfcf-f2zgd" Nov 24 00:09:52.614326 kubelet[2727]: I1124 00:09:52.613994 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84972b9a-587c-4cc3-993d-8f4d81fe7493-config\") pod \"goldmane-666569f655-bxlsg\" (UID: \"84972b9a-587c-4cc3-993d-8f4d81fe7493\") " pod="calico-system/goldmane-666569f655-bxlsg" Nov 24 00:09:52.614326 kubelet[2727]: I1124 00:09:52.614031 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6278fcff-f964-4650-8bd0-1fe609bb44a0-whisker-backend-key-pair\") pod \"whisker-db54fb577-sf2vl\" (UID: \"6278fcff-f964-4650-8bd0-1fe609bb44a0\") " pod="calico-system/whisker-db54fb577-sf2vl" Nov 24 00:09:52.614326 kubelet[2727]: I1124 00:09:52.614049 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6278fcff-f964-4650-8bd0-1fe609bb44a0-whisker-ca-bundle\") pod \"whisker-db54fb577-sf2vl\" (UID: \"6278fcff-f964-4650-8bd0-1fe609bb44a0\") " pod="calico-system/whisker-db54fb577-sf2vl" Nov 24 00:09:52.614326 kubelet[2727]: I1124 00:09:52.614097 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b92dcaad-cbde-40da-94a7-6e0bac08ac02-calico-apiserver-certs\") pod \"calico-apiserver-556645b45d-t4ct5\" (UID: 
\"b92dcaad-cbde-40da-94a7-6e0bac08ac02\") " pod="calico-apiserver/calico-apiserver-556645b45d-t4ct5" Nov 24 00:09:52.614545 kubelet[2727]: I1124 00:09:52.614112 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8a094f4-f693-4b7a-a5c9-e53b2fb52dcb-config-volume\") pod \"coredns-674b8bbfcf-8zkl6\" (UID: \"d8a094f4-f693-4b7a-a5c9-e53b2fb52dcb\") " pod="kube-system/coredns-674b8bbfcf-8zkl6" Nov 24 00:09:52.614545 kubelet[2727]: I1124 00:09:52.614132 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw82p\" (UniqueName: \"kubernetes.io/projected/918b0245-1c27-4194-ac35-a7e394dba332-kube-api-access-zw82p\") pod \"calico-apiserver-556645b45d-s8fp4\" (UID: \"918b0245-1c27-4194-ac35-a7e394dba332\") " pod="calico-apiserver/calico-apiserver-556645b45d-s8fp4" Nov 24 00:09:52.614545 kubelet[2727]: I1124 00:09:52.614267 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79029465-26e2-4032-b64d-59a7fac9f008-tigera-ca-bundle\") pod \"calico-kube-controllers-555b4c874c-4kfgm\" (UID: \"79029465-26e2-4032-b64d-59a7fac9f008\") " pod="calico-system/calico-kube-controllers-555b4c874c-4kfgm" Nov 24 00:09:52.614545 kubelet[2727]: I1124 00:09:52.614285 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ptz8\" (UniqueName: \"kubernetes.io/projected/d8a094f4-f693-4b7a-a5c9-e53b2fb52dcb-kube-api-access-4ptz8\") pod \"coredns-674b8bbfcf-8zkl6\" (UID: \"d8a094f4-f693-4b7a-a5c9-e53b2fb52dcb\") " pod="kube-system/coredns-674b8bbfcf-8zkl6" Nov 24 00:09:52.614545 kubelet[2727]: I1124 00:09:52.614358 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84972b9a-587c-4cc3-993d-8f4d81fe7493-goldmane-ca-bundle\") pod \"goldmane-666569f655-bxlsg\" (UID: \"84972b9a-587c-4cc3-993d-8f4d81fe7493\") " pod="calico-system/goldmane-666569f655-bxlsg" Nov 24 00:09:52.614657 kubelet[2727]: I1124 00:09:52.614489 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/84972b9a-587c-4cc3-993d-8f4d81fe7493-goldmane-key-pair\") pod \"goldmane-666569f655-bxlsg\" (UID: \"84972b9a-587c-4cc3-993d-8f4d81fe7493\") " pod="calico-system/goldmane-666569f655-bxlsg" Nov 24 00:09:52.614657 kubelet[2727]: I1124 00:09:52.614532 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/918b0245-1c27-4194-ac35-a7e394dba332-calico-apiserver-certs\") pod \"calico-apiserver-556645b45d-s8fp4\" (UID: \"918b0245-1c27-4194-ac35-a7e394dba332\") " pod="calico-apiserver/calico-apiserver-556645b45d-s8fp4" Nov 24 00:09:52.614657 kubelet[2727]: I1124 00:09:52.614572 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rzlp\" (UniqueName: \"kubernetes.io/projected/84972b9a-587c-4cc3-993d-8f4d81fe7493-kube-api-access-5rzlp\") pod \"goldmane-666569f655-bxlsg\" (UID: \"84972b9a-587c-4cc3-993d-8f4d81fe7493\") " pod="calico-system/goldmane-666569f655-bxlsg" Nov 24 00:09:52.614657 kubelet[2727]: I1124 00:09:52.614592 2727 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktsbn\" (UniqueName: \"kubernetes.io/projected/b92dcaad-cbde-40da-94a7-6e0bac08ac02-kube-api-access-ktsbn\") pod \"calico-apiserver-556645b45d-t4ct5\" (UID: \"b92dcaad-cbde-40da-94a7-6e0bac08ac02\") " pod="calico-apiserver/calico-apiserver-556645b45d-t4ct5" Nov 24 00:09:52.614657 kubelet[2727]: I1124 00:09:52.614606 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wvch\" (UniqueName: \"kubernetes.io/projected/79029465-26e2-4032-b64d-59a7fac9f008-kube-api-access-9wvch\") pod \"calico-kube-controllers-555b4c874c-4kfgm\" (UID: \"79029465-26e2-4032-b64d-59a7fac9f008\") " pod="calico-system/calico-kube-controllers-555b4c874c-4kfgm" Nov 24 00:09:52.614767 kubelet[2727]: I1124 00:09:52.614750 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwx7m\" (UniqueName: \"kubernetes.io/projected/6278fcff-f964-4650-8bd0-1fe609bb44a0-kube-api-access-xwx7m\") pod \"whisker-db54fb577-sf2vl\" (UID: \"6278fcff-f964-4650-8bd0-1fe609bb44a0\") " pod="calico-system/whisker-db54fb577-sf2vl" Nov 24 00:09:52.614792 kubelet[2727]: I1124 00:09:52.614774 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b32a61f-382f-4e7f-bc9f-f8456926fdc1-config-volume\") pod \"coredns-674b8bbfcf-f2zgd\" (UID: \"0b32a61f-382f-4e7f-bc9f-f8456926fdc1\") " pod="kube-system/coredns-674b8bbfcf-f2zgd" Nov 24 00:09:52.833803 kubelet[2727]: E1124 00:09:52.833732 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:52.835517 containerd[1554]: time="2025-11-24T00:09:52.835457975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f2zgd,Uid:0b32a61f-382f-4e7f-bc9f-f8456926fdc1,Namespace:kube-system,Attempt:0,}" Nov 24 00:09:52.854158 containerd[1554]: time="2025-11-24T00:09:52.852899298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556645b45d-s8fp4,Uid:918b0245-1c27-4194-ac35-a7e394dba332,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:09:52.859957 containerd[1554]: time="2025-11-24T00:09:52.859934194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bxlsg,Uid:84972b9a-587c-4cc3-993d-8f4d81fe7493,Namespace:calico-system,Attempt:0,}" Nov 24 00:09:52.869393 containerd[1554]: time="2025-11-24T00:09:52.869355913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556645b45d-t4ct5,Uid:b92dcaad-cbde-40da-94a7-6e0bac08ac02,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:09:52.879012 containerd[1554]: time="2025-11-24T00:09:52.878761093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-555b4c874c-4kfgm,Uid:79029465-26e2-4032-b64d-59a7fac9f008,Namespace:calico-system,Attempt:0,}" Nov 24 00:09:52.885253 containerd[1554]: time="2025-11-24T00:09:52.885220817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-db54fb577-sf2vl,Uid:6278fcff-f964-4650-8bd0-1fe609bb44a0,Namespace:calico-system,Attempt:0,}" Nov 24 00:09:52.892760 kubelet[2727]: E1124 00:09:52.892633 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:52.895214 containerd[1554]: time="2025-11-24T00:09:52.894673536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8zkl6,Uid:d8a094f4-f693-4b7a-a5c9-e53b2fb52dcb,Namespace:kube-system,Attempt:0,}" Nov 24 00:09:52.974789 containerd[1554]: time="2025-11-24T00:09:52.974652294Z" level=error msg="Failed to destroy network for sandbox \"2cd0d0061aad310009954ace9147132df3cda366eb1853b7ee669a5586bb9915\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:09:52.980626 containerd[1554]: time="2025-11-24T00:09:52.980589285Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f2zgd,Uid:0b32a61f-382f-4e7f-bc9f-f8456926fdc1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cd0d0061aad310009954ace9147132df3cda366eb1853b7ee669a5586bb9915\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:09:52.981207 kubelet[2727]: E1124 00:09:52.981128 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cd0d0061aad310009954ace9147132df3cda366eb1853b7ee669a5586bb9915\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:09:52.981207 kubelet[2727]: E1124 00:09:52.981214 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cd0d0061aad310009954ace9147132df3cda366eb1853b7ee669a5586bb9915\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-f2zgd" Nov 24 00:09:52.981504 kubelet[2727]: E1124 00:09:52.981238 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cd0d0061aad310009954ace9147132df3cda366eb1853b7ee669a5586bb9915\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-f2zgd" Nov 24 00:09:52.981504 kubelet[2727]: E1124 00:09:52.981305 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-f2zgd_kube-system(0b32a61f-382f-4e7f-bc9f-f8456926fdc1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-f2zgd_kube-system(0b32a61f-382f-4e7f-bc9f-f8456926fdc1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2cd0d0061aad310009954ace9147132df3cda366eb1853b7ee669a5586bb9915\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-f2zgd" podUID="0b32a61f-382f-4e7f-bc9f-f8456926fdc1" Nov 24 00:09:53.010308 containerd[1554]: time="2025-11-24T00:09:53.010240091Z" 
level=error msg="Failed to destroy network for sandbox \"8e8847fe251bdc53f786ba7328254eb606b7571890743085302b97266a88c5ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:09:53.014413 containerd[1554]: time="2025-11-24T00:09:53.014260378Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bxlsg,Uid:84972b9a-587c-4cc3-993d-8f4d81fe7493,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e8847fe251bdc53f786ba7328254eb606b7571890743085302b97266a88c5ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:09:53.015024 kubelet[2727]: E1124 00:09:53.014560 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e8847fe251bdc53f786ba7328254eb606b7571890743085302b97266a88c5ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:09:53.015024 kubelet[2727]: E1124 00:09:53.014884 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e8847fe251bdc53f786ba7328254eb606b7571890743085302b97266a88c5ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-bxlsg" Nov 24 00:09:53.015024 kubelet[2727]: E1124 00:09:53.014905 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e8847fe251bdc53f786ba7328254eb606b7571890743085302b97266a88c5ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-bxlsg" Nov 24 00:09:53.015299 kubelet[2727]: E1124 00:09:53.014957 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-bxlsg_calico-system(84972b9a-587c-4cc3-993d-8f4d81fe7493)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-bxlsg_calico-system(84972b9a-587c-4cc3-993d-8f4d81fe7493)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e8847fe251bdc53f786ba7328254eb606b7571890743085302b97266a88c5ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-bxlsg" podUID="84972b9a-587c-4cc3-993d-8f4d81fe7493" Nov 24 00:09:53.023495 containerd[1554]: time="2025-11-24T00:09:53.022802428Z" level=error msg="Failed to destroy network for sandbox \"3914ca8c64d58a922668c5e27d6919d747296e925574c627e6b24a43737d0b5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:09:53.024715 
containerd[1554]: time="2025-11-24T00:09:53.024410089Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556645b45d-s8fp4,Uid:918b0245-1c27-4194-ac35-a7e394dba332,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3914ca8c64d58a922668c5e27d6919d747296e925574c627e6b24a43737d0b5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:09:53.026073 kubelet[2727]: E1124 00:09:53.024754 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3914ca8c64d58a922668c5e27d6919d747296e925574c627e6b24a43737d0b5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:09:53.026073 kubelet[2727]: E1124 00:09:53.024784 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3914ca8c64d58a922668c5e27d6919d747296e925574c627e6b24a43737d0b5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-556645b45d-s8fp4" Nov 24 00:09:53.026073 kubelet[2727]: E1124 00:09:53.024830 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3914ca8c64d58a922668c5e27d6919d747296e925574c627e6b24a43737d0b5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-556645b45d-s8fp4" Nov 24 00:09:53.026155 kubelet[2727]: E1124 00:09:53.024861 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-556645b45d-s8fp4_calico-apiserver(918b0245-1c27-4194-ac35-a7e394dba332)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-556645b45d-s8fp4_calico-apiserver(918b0245-1c27-4194-ac35-a7e394dba332)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3914ca8c64d58a922668c5e27d6919d747296e925574c627e6b24a43737d0b5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-556645b45d-s8fp4" podUID="918b0245-1c27-4194-ac35-a7e394dba332" Nov 24 00:09:53.040089 containerd[1554]: time="2025-11-24T00:09:53.040058751Z" level=error msg="Failed to destroy network for sandbox \"cefca4e008ab42050c64db91f4e473341a68e66fe3dcd866da88301ee8134dd2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:09:53.044178 systemd[1]: Created slice kubepods-besteffort-pod63922d09_5f16_43ef_bdc3_f819f707f5b0.slice - libcontainer container kubepods-besteffort-pod63922d09_5f16_43ef_bdc3_f819f707f5b0.slice. 
Nov 24 00:09:53.049099 containerd[1554]: time="2025-11-24T00:09:53.048960780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-555b4c874c-4kfgm,Uid:79029465-26e2-4032-b64d-59a7fac9f008,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cefca4e008ab42050c64db91f4e473341a68e66fe3dcd866da88301ee8134dd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:09:53.049756 kubelet[2727]: E1124 00:09:53.049449 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cefca4e008ab42050c64db91f4e473341a68e66fe3dcd866da88301ee8134dd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:09:53.049756 kubelet[2727]: E1124 00:09:53.049636 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cefca4e008ab42050c64db91f4e473341a68e66fe3dcd866da88301ee8134dd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-555b4c874c-4kfgm" Nov 24 00:09:53.049756 kubelet[2727]: E1124 00:09:53.049655 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cefca4e008ab42050c64db91f4e473341a68e66fe3dcd866da88301ee8134dd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-555b4c874c-4kfgm" Nov 24 00:09:53.049942 kubelet[2727]: E1124 00:09:53.049886 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-555b4c874c-4kfgm_calico-system(79029465-26e2-4032-b64d-59a7fac9f008)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-555b4c874c-4kfgm_calico-system(79029465-26e2-4032-b64d-59a7fac9f008)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cefca4e008ab42050c64db91f4e473341a68e66fe3dcd866da88301ee8134dd2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-555b4c874c-4kfgm" podUID="79029465-26e2-4032-b64d-59a7fac9f008" Nov 24 00:09:53.051380 containerd[1554]: time="2025-11-24T00:09:53.051059456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r4dwf,Uid:63922d09-5f16-43ef-bdc3-f819f707f5b0,Namespace:calico-system,Attempt:0,}" Nov 24 00:09:53.071216 containerd[1554]: time="2025-11-24T00:09:53.070761715Z" level=error msg="Failed to destroy network for sandbox \"17a8219be735523dbaec2a0c7940c4744df4f44877d98434c4ea44ad3cc65e3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 
00:09:53.072355 containerd[1554]: time="2025-11-24T00:09:53.072196761Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-db54fb577-sf2vl,Uid:6278fcff-f964-4650-8bd0-1fe609bb44a0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"17a8219be735523dbaec2a0c7940c4744df4f44877d98434c4ea44ad3cc65e3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:09:53.072776 kubelet[2727]: E1124 00:09:53.072738 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17a8219be735523dbaec2a0c7940c4744df4f44877d98434c4ea44ad3cc65e3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:09:53.072836 kubelet[2727]: E1124 00:09:53.072797 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17a8219be735523dbaec2a0c7940c4744df4f44877d98434c4ea44ad3cc65e3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-db54fb577-sf2vl" Nov 24 00:09:53.072836 kubelet[2727]: E1124 00:09:53.072824 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17a8219be735523dbaec2a0c7940c4744df4f44877d98434c4ea44ad3cc65e3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-db54fb577-sf2vl" Nov 24 00:09:53.072918 kubelet[2727]: E1124 00:09:53.072879 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-db54fb577-sf2vl_calico-system(6278fcff-f964-4650-8bd0-1fe609bb44a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-db54fb577-sf2vl_calico-system(6278fcff-f964-4650-8bd0-1fe609bb44a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17a8219be735523dbaec2a0c7940c4744df4f44877d98434c4ea44ad3cc65e3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-db54fb577-sf2vl" podUID="6278fcff-f964-4650-8bd0-1fe609bb44a0" Nov 24 00:09:53.074584 containerd[1554]: time="2025-11-24T00:09:53.074550159Z" level=error msg="Failed to destroy network for sandbox \"0505525bb0d5b0040e6c8ff0b20274c61fd81aae3ff3e86adb6cb8f9a1b0d64d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:09:53.075444 containerd[1554]: time="2025-11-24T00:09:53.075394933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8zkl6,Uid:d8a094f4-f693-4b7a-a5c9-e53b2fb52dcb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0505525bb0d5b0040e6c8ff0b20274c61fd81aae3ff3e86adb6cb8f9a1b0d64d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:09:53.076063 kubelet[2727]: E1124 00:09:53.076008 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0505525bb0d5b0040e6c8ff0b20274c61fd81aae3ff3e86adb6cb8f9a1b0d64d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:09:53.076063 kubelet[2727]: E1124 00:09:53.076048 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0505525bb0d5b0040e6c8ff0b20274c61fd81aae3ff3e86adb6cb8f9a1b0d64d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8zkl6" Nov 24 00:09:53.076159 kubelet[2727]: E1124 00:09:53.076063 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0505525bb0d5b0040e6c8ff0b20274c61fd81aae3ff3e86adb6cb8f9a1b0d64d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8zkl6" Nov 24 00:09:53.076417 kubelet[2727]: E1124 00:09:53.076290 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-8zkl6_kube-system(d8a094f4-f693-4b7a-a5c9-e53b2fb52dcb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-8zkl6_kube-system(d8a094f4-f693-4b7a-a5c9-e53b2fb52dcb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0505525bb0d5b0040e6c8ff0b20274c61fd81aae3ff3e86adb6cb8f9a1b0d64d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8zkl6" podUID="d8a094f4-f693-4b7a-a5c9-e53b2fb52dcb" Nov 24 00:09:53.081044 containerd[1554]: time="2025-11-24T00:09:53.081013822Z" level=error msg="Failed to destroy network for sandbox \"56ed2ba646a9cb0b5903d4e227f727313aaa867bad95a8f50d1ad86547ebf64a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:09:53.082307 containerd[1554]: time="2025-11-24T00:09:53.081823807Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556645b45d-t4ct5,Uid:b92dcaad-cbde-40da-94a7-6e0bac08ac02,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"56ed2ba646a9cb0b5903d4e227f727313aaa867bad95a8f50d1ad86547ebf64a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:09:53.082419 kubelet[2727]: E1124 00:09:53.082001 2727 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56ed2ba646a9cb0b5903d4e227f727313aaa867bad95a8f50d1ad86547ebf64a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:09:53.082419 kubelet[2727]: E1124 00:09:53.082026 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56ed2ba646a9cb0b5903d4e227f727313aaa867bad95a8f50d1ad86547ebf64a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-556645b45d-t4ct5" Nov 24 00:09:53.082419 kubelet[2727]: E1124 00:09:53.082082 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56ed2ba646a9cb0b5903d4e227f727313aaa867bad95a8f50d1ad86547ebf64a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-556645b45d-t4ct5" Nov 24 00:09:53.083571 kubelet[2727]: E1124 00:09:53.082171 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-556645b45d-t4ct5_calico-apiserver(b92dcaad-cbde-40da-94a7-6e0bac08ac02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-556645b45d-t4ct5_calico-apiserver(b92dcaad-cbde-40da-94a7-6e0bac08ac02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56ed2ba646a9cb0b5903d4e227f727313aaa867bad95a8f50d1ad86547ebf64a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-556645b45d-t4ct5" podUID="b92dcaad-cbde-40da-94a7-6e0bac08ac02" Nov 24 00:09:53.117836 containerd[1554]: time="2025-11-24T00:09:53.117719242Z" level=error msg="Failed to destroy network for sandbox \"c8bd59b54349cc99e1b8bf32c43f4b95f3881688a70fe9e20fedc8e4a77fd7d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:09:53.119724 containerd[1554]: time="2025-11-24T00:09:53.119687022Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r4dwf,Uid:63922d09-5f16-43ef-bdc3-f819f707f5b0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8bd59b54349cc99e1b8bf32c43f4b95f3881688a70fe9e20fedc8e4a77fd7d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:09:53.120550 kubelet[2727]: E1124 00:09:53.119864 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8bd59b54349cc99e1b8bf32c43f4b95f3881688a70fe9e20fedc8e4a77fd7d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Nov 24 00:09:53.120550 kubelet[2727]: E1124 00:09:53.119896 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8bd59b54349cc99e1b8bf32c43f4b95f3881688a70fe9e20fedc8e4a77fd7d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r4dwf" Nov 24 00:09:53.120550 kubelet[2727]: E1124 00:09:53.119913 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8bd59b54349cc99e1b8bf32c43f4b95f3881688a70fe9e20fedc8e4a77fd7d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r4dwf" Nov 24 00:09:53.120663 kubelet[2727]: E1124 00:09:53.119947 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r4dwf_calico-system(63922d09-5f16-43ef-bdc3-f819f707f5b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r4dwf_calico-system(63922d09-5f16-43ef-bdc3-f819f707f5b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8bd59b54349cc99e1b8bf32c43f4b95f3881688a70fe9e20fedc8e4a77fd7d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r4dwf" podUID="63922d09-5f16-43ef-bdc3-f819f707f5b0" Nov 24 00:09:53.142949 kubelet[2727]: E1124 00:09:53.142903 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:53.145650 containerd[1554]: time="2025-11-24T00:09:53.145594312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 24 00:09:55.021211 kubelet[2727]: I1124 00:09:55.019897 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 00:09:55.021211 kubelet[2727]: E1124 00:09:55.020782 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:55.151767 kubelet[2727]: E1124 00:09:55.151440 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:56.734662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount622397889.mount: Deactivated successfully. 
Nov 24 00:09:56.764035 containerd[1554]: time="2025-11-24T00:09:56.763441366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:56.764035 containerd[1554]: time="2025-11-24T00:09:56.764011440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 24 00:09:56.764533 containerd[1554]: time="2025-11-24T00:09:56.764507667Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:56.765646 containerd[1554]: time="2025-11-24T00:09:56.765605678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:09:56.766189 containerd[1554]: time="2025-11-24T00:09:56.766168132Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 3.620515453s" Nov 24 00:09:56.766284 containerd[1554]: time="2025-11-24T00:09:56.766261690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 24 00:09:56.796493 containerd[1554]: time="2025-11-24T00:09:56.796423641Z" level=info msg="CreateContainer within sandbox \"c904333a07acbef76db49f21d1cfc2269eea5d72877612b607d073aea5e91064\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 24 00:09:56.803780 containerd[1554]: time="2025-11-24T00:09:56.803752375Z" level=info msg="Container 14aa5ed52f727e606cae272fc9c33ecb55c28a7dadde78b770afb7874db2cc3c: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:09:56.812170 containerd[1554]: time="2025-11-24T00:09:56.811943845Z" level=info msg="CreateContainer within sandbox \"c904333a07acbef76db49f21d1cfc2269eea5d72877612b607d073aea5e91064\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"14aa5ed52f727e606cae272fc9c33ecb55c28a7dadde78b770afb7874db2cc3c\"" Nov 24 00:09:56.812655 containerd[1554]: time="2025-11-24T00:09:56.812627757Z" level=info msg="StartContainer for \"14aa5ed52f727e606cae272fc9c33ecb55c28a7dadde78b770afb7874db2cc3c\"" Nov 24 00:09:56.814288 containerd[1554]: time="2025-11-24T00:09:56.814260683Z" level=info msg="connecting to shim 14aa5ed52f727e606cae272fc9c33ecb55c28a7dadde78b770afb7874db2cc3c" address="unix:///run/containerd/s/1b7f655eb3f4880d88b0bdbb2c1f4a052d25d3b743b758b523099da5fc90d08f" protocol=ttrpc version=3 Nov 24 00:09:56.857600 systemd[1]: Started cri-containerd-14aa5ed52f727e606cae272fc9c33ecb55c28a7dadde78b770afb7874db2cc3c.scope - libcontainer container 14aa5ed52f727e606cae272fc9c33ecb55c28a7dadde78b770afb7874db2cc3c. Nov 24 00:09:56.941769 containerd[1554]: time="2025-11-24T00:09:56.941715986Z" level=info msg="StartContainer for \"14aa5ed52f727e606cae272fc9c33ecb55c28a7dadde78b770afb7874db2cc3c\" returns successfully" Nov 24 00:09:57.022802 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 24 00:09:57.022931 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Nov 24 00:09:57.150623 kubelet[2727]: I1124 00:09:57.149591 2727 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6278fcff-f964-4650-8bd0-1fe609bb44a0-whisker-ca-bundle\") pod \"6278fcff-f964-4650-8bd0-1fe609bb44a0\" (UID: \"6278fcff-f964-4650-8bd0-1fe609bb44a0\") " Nov 24 00:09:57.150623 kubelet[2727]: I1124 00:09:57.149624 2727 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6278fcff-f964-4650-8bd0-1fe609bb44a0-whisker-backend-key-pair\") pod \"6278fcff-f964-4650-8bd0-1fe609bb44a0\" (UID: \"6278fcff-f964-4650-8bd0-1fe609bb44a0\") " Nov 24 00:09:57.150623 kubelet[2727]: I1124 00:09:57.149676 2727 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwx7m\" (UniqueName: \"kubernetes.io/projected/6278fcff-f964-4650-8bd0-1fe609bb44a0-kube-api-access-xwx7m\") pod \"6278fcff-f964-4650-8bd0-1fe609bb44a0\" (UID: \"6278fcff-f964-4650-8bd0-1fe609bb44a0\") " Nov 24 00:09:57.150623 kubelet[2727]: I1124 00:09:57.150107 2727 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6278fcff-f964-4650-8bd0-1fe609bb44a0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6278fcff-f964-4650-8bd0-1fe609bb44a0" (UID: "6278fcff-f964-4650-8bd0-1fe609bb44a0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 24 00:09:57.161650 kubelet[2727]: I1124 00:09:57.161365 2727 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6278fcff-f964-4650-8bd0-1fe609bb44a0-kube-api-access-xwx7m" (OuterVolumeSpecName: "kube-api-access-xwx7m") pod "6278fcff-f964-4650-8bd0-1fe609bb44a0" (UID: "6278fcff-f964-4650-8bd0-1fe609bb44a0"). InnerVolumeSpecName "kube-api-access-xwx7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 24 00:09:57.164010 kubelet[2727]: I1124 00:09:57.163969 2727 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6278fcff-f964-4650-8bd0-1fe609bb44a0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6278fcff-f964-4650-8bd0-1fe609bb44a0" (UID: "6278fcff-f964-4650-8bd0-1fe609bb44a0"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 24 00:09:57.165928 kubelet[2727]: E1124 00:09:57.165892 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:57.185840 kubelet[2727]: I1124 00:09:57.185748 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xhw89" podStartSLOduration=1.818940305 podStartE2EDuration="11.185738083s" podCreationTimestamp="2025-11-24 00:09:46 +0000 UTC" firstStartedPulling="2025-11-24 00:09:47.400320399 +0000 UTC m=+18.462191143" lastFinishedPulling="2025-11-24 00:09:56.767118177 +0000 UTC m=+27.828988921" observedRunningTime="2025-11-24 00:09:57.185339393 +0000 UTC m=+28.247210137" watchObservedRunningTime="2025-11-24 00:09:57.185738083 +0000 UTC m=+28.247608837" Nov 24 00:09:57.251115 kubelet[2727]: I1124 00:09:57.250747 2727 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6278fcff-f964-4650-8bd0-1fe609bb44a0-whisker-ca-bundle\") on node \"172-237-134-153\" DevicePath \"\"" Nov 24 00:09:57.251115 kubelet[2727]: I1124 00:09:57.250827 2727 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6278fcff-f964-4650-8bd0-1fe609bb44a0-whisker-backend-key-pair\") on node \"172-237-134-153\" DevicePath \"\"" Nov 24 00:09:57.251115 kubelet[2727]: I1124 00:09:57.250838 2727 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xwx7m\" (UniqueName: \"kubernetes.io/projected/6278fcff-f964-4650-8bd0-1fe609bb44a0-kube-api-access-xwx7m\") on node \"172-237-134-153\" DevicePath \"\"" Nov 24 00:09:57.468735 systemd[1]: Removed slice kubepods-besteffort-pod6278fcff_f964_4650_8bd0_1fe609bb44a0.slice - libcontainer container kubepods-besteffort-pod6278fcff_f964_4650_8bd0_1fe609bb44a0.slice. Nov 24 00:09:57.517492 systemd[1]: Created slice kubepods-besteffort-pod2a030f5f_0015_44b8_b116_a472da00a019.slice - libcontainer container kubepods-besteffort-pod2a030f5f_0015_44b8_b116_a472da00a019.slice. 
Nov 24 00:09:57.553212 kubelet[2727]: I1124 00:09:57.553136 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a030f5f-0015-44b8-b116-a472da00a019-whisker-ca-bundle\") pod \"whisker-58b465668f-57pbc\" (UID: \"2a030f5f-0015-44b8-b116-a472da00a019\") " pod="calico-system/whisker-58b465668f-57pbc" Nov 24 00:09:57.553212 kubelet[2727]: I1124 00:09:57.553222 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2a030f5f-0015-44b8-b116-a472da00a019-whisker-backend-key-pair\") pod \"whisker-58b465668f-57pbc\" (UID: \"2a030f5f-0015-44b8-b116-a472da00a019\") " pod="calico-system/whisker-58b465668f-57pbc" Nov 24 00:09:57.553446 kubelet[2727]: I1124 00:09:57.553247 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5zqb\" (UniqueName: \"kubernetes.io/projected/2a030f5f-0015-44b8-b116-a472da00a019-kube-api-access-b5zqb\") pod \"whisker-58b465668f-57pbc\" (UID: \"2a030f5f-0015-44b8-b116-a472da00a019\") " pod="calico-system/whisker-58b465668f-57pbc" Nov 24 00:09:57.736541 systemd[1]: var-lib-kubelet-pods-6278fcff\x2df964\x2d4650\x2d8bd0\x2d1fe609bb44a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxwx7m.mount: Deactivated successfully. Nov 24 00:09:57.739689 systemd[1]: var-lib-kubelet-pods-6278fcff\x2df964\x2d4650\x2d8bd0\x2d1fe609bb44a0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 24 00:09:57.824718 containerd[1554]: time="2025-11-24T00:09:57.824673691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58b465668f-57pbc,Uid:2a030f5f-0015-44b8-b116-a472da00a019,Namespace:calico-system,Attempt:0,}" Nov 24 00:09:57.955027 systemd-networkd[1430]: cali5969a3bf583: Link UP Nov 24 00:09:57.955255 systemd-networkd[1430]: cali5969a3bf583: Gained carrier Nov 24 00:09:57.969648 containerd[1554]: 2025-11-24 00:09:57.851 [INFO][3867] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 00:09:57.969648 containerd[1554]: 2025-11-24 00:09:57.891 [INFO][3867] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--134--153-k8s-whisker--58b465668f--57pbc-eth0 whisker-58b465668f- calico-system 2a030f5f-0015-44b8-b116-a472da00a019 912 0 2025-11-24 00:09:57 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:58b465668f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-237-134-153 whisker-58b465668f-57pbc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5969a3bf583 [] [] }} ContainerID="f0b87e93435337d90451167e80cdc2226bc7f8b81624208498b5b70613ce2512" Namespace="calico-system" Pod="whisker-58b465668f-57pbc" WorkloadEndpoint="172--237--134--153-k8s-whisker--58b465668f--57pbc-" Nov 24 00:09:57.969648 containerd[1554]: 2025-11-24 00:09:57.891 [INFO][3867] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f0b87e93435337d90451167e80cdc2226bc7f8b81624208498b5b70613ce2512" Namespace="calico-system" Pod="whisker-58b465668f-57pbc" WorkloadEndpoint="172--237--134--153-k8s-whisker--58b465668f--57pbc-eth0" Nov 24 00:09:57.969648 containerd[1554]: 2025-11-24 00:09:57.912 [INFO][3879] ipam/ipam_plugin.go 227: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="f0b87e93435337d90451167e80cdc2226bc7f8b81624208498b5b70613ce2512" HandleID="k8s-pod-network.f0b87e93435337d90451167e80cdc2226bc7f8b81624208498b5b70613ce2512" Workload="172--237--134--153-k8s-whisker--58b465668f--57pbc-eth0" Nov 24 00:09:57.969827 containerd[1554]: 2025-11-24 00:09:57.912 [INFO][3879] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f0b87e93435337d90451167e80cdc2226bc7f8b81624208498b5b70613ce2512" HandleID="k8s-pod-network.f0b87e93435337d90451167e80cdc2226bc7f8b81624208498b5b70613ce2512" Workload="172--237--134--153-k8s-whisker--58b465668f--57pbc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-134-153", "pod":"whisker-58b465668f-57pbc", "timestamp":"2025-11-24 00:09:57.912716765 +0000 UTC"}, Hostname:"172-237-134-153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:09:57.969827 containerd[1554]: 2025-11-24 00:09:57.912 [INFO][3879] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:09:57.969827 containerd[1554]: 2025-11-24 00:09:57.913 [INFO][3879] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 00:09:57.969827 containerd[1554]: 2025-11-24 00:09:57.913 [INFO][3879] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-134-153' Nov 24 00:09:57.969827 containerd[1554]: 2025-11-24 00:09:57.919 [INFO][3879] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f0b87e93435337d90451167e80cdc2226bc7f8b81624208498b5b70613ce2512" host="172-237-134-153" Nov 24 00:09:57.969827 containerd[1554]: 2025-11-24 00:09:57.924 [INFO][3879] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-134-153" Nov 24 00:09:57.969827 containerd[1554]: 2025-11-24 00:09:57.927 [INFO][3879] ipam/ipam.go 511: Trying affinity for 192.168.57.0/26 host="172-237-134-153" Nov 24 00:09:57.969827 containerd[1554]: 2025-11-24 00:09:57.928 [INFO][3879] ipam/ipam.go 158: Attempting to load block cidr=192.168.57.0/26 host="172-237-134-153" Nov 24 00:09:57.969827 containerd[1554]: 2025-11-24 00:09:57.930 [INFO][3879] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.57.0/26 host="172-237-134-153" Nov 24 00:09:57.969827 containerd[1554]: 2025-11-24 00:09:57.930 [INFO][3879] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.57.0/26 handle="k8s-pod-network.f0b87e93435337d90451167e80cdc2226bc7f8b81624208498b5b70613ce2512" host="172-237-134-153" Nov 24 00:09:57.970030 containerd[1554]: 2025-11-24 00:09:57.931 [INFO][3879] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f0b87e93435337d90451167e80cdc2226bc7f8b81624208498b5b70613ce2512 Nov 24 00:09:57.970030 containerd[1554]: 2025-11-24 00:09:57.934 [INFO][3879] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.57.0/26 handle="k8s-pod-network.f0b87e93435337d90451167e80cdc2226bc7f8b81624208498b5b70613ce2512" host="172-237-134-153" Nov 24 00:09:57.970030 containerd[1554]: 2025-11-24 00:09:57.938 [INFO][3879] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.57.1/26] block=192.168.57.0/26 handle="k8s-pod-network.f0b87e93435337d90451167e80cdc2226bc7f8b81624208498b5b70613ce2512" host="172-237-134-153" Nov 24 00:09:57.970030 containerd[1554]: 2025-11-24 00:09:57.938 [INFO][3879] 
ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.57.1/26] handle="k8s-pod-network.f0b87e93435337d90451167e80cdc2226bc7f8b81624208498b5b70613ce2512" host="172-237-134-153" Nov 24 00:09:57.970030 containerd[1554]: 2025-11-24 00:09:57.938 [INFO][3879] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 00:09:57.970030 containerd[1554]: 2025-11-24 00:09:57.938 [INFO][3879] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.57.1/26] IPv6=[] ContainerID="f0b87e93435337d90451167e80cdc2226bc7f8b81624208498b5b70613ce2512" HandleID="k8s-pod-network.f0b87e93435337d90451167e80cdc2226bc7f8b81624208498b5b70613ce2512" Workload="172--237--134--153-k8s-whisker--58b465668f--57pbc-eth0" Nov 24 00:09:57.970142 containerd[1554]: 2025-11-24 00:09:57.941 [INFO][3867] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f0b87e93435337d90451167e80cdc2226bc7f8b81624208498b5b70613ce2512" Namespace="calico-system" Pod="whisker-58b465668f-57pbc" WorkloadEndpoint="172--237--134--153-k8s-whisker--58b465668f--57pbc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--153-k8s-whisker--58b465668f--57pbc-eth0", GenerateName:"whisker-58b465668f-", Namespace:"calico-system", SelfLink:"", UID:"2a030f5f-0015-44b8-b116-a472da00a019", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 9, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58b465668f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-153", ContainerID:"", Pod:"whisker-58b465668f-57pbc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.57.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5969a3bf583", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:09:57.970142 containerd[1554]: 2025-11-24 00:09:57.941 [INFO][3867] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.1/32] ContainerID="f0b87e93435337d90451167e80cdc2226bc7f8b81624208498b5b70613ce2512" Namespace="calico-system" Pod="whisker-58b465668f-57pbc" WorkloadEndpoint="172--237--134--153-k8s-whisker--58b465668f--57pbc-eth0" Nov 24 00:09:57.970211 containerd[1554]: 2025-11-24 00:09:57.942 [INFO][3867] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5969a3bf583 ContainerID="f0b87e93435337d90451167e80cdc2226bc7f8b81624208498b5b70613ce2512" Namespace="calico-system" Pod="whisker-58b465668f-57pbc" WorkloadEndpoint="172--237--134--153-k8s-whisker--58b465668f--57pbc-eth0" Nov 24 00:09:57.970211 containerd[1554]: 2025-11-24 00:09:57.955 [INFO][3867] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f0b87e93435337d90451167e80cdc2226bc7f8b81624208498b5b70613ce2512" Namespace="calico-system" Pod="whisker-58b465668f-57pbc" WorkloadEndpoint="172--237--134--153-k8s-whisker--58b465668f--57pbc-eth0" Nov 24 00:09:57.970257 
containerd[1554]: 2025-11-24 00:09:57.955 [INFO][3867] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f0b87e93435337d90451167e80cdc2226bc7f8b81624208498b5b70613ce2512" Namespace="calico-system" Pod="whisker-58b465668f-57pbc" WorkloadEndpoint="172--237--134--153-k8s-whisker--58b465668f--57pbc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--153-k8s-whisker--58b465668f--57pbc-eth0", GenerateName:"whisker-58b465668f-", Namespace:"calico-system", SelfLink:"", UID:"2a030f5f-0015-44b8-b116-a472da00a019", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 9, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58b465668f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-153", ContainerID:"f0b87e93435337d90451167e80cdc2226bc7f8b81624208498b5b70613ce2512", Pod:"whisker-58b465668f-57pbc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.57.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5969a3bf583", MAC:"4a:c3:ae:b7:cc:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:09:57.970302 containerd[1554]: 2025-11-24 00:09:57.964 [INFO][3867] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f0b87e93435337d90451167e80cdc2226bc7f8b81624208498b5b70613ce2512" Namespace="calico-system" Pod="whisker-58b465668f-57pbc" WorkloadEndpoint="172--237--134--153-k8s-whisker--58b465668f--57pbc-eth0" Nov 24 00:09:58.011287 containerd[1554]: time="2025-11-24T00:09:58.011186792Z" level=info msg="connecting to shim f0b87e93435337d90451167e80cdc2226bc7f8b81624208498b5b70613ce2512" address="unix:///run/containerd/s/b6e3995dcc01c743d4e03f4d70e07c76af23c24b9874d94eec57c222dfc4e7d8" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:09:58.043627 systemd[1]: Started cri-containerd-f0b87e93435337d90451167e80cdc2226bc7f8b81624208498b5b70613ce2512.scope - libcontainer container f0b87e93435337d90451167e80cdc2226bc7f8b81624208498b5b70613ce2512. 
Nov 24 00:09:58.091289 containerd[1554]: time="2025-11-24T00:09:58.091243773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58b465668f-57pbc,Uid:2a030f5f-0015-44b8-b116-a472da00a019,Namespace:calico-system,Attempt:0,} returns sandbox id \"f0b87e93435337d90451167e80cdc2226bc7f8b81624208498b5b70613ce2512\"" Nov 24 00:09:58.093669 containerd[1554]: time="2025-11-24T00:09:58.093486897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:09:58.165932 kubelet[2727]: I1124 00:09:58.165912 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 00:09:58.167022 kubelet[2727]: E1124 00:09:58.166807 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:09:58.241615 containerd[1554]: time="2025-11-24T00:09:58.241514839Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:09:58.242601 containerd[1554]: time="2025-11-24T00:09:58.242561673Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:09:58.242699 containerd[1554]: time="2025-11-24T00:09:58.242651790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:09:58.242924 kubelet[2727]: E1124 00:09:58.242797 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:09:58.242924 kubelet[2727]: E1124 00:09:58.242824 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:09:58.245858 kubelet[2727]: E1124 00:09:58.245783 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:29e504daf9ac4811b8d5b3cd1c6c6483,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b5zqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58b465668f-57pbc_calico-system(2a030f5f-0015-44b8-b116-a472da00a019): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:09:58.251556 containerd[1554]: time="2025-11-24T00:09:58.251441493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:09:58.386369 containerd[1554]: time="2025-11-24T00:09:58.386144793Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:09:58.387610 containerd[1554]: time="2025-11-24T00:09:58.387484380Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:09:58.387610 containerd[1554]: time="2025-11-24T00:09:58.387577018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:09:58.387838 kubelet[2727]: E1124 00:09:58.387794 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:09:58.387903 kubelet[2727]: E1124 00:09:58.387856 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:09:58.388045 kubelet[2727]: E1124 00:09:58.387986 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b5zqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58b465668f-57pbc_calico-system(2a030f5f-0015-44b8-b116-a472da00a019): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:09:58.389258 kubelet[2727]: E1124 00:09:58.389222 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58b465668f-57pbc" podUID="2a030f5f-0015-44b8-b116-a472da00a019" Nov 24 00:09:58.910605 systemd-networkd[1430]: vxlan.calico: Link UP Nov 24 00:09:58.910615 systemd-networkd[1430]: vxlan.calico: Gained carrier Nov 24 00:09:59.037493 kubelet[2727]: I1124 00:09:59.036965 2727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6278fcff-f964-4650-8bd0-1fe609bb44a0" path="/var/lib/kubelet/pods/6278fcff-f964-4650-8bd0-1fe609bb44a0/volumes" Nov 24 00:09:59.170677 kubelet[2727]: E1124 00:09:59.170410 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58b465668f-57pbc" podUID="2a030f5f-0015-44b8-b116-a472da00a019" Nov 24 00:09:59.489733 systemd-networkd[1430]: cali5969a3bf583: Gained IPv6LL Nov 24 00:10:00.065721 systemd-networkd[1430]: vxlan.calico: Gained IPv6LL Nov 24 00:10:04.034356 containerd[1554]: time="2025-11-24T00:10:04.034257093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556645b45d-t4ct5,Uid:b92dcaad-cbde-40da-94a7-6e0bac08ac02,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:10:04.034356 containerd[1554]: time="2025-11-24T00:10:04.034283952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-555b4c874c-4kfgm,Uid:79029465-26e2-4032-b64d-59a7fac9f008,Namespace:calico-system,Attempt:0,}" Nov 24 00:10:04.168847 systemd-networkd[1430]: cali86d13d75cbe: Link UP Nov 24 00:10:04.171535 systemd-networkd[1430]: cali86d13d75cbe: Gained carrier Nov 24 00:10:04.187452 containerd[1554]: 2025-11-24 00:10:04.082 [INFO][4138] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--134--153-k8s-calico--kube--controllers--555b4c874c--4kfgm-eth0 calico-kube-controllers-555b4c874c- calico-system 79029465-26e2-4032-b64d-59a7fac9f008 831 0 2025-11-24 00:09:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:555b4c874c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-237-134-153 calico-kube-controllers-555b4c874c-4kfgm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali86d13d75cbe [] [] }} ContainerID="a93e05b86aad562c55a972e8bb2b989e43cfd42998776dc1db051a6454101768" Namespace="calico-system" Pod="calico-kube-controllers-555b4c874c-4kfgm" WorkloadEndpoint="172--237--134--153-k8s-calico--kube--controllers--555b4c874c--4kfgm-" Nov 24 00:10:04.187452 containerd[1554]: 2025-11-24 00:10:04.082 [INFO][4138] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a93e05b86aad562c55a972e8bb2b989e43cfd42998776dc1db051a6454101768" Namespace="calico-system" Pod="calico-kube-controllers-555b4c874c-4kfgm" WorkloadEndpoint="172--237--134--153-k8s-calico--kube--controllers--555b4c874c--4kfgm-eth0" Nov 24 00:10:04.187452 containerd[1554]: 2025-11-24 00:10:04.123 [INFO][4161] ipam/ipam_plugin.go 227: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="a93e05b86aad562c55a972e8bb2b989e43cfd42998776dc1db051a6454101768" HandleID="k8s-pod-network.a93e05b86aad562c55a972e8bb2b989e43cfd42998776dc1db051a6454101768" Workload="172--237--134--153-k8s-calico--kube--controllers--555b4c874c--4kfgm-eth0" Nov 24 00:10:04.188132 containerd[1554]: 2025-11-24 00:10:04.125 [INFO][4161] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a93e05b86aad562c55a972e8bb2b989e43cfd42998776dc1db051a6454101768" HandleID="k8s-pod-network.a93e05b86aad562c55a972e8bb2b989e43cfd42998776dc1db051a6454101768" Workload="172--237--134--153-k8s-calico--kube--controllers--555b4c874c--4kfgm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5660), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-134-153", "pod":"calico-kube-controllers-555b4c874c-4kfgm", "timestamp":"2025-11-24 00:10:04.12360037 +0000 UTC"}, Hostname:"172-237-134-153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:10:04.188132 containerd[1554]: 2025-11-24 00:10:04.125 [INFO][4161] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:10:04.188132 containerd[1554]: 2025-11-24 00:10:04.125 [INFO][4161] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 00:10:04.188132 containerd[1554]: 2025-11-24 00:10:04.125 [INFO][4161] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-134-153' Nov 24 00:10:04.188132 containerd[1554]: 2025-11-24 00:10:04.133 [INFO][4161] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a93e05b86aad562c55a972e8bb2b989e43cfd42998776dc1db051a6454101768" host="172-237-134-153" Nov 24 00:10:04.188132 containerd[1554]: 2025-11-24 00:10:04.137 [INFO][4161] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-134-153" Nov 24 00:10:04.188132 containerd[1554]: 2025-11-24 00:10:04.141 [INFO][4161] ipam/ipam.go 511: Trying affinity for 192.168.57.0/26 host="172-237-134-153" Nov 24 00:10:04.188132 containerd[1554]: 2025-11-24 00:10:04.142 [INFO][4161] ipam/ipam.go 158: Attempting to load block cidr=192.168.57.0/26 host="172-237-134-153" Nov 24 00:10:04.188132 containerd[1554]: 2025-11-24 00:10:04.144 [INFO][4161] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.57.0/26 host="172-237-134-153" Nov 24 00:10:04.188334 containerd[1554]: 2025-11-24 00:10:04.144 [INFO][4161] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.57.0/26 handle="k8s-pod-network.a93e05b86aad562c55a972e8bb2b989e43cfd42998776dc1db051a6454101768" host="172-237-134-153" Nov 24 00:10:04.188334 containerd[1554]: 2025-11-24 00:10:04.146 [INFO][4161] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a93e05b86aad562c55a972e8bb2b989e43cfd42998776dc1db051a6454101768 Nov 24 00:10:04.188334 containerd[1554]: 2025-11-24 00:10:04.149 [INFO][4161] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.57.0/26 handle="k8s-pod-network.a93e05b86aad562c55a972e8bb2b989e43cfd42998776dc1db051a6454101768" host="172-237-134-153" Nov 24 00:10:04.188334 containerd[1554]: 2025-11-24 00:10:04.154 [INFO][4161] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.57.2/26] block=192.168.57.0/26 handle="k8s-pod-network.a93e05b86aad562c55a972e8bb2b989e43cfd42998776dc1db051a6454101768" host="172-237-134-153" Nov 24 00:10:04.188334 
containerd[1554]: 2025-11-24 00:10:04.154 [INFO][4161] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.57.2/26] handle="k8s-pod-network.a93e05b86aad562c55a972e8bb2b989e43cfd42998776dc1db051a6454101768" host="172-237-134-153" Nov 24 00:10:04.188334 containerd[1554]: 2025-11-24 00:10:04.154 [INFO][4161] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 00:10:04.188334 containerd[1554]: 2025-11-24 00:10:04.154 [INFO][4161] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.57.2/26] IPv6=[] ContainerID="a93e05b86aad562c55a972e8bb2b989e43cfd42998776dc1db051a6454101768" HandleID="k8s-pod-network.a93e05b86aad562c55a972e8bb2b989e43cfd42998776dc1db051a6454101768" Workload="172--237--134--153-k8s-calico--kube--controllers--555b4c874c--4kfgm-eth0" Nov 24 00:10:04.188571 containerd[1554]: 2025-11-24 00:10:04.159 [INFO][4138] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a93e05b86aad562c55a972e8bb2b989e43cfd42998776dc1db051a6454101768" Namespace="calico-system" Pod="calico-kube-controllers-555b4c874c-4kfgm" WorkloadEndpoint="172--237--134--153-k8s-calico--kube--controllers--555b4c874c--4kfgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--153-k8s-calico--kube--controllers--555b4c874c--4kfgm-eth0", GenerateName:"calico-kube-controllers-555b4c874c-", Namespace:"calico-system", SelfLink:"", UID:"79029465-26e2-4032-b64d-59a7fac9f008", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 9, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"555b4c874c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-153", ContainerID:"", Pod:"calico-kube-controllers-555b4c874c-4kfgm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.57.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali86d13d75cbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:10:04.188628 containerd[1554]: 2025-11-24 00:10:04.160 [INFO][4138] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.2/32] ContainerID="a93e05b86aad562c55a972e8bb2b989e43cfd42998776dc1db051a6454101768" Namespace="calico-system" Pod="calico-kube-controllers-555b4c874c-4kfgm" WorkloadEndpoint="172--237--134--153-k8s-calico--kube--controllers--555b4c874c--4kfgm-eth0" Nov 24 00:10:04.188628 containerd[1554]: 2025-11-24 00:10:04.160 [INFO][4138] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86d13d75cbe ContainerID="a93e05b86aad562c55a972e8bb2b989e43cfd42998776dc1db051a6454101768" Namespace="calico-system" Pod="calico-kube-controllers-555b4c874c-4kfgm" WorkloadEndpoint="172--237--134--153-k8s-calico--kube--controllers--555b4c874c--4kfgm-eth0" Nov 24 00:10:04.188628 containerd[1554]: 2025-11-24 00:10:04.172 
[INFO][4138] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a93e05b86aad562c55a972e8bb2b989e43cfd42998776dc1db051a6454101768" Namespace="calico-system" Pod="calico-kube-controllers-555b4c874c-4kfgm" WorkloadEndpoint="172--237--134--153-k8s-calico--kube--controllers--555b4c874c--4kfgm-eth0" Nov 24 00:10:04.188695 containerd[1554]: 2025-11-24 00:10:04.173 [INFO][4138] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a93e05b86aad562c55a972e8bb2b989e43cfd42998776dc1db051a6454101768" Namespace="calico-system" Pod="calico-kube-controllers-555b4c874c-4kfgm" WorkloadEndpoint="172--237--134--153-k8s-calico--kube--controllers--555b4c874c--4kfgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--153-k8s-calico--kube--controllers--555b4c874c--4kfgm-eth0", GenerateName:"calico-kube-controllers-555b4c874c-", Namespace:"calico-system", SelfLink:"", UID:"79029465-26e2-4032-b64d-59a7fac9f008", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 9, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"555b4c874c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-153", ContainerID:"a93e05b86aad562c55a972e8bb2b989e43cfd42998776dc1db051a6454101768", Pod:"calico-kube-controllers-555b4c874c-4kfgm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.57.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali86d13d75cbe", MAC:"96:46:17:00:09:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:10:04.188745 containerd[1554]: 2025-11-24 00:10:04.184 [INFO][4138] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a93e05b86aad562c55a972e8bb2b989e43cfd42998776dc1db051a6454101768" Namespace="calico-system" Pod="calico-kube-controllers-555b4c874c-4kfgm" WorkloadEndpoint="172--237--134--153-k8s-calico--kube--controllers--555b4c874c--4kfgm-eth0" Nov 24 00:10:04.209996 containerd[1554]: time="2025-11-24T00:10:04.209934726Z" level=info msg="connecting to shim a93e05b86aad562c55a972e8bb2b989e43cfd42998776dc1db051a6454101768" address="unix:///run/containerd/s/479affb53c3a0544366a568f34f7cf427839dd3660cdcde46c860001ab15584a" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:10:04.244760 systemd[1]: Started cri-containerd-a93e05b86aad562c55a972e8bb2b989e43cfd42998776dc1db051a6454101768.scope - libcontainer container a93e05b86aad562c55a972e8bb2b989e43cfd42998776dc1db051a6454101768. 
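The IPAM entries above follow a fixed sequence: acquire the host-wide IPAM lock, confirm the node's affinity for the 192.168.57.0/26 block, claim the next free ordinal in that block, write the block back, and release the lock; across this section the pods receive 192.168.57.1 through 192.168.57.5 in order. The Go sketch below is a deliberately simplified model of that lock-guarded, sequential assignment, assuming a single node-affine block; the type and function names are illustrative assumptions, not the calico/ipam code, which also tracks handles, attributes, and datastore writes.

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// affineBlock is a simplified stand-in for a Calico IPAM affinity block such
// as 192.168.57.0/26: the node that holds the affinity hands out addresses
// from it in order.
type affineBlock struct {
	mu   sync.Mutex // models the "host-wide IPAM lock" seen in the log
	cidr netip.Prefix
	next int // next ordinal to try within the block
}

func (b *affineBlock) assign(handle string) (netip.Addr, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	size := 1 << (32 - b.cidr.Bits()) // 64 addresses in a /26
	for b.next < size {
		ord := b.next
		b.next++
		if ord == 0 {
			continue // skip the network address, so the first pod gets .1
		}
		addr := b.cidr.Addr()
		for i := 0; i < ord; i++ {
			addr = addr.Next()
		}
		return addr, nil
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted (handle %s)", b.cidr, handle)
}

func main() {
	b := &affineBlock{cidr: netip.MustParsePrefix("192.168.57.0/26")}
	for _, pod := range []string{"whisker", "kube-controllers", "apiserver-t4ct5", "goldmane", "apiserver-s8fp4"} {
		ip, _ := b.assign("k8s-pod-network." + pod)
		fmt.Printf("%-18s -> %s/26\n", pod, ip)
	}
}

The output (.1 through .5 from 192.168.57.0/26) matches the addresses the allocator hands out in the surrounding entries.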
Nov 24 00:10:04.278600 systemd-networkd[1430]: cali82ab8b3f406: Link UP Nov 24 00:10:04.280032 systemd-networkd[1430]: cali82ab8b3f406: Gained carrier Nov 24 00:10:04.302105 containerd[1554]: 2025-11-24 00:10:04.087 [INFO][4137] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--134--153-k8s-calico--apiserver--556645b45d--t4ct5-eth0 calico-apiserver-556645b45d- calico-apiserver b92dcaad-cbde-40da-94a7-6e0bac08ac02 829 0 2025-11-24 00:09:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:556645b45d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-237-134-153 calico-apiserver-556645b45d-t4ct5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali82ab8b3f406 [] [] }} ContainerID="e5a75c4e356c94b8201c9a28d81d883983387aff04ac8fc4115e782c7ea2f91a" Namespace="calico-apiserver" Pod="calico-apiserver-556645b45d-t4ct5" WorkloadEndpoint="172--237--134--153-k8s-calico--apiserver--556645b45d--t4ct5-" Nov 24 00:10:04.302105 containerd[1554]: 2025-11-24 00:10:04.087 [INFO][4137] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e5a75c4e356c94b8201c9a28d81d883983387aff04ac8fc4115e782c7ea2f91a" Namespace="calico-apiserver" Pod="calico-apiserver-556645b45d-t4ct5" WorkloadEndpoint="172--237--134--153-k8s-calico--apiserver--556645b45d--t4ct5-eth0" Nov 24 00:10:04.302105 containerd[1554]: 2025-11-24 00:10:04.139 [INFO][4166] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5a75c4e356c94b8201c9a28d81d883983387aff04ac8fc4115e782c7ea2f91a" HandleID="k8s-pod-network.e5a75c4e356c94b8201c9a28d81d883983387aff04ac8fc4115e782c7ea2f91a" Workload="172--237--134--153-k8s-calico--apiserver--556645b45d--t4ct5-eth0" Nov 24 00:10:04.302276 containerd[1554]: 2025-11-24 00:10:04.140 [INFO][4166] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e5a75c4e356c94b8201c9a28d81d883983387aff04ac8fc4115e782c7ea2f91a" HandleID="k8s-pod-network.e5a75c4e356c94b8201c9a28d81d883983387aff04ac8fc4115e782c7ea2f91a" Workload="172--237--134--153-k8s-calico--apiserver--556645b45d--t4ct5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000285810), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-237-134-153", "pod":"calico-apiserver-556645b45d-t4ct5", "timestamp":"2025-11-24 00:10:04.139935145 +0000 UTC"}, Hostname:"172-237-134-153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:10:04.302276 containerd[1554]: 2025-11-24 00:10:04.140 [INFO][4166] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:10:04.302276 containerd[1554]: 2025-11-24 00:10:04.154 [INFO][4166] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:10:04.302276 containerd[1554]: 2025-11-24 00:10:04.154 [INFO][4166] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-134-153' Nov 24 00:10:04.302276 containerd[1554]: 2025-11-24 00:10:04.239 [INFO][4166] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e5a75c4e356c94b8201c9a28d81d883983387aff04ac8fc4115e782c7ea2f91a" host="172-237-134-153" Nov 24 00:10:04.302276 containerd[1554]: 2025-11-24 00:10:04.249 [INFO][4166] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-134-153" Nov 24 00:10:04.302276 containerd[1554]: 2025-11-24 00:10:04.254 [INFO][4166] ipam/ipam.go 511: Trying affinity for 192.168.57.0/26 host="172-237-134-153" Nov 24 00:10:04.302276 containerd[1554]: 2025-11-24 00:10:04.256 [INFO][4166] ipam/ipam.go 158: Attempting to load block cidr=192.168.57.0/26 host="172-237-134-153" Nov 24 00:10:04.302276 containerd[1554]: 2025-11-24 00:10:04.258 [INFO][4166] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.57.0/26 host="172-237-134-153" Nov 24 00:10:04.302841 containerd[1554]: 2025-11-24 00:10:04.258 [INFO][4166] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.57.0/26 handle="k8s-pod-network.e5a75c4e356c94b8201c9a28d81d883983387aff04ac8fc4115e782c7ea2f91a" host="172-237-134-153" Nov 24 00:10:04.302841 containerd[1554]: 2025-11-24 00:10:04.259 [INFO][4166] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e5a75c4e356c94b8201c9a28d81d883983387aff04ac8fc4115e782c7ea2f91a Nov 24 00:10:04.302841 containerd[1554]: 2025-11-24 00:10:04.263 [INFO][4166] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.57.0/26 handle="k8s-pod-network.e5a75c4e356c94b8201c9a28d81d883983387aff04ac8fc4115e782c7ea2f91a" host="172-237-134-153" Nov 24 00:10:04.302841 containerd[1554]: 2025-11-24 00:10:04.272 [INFO][4166] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.57.3/26] block=192.168.57.0/26 handle="k8s-pod-network.e5a75c4e356c94b8201c9a28d81d883983387aff04ac8fc4115e782c7ea2f91a" host="172-237-134-153" Nov 24 00:10:04.302841 containerd[1554]: 2025-11-24 00:10:04.272 [INFO][4166] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.57.3/26] handle="k8s-pod-network.e5a75c4e356c94b8201c9a28d81d883983387aff04ac8fc4115e782c7ea2f91a" host="172-237-134-153" Nov 24 00:10:04.302841 containerd[1554]: 2025-11-24 00:10:04.272 [INFO][4166] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:10:04.302841 containerd[1554]: 2025-11-24 00:10:04.272 [INFO][4166] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.57.3/26] IPv6=[] ContainerID="e5a75c4e356c94b8201c9a28d81d883983387aff04ac8fc4115e782c7ea2f91a" HandleID="k8s-pod-network.e5a75c4e356c94b8201c9a28d81d883983387aff04ac8fc4115e782c7ea2f91a" Workload="172--237--134--153-k8s-calico--apiserver--556645b45d--t4ct5-eth0" Nov 24 00:10:04.302965 containerd[1554]: 2025-11-24 00:10:04.275 [INFO][4137] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5a75c4e356c94b8201c9a28d81d883983387aff04ac8fc4115e782c7ea2f91a" Namespace="calico-apiserver" Pod="calico-apiserver-556645b45d-t4ct5" WorkloadEndpoint="172--237--134--153-k8s-calico--apiserver--556645b45d--t4ct5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--153-k8s-calico--apiserver--556645b45d--t4ct5-eth0", GenerateName:"calico-apiserver-556645b45d-", Namespace:"calico-apiserver", SelfLink:"", UID:"b92dcaad-cbde-40da-94a7-6e0bac08ac02", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 9, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556645b45d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-153", ContainerID:"", Pod:"calico-apiserver-556645b45d-t4ct5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82ab8b3f406", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:10:04.303018 containerd[1554]: 2025-11-24 00:10:04.275 [INFO][4137] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.3/32] ContainerID="e5a75c4e356c94b8201c9a28d81d883983387aff04ac8fc4115e782c7ea2f91a" Namespace="calico-apiserver" Pod="calico-apiserver-556645b45d-t4ct5" WorkloadEndpoint="172--237--134--153-k8s-calico--apiserver--556645b45d--t4ct5-eth0" Nov 24 00:10:04.303018 containerd[1554]: 2025-11-24 00:10:04.275 [INFO][4137] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali82ab8b3f406 ContainerID="e5a75c4e356c94b8201c9a28d81d883983387aff04ac8fc4115e782c7ea2f91a" Namespace="calico-apiserver" Pod="calico-apiserver-556645b45d-t4ct5" WorkloadEndpoint="172--237--134--153-k8s-calico--apiserver--556645b45d--t4ct5-eth0" Nov 24 00:10:04.303018 containerd[1554]: 2025-11-24 00:10:04.282 [INFO][4137] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5a75c4e356c94b8201c9a28d81d883983387aff04ac8fc4115e782c7ea2f91a" Namespace="calico-apiserver" Pod="calico-apiserver-556645b45d-t4ct5" WorkloadEndpoint="172--237--134--153-k8s-calico--apiserver--556645b45d--t4ct5-eth0" Nov 24 00:10:04.303086 containerd[1554]: 2025-11-24 00:10:04.283 [INFO][4137] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="e5a75c4e356c94b8201c9a28d81d883983387aff04ac8fc4115e782c7ea2f91a" Namespace="calico-apiserver" Pod="calico-apiserver-556645b45d-t4ct5" WorkloadEndpoint="172--237--134--153-k8s-calico--apiserver--556645b45d--t4ct5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--153-k8s-calico--apiserver--556645b45d--t4ct5-eth0", GenerateName:"calico-apiserver-556645b45d-", Namespace:"calico-apiserver", SelfLink:"", UID:"b92dcaad-cbde-40da-94a7-6e0bac08ac02", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 9, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556645b45d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-153", ContainerID:"e5a75c4e356c94b8201c9a28d81d883983387aff04ac8fc4115e782c7ea2f91a", Pod:"calico-apiserver-556645b45d-t4ct5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82ab8b3f406", MAC:"92:5c:3f:f9:b4:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:10:04.303135 containerd[1554]: 2025-11-24 00:10:04.296 [INFO][4137] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5a75c4e356c94b8201c9a28d81d883983387aff04ac8fc4115e782c7ea2f91a" Namespace="calico-apiserver" Pod="calico-apiserver-556645b45d-t4ct5" WorkloadEndpoint="172--237--134--153-k8s-calico--apiserver--556645b45d--t4ct5-eth0" Nov 24 00:10:04.329191 containerd[1554]: time="2025-11-24T00:10:04.328849927Z" level=info msg="connecting to shim e5a75c4e356c94b8201c9a28d81d883983387aff04ac8fc4115e782c7ea2f91a" address="unix:///run/containerd/s/6a01d1715bdef8dd654ddbed51737a02e5d781ceaacb633776c44e6e30be2eef" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:10:04.358726 containerd[1554]: time="2025-11-24T00:10:04.358691254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-555b4c874c-4kfgm,Uid:79029465-26e2-4032-b64d-59a7fac9f008,Namespace:calico-system,Attempt:0,} returns sandbox id \"a93e05b86aad562c55a972e8bb2b989e43cfd42998776dc1db051a6454101768\"" Nov 24 00:10:04.362508 containerd[1554]: time="2025-11-24T00:10:04.361735154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:10:04.365181 systemd[1]: Started cri-containerd-e5a75c4e356c94b8201c9a28d81d883983387aff04ac8fc4115e782c7ea2f91a.scope - libcontainer container e5a75c4e356c94b8201c9a28d81d883983387aff04ac8fc4115e782c7ea2f91a. 
Nov 24 00:10:04.418569 containerd[1554]: time="2025-11-24T00:10:04.418435059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556645b45d-t4ct5,Uid:b92dcaad-cbde-40da-94a7-6e0bac08ac02,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e5a75c4e356c94b8201c9a28d81d883983387aff04ac8fc4115e782c7ea2f91a\"" Nov 24 00:10:04.496051 containerd[1554]: time="2025-11-24T00:10:04.496019839Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:04.496867 containerd[1554]: time="2025-11-24T00:10:04.496815753Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:10:04.496867 containerd[1554]: time="2025-11-24T00:10:04.496836343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:10:04.497025 kubelet[2727]: E1124 00:10:04.496963 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:10:04.497025 kubelet[2727]: E1124 00:10:04.496999 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:10:04.497414 containerd[1554]: time="2025-11-24T00:10:04.497171956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:10:04.497641 kubelet[2727]: E1124 00:10:04.497428 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wvch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-555b4c874c-4kfgm_calico-system(79029465-26e2-4032-b64d-59a7fac9f008): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:04.499148 kubelet[2727]: E1124 00:10:04.499047 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-555b4c874c-4kfgm" podUID="79029465-26e2-4032-b64d-59a7fac9f008" Nov 24 00:10:04.625993 containerd[1554]: time="2025-11-24T00:10:04.624983860Z" level=info msg="fetch failed 
after status: 404 Not Found" host=ghcr.io Nov 24 00:10:04.626495 containerd[1554]: time="2025-11-24T00:10:04.626444881Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:10:04.626558 containerd[1554]: time="2025-11-24T00:10:04.626535889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:10:04.626741 kubelet[2727]: E1124 00:10:04.626692 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:10:04.626782 kubelet[2727]: E1124 00:10:04.626753 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:10:04.627129 kubelet[2727]: E1124 00:10:04.627082 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ktsbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556645b45d-t4ct5_calico-apiserver(b92dcaad-cbde-40da-94a7-6e0bac08ac02): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:04.628251 kubelet[2727]: E1124 00:10:04.628228 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556645b45d-t4ct5" podUID="b92dcaad-cbde-40da-94a7-6e0bac08ac02" Nov 24 00:10:05.034641 containerd[1554]: time="2025-11-24T00:10:05.034586714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556645b45d-s8fp4,Uid:918b0245-1c27-4194-ac35-a7e394dba332,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:10:05.035223 containerd[1554]: time="2025-11-24T00:10:05.034619773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bxlsg,Uid:84972b9a-587c-4cc3-993d-8f4d81fe7493,Namespace:calico-system,Attempt:0,}" Nov 24 00:10:05.148280 systemd-networkd[1430]: cali43915aedac0: Link UP Nov 24 00:10:05.148596 systemd-networkd[1430]: cali43915aedac0: Gained carrier Nov 24 00:10:05.163043 containerd[1554]: 2025-11-24 00:10:05.083 [INFO][4299] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--134--153-k8s-goldmane--666569f655--bxlsg-eth0 goldmane-666569f655- calico-system 84972b9a-587c-4cc3-993d-8f4d81fe7493 834 0 2025-11-24 00:09:45 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-237-134-153 goldmane-666569f655-bxlsg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali43915aedac0 [] [] }} ContainerID="8d50ae035379fc220bdd02ef395da1964889a7ceefa0c39b45c66fb64b37abd0" Namespace="calico-system" Pod="goldmane-666569f655-bxlsg" WorkloadEndpoint="172--237--134--153-k8s-goldmane--666569f655--bxlsg-" Nov 24 00:10:05.163043 containerd[1554]: 2025-11-24 00:10:05.083 [INFO][4299] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="8d50ae035379fc220bdd02ef395da1964889a7ceefa0c39b45c66fb64b37abd0" Namespace="calico-system" Pod="goldmane-666569f655-bxlsg" WorkloadEndpoint="172--237--134--153-k8s-goldmane--666569f655--bxlsg-eth0" Nov 24 00:10:05.163043 containerd[1554]: 2025-11-24 00:10:05.112 [INFO][4318] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d50ae035379fc220bdd02ef395da1964889a7ceefa0c39b45c66fb64b37abd0" HandleID="k8s-pod-network.8d50ae035379fc220bdd02ef395da1964889a7ceefa0c39b45c66fb64b37abd0" Workload="172--237--134--153-k8s-goldmane--666569f655--bxlsg-eth0" Nov 24 00:10:05.163205 containerd[1554]: 2025-11-24 00:10:05.113 [INFO][4318] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8d50ae035379fc220bdd02ef395da1964889a7ceefa0c39b45c66fb64b37abd0" HandleID="k8s-pod-network.8d50ae035379fc220bdd02ef395da1964889a7ceefa0c39b45c66fb64b37abd0" Workload="172--237--134--153-k8s-goldmane--666569f655--bxlsg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5180), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-134-153", "pod":"goldmane-666569f655-bxlsg", "timestamp":"2025-11-24 00:10:05.11295554 +0000 UTC"}, Hostname:"172-237-134-153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:10:05.163205 containerd[1554]: 2025-11-24 00:10:05.113 [INFO][4318] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:10:05.163205 containerd[1554]: 2025-11-24 00:10:05.113 [INFO][4318] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 00:10:05.163205 containerd[1554]: 2025-11-24 00:10:05.113 [INFO][4318] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-134-153' Nov 24 00:10:05.163205 containerd[1554]: 2025-11-24 00:10:05.119 [INFO][4318] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8d50ae035379fc220bdd02ef395da1964889a7ceefa0c39b45c66fb64b37abd0" host="172-237-134-153" Nov 24 00:10:05.163205 containerd[1554]: 2025-11-24 00:10:05.122 [INFO][4318] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-134-153" Nov 24 00:10:05.163205 containerd[1554]: 2025-11-24 00:10:05.126 [INFO][4318] ipam/ipam.go 511: Trying affinity for 192.168.57.0/26 host="172-237-134-153" Nov 24 00:10:05.163205 containerd[1554]: 2025-11-24 00:10:05.127 [INFO][4318] ipam/ipam.go 158: Attempting to load block cidr=192.168.57.0/26 host="172-237-134-153" Nov 24 00:10:05.163205 containerd[1554]: 2025-11-24 00:10:05.129 [INFO][4318] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.57.0/26 host="172-237-134-153" Nov 24 00:10:05.163205 containerd[1554]: 2025-11-24 00:10:05.129 [INFO][4318] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.57.0/26 handle="k8s-pod-network.8d50ae035379fc220bdd02ef395da1964889a7ceefa0c39b45c66fb64b37abd0" host="172-237-134-153" Nov 24 00:10:05.163407 containerd[1554]: 2025-11-24 00:10:05.130 [INFO][4318] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8d50ae035379fc220bdd02ef395da1964889a7ceefa0c39b45c66fb64b37abd0 Nov 24 00:10:05.163407 containerd[1554]: 2025-11-24 00:10:05.134 [INFO][4318] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.57.0/26 handle="k8s-pod-network.8d50ae035379fc220bdd02ef395da1964889a7ceefa0c39b45c66fb64b37abd0" host="172-237-134-153" Nov 24 00:10:05.163407 
containerd[1554]: 2025-11-24 00:10:05.138 [INFO][4318] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.57.4/26] block=192.168.57.0/26 handle="k8s-pod-network.8d50ae035379fc220bdd02ef395da1964889a7ceefa0c39b45c66fb64b37abd0" host="172-237-134-153" Nov 24 00:10:05.163407 containerd[1554]: 2025-11-24 00:10:05.138 [INFO][4318] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.57.4/26] handle="k8s-pod-network.8d50ae035379fc220bdd02ef395da1964889a7ceefa0c39b45c66fb64b37abd0" host="172-237-134-153" Nov 24 00:10:05.163407 containerd[1554]: 2025-11-24 00:10:05.138 [INFO][4318] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 00:10:05.163407 containerd[1554]: 2025-11-24 00:10:05.138 [INFO][4318] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.57.4/26] IPv6=[] ContainerID="8d50ae035379fc220bdd02ef395da1964889a7ceefa0c39b45c66fb64b37abd0" HandleID="k8s-pod-network.8d50ae035379fc220bdd02ef395da1964889a7ceefa0c39b45c66fb64b37abd0" Workload="172--237--134--153-k8s-goldmane--666569f655--bxlsg-eth0" Nov 24 00:10:05.163598 containerd[1554]: 2025-11-24 00:10:05.141 [INFO][4299] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8d50ae035379fc220bdd02ef395da1964889a7ceefa0c39b45c66fb64b37abd0" Namespace="calico-system" Pod="goldmane-666569f655-bxlsg" WorkloadEndpoint="172--237--134--153-k8s-goldmane--666569f655--bxlsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--153-k8s-goldmane--666569f655--bxlsg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"84972b9a-587c-4cc3-993d-8f4d81fe7493", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 9, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-153", ContainerID:"", Pod:"goldmane-666569f655-bxlsg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.57.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali43915aedac0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:10:05.163598 containerd[1554]: 2025-11-24 00:10:05.141 [INFO][4299] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.4/32] ContainerID="8d50ae035379fc220bdd02ef395da1964889a7ceefa0c39b45c66fb64b37abd0" Namespace="calico-system" Pod="goldmane-666569f655-bxlsg" WorkloadEndpoint="172--237--134--153-k8s-goldmane--666569f655--bxlsg-eth0" Nov 24 00:10:05.163676 containerd[1554]: 2025-11-24 00:10:05.141 [INFO][4299] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43915aedac0 ContainerID="8d50ae035379fc220bdd02ef395da1964889a7ceefa0c39b45c66fb64b37abd0" Namespace="calico-system" Pod="goldmane-666569f655-bxlsg" WorkloadEndpoint="172--237--134--153-k8s-goldmane--666569f655--bxlsg-eth0" Nov 24 00:10:05.163676 
containerd[1554]: 2025-11-24 00:10:05.149 [INFO][4299] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d50ae035379fc220bdd02ef395da1964889a7ceefa0c39b45c66fb64b37abd0" Namespace="calico-system" Pod="goldmane-666569f655-bxlsg" WorkloadEndpoint="172--237--134--153-k8s-goldmane--666569f655--bxlsg-eth0" Nov 24 00:10:05.163718 containerd[1554]: 2025-11-24 00:10:05.150 [INFO][4299] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8d50ae035379fc220bdd02ef395da1964889a7ceefa0c39b45c66fb64b37abd0" Namespace="calico-system" Pod="goldmane-666569f655-bxlsg" WorkloadEndpoint="172--237--134--153-k8s-goldmane--666569f655--bxlsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--153-k8s-goldmane--666569f655--bxlsg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"84972b9a-587c-4cc3-993d-8f4d81fe7493", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 9, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-153", ContainerID:"8d50ae035379fc220bdd02ef395da1964889a7ceefa0c39b45c66fb64b37abd0", Pod:"goldmane-666569f655-bxlsg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.57.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali43915aedac0", MAC:"36:89:0d:cc:53:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:10:05.163769 containerd[1554]: 2025-11-24 00:10:05.161 [INFO][4299] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8d50ae035379fc220bdd02ef395da1964889a7ceefa0c39b45c66fb64b37abd0" Namespace="calico-system" Pod="goldmane-666569f655-bxlsg" WorkloadEndpoint="172--237--134--153-k8s-goldmane--666569f655--bxlsg-eth0" Nov 24 00:10:05.188935 kubelet[2727]: E1124 00:10:05.188770 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556645b45d-t4ct5" podUID="b92dcaad-cbde-40da-94a7-6e0bac08ac02" Nov 24 00:10:05.190410 kubelet[2727]: E1124 00:10:05.190353 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-555b4c874c-4kfgm" podUID="79029465-26e2-4032-b64d-59a7fac9f008" Nov 24 00:10:05.204196 containerd[1554]: time="2025-11-24T00:10:05.204141959Z" level=info msg="connecting to shim 8d50ae035379fc220bdd02ef395da1964889a7ceefa0c39b45c66fb64b37abd0" address="unix:///run/containerd/s/1c6958252ba1c7eb0928ae71a4a4618861cab6c00261dd8576b961c74476853e" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:10:05.243698 systemd[1]: Started cri-containerd-8d50ae035379fc220bdd02ef395da1964889a7ceefa0c39b45c66fb64b37abd0.scope - libcontainer container 8d50ae035379fc220bdd02ef395da1964889a7ceefa0c39b45c66fb64b37abd0. Nov 24 00:10:05.275834 systemd-networkd[1430]: cali17d5564d997: Link UP Nov 24 00:10:05.276928 systemd-networkd[1430]: cali17d5564d997: Gained carrier Nov 24 00:10:05.296545 containerd[1554]: 2025-11-24 00:10:05.080 [INFO][4290] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--134--153-k8s-calico--apiserver--556645b45d--s8fp4-eth0 calico-apiserver-556645b45d- calico-apiserver 918b0245-1c27-4194-ac35-a7e394dba332 833 0 2025-11-24 00:09:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:556645b45d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-237-134-153 calico-apiserver-556645b45d-s8fp4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali17d5564d997 [] [] }} ContainerID="c32a253508e64e8e6dee998d95c6e2e704a2e31461d9fa8637bc283c63a91c14" Namespace="calico-apiserver" Pod="calico-apiserver-556645b45d-s8fp4" WorkloadEndpoint="172--237--134--153-k8s-calico--apiserver--556645b45d--s8fp4-" Nov 24 00:10:05.296545 containerd[1554]: 2025-11-24 00:10:05.081 [INFO][4290] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c32a253508e64e8e6dee998d95c6e2e704a2e31461d9fa8637bc283c63a91c14" Namespace="calico-apiserver" Pod="calico-apiserver-556645b45d-s8fp4" WorkloadEndpoint="172--237--134--153-k8s-calico--apiserver--556645b45d--s8fp4-eth0" Nov 24 00:10:05.296545 containerd[1554]: 2025-11-24 00:10:05.122 [INFO][4316] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c32a253508e64e8e6dee998d95c6e2e704a2e31461d9fa8637bc283c63a91c14" HandleID="k8s-pod-network.c32a253508e64e8e6dee998d95c6e2e704a2e31461d9fa8637bc283c63a91c14" Workload="172--237--134--153-k8s-calico--apiserver--556645b45d--s8fp4-eth0" Nov 24 00:10:05.296859 containerd[1554]: 2025-11-24 00:10:05.122 [INFO][4316] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c32a253508e64e8e6dee998d95c6e2e704a2e31461d9fa8637bc283c63a91c14" HandleID="k8s-pod-network.c32a253508e64e8e6dee998d95c6e2e704a2e31461d9fa8637bc283c63a91c14" Workload="172--237--134--153-k8s-calico--apiserver--556645b45d--s8fp4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-237-134-153", "pod":"calico-apiserver-556645b45d-s8fp4", "timestamp":"2025-11-24 00:10:05.122414278 +0000 UTC"}, Hostname:"172-237-134-153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:10:05.296859 containerd[1554]: 2025-11-24 00:10:05.122 [INFO][4316] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:10:05.296859 containerd[1554]: 2025-11-24 00:10:05.138 [INFO][4316] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 00:10:05.296859 containerd[1554]: 2025-11-24 00:10:05.138 [INFO][4316] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-134-153' Nov 24 00:10:05.296859 containerd[1554]: 2025-11-24 00:10:05.221 [INFO][4316] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c32a253508e64e8e6dee998d95c6e2e704a2e31461d9fa8637bc283c63a91c14" host="172-237-134-153" Nov 24 00:10:05.296859 containerd[1554]: 2025-11-24 00:10:05.233 [INFO][4316] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-134-153" Nov 24 00:10:05.296859 containerd[1554]: 2025-11-24 00:10:05.252 [INFO][4316] ipam/ipam.go 511: Trying affinity for 192.168.57.0/26 host="172-237-134-153" Nov 24 00:10:05.296859 containerd[1554]: 2025-11-24 00:10:05.254 [INFO][4316] ipam/ipam.go 158: Attempting to load block cidr=192.168.57.0/26 host="172-237-134-153" Nov 24 00:10:05.296859 containerd[1554]: 2025-11-24 00:10:05.257 [INFO][4316] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.57.0/26 host="172-237-134-153" Nov 24 00:10:05.297113 containerd[1554]: 2025-11-24 00:10:05.257 [INFO][4316] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.57.0/26 handle="k8s-pod-network.c32a253508e64e8e6dee998d95c6e2e704a2e31461d9fa8637bc283c63a91c14" host="172-237-134-153" Nov 24 00:10:05.297113 containerd[1554]: 2025-11-24 00:10:05.258 [INFO][4316] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c32a253508e64e8e6dee998d95c6e2e704a2e31461d9fa8637bc283c63a91c14 Nov 24 00:10:05.297113 containerd[1554]: 2025-11-24 00:10:05.262 [INFO][4316] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.57.0/26 handle="k8s-pod-network.c32a253508e64e8e6dee998d95c6e2e704a2e31461d9fa8637bc283c63a91c14" host="172-237-134-153" Nov 24 00:10:05.297113 containerd[1554]: 2025-11-24 00:10:05.268 [INFO][4316] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.57.5/26] block=192.168.57.0/26 handle="k8s-pod-network.c32a253508e64e8e6dee998d95c6e2e704a2e31461d9fa8637bc283c63a91c14" host="172-237-134-153" Nov 24 00:10:05.297113 containerd[1554]: 2025-11-24 00:10:05.268 [INFO][4316] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.57.5/26] handle="k8s-pod-network.c32a253508e64e8e6dee998d95c6e2e704a2e31461d9fa8637bc283c63a91c14" host="172-237-134-153" Nov 24 00:10:05.297113 containerd[1554]: 2025-11-24 00:10:05.268 [INFO][4316] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:10:05.297113 containerd[1554]: 2025-11-24 00:10:05.268 [INFO][4316] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.57.5/26] IPv6=[] ContainerID="c32a253508e64e8e6dee998d95c6e2e704a2e31461d9fa8637bc283c63a91c14" HandleID="k8s-pod-network.c32a253508e64e8e6dee998d95c6e2e704a2e31461d9fa8637bc283c63a91c14" Workload="172--237--134--153-k8s-calico--apiserver--556645b45d--s8fp4-eth0" Nov 24 00:10:05.297287 containerd[1554]: 2025-11-24 00:10:05.272 [INFO][4290] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c32a253508e64e8e6dee998d95c6e2e704a2e31461d9fa8637bc283c63a91c14" Namespace="calico-apiserver" Pod="calico-apiserver-556645b45d-s8fp4" WorkloadEndpoint="172--237--134--153-k8s-calico--apiserver--556645b45d--s8fp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--153-k8s-calico--apiserver--556645b45d--s8fp4-eth0", GenerateName:"calico-apiserver-556645b45d-", Namespace:"calico-apiserver", SelfLink:"", UID:"918b0245-1c27-4194-ac35-a7e394dba332", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 9, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556645b45d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-153", ContainerID:"", Pod:"calico-apiserver-556645b45d-s8fp4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali17d5564d997", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:10:05.297447 containerd[1554]: 2025-11-24 00:10:05.273 [INFO][4290] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.5/32] ContainerID="c32a253508e64e8e6dee998d95c6e2e704a2e31461d9fa8637bc283c63a91c14" Namespace="calico-apiserver" Pod="calico-apiserver-556645b45d-s8fp4" WorkloadEndpoint="172--237--134--153-k8s-calico--apiserver--556645b45d--s8fp4-eth0" Nov 24 00:10:05.297447 containerd[1554]: 2025-11-24 00:10:05.273 [INFO][4290] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali17d5564d997 ContainerID="c32a253508e64e8e6dee998d95c6e2e704a2e31461d9fa8637bc283c63a91c14" Namespace="calico-apiserver" Pod="calico-apiserver-556645b45d-s8fp4" WorkloadEndpoint="172--237--134--153-k8s-calico--apiserver--556645b45d--s8fp4-eth0" Nov 24 00:10:05.297447 containerd[1554]: 2025-11-24 00:10:05.277 [INFO][4290] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c32a253508e64e8e6dee998d95c6e2e704a2e31461d9fa8637bc283c63a91c14" Namespace="calico-apiserver" Pod="calico-apiserver-556645b45d-s8fp4" WorkloadEndpoint="172--237--134--153-k8s-calico--apiserver--556645b45d--s8fp4-eth0" Nov 24 00:10:05.297577 containerd[1554]: 2025-11-24 00:10:05.278 [INFO][4290] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="c32a253508e64e8e6dee998d95c6e2e704a2e31461d9fa8637bc283c63a91c14" Namespace="calico-apiserver" Pod="calico-apiserver-556645b45d-s8fp4" WorkloadEndpoint="172--237--134--153-k8s-calico--apiserver--556645b45d--s8fp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--153-k8s-calico--apiserver--556645b45d--s8fp4-eth0", GenerateName:"calico-apiserver-556645b45d-", Namespace:"calico-apiserver", SelfLink:"", UID:"918b0245-1c27-4194-ac35-a7e394dba332", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 9, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556645b45d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-153", ContainerID:"c32a253508e64e8e6dee998d95c6e2e704a2e31461d9fa8637bc283c63a91c14", Pod:"calico-apiserver-556645b45d-s8fp4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali17d5564d997", MAC:"6a:95:a6:b0:84:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:10:05.297650 containerd[1554]: 2025-11-24 00:10:05.284 [INFO][4290] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c32a253508e64e8e6dee998d95c6e2e704a2e31461d9fa8637bc283c63a91c14" Namespace="calico-apiserver" Pod="calico-apiserver-556645b45d-s8fp4" WorkloadEndpoint="172--237--134--153-k8s-calico--apiserver--556645b45d--s8fp4-eth0" Nov 24 00:10:05.316613 containerd[1554]: time="2025-11-24T00:10:05.316579610Z" level=info msg="connecting to shim c32a253508e64e8e6dee998d95c6e2e704a2e31461d9fa8637bc283c63a91c14" address="unix:///run/containerd/s/f0b65012a1b1bdc78caf1e5c00b920b907ea35081cbddc8d41ca4cca771fa8cb" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:10:05.342841 systemd[1]: Started cri-containerd-c32a253508e64e8e6dee998d95c6e2e704a2e31461d9fa8637bc283c63a91c14.scope - libcontainer container c32a253508e64e8e6dee998d95c6e2e704a2e31461d9fa8637bc283c63a91c14. 
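The CNI add for calico-apiserver-556645b45d-s8fp4 above follows the same IPAM sequence as the earlier pods: acquire the host-wide lock, confirm the node's affinity for 192.168.57.0/26, claim the next free address (192.168.57.5/26 here), then release the lock. Purely as a hedged illustration of the AutoAssign call whose arguments the plugin dumps verbatim in the log (Num4, HandleID, Attrs, Hostname, IntendedUse), the sketch below shows roughly how that request looks against libcalico-go's clientv3 API; the client construction and the return type are assumptions and vary between Calico releases.

```go
// Hedged sketch only: roughly how Calico IPAM is asked for one IPv4 address,
// mirroring the AutoAssignArgs dumped in the log above. Package paths,
// NewFromEnv(), and the return shape are assumptions for a v3.30-era
// libcalico-go and may differ between releases.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/projectcalico/calico/libcalico-go/lib/clientv3"
	"github.com/projectcalico/calico/libcalico-go/lib/ipam"
)

func main() {
	// Assumed convenience constructor; the real CNI plugin builds its client
	// from the CNI network config instead.
	c, err := clientv3.NewFromEnv()
	if err != nil {
		log.Fatal(err)
	}

	handle := "k8s-pod-network.<containerID>" // placeholder, not a real container ID
	v4, _, err := c.IPAM().AutoAssign(context.Background(), ipam.AutoAssignArgs{
		Num4:     1,
		Num6:     0,
		HandleID: &handle,
		Attrs: map[string]string{
			"namespace": "calico-apiserver",
			"node":      "172-237-134-153",
		},
		Hostname:    "172-237-134-153",
		IntendedUse: "Workload", // the log shows IntendedUse:"Workload"
	})
	if err != nil {
		log.Fatal(err)
	}
	// v4 carries the claimed addresses, e.g. 192.168.57.5/26 in the log above.
	fmt.Println(v4)
}
```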
Nov 24 00:10:05.365351 containerd[1554]: time="2025-11-24T00:10:05.365312585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bxlsg,Uid:84972b9a-587c-4cc3-993d-8f4d81fe7493,Namespace:calico-system,Attempt:0,} returns sandbox id \"8d50ae035379fc220bdd02ef395da1964889a7ceefa0c39b45c66fb64b37abd0\"" Nov 24 00:10:05.367760 containerd[1554]: time="2025-11-24T00:10:05.367498013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:10:05.378545 systemd-networkd[1430]: cali86d13d75cbe: Gained IPv6LL Nov 24 00:10:05.417227 containerd[1554]: time="2025-11-24T00:10:05.417186989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556645b45d-s8fp4,Uid:918b0245-1c27-4194-ac35-a7e394dba332,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c32a253508e64e8e6dee998d95c6e2e704a2e31461d9fa8637bc283c63a91c14\"" Nov 24 00:10:05.499201 containerd[1554]: time="2025-11-24T00:10:05.499151195Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:05.500096 containerd[1554]: time="2025-11-24T00:10:05.500066127Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:10:05.500217 containerd[1554]: time="2025-11-24T00:10:05.500126976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:10:05.500276 kubelet[2727]: E1124 00:10:05.500222 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:10:05.500276 kubelet[2727]: E1124 00:10:05.500250 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:10:05.501053 kubelet[2727]: E1124 00:10:05.500392 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rzlp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bxlsg_calico-system(84972b9a-587c-4cc3-993d-8f4d81fe7493): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:05.501559 containerd[1554]: time="2025-11-24T00:10:05.500959070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:10:05.503088 kubelet[2727]: E1124 00:10:05.503035 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found\"" pod="calico-system/goldmane-666569f655-bxlsg" podUID="84972b9a-587c-4cc3-993d-8f4d81fe7493" Nov 24 00:10:05.634560 containerd[1554]: time="2025-11-24T00:10:05.634139043Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:05.635272 containerd[1554]: time="2025-11-24T00:10:05.635246612Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:10:05.635337 containerd[1554]: time="2025-11-24T00:10:05.635295791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:10:05.635435 kubelet[2727]: E1124 00:10:05.635398 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:10:05.635485 kubelet[2727]: E1124 00:10:05.635454 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:10:05.635598 kubelet[2727]: E1124 00:10:05.635557 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zw82p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556645b45d-s8fp4_calico-apiserver(918b0245-1c27-4194-ac35-a7e394dba332): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:05.636807 kubelet[2727]: E1124 00:10:05.636780 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556645b45d-s8fp4" podUID="918b0245-1c27-4194-ac35-a7e394dba332" Nov 24 00:10:05.697651 systemd-networkd[1430]: cali82ab8b3f406: Gained IPv6LL Nov 24 00:10:06.034271 kubelet[2727]: E1124 00:10:06.034087 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:10:06.034624 containerd[1554]: time="2025-11-24T00:10:06.034597356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8zkl6,Uid:d8a094f4-f693-4b7a-a5c9-e53b2fb52dcb,Namespace:kube-system,Attempt:0,}" Nov 24 00:10:06.035208 containerd[1554]: time="2025-11-24T00:10:06.035189195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r4dwf,Uid:63922d09-5f16-43ef-bdc3-f819f707f5b0,Namespace:calico-system,Attempt:0,}" Nov 24 00:10:06.153798 systemd-networkd[1430]: caliaea97690cbe: Link UP Nov 24 00:10:06.155129 systemd-networkd[1430]: caliaea97690cbe: Gained carrier Nov 24 00:10:06.168405 containerd[1554]: 2025-11-24 00:10:06.085 [INFO][4447] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--134--153-k8s-coredns--674b8bbfcf--8zkl6-eth0 coredns-674b8bbfcf- kube-system d8a094f4-f693-4b7a-a5c9-e53b2fb52dcb 835 0 2025-11-24 00:09:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-237-134-153 coredns-674b8bbfcf-8zkl6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaea97690cbe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-8zkl6" WorkloadEndpoint="172--237--134--153-k8s-coredns--674b8bbfcf--8zkl6-" Nov 24 00:10:06.168405 containerd[1554]: 2025-11-24 00:10:06.085 [INFO][4447] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257" Namespace="kube-system" Pod="coredns-674b8bbfcf-8zkl6" WorkloadEndpoint="172--237--134--153-k8s-coredns--674b8bbfcf--8zkl6-eth0" Nov 24 00:10:06.168405 containerd[1554]: 2025-11-24 00:10:06.117 [INFO][4469] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257" HandleID="k8s-pod-network.4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257" Workload="172--237--134--153-k8s-coredns--674b8bbfcf--8zkl6-eth0" Nov 24 00:10:06.168593 containerd[1554]: 2025-11-24 00:10:06.117 [INFO][4469] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257" HandleID="k8s-pod-network.4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257" Workload="172--237--134--153-k8s-coredns--674b8bbfcf--8zkl6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5200), Attrs:map[string]string{"namespace":"kube-system", "node":"172-237-134-153", "pod":"coredns-674b8bbfcf-8zkl6", "timestamp":"2025-11-24 00:10:06.11719503 +0000 UTC"}, Hostname:"172-237-134-153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:10:06.168593 containerd[1554]: 2025-11-24 00:10:06.118 [INFO][4469] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:10:06.168593 containerd[1554]: 2025-11-24 00:10:06.118 [INFO][4469] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:10:06.168593 containerd[1554]: 2025-11-24 00:10:06.118 [INFO][4469] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-134-153' Nov 24 00:10:06.168593 containerd[1554]: 2025-11-24 00:10:06.124 [INFO][4469] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257" host="172-237-134-153" Nov 24 00:10:06.168593 containerd[1554]: 2025-11-24 00:10:06.127 [INFO][4469] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-134-153" Nov 24 00:10:06.168593 containerd[1554]: 2025-11-24 00:10:06.131 [INFO][4469] ipam/ipam.go 511: Trying affinity for 192.168.57.0/26 host="172-237-134-153" Nov 24 00:10:06.168593 containerd[1554]: 2025-11-24 00:10:06.133 [INFO][4469] ipam/ipam.go 158: Attempting to load block cidr=192.168.57.0/26 host="172-237-134-153" Nov 24 00:10:06.168593 containerd[1554]: 2025-11-24 00:10:06.135 [INFO][4469] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.57.0/26 host="172-237-134-153" Nov 24 00:10:06.168593 containerd[1554]: 2025-11-24 00:10:06.135 [INFO][4469] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.57.0/26 handle="k8s-pod-network.4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257" host="172-237-134-153" Nov 24 00:10:06.169090 containerd[1554]: 2025-11-24 00:10:06.136 [INFO][4469] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257 Nov 24 00:10:06.169090 containerd[1554]: 2025-11-24 00:10:06.139 [INFO][4469] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.57.0/26 handle="k8s-pod-network.4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257" host="172-237-134-153" Nov 24 00:10:06.169090 containerd[1554]: 2025-11-24 00:10:06.143 [INFO][4469] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.57.6/26] block=192.168.57.0/26 handle="k8s-pod-network.4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257" host="172-237-134-153" Nov 24 00:10:06.169090 containerd[1554]: 2025-11-24 00:10:06.143 [INFO][4469] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.57.6/26] handle="k8s-pod-network.4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257" host="172-237-134-153" Nov 24 00:10:06.169090 containerd[1554]: 2025-11-24 00:10:06.143 [INFO][4469] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:10:06.169090 containerd[1554]: 2025-11-24 00:10:06.143 [INFO][4469] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.57.6/26] IPv6=[] ContainerID="4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257" HandleID="k8s-pod-network.4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257" Workload="172--237--134--153-k8s-coredns--674b8bbfcf--8zkl6-eth0" Nov 24 00:10:06.169207 containerd[1554]: 2025-11-24 00:10:06.147 [INFO][4447] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257" Namespace="kube-system" Pod="coredns-674b8bbfcf-8zkl6" WorkloadEndpoint="172--237--134--153-k8s-coredns--674b8bbfcf--8zkl6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--153-k8s-coredns--674b8bbfcf--8zkl6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d8a094f4-f693-4b7a-a5c9-e53b2fb52dcb", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-153", ContainerID:"", Pod:"coredns-674b8bbfcf-8zkl6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaea97690cbe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:10:06.169207 containerd[1554]: 2025-11-24 00:10:06.148 [INFO][4447] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.6/32] ContainerID="4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257" Namespace="kube-system" Pod="coredns-674b8bbfcf-8zkl6" WorkloadEndpoint="172--237--134--153-k8s-coredns--674b8bbfcf--8zkl6-eth0" Nov 24 00:10:06.169207 containerd[1554]: 2025-11-24 00:10:06.148 [INFO][4447] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaea97690cbe ContainerID="4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257" Namespace="kube-system" Pod="coredns-674b8bbfcf-8zkl6" WorkloadEndpoint="172--237--134--153-k8s-coredns--674b8bbfcf--8zkl6-eth0" Nov 24 00:10:06.169207 containerd[1554]: 2025-11-24 00:10:06.155 [INFO][4447] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257" Namespace="kube-system" Pod="coredns-674b8bbfcf-8zkl6" 
WorkloadEndpoint="172--237--134--153-k8s-coredns--674b8bbfcf--8zkl6-eth0" Nov 24 00:10:06.169207 containerd[1554]: 2025-11-24 00:10:06.157 [INFO][4447] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257" Namespace="kube-system" Pod="coredns-674b8bbfcf-8zkl6" WorkloadEndpoint="172--237--134--153-k8s-coredns--674b8bbfcf--8zkl6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--153-k8s-coredns--674b8bbfcf--8zkl6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d8a094f4-f693-4b7a-a5c9-e53b2fb52dcb", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-153", ContainerID:"4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257", Pod:"coredns-674b8bbfcf-8zkl6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaea97690cbe", MAC:"c2:b3:0c:76:f9:9f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:10:06.169207 containerd[1554]: 2025-11-24 00:10:06.164 [INFO][4447] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257" Namespace="kube-system" Pod="coredns-674b8bbfcf-8zkl6" WorkloadEndpoint="172--237--134--153-k8s-coredns--674b8bbfcf--8zkl6-eth0" Nov 24 00:10:06.187797 containerd[1554]: time="2025-11-24T00:10:06.187738448Z" level=info msg="connecting to shim 4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257" address="unix:///run/containerd/s/7ed2fc51a36f019c699fcdff4ab89599a099715bc9c5b5c6035e4e2f6589a894" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:10:06.199318 kubelet[2727]: E1124 00:10:06.198862 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-556645b45d-s8fp4" podUID="918b0245-1c27-4194-ac35-a7e394dba332" Nov 24 00:10:06.205432 kubelet[2727]: E1124 00:10:06.205358 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxlsg" podUID="84972b9a-587c-4cc3-993d-8f4d81fe7493" Nov 24 00:10:06.207685 kubelet[2727]: E1124 00:10:06.205752 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556645b45d-t4ct5" podUID="b92dcaad-cbde-40da-94a7-6e0bac08ac02" Nov 24 00:10:06.209384 kubelet[2727]: E1124 00:10:06.209187 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-555b4c874c-4kfgm" podUID="79029465-26e2-4032-b64d-59a7fac9f008" Nov 24 00:10:06.237586 systemd[1]: Started cri-containerd-4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257.scope - libcontainer container 4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257. 
Nov 24 00:10:06.305601 systemd-networkd[1430]: caliccd155a4519: Link UP Nov 24 00:10:06.307530 systemd-networkd[1430]: caliccd155a4519: Gained carrier Nov 24 00:10:06.329530 containerd[1554]: 2025-11-24 00:10:06.084 [INFO][4441] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--134--153-k8s-csi--node--driver--r4dwf-eth0 csi-node-driver- calico-system 63922d09-5f16-43ef-bdc3-f819f707f5b0 729 0 2025-11-24 00:09:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-237-134-153 csi-node-driver-r4dwf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliccd155a4519 [] [] }} ContainerID="760dc2f67c56231381c3d9f4197b180bf5f07d40debbdffdf1c3422f23c6c4f2" Namespace="calico-system" Pod="csi-node-driver-r4dwf" WorkloadEndpoint="172--237--134--153-k8s-csi--node--driver--r4dwf-" Nov 24 00:10:06.329530 containerd[1554]: 2025-11-24 00:10:06.084 [INFO][4441] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="760dc2f67c56231381c3d9f4197b180bf5f07d40debbdffdf1c3422f23c6c4f2" Namespace="calico-system" Pod="csi-node-driver-r4dwf" WorkloadEndpoint="172--237--134--153-k8s-csi--node--driver--r4dwf-eth0" Nov 24 00:10:06.329530 containerd[1554]: 2025-11-24 00:10:06.118 [INFO][4467] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="760dc2f67c56231381c3d9f4197b180bf5f07d40debbdffdf1c3422f23c6c4f2" HandleID="k8s-pod-network.760dc2f67c56231381c3d9f4197b180bf5f07d40debbdffdf1c3422f23c6c4f2" Workload="172--237--134--153-k8s-csi--node--driver--r4dwf-eth0" Nov 24 00:10:06.329530 containerd[1554]: 2025-11-24 00:10:06.119 [INFO][4467] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="760dc2f67c56231381c3d9f4197b180bf5f07d40debbdffdf1c3422f23c6c4f2" HandleID="k8s-pod-network.760dc2f67c56231381c3d9f4197b180bf5f07d40debbdffdf1c3422f23c6c4f2" Workload="172--237--134--153-k8s-csi--node--driver--r4dwf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56e0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-134-153", "pod":"csi-node-driver-r4dwf", "timestamp":"2025-11-24 00:10:06.118834119 +0000 UTC"}, Hostname:"172-237-134-153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:10:06.329530 containerd[1554]: 2025-11-24 00:10:06.119 [INFO][4467] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:10:06.329530 containerd[1554]: 2025-11-24 00:10:06.143 [INFO][4467] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:10:06.329530 containerd[1554]: 2025-11-24 00:10:06.143 [INFO][4467] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-134-153' Nov 24 00:10:06.329530 containerd[1554]: 2025-11-24 00:10:06.238 [INFO][4467] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.760dc2f67c56231381c3d9f4197b180bf5f07d40debbdffdf1c3422f23c6c4f2" host="172-237-134-153" Nov 24 00:10:06.329530 containerd[1554]: 2025-11-24 00:10:06.257 [INFO][4467] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-134-153" Nov 24 00:10:06.329530 containerd[1554]: 2025-11-24 00:10:06.264 [INFO][4467] ipam/ipam.go 511: Trying affinity for 192.168.57.0/26 host="172-237-134-153" Nov 24 00:10:06.329530 containerd[1554]: 2025-11-24 00:10:06.267 [INFO][4467] ipam/ipam.go 158: Attempting to load block cidr=192.168.57.0/26 host="172-237-134-153" Nov 24 00:10:06.329530 containerd[1554]: 2025-11-24 00:10:06.271 [INFO][4467] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.57.0/26 host="172-237-134-153" Nov 24 00:10:06.329530 containerd[1554]: 2025-11-24 00:10:06.271 [INFO][4467] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.57.0/26 handle="k8s-pod-network.760dc2f67c56231381c3d9f4197b180bf5f07d40debbdffdf1c3422f23c6c4f2" host="172-237-134-153" Nov 24 00:10:06.329530 containerd[1554]: 2025-11-24 00:10:06.273 [INFO][4467] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.760dc2f67c56231381c3d9f4197b180bf5f07d40debbdffdf1c3422f23c6c4f2 Nov 24 00:10:06.329530 containerd[1554]: 2025-11-24 00:10:06.277 [INFO][4467] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.57.0/26 handle="k8s-pod-network.760dc2f67c56231381c3d9f4197b180bf5f07d40debbdffdf1c3422f23c6c4f2" host="172-237-134-153" Nov 24 00:10:06.329530 containerd[1554]: 2025-11-24 00:10:06.283 [INFO][4467] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.57.7/26] block=192.168.57.0/26 handle="k8s-pod-network.760dc2f67c56231381c3d9f4197b180bf5f07d40debbdffdf1c3422f23c6c4f2" host="172-237-134-153" Nov 24 00:10:06.329530 containerd[1554]: 2025-11-24 00:10:06.284 [INFO][4467] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.57.7/26] handle="k8s-pod-network.760dc2f67c56231381c3d9f4197b180bf5f07d40debbdffdf1c3422f23c6c4f2" host="172-237-134-153" Nov 24 00:10:06.329530 containerd[1554]: 2025-11-24 00:10:06.284 [INFO][4467] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:10:06.329530 containerd[1554]: 2025-11-24 00:10:06.284 [INFO][4467] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.57.7/26] IPv6=[] ContainerID="760dc2f67c56231381c3d9f4197b180bf5f07d40debbdffdf1c3422f23c6c4f2" HandleID="k8s-pod-network.760dc2f67c56231381c3d9f4197b180bf5f07d40debbdffdf1c3422f23c6c4f2" Workload="172--237--134--153-k8s-csi--node--driver--r4dwf-eth0" Nov 24 00:10:06.329973 containerd[1554]: 2025-11-24 00:10:06.289 [INFO][4441] cni-plugin/k8s.go 418: Populated endpoint ContainerID="760dc2f67c56231381c3d9f4197b180bf5f07d40debbdffdf1c3422f23c6c4f2" Namespace="calico-system" Pod="csi-node-driver-r4dwf" WorkloadEndpoint="172--237--134--153-k8s-csi--node--driver--r4dwf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--153-k8s-csi--node--driver--r4dwf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"63922d09-5f16-43ef-bdc3-f819f707f5b0", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 9, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-153", ContainerID:"", Pod:"csi-node-driver-r4dwf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.57.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliccd155a4519", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:10:06.329973 containerd[1554]: 2025-11-24 00:10:06.289 [INFO][4441] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.7/32] ContainerID="760dc2f67c56231381c3d9f4197b180bf5f07d40debbdffdf1c3422f23c6c4f2" Namespace="calico-system" Pod="csi-node-driver-r4dwf" WorkloadEndpoint="172--237--134--153-k8s-csi--node--driver--r4dwf-eth0" Nov 24 00:10:06.329973 containerd[1554]: 2025-11-24 00:10:06.291 [INFO][4441] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliccd155a4519 ContainerID="760dc2f67c56231381c3d9f4197b180bf5f07d40debbdffdf1c3422f23c6c4f2" Namespace="calico-system" Pod="csi-node-driver-r4dwf" WorkloadEndpoint="172--237--134--153-k8s-csi--node--driver--r4dwf-eth0" Nov 24 00:10:06.329973 containerd[1554]: 2025-11-24 00:10:06.309 [INFO][4441] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="760dc2f67c56231381c3d9f4197b180bf5f07d40debbdffdf1c3422f23c6c4f2" Namespace="calico-system" Pod="csi-node-driver-r4dwf" WorkloadEndpoint="172--237--134--153-k8s-csi--node--driver--r4dwf-eth0" Nov 24 00:10:06.329973 containerd[1554]: 2025-11-24 00:10:06.312 [INFO][4441] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="760dc2f67c56231381c3d9f4197b180bf5f07d40debbdffdf1c3422f23c6c4f2" Namespace="calico-system" 
Pod="csi-node-driver-r4dwf" WorkloadEndpoint="172--237--134--153-k8s-csi--node--driver--r4dwf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--153-k8s-csi--node--driver--r4dwf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"63922d09-5f16-43ef-bdc3-f819f707f5b0", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 9, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-153", ContainerID:"760dc2f67c56231381c3d9f4197b180bf5f07d40debbdffdf1c3422f23c6c4f2", Pod:"csi-node-driver-r4dwf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.57.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliccd155a4519", MAC:"c6:df:d6:8c:fd:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:10:06.329973 containerd[1554]: 2025-11-24 00:10:06.321 [INFO][4441] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="760dc2f67c56231381c3d9f4197b180bf5f07d40debbdffdf1c3422f23c6c4f2" Namespace="calico-system" Pod="csi-node-driver-r4dwf" WorkloadEndpoint="172--237--134--153-k8s-csi--node--driver--r4dwf-eth0" Nov 24 00:10:06.370061 containerd[1554]: time="2025-11-24T00:10:06.370030318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8zkl6,Uid:d8a094f4-f693-4b7a-a5c9-e53b2fb52dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257\"" Nov 24 00:10:06.371823 containerd[1554]: time="2025-11-24T00:10:06.371531251Z" level=info msg="connecting to shim 760dc2f67c56231381c3d9f4197b180bf5f07d40debbdffdf1c3422f23c6c4f2" address="unix:///run/containerd/s/ff63e3e318f1776b5e5197c78417def86dd2deb17e702cb58755c493853da0b5" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:10:06.374084 kubelet[2727]: E1124 00:10:06.374058 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:10:06.378375 containerd[1554]: time="2025-11-24T00:10:06.378350084Z" level=info msg="CreateContainer within sandbox \"4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 00:10:06.399456 containerd[1554]: time="2025-11-24T00:10:06.399437012Z" level=info msg="Container 6b2f32fdbcf0a657bd8ad5e4d8dd7b74cf813f725aa06b6bab1fc8e8ed3fe0c5: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:10:06.405070 containerd[1554]: time="2025-11-24T00:10:06.404989178Z" level=info msg="CreateContainer within sandbox 
\"4d5db672a1f129daa1b7bf26ed22114f633203cbf743fdd1cbc9ee7a59dde257\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6b2f32fdbcf0a657bd8ad5e4d8dd7b74cf813f725aa06b6bab1fc8e8ed3fe0c5\"" Nov 24 00:10:06.408883 containerd[1554]: time="2025-11-24T00:10:06.405387441Z" level=info msg="StartContainer for \"6b2f32fdbcf0a657bd8ad5e4d8dd7b74cf813f725aa06b6bab1fc8e8ed3fe0c5\"" Nov 24 00:10:06.408883 containerd[1554]: time="2025-11-24T00:10:06.406052869Z" level=info msg="connecting to shim 6b2f32fdbcf0a657bd8ad5e4d8dd7b74cf813f725aa06b6bab1fc8e8ed3fe0c5" address="unix:///run/containerd/s/7ed2fc51a36f019c699fcdff4ab89599a099715bc9c5b5c6035e4e2f6589a894" protocol=ttrpc version=3 Nov 24 00:10:06.410656 systemd[1]: Started cri-containerd-760dc2f67c56231381c3d9f4197b180bf5f07d40debbdffdf1c3422f23c6c4f2.scope - libcontainer container 760dc2f67c56231381c3d9f4197b180bf5f07d40debbdffdf1c3422f23c6c4f2. Nov 24 00:10:06.435046 systemd[1]: Started cri-containerd-6b2f32fdbcf0a657bd8ad5e4d8dd7b74cf813f725aa06b6bab1fc8e8ed3fe0c5.scope - libcontainer container 6b2f32fdbcf0a657bd8ad5e4d8dd7b74cf813f725aa06b6bab1fc8e8ed3fe0c5. Nov 24 00:10:06.465633 systemd-networkd[1430]: cali43915aedac0: Gained IPv6LL Nov 24 00:10:06.490270 containerd[1554]: time="2025-11-24T00:10:06.490238333Z" level=info msg="StartContainer for \"6b2f32fdbcf0a657bd8ad5e4d8dd7b74cf813f725aa06b6bab1fc8e8ed3fe0c5\" returns successfully" Nov 24 00:10:06.497848 containerd[1554]: time="2025-11-24T00:10:06.497818402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r4dwf,Uid:63922d09-5f16-43ef-bdc3-f819f707f5b0,Namespace:calico-system,Attempt:0,} returns sandbox id \"760dc2f67c56231381c3d9f4197b180bf5f07d40debbdffdf1c3422f23c6c4f2\"" Nov 24 00:10:06.499686 containerd[1554]: time="2025-11-24T00:10:06.499663208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:10:06.629405 containerd[1554]: time="2025-11-24T00:10:06.629276038Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:06.631012 containerd[1554]: time="2025-11-24T00:10:06.630935467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:10:06.631012 containerd[1554]: time="2025-11-24T00:10:06.630989216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:10:06.631193 kubelet[2727]: E1124 00:10:06.631162 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:10:06.631561 kubelet[2727]: E1124 00:10:06.631208 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:10:06.634628 kubelet[2727]: E1124 00:10:06.634573 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xjcrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r4dwf_calico-system(63922d09-5f16-43ef-bdc3-f819f707f5b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:06.637001 containerd[1554]: time="2025-11-24T00:10:06.636959715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:10:06.761163 containerd[1554]: time="2025-11-24T00:10:06.761087047Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:06.762380 containerd[1554]: time="2025-11-24T00:10:06.762283875Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:10:06.762380 containerd[1554]: time="2025-11-24T00:10:06.762338064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:10:06.762831 kubelet[2727]: E1124 00:10:06.762625 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:10:06.762831 kubelet[2727]: E1124 00:10:06.762662 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:10:06.762831 kubelet[2727]: E1124 00:10:06.762776 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xjcrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r4dwf_calico-system(63922d09-5f16-43ef-bdc3-f819f707f5b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:06.764223 kubelet[2727]: E1124 00:10:06.764118 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r4dwf" podUID="63922d09-5f16-43ef-bdc3-f819f707f5b0" Nov 24 00:10:07.038653 kubelet[2727]: E1124 00:10:07.038613 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:10:07.039325 containerd[1554]: time="2025-11-24T00:10:07.039215937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f2zgd,Uid:0b32a61f-382f-4e7f-bc9f-f8456926fdc1,Namespace:kube-system,Attempt:0,}" Nov 24 00:10:07.191545 systemd-networkd[1430]: calia0aeda6ef55: Link UP Nov 24 00:10:07.193133 systemd-networkd[1430]: calia0aeda6ef55: Gained carrier Nov 24 00:10:07.216035 kubelet[2727]: E1124 00:10:07.214822 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r4dwf" podUID="63922d09-5f16-43ef-bdc3-f819f707f5b0" Nov 24 00:10:07.218490 containerd[1554]: 2025-11-24 00:10:07.093 [INFO][4628] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--134--153-k8s-coredns--674b8bbfcf--f2zgd-eth0 coredns-674b8bbfcf- kube-system 0b32a61f-382f-4e7f-bc9f-f8456926fdc1 824 0 2025-11-24 00:09:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-237-134-153 coredns-674b8bbfcf-f2zgd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia0aeda6ef55 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694" Namespace="kube-system" Pod="coredns-674b8bbfcf-f2zgd" WorkloadEndpoint="172--237--134--153-k8s-coredns--674b8bbfcf--f2zgd-" Nov 24 00:10:07.218490 containerd[1554]: 2025-11-24 00:10:07.093 [INFO][4628] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694" Namespace="kube-system" Pod="coredns-674b8bbfcf-f2zgd" WorkloadEndpoint="172--237--134--153-k8s-coredns--674b8bbfcf--f2zgd-eth0" Nov 24 00:10:07.218490 containerd[1554]: 2025-11-24 00:10:07.143 [INFO][4642] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694" HandleID="k8s-pod-network.1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694" 
Workload="172--237--134--153-k8s-coredns--674b8bbfcf--f2zgd-eth0" Nov 24 00:10:07.218490 containerd[1554]: 2025-11-24 00:10:07.143 [INFO][4642] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694" HandleID="k8s-pod-network.1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694" Workload="172--237--134--153-k8s-coredns--674b8bbfcf--f2zgd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f940), Attrs:map[string]string{"namespace":"kube-system", "node":"172-237-134-153", "pod":"coredns-674b8bbfcf-f2zgd", "timestamp":"2025-11-24 00:10:07.143550866 +0000 UTC"}, Hostname:"172-237-134-153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:10:07.218490 containerd[1554]: 2025-11-24 00:10:07.144 [INFO][4642] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:10:07.218490 containerd[1554]: 2025-11-24 00:10:07.144 [INFO][4642] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 00:10:07.218490 containerd[1554]: 2025-11-24 00:10:07.144 [INFO][4642] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-134-153' Nov 24 00:10:07.218490 containerd[1554]: 2025-11-24 00:10:07.151 [INFO][4642] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694" host="172-237-134-153" Nov 24 00:10:07.218490 containerd[1554]: 2025-11-24 00:10:07.155 [INFO][4642] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-134-153" Nov 24 00:10:07.218490 containerd[1554]: 2025-11-24 00:10:07.161 [INFO][4642] ipam/ipam.go 511: Trying affinity for 192.168.57.0/26 host="172-237-134-153" Nov 24 00:10:07.218490 containerd[1554]: 2025-11-24 00:10:07.163 [INFO][4642] ipam/ipam.go 158: Attempting to load block cidr=192.168.57.0/26 host="172-237-134-153" Nov 24 00:10:07.218490 containerd[1554]: 2025-11-24 00:10:07.166 [INFO][4642] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.57.0/26 host="172-237-134-153" Nov 24 00:10:07.218490 containerd[1554]: 2025-11-24 00:10:07.166 [INFO][4642] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.57.0/26 handle="k8s-pod-network.1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694" host="172-237-134-153" Nov 24 00:10:07.218490 containerd[1554]: 2025-11-24 00:10:07.169 [INFO][4642] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694 Nov 24 00:10:07.218490 containerd[1554]: 2025-11-24 00:10:07.173 [INFO][4642] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.57.0/26 handle="k8s-pod-network.1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694" host="172-237-134-153" Nov 24 00:10:07.218490 containerd[1554]: 2025-11-24 00:10:07.180 [INFO][4642] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.57.8/26] block=192.168.57.0/26 handle="k8s-pod-network.1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694" host="172-237-134-153" Nov 24 00:10:07.218490 containerd[1554]: 2025-11-24 00:10:07.180 [INFO][4642] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.57.8/26] handle="k8s-pod-network.1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694" host="172-237-134-153" Nov 24 00:10:07.218490 
containerd[1554]: 2025-11-24 00:10:07.180 [INFO][4642] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 00:10:07.218490 containerd[1554]: 2025-11-24 00:10:07.180 [INFO][4642] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.57.8/26] IPv6=[] ContainerID="1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694" HandleID="k8s-pod-network.1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694" Workload="172--237--134--153-k8s-coredns--674b8bbfcf--f2zgd-eth0" Nov 24 00:10:07.218915 containerd[1554]: 2025-11-24 00:10:07.186 [INFO][4628] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694" Namespace="kube-system" Pod="coredns-674b8bbfcf-f2zgd" WorkloadEndpoint="172--237--134--153-k8s-coredns--674b8bbfcf--f2zgd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--153-k8s-coredns--674b8bbfcf--f2zgd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0b32a61f-382f-4e7f-bc9f-f8456926fdc1", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-153", ContainerID:"", Pod:"coredns-674b8bbfcf-f2zgd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia0aeda6ef55", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:10:07.218915 containerd[1554]: 2025-11-24 00:10:07.186 [INFO][4628] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.8/32] ContainerID="1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694" Namespace="kube-system" Pod="coredns-674b8bbfcf-f2zgd" WorkloadEndpoint="172--237--134--153-k8s-coredns--674b8bbfcf--f2zgd-eth0" Nov 24 00:10:07.218915 containerd[1554]: 2025-11-24 00:10:07.186 [INFO][4628] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia0aeda6ef55 ContainerID="1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694" Namespace="kube-system" Pod="coredns-674b8bbfcf-f2zgd" WorkloadEndpoint="172--237--134--153-k8s-coredns--674b8bbfcf--f2zgd-eth0" Nov 24 00:10:07.218915 containerd[1554]: 2025-11-24 00:10:07.194 [INFO][4628] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694" Namespace="kube-system" Pod="coredns-674b8bbfcf-f2zgd" WorkloadEndpoint="172--237--134--153-k8s-coredns--674b8bbfcf--f2zgd-eth0" Nov 24 00:10:07.218915 containerd[1554]: 2025-11-24 00:10:07.194 [INFO][4628] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694" Namespace="kube-system" Pod="coredns-674b8bbfcf-f2zgd" WorkloadEndpoint="172--237--134--153-k8s-coredns--674b8bbfcf--f2zgd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--134--153-k8s-coredns--674b8bbfcf--f2zgd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0b32a61f-382f-4e7f-bc9f-f8456926fdc1", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-134-153", ContainerID:"1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694", Pod:"coredns-674b8bbfcf-f2zgd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia0aeda6ef55", MAC:"3a:a8:51:2b:6a:99", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:10:07.218915 containerd[1554]: 2025-11-24 00:10:07.207 [INFO][4628] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694" Namespace="kube-system" Pod="coredns-674b8bbfcf-f2zgd" WorkloadEndpoint="172--237--134--153-k8s-coredns--674b8bbfcf--f2zgd-eth0" Nov 24 00:10:07.223625 kubelet[2727]: E1124 00:10:07.223094 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:10:07.225136 kubelet[2727]: E1124 00:10:07.225114 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found\"" pod="calico-system/goldmane-666569f655-bxlsg" podUID="84972b9a-587c-4cc3-993d-8f4d81fe7493" Nov 24 00:10:07.226893 kubelet[2727]: E1124 00:10:07.226861 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556645b45d-s8fp4" podUID="918b0245-1c27-4194-ac35-a7e394dba332" Nov 24 00:10:07.234704 systemd-networkd[1430]: cali17d5564d997: Gained IPv6LL Nov 24 00:10:07.235346 systemd-networkd[1430]: caliaea97690cbe: Gained IPv6LL Nov 24 00:10:07.258887 kubelet[2727]: I1124 00:10:07.258803 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8zkl6" podStartSLOduration=33.258790828 podStartE2EDuration="33.258790828s" podCreationTimestamp="2025-11-24 00:09:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:10:07.257869575 +0000 UTC m=+38.319740319" watchObservedRunningTime="2025-11-24 00:10:07.258790828 +0000 UTC m=+38.320661572" Nov 24 00:10:07.268120 containerd[1554]: time="2025-11-24T00:10:07.268076241Z" level=info msg="connecting to shim 1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694" address="unix:///run/containerd/s/0a1ffa4f7a00f3d4b2161888f81a77ea1942a943241fe0cb9f980126357e4ac2" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:10:07.318786 systemd[1]: Started cri-containerd-1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694.scope - libcontainer container 1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694. 
Nov 24 00:10:07.399422 containerd[1554]: time="2025-11-24T00:10:07.399318845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f2zgd,Uid:0b32a61f-382f-4e7f-bc9f-f8456926fdc1,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694\"" Nov 24 00:10:07.402213 kubelet[2727]: E1124 00:10:07.400748 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:10:07.408797 containerd[1554]: time="2025-11-24T00:10:07.407984319Z" level=info msg="CreateContainer within sandbox \"1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 00:10:07.423400 containerd[1554]: time="2025-11-24T00:10:07.423379601Z" level=info msg="Container 46316cd28dafb74ecd98252e6a4347e01d045d04411e378df97b31d47ac1246f: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:10:07.426824 containerd[1554]: time="2025-11-24T00:10:07.426803189Z" level=info msg="CreateContainer within sandbox \"1d226a9eba40a4eb1f1f1485b561fe8ae7ac2dce5a12ef568509802e7386c694\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"46316cd28dafb74ecd98252e6a4347e01d045d04411e378df97b31d47ac1246f\"" Nov 24 00:10:07.427547 containerd[1554]: time="2025-11-24T00:10:07.427489227Z" level=info msg="StartContainer for \"46316cd28dafb74ecd98252e6a4347e01d045d04411e378df97b31d47ac1246f\"" Nov 24 00:10:07.428530 containerd[1554]: time="2025-11-24T00:10:07.428492029Z" level=info msg="connecting to shim 46316cd28dafb74ecd98252e6a4347e01d045d04411e378df97b31d47ac1246f" address="unix:///run/containerd/s/0a1ffa4f7a00f3d4b2161888f81a77ea1942a943241fe0cb9f980126357e4ac2" protocol=ttrpc version=3 Nov 24 00:10:07.450617 systemd[1]: Started cri-containerd-46316cd28dafb74ecd98252e6a4347e01d045d04411e378df97b31d47ac1246f.scope - libcontainer container 46316cd28dafb74ecd98252e6a4347e01d045d04411e378df97b31d47ac1246f. 
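The ErrImagePull and ImagePullBackOff entries before and after this point all trace back to the same cause: containerd's manifest fetch for each ghcr.io/flatcar/calico/*:v3.30.4 reference returns 404 Not Found. A minimal sketch of how one might confirm that outside the kubelet, assuming the standard OCI distribution API and ghcr.io's anonymous token endpoint (a hypothetical helper, not part of any tool in this log):

```python
import json
import urllib.error
import urllib.request

def ghcr_manifest_exists(repo: str, tag: str) -> bool:
    """Return True if ghcr.io resolves repo:tag for an anonymous pull (hypothetical helper)."""
    # Registry token flow (assumed): fetch an anonymous pull token for the repository.
    token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    # HEAD the manifest; a 404 here is what containerd reports above as "not found".
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
    )
    try:
        urllib.request.urlopen(req)
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

# Expected False while the tag is missing, matching the 404s logged by containerd.
print(ghcr_manifest_exists("flatcar/calico/csi", "v3.30.4"))
```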
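The recurring "Nameserver limits exceeded" warnings come from kubelet's pod DNS handling: on Linux it passes at most three nameservers through to a pod, so any extra entries in the node's resolv.conf are dropped and only the applied line shown in the log remains. A rough stand-alone illustration of that truncation (not kubelet's actual code; the fourth resolver below is a made-up documentation address added only to trigger the cutoff):

```python
MAX_NAMESERVERS = 3  # kubelet's per-pod nameserver limit on Linux

def applied_nameservers(resolv_conf: str) -> list[str]:
    """Collect nameserver entries and keep only the first MAX_NAMESERVERS of them."""
    servers = []
    for line in resolv_conf.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "nameserver":
            servers.append(fields[1])
    return servers[:MAX_NAMESERVERS]

# The first three addresses are the ones shown in the kubelet warning above;
# 192.0.2.1 is a hypothetical extra resolver, not taken from this node.
conf = "\n".join([
    "nameserver 172.232.0.13",
    "nameserver 172.232.0.22",
    "nameserver 172.232.0.9",
    "nameserver 192.0.2.1",
])
print(applied_nameservers(conf))  # ['172.232.0.13', '172.232.0.22', '172.232.0.9']
```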
Nov 24 00:10:07.502228 containerd[1554]: time="2025-11-24T00:10:07.502176580Z" level=info msg="StartContainer for \"46316cd28dafb74ecd98252e6a4347e01d045d04411e378df97b31d47ac1246f\" returns successfully" Nov 24 00:10:08.129957 systemd-networkd[1430]: caliccd155a4519: Gained IPv6LL Nov 24 00:10:08.227423 kubelet[2727]: E1124 00:10:08.226968 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:10:08.228359 kubelet[2727]: E1124 00:10:08.228291 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:10:08.231148 kubelet[2727]: E1124 00:10:08.231117 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r4dwf" podUID="63922d09-5f16-43ef-bdc3-f819f707f5b0" Nov 24 00:10:08.252333 kubelet[2727]: I1124 00:10:08.252179 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-f2zgd" podStartSLOduration=34.252166713 podStartE2EDuration="34.252166713s" podCreationTimestamp="2025-11-24 00:09:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:10:08.238961734 +0000 UTC m=+39.300832478" watchObservedRunningTime="2025-11-24 00:10:08.252166713 +0000 UTC m=+39.314037457" Nov 24 00:10:08.577808 systemd-networkd[1430]: calia0aeda6ef55: Gained IPv6LL Nov 24 00:10:09.231060 kubelet[2727]: E1124 00:10:09.231024 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:10:09.232179 kubelet[2727]: E1124 00:10:09.232153 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:10:10.232126 kubelet[2727]: E1124 00:10:10.232096 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:10:12.038399 containerd[1554]: time="2025-11-24T00:10:12.037683939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:10:12.164102 containerd[1554]: time="2025-11-24T00:10:12.164057457Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 
00:10:12.164943 containerd[1554]: time="2025-11-24T00:10:12.164911524Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:10:12.165014 containerd[1554]: time="2025-11-24T00:10:12.164978213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:10:12.165214 kubelet[2727]: E1124 00:10:12.165183 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:10:12.165710 kubelet[2727]: E1124 00:10:12.165225 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:10:12.165710 kubelet[2727]: E1124 00:10:12.165329 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:29e504daf9ac4811b8d5b3cd1c6c6483,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b5zqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58b465668f-57pbc_calico-system(2a030f5f-0015-44b8-b116-a472da00a019): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:12.167603 containerd[1554]: time="2025-11-24T00:10:12.167551702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:10:12.292126 containerd[1554]: 
time="2025-11-24T00:10:12.291926712Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:12.293168 containerd[1554]: time="2025-11-24T00:10:12.293085933Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:10:12.293168 containerd[1554]: time="2025-11-24T00:10:12.293128233Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:10:12.293625 kubelet[2727]: E1124 00:10:12.293573 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:10:12.293625 kubelet[2727]: E1124 00:10:12.293620 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:10:12.293778 kubelet[2727]: E1124 00:10:12.293737 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b5zqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-58b465668f-57pbc_calico-system(2a030f5f-0015-44b8-b116-a472da00a019): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:12.295220 kubelet[2727]: E1124 00:10:12.295187 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58b465668f-57pbc" podUID="2a030f5f-0015-44b8-b116-a472da00a019" Nov 24 00:10:12.668270 kubelet[2727]: I1124 00:10:12.667730 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 00:10:12.669085 kubelet[2727]: E1124 00:10:12.669051 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:10:13.238850 kubelet[2727]: E1124 00:10:13.238811 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:10:17.036145 containerd[1554]: time="2025-11-24T00:10:17.035703258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:10:17.161058 containerd[1554]: time="2025-11-24T00:10:17.160992126Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:17.162299 containerd[1554]: time="2025-11-24T00:10:17.162239608Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:10:17.162404 containerd[1554]: time="2025-11-24T00:10:17.162268348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:10:17.162705 kubelet[2727]: E1124 00:10:17.162650 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:10:17.163662 kubelet[2727]: E1124 00:10:17.162728 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:10:17.163662 kubelet[2727]: E1124 00:10:17.162859 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ktsbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556645b45d-t4ct5_calico-apiserver(b92dcaad-cbde-40da-94a7-6e0bac08ac02): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:17.164369 kubelet[2727]: E1124 00:10:17.164324 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556645b45d-t4ct5" podUID="b92dcaad-cbde-40da-94a7-6e0bac08ac02" Nov 24 00:10:20.036024 containerd[1554]: time="2025-11-24T00:10:20.035959237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:10:20.162280 containerd[1554]: time="2025-11-24T00:10:20.162218310Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:20.163139 containerd[1554]: time="2025-11-24T00:10:20.163098498Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:10:20.163218 containerd[1554]: time="2025-11-24T00:10:20.163181057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:10:20.163408 kubelet[2727]: E1124 00:10:20.163369 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:10:20.164222 kubelet[2727]: E1124 00:10:20.163422 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:10:20.164222 kubelet[2727]: E1124 00:10:20.163935 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rzlp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bxlsg_calico-system(84972b9a-587c-4cc3-993d-8f4d81fe7493): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:20.165122 kubelet[2727]: E1124 00:10:20.165084 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxlsg" podUID="84972b9a-587c-4cc3-993d-8f4d81fe7493" Nov 24 00:10:21.037575 containerd[1554]: time="2025-11-24T00:10:21.037106927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:10:21.166356 containerd[1554]: time="2025-11-24T00:10:21.166308701Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:21.167184 containerd[1554]: time="2025-11-24T00:10:21.167100650Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:10:21.167184 containerd[1554]: time="2025-11-24T00:10:21.167152260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:10:21.167517 kubelet[2727]: E1124 00:10:21.167276 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:10:21.167517 kubelet[2727]: E1124 00:10:21.167312 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:10:21.167517 
kubelet[2727]: E1124 00:10:21.167423 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wvch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-555b4c874c-4kfgm_calico-system(79029465-26e2-4032-b64d-59a7fac9f008): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:21.169056 kubelet[2727]: E1124 00:10:21.168640 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-555b4c874c-4kfgm" podUID="79029465-26e2-4032-b64d-59a7fac9f008" Nov 
24 00:10:23.036424 containerd[1554]: time="2025-11-24T00:10:23.036365639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:10:23.168494 containerd[1554]: time="2025-11-24T00:10:23.168011286Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:23.169052 containerd[1554]: time="2025-11-24T00:10:23.169020403Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:10:23.169309 containerd[1554]: time="2025-11-24T00:10:23.169116442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:10:23.170434 kubelet[2727]: E1124 00:10:23.169353 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:10:23.170434 kubelet[2727]: E1124 00:10:23.169529 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:10:23.170434 kubelet[2727]: E1124 00:10:23.170027 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xjcrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r4dwf_calico-system(63922d09-5f16-43ef-bdc3-f819f707f5b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:23.172925 containerd[1554]: time="2025-11-24T00:10:23.170174739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:10:23.310281 containerd[1554]: time="2025-11-24T00:10:23.309787026Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:23.311610 containerd[1554]: time="2025-11-24T00:10:23.311538244Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:10:23.311789 containerd[1554]: time="2025-11-24T00:10:23.311619703Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:10:23.311878 kubelet[2727]: E1124 00:10:23.311805 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:10:23.311878 kubelet[2727]: E1124 00:10:23.311872 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:10:23.312364 kubelet[2727]: E1124 00:10:23.312267 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zw82p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556645b45d-s8fp4_calico-apiserver(918b0245-1c27-4194-ac35-a7e394dba332): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:23.312591 containerd[1554]: time="2025-11-24T00:10:23.312336754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:10:23.314096 kubelet[2727]: E1124 00:10:23.313932 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556645b45d-s8fp4" podUID="918b0245-1c27-4194-ac35-a7e394dba332" Nov 24 00:10:23.448410 containerd[1554]: time="2025-11-24T00:10:23.448350745Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:23.449397 containerd[1554]: time="2025-11-24T00:10:23.449357723Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:10:23.449564 containerd[1554]: time="2025-11-24T00:10:23.449398952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:10:23.449729 kubelet[2727]: E1124 00:10:23.449687 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:10:23.449823 kubelet[2727]: E1124 00:10:23.449743 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:10:23.449937 kubelet[2727]: E1124 00:10:23.449868 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xjcrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r4dwf_calico-system(63922d09-5f16-43ef-bdc3-f819f707f5b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:23.451565 kubelet[2727]: E1124 00:10:23.451521 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r4dwf" podUID="63922d09-5f16-43ef-bdc3-f819f707f5b0" Nov 24 00:10:26.037711 kubelet[2727]: E1124 00:10:26.037558 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58b465668f-57pbc" podUID="2a030f5f-0015-44b8-b116-a472da00a019" Nov 24 00:10:28.035578 kubelet[2727]: E1124 00:10:28.035410 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556645b45d-t4ct5" podUID="b92dcaad-cbde-40da-94a7-6e0bac08ac02" Nov 24 00:10:33.037633 kubelet[2727]: E1124 00:10:33.036402 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxlsg" podUID="84972b9a-587c-4cc3-993d-8f4d81fe7493" Nov 24 00:10:33.037633 kubelet[2727]: E1124 00:10:33.037533 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-555b4c874c-4kfgm" podUID="79029465-26e2-4032-b64d-59a7fac9f008" Nov 24 00:10:34.036881 kubelet[2727]: E1124 00:10:34.036825 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r4dwf" podUID="63922d09-5f16-43ef-bdc3-f819f707f5b0" Nov 24 00:10:38.035898 kubelet[2727]: E1124 
00:10:38.035842 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556645b45d-s8fp4" podUID="918b0245-1c27-4194-ac35-a7e394dba332" Nov 24 00:10:39.039076 containerd[1554]: time="2025-11-24T00:10:39.038494307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:10:39.165974 containerd[1554]: time="2025-11-24T00:10:39.165927265Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:39.166966 containerd[1554]: time="2025-11-24T00:10:39.166935038Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:10:39.167061 containerd[1554]: time="2025-11-24T00:10:39.166978558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:10:39.167176 kubelet[2727]: E1124 00:10:39.167145 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:10:39.167667 kubelet[2727]: E1124 00:10:39.167199 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:10:39.167667 kubelet[2727]: E1124 00:10:39.167324 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ktsbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556645b45d-t4ct5_calico-apiserver(b92dcaad-cbde-40da-94a7-6e0bac08ac02): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:39.168552 kubelet[2727]: E1124 00:10:39.168500 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556645b45d-t4ct5" podUID="b92dcaad-cbde-40da-94a7-6e0bac08ac02" Nov 24 00:10:41.037007 containerd[1554]: time="2025-11-24T00:10:41.036721720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:10:41.169040 containerd[1554]: time="2025-11-24T00:10:41.168992874Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:41.170306 containerd[1554]: time="2025-11-24T00:10:41.170230536Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:10:41.170504 containerd[1554]: time="2025-11-24T00:10:41.170295456Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:10:41.170852 kubelet[2727]: E1124 00:10:41.170805 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:10:41.172023 kubelet[2727]: E1124 00:10:41.170977 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:10:41.172023 kubelet[2727]: E1124 00:10:41.171644 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:29e504daf9ac4811b8d5b3cd1c6c6483,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b5zqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58b465668f-57pbc_calico-system(2a030f5f-0015-44b8-b116-a472da00a019): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:41.174452 containerd[1554]: time="2025-11-24T00:10:41.174334823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:10:41.313690 containerd[1554]: time="2025-11-24T00:10:41.313209648Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:41.314929 containerd[1554]: time="2025-11-24T00:10:41.314825691Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:10:41.314929 containerd[1554]: time="2025-11-24T00:10:41.314903581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:10:41.315169 kubelet[2727]: E1124 00:10:41.315118 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:10:41.315298 kubelet[2727]: E1124 00:10:41.315274 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:10:41.315909 kubelet[2727]: E1124 00:10:41.315864 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b5zqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58b465668f-57pbc_calico-system(2a030f5f-0015-44b8-b116-a472da00a019): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:41.317455 kubelet[2727]: E1124 00:10:41.317414 2727 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58b465668f-57pbc" podUID="2a030f5f-0015-44b8-b116-a472da00a019" Nov 24 00:10:45.040618 containerd[1554]: time="2025-11-24T00:10:45.040331335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:10:45.175906 containerd[1554]: time="2025-11-24T00:10:45.175829864Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:45.177018 containerd[1554]: time="2025-11-24T00:10:45.176949175Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:10:45.177378 containerd[1554]: time="2025-11-24T00:10:45.177351505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:10:45.178088 kubelet[2727]: E1124 00:10:45.177777 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:10:45.178088 kubelet[2727]: E1124 00:10:45.177844 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:10:45.178088 kubelet[2727]: E1124 00:10:45.178049 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xjcrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r4dwf_calico-system(63922d09-5f16-43ef-bdc3-f819f707f5b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:45.178602 containerd[1554]: time="2025-11-24T00:10:45.178297645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:10:45.320157 containerd[1554]: time="2025-11-24T00:10:45.319625797Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:45.320919 containerd[1554]: time="2025-11-24T00:10:45.320873748Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:10:45.321031 containerd[1554]: time="2025-11-24T00:10:45.320937108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:10:45.321064 kubelet[2727]: E1124 00:10:45.321037 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:10:45.321115 kubelet[2727]: E1124 00:10:45.321074 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:10:45.321302 kubelet[2727]: E1124 00:10:45.321246 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rzlp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bxlsg_calico-system(84972b9a-587c-4cc3-993d-8f4d81fe7493): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:45.322155 containerd[1554]: time="2025-11-24T00:10:45.322093338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:10:45.323118 kubelet[2727]: E1124 00:10:45.323058 2727 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxlsg" podUID="84972b9a-587c-4cc3-993d-8f4d81fe7493" Nov 24 00:10:45.447241 containerd[1554]: time="2025-11-24T00:10:45.447176551Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:45.448246 containerd[1554]: time="2025-11-24T00:10:45.448165581Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:10:45.448364 containerd[1554]: time="2025-11-24T00:10:45.448206151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:10:45.448924 kubelet[2727]: E1124 00:10:45.448587 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:10:45.448924 kubelet[2727]: E1124 00:10:45.448660 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:10:45.448924 kubelet[2727]: E1124 00:10:45.448838 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xjcrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r4dwf_calico-system(63922d09-5f16-43ef-bdc3-f819f707f5b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:45.451095 kubelet[2727]: E1124 00:10:45.451057 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r4dwf" podUID="63922d09-5f16-43ef-bdc3-f819f707f5b0" Nov 24 00:10:47.036973 containerd[1554]: time="2025-11-24T00:10:47.036709830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:10:47.170297 containerd[1554]: time="2025-11-24T00:10:47.170218216Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:47.171539 containerd[1554]: time="2025-11-24T00:10:47.171393246Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:10:47.171728 containerd[1554]: time="2025-11-24T00:10:47.171498056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:10:47.172124 kubelet[2727]: E1124 00:10:47.172060 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:10:47.173006 kubelet[2727]: E1124 00:10:47.172248 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:10:47.173006 kubelet[2727]: E1124 00:10:47.172819 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wvch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-555b4c874c-4kfgm_calico-system(79029465-26e2-4032-b64d-59a7fac9f008): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:47.173970 kubelet[2727]: E1124 00:10:47.173939 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-555b4c874c-4kfgm" podUID="79029465-26e2-4032-b64d-59a7fac9f008" Nov 24 00:10:49.038435 containerd[1554]: time="2025-11-24T00:10:49.038376320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:10:49.040103 kubelet[2727]: E1124 00:10:49.039700 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:10:49.168424 containerd[1554]: time="2025-11-24T00:10:49.168326070Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:49.170049 containerd[1554]: time="2025-11-24T00:10:49.169848079Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:10:49.170242 containerd[1554]: time="2025-11-24T00:10:49.170015229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:10:49.170575 kubelet[2727]: E1124 00:10:49.170527 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:10:49.170747 kubelet[2727]: E1124 00:10:49.170676 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:10:49.171088 kubelet[2727]: E1124 00:10:49.171048 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zw82p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556645b45d-s8fp4_calico-apiserver(918b0245-1c27-4194-ac35-a7e394dba332): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:49.172354 kubelet[2727]: E1124 00:10:49.172318 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556645b45d-s8fp4" podUID="918b0245-1c27-4194-ac35-a7e394dba332" Nov 24 00:10:52.035544 kubelet[2727]: E1124 00:10:52.035380 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556645b45d-t4ct5" podUID="b92dcaad-cbde-40da-94a7-6e0bac08ac02" Nov 24 00:10:52.037112 kubelet[2727]: E1124 00:10:52.037080 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58b465668f-57pbc" podUID="2a030f5f-0015-44b8-b116-a472da00a019" Nov 24 00:10:59.038148 kubelet[2727]: E1124 00:10:59.037869 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-555b4c874c-4kfgm" podUID="79029465-26e2-4032-b64d-59a7fac9f008" Nov 24 00:11:00.034373 kubelet[2727]: E1124 00:11:00.034254 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:11:00.036535 kubelet[2727]: E1124 00:11:00.035662 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxlsg" podUID="84972b9a-587c-4cc3-993d-8f4d81fe7493" Nov 24 00:11:00.039801 kubelet[2727]: E1124 00:11:00.039767 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r4dwf" podUID="63922d09-5f16-43ef-bdc3-f819f707f5b0" Nov 24 00:11:03.036570 kubelet[2727]: E1124 00:11:03.035923 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556645b45d-s8fp4" podUID="918b0245-1c27-4194-ac35-a7e394dba332" Nov 24 00:11:05.035499 kubelet[2727]: E1124 00:11:05.034726 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:11:06.042539 kubelet[2727]: E1124 00:11:06.041554 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58b465668f-57pbc" podUID="2a030f5f-0015-44b8-b116-a472da00a019" Nov 24 00:11:07.042207 kubelet[2727]: E1124 00:11:07.041787 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556645b45d-t4ct5" podUID="b92dcaad-cbde-40da-94a7-6e0bac08ac02" Nov 24 00:11:09.048120 kubelet[2727]: E1124 00:11:09.046431 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:11:11.038598 kubelet[2727]: E1124 00:11:11.038519 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r4dwf" podUID="63922d09-5f16-43ef-bdc3-f819f707f5b0" Nov 24 00:11:11.042644 kubelet[2727]: E1124 00:11:11.042593 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-555b4c874c-4kfgm" podUID="79029465-26e2-4032-b64d-59a7fac9f008" Nov 24 00:11:12.034795 kubelet[2727]: E1124 00:11:12.034749 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:11:14.038224 kubelet[2727]: E1124 00:11:14.036478 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556645b45d-s8fp4" podUID="918b0245-1c27-4194-ac35-a7e394dba332" Nov 24 00:11:14.038224 kubelet[2727]: E1124 00:11:14.036585 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxlsg" podUID="84972b9a-587c-4cc3-993d-8f4d81fe7493" Nov 24 00:11:17.035324 kubelet[2727]: E1124 00:11:17.035285 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:11:18.037137 kubelet[2727]: E1124 00:11:18.036682 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556645b45d-t4ct5" podUID="b92dcaad-cbde-40da-94a7-6e0bac08ac02" Nov 24 00:11:20.708328 systemd[1]: Started sshd@7-172.237.134.153:22-147.75.109.163:51196.service - OpenSSH per-connection server daemon (147.75.109.163:51196). Nov 24 00:11:21.035656 kubelet[2727]: E1124 00:11:21.035552 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58b465668f-57pbc" podUID="2a030f5f-0015-44b8-b116-a472da00a019" Nov 24 00:11:21.038704 sshd[4884]: Accepted publickey for core from 147.75.109.163 port 51196 ssh2: RSA SHA256:Pchp35vWTs9Zdpru8qSkjaQDXNtKYOQijK11UPKUQY8 Nov 24 00:11:21.042129 sshd-session[4884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:11:21.050677 systemd-logind[1530]: New session 8 of user core. Nov 24 00:11:21.055707 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 24 00:11:21.389500 sshd[4887]: Connection closed by 147.75.109.163 port 51196 Nov 24 00:11:21.391878 sshd-session[4884]: pam_unix(sshd:session): session closed for user core Nov 24 00:11:21.397553 systemd[1]: sshd@7-172.237.134.153:22-147.75.109.163:51196.service: Deactivated successfully. Nov 24 00:11:21.397954 systemd-logind[1530]: Session 8 logged out. Waiting for processes to exit. Nov 24 00:11:21.403049 systemd[1]: session-8.scope: Deactivated successfully. Nov 24 00:11:21.408294 systemd-logind[1530]: Removed session 8. 
Nov 24 00:11:24.034351 kubelet[2727]: E1124 00:11:24.034303 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:11:25.037701 kubelet[2727]: E1124 00:11:25.036982 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-555b4c874c-4kfgm" podUID="79029465-26e2-4032-b64d-59a7fac9f008" Nov 24 00:11:25.040362 kubelet[2727]: E1124 00:11:25.040324 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxlsg" podUID="84972b9a-587c-4cc3-993d-8f4d81fe7493" Nov 24 00:11:26.037523 containerd[1554]: time="2025-11-24T00:11:26.036880645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:11:26.178089 containerd[1554]: time="2025-11-24T00:11:26.177908499Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:11:26.179117 containerd[1554]: time="2025-11-24T00:11:26.178985133Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:11:26.179117 containerd[1554]: time="2025-11-24T00:11:26.179097792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:11:26.180501 kubelet[2727]: E1124 00:11:26.179299 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:11:26.180501 kubelet[2727]: E1124 00:11:26.179338 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:11:26.180501 kubelet[2727]: E1124 00:11:26.179474 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xjcrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r4dwf_calico-system(63922d09-5f16-43ef-bdc3-f819f707f5b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:11:26.181941 containerd[1554]: time="2025-11-24T00:11:26.181809207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:11:26.328455 containerd[1554]: time="2025-11-24T00:11:26.328278529Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:11:26.330211 containerd[1554]: time="2025-11-24T00:11:26.329765571Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:11:26.330211 containerd[1554]: time="2025-11-24T00:11:26.329842060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:11:26.330392 kubelet[2727]: E1124 00:11:26.330336 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:11:26.330453 kubelet[2727]: E1124 00:11:26.330409 2727 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:11:26.330608 kubelet[2727]: E1124 00:11:26.330544 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xjcrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r4dwf_calico-system(63922d09-5f16-43ef-bdc3-f819f707f5b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:11:26.331890 kubelet[2727]: E1124 00:11:26.331860 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-r4dwf" podUID="63922d09-5f16-43ef-bdc3-f819f707f5b0" Nov 24 00:11:26.452739 systemd[1]: Started sshd@8-172.237.134.153:22-147.75.109.163:51200.service - OpenSSH per-connection server daemon (147.75.109.163:51200). Nov 24 00:11:26.798873 sshd[4901]: Accepted publickey for core from 147.75.109.163 port 51200 ssh2: RSA SHA256:Pchp35vWTs9Zdpru8qSkjaQDXNtKYOQijK11UPKUQY8 Nov 24 00:11:26.800348 sshd-session[4901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:11:26.805401 systemd-logind[1530]: New session 9 of user core. Nov 24 00:11:26.810592 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 24 00:11:27.119556 sshd[4904]: Connection closed by 147.75.109.163 port 51200 Nov 24 00:11:27.119968 sshd-session[4901]: pam_unix(sshd:session): session closed for user core Nov 24 00:11:27.128531 systemd-logind[1530]: Session 9 logged out. Waiting for processes to exit. Nov 24 00:11:27.129351 systemd[1]: sshd@8-172.237.134.153:22-147.75.109.163:51200.service: Deactivated successfully. Nov 24 00:11:27.134389 systemd[1]: session-9.scope: Deactivated successfully. Nov 24 00:11:27.138121 systemd-logind[1530]: Removed session 9. Nov 24 00:11:27.178848 systemd[1]: Started sshd@9-172.237.134.153:22-147.75.109.163:51210.service - OpenSSH per-connection server daemon (147.75.109.163:51210). Nov 24 00:11:27.499811 sshd[4917]: Accepted publickey for core from 147.75.109.163 port 51210 ssh2: RSA SHA256:Pchp35vWTs9Zdpru8qSkjaQDXNtKYOQijK11UPKUQY8 Nov 24 00:11:27.501589 sshd-session[4917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:11:27.507551 systemd-logind[1530]: New session 10 of user core. Nov 24 00:11:27.511596 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 24 00:11:27.830558 sshd[4922]: Connection closed by 147.75.109.163 port 51210 Nov 24 00:11:27.831086 sshd-session[4917]: pam_unix(sshd:session): session closed for user core Nov 24 00:11:27.836663 systemd[1]: sshd@9-172.237.134.153:22-147.75.109.163:51210.service: Deactivated successfully. Nov 24 00:11:27.841033 systemd[1]: session-10.scope: Deactivated successfully. Nov 24 00:11:27.843084 systemd-logind[1530]: Session 10 logged out. Waiting for processes to exit. Nov 24 00:11:27.846769 systemd-logind[1530]: Removed session 10. Nov 24 00:11:27.888656 systemd[1]: Started sshd@10-172.237.134.153:22-147.75.109.163:51218.service - OpenSSH per-connection server daemon (147.75.109.163:51218). Nov 24 00:11:28.034780 kubelet[2727]: E1124 00:11:28.034486 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556645b45d-s8fp4" podUID="918b0245-1c27-4194-ac35-a7e394dba332" Nov 24 00:11:28.228624 sshd[4932]: Accepted publickey for core from 147.75.109.163 port 51218 ssh2: RSA SHA256:Pchp35vWTs9Zdpru8qSkjaQDXNtKYOQijK11UPKUQY8 Nov 24 00:11:28.230745 sshd-session[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:11:28.236319 systemd-logind[1530]: New session 11 of user core. 
Nov 24 00:11:28.243894 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 24 00:11:28.540533 sshd[4935]: Connection closed by 147.75.109.163 port 51218 Nov 24 00:11:28.541852 sshd-session[4932]: pam_unix(sshd:session): session closed for user core Nov 24 00:11:28.547664 systemd[1]: sshd@10-172.237.134.153:22-147.75.109.163:51218.service: Deactivated successfully. Nov 24 00:11:28.550574 systemd[1]: session-11.scope: Deactivated successfully. Nov 24 00:11:28.551625 systemd-logind[1530]: Session 11 logged out. Waiting for processes to exit. Nov 24 00:11:28.554265 systemd-logind[1530]: Removed session 11. Nov 24 00:11:33.041095 containerd[1554]: time="2025-11-24T00:11:33.040718447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:11:33.180799 containerd[1554]: time="2025-11-24T00:11:33.180744981Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:11:33.181727 containerd[1554]: time="2025-11-24T00:11:33.181686245Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:11:33.181799 containerd[1554]: time="2025-11-24T00:11:33.181767174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:11:33.182644 kubelet[2727]: E1124 00:11:33.182582 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:11:33.182644 kubelet[2727]: E1124 00:11:33.182646 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:11:33.183685 kubelet[2727]: E1124 00:11:33.182768 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ktsbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556645b45d-t4ct5_calico-apiserver(b92dcaad-cbde-40da-94a7-6e0bac08ac02): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:11:33.184218 kubelet[2727]: E1124 00:11:33.184178 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556645b45d-t4ct5" podUID="b92dcaad-cbde-40da-94a7-6e0bac08ac02" Nov 24 00:11:33.603447 systemd[1]: Started sshd@11-172.237.134.153:22-147.75.109.163:60940.service - OpenSSH per-connection server daemon (147.75.109.163:60940). Nov 24 00:11:33.931278 sshd[4973]: Accepted publickey for core from 147.75.109.163 port 60940 ssh2: RSA SHA256:Pchp35vWTs9Zdpru8qSkjaQDXNtKYOQijK11UPKUQY8 Nov 24 00:11:33.933811 sshd-session[4973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:11:33.943527 systemd-logind[1530]: New session 12 of user core. Nov 24 00:11:33.947599 systemd[1]: Started session-12.scope - Session 12 of User core. 
Nov 24 00:11:34.259796 sshd[4976]: Connection closed by 147.75.109.163 port 60940 Nov 24 00:11:34.260663 sshd-session[4973]: pam_unix(sshd:session): session closed for user core Nov 24 00:11:34.266276 systemd-logind[1530]: Session 12 logged out. Waiting for processes to exit. Nov 24 00:11:34.267115 systemd[1]: sshd@11-172.237.134.153:22-147.75.109.163:60940.service: Deactivated successfully. Nov 24 00:11:34.269675 systemd[1]: session-12.scope: Deactivated successfully. Nov 24 00:11:34.273130 systemd-logind[1530]: Removed session 12. Nov 24 00:11:35.039854 containerd[1554]: time="2025-11-24T00:11:35.039819268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:11:35.174671 containerd[1554]: time="2025-11-24T00:11:35.174613428Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:11:35.176014 containerd[1554]: time="2025-11-24T00:11:35.175811310Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:11:35.176014 containerd[1554]: time="2025-11-24T00:11:35.175958159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:11:35.176330 kubelet[2727]: E1124 00:11:35.176271 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:11:35.176773 kubelet[2727]: E1124 00:11:35.176336 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:11:35.176773 kubelet[2727]: E1124 00:11:35.176477 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:29e504daf9ac4811b8d5b3cd1c6c6483,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b5zqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58b465668f-57pbc_calico-system(2a030f5f-0015-44b8-b116-a472da00a019): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:11:35.179416 containerd[1554]: time="2025-11-24T00:11:35.179375858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:11:35.306788 containerd[1554]: time="2025-11-24T00:11:35.306375787Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:11:35.307714 containerd[1554]: time="2025-11-24T00:11:35.307650309Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:11:35.307810 containerd[1554]: time="2025-11-24T00:11:35.307680009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:11:35.308044 kubelet[2727]: E1124 00:11:35.307997 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:11:35.308123 kubelet[2727]: E1124 00:11:35.308054 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:11:35.308253 kubelet[2727]: E1124 00:11:35.308187 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b5zqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-58b465668f-57pbc_calico-system(2a030f5f-0015-44b8-b116-a472da00a019): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:11:35.309973 kubelet[2727]: E1124 00:11:35.309888 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58b465668f-57pbc" podUID="2a030f5f-0015-44b8-b116-a472da00a019" Nov 24 00:11:37.035442 containerd[1554]: time="2025-11-24T00:11:37.035401820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:11:37.187184 containerd[1554]: time="2025-11-24T00:11:37.187120396Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 
00:11:37.188149 containerd[1554]: time="2025-11-24T00:11:37.188095619Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:11:37.188245 containerd[1554]: time="2025-11-24T00:11:37.188181319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:11:37.188374 kubelet[2727]: E1124 00:11:37.188333 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:11:37.188727 kubelet[2727]: E1124 00:11:37.188387 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:11:37.188727 kubelet[2727]: E1124 00:11:37.188532 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rzlp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bxlsg_calico-system(84972b9a-587c-4cc3-993d-8f4d81fe7493): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:11:37.189968 kubelet[2727]: E1124 00:11:37.189940 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxlsg" podUID="84972b9a-587c-4cc3-993d-8f4d81fe7493" Nov 24 00:11:38.034128 kubelet[2727]: E1124 00:11:38.034077 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:11:39.039627 containerd[1554]: time="2025-11-24T00:11:39.039567981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:11:39.041456 kubelet[2727]: E1124 00:11:39.041411 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r4dwf" podUID="63922d09-5f16-43ef-bdc3-f819f707f5b0" Nov 24 00:11:39.183228 containerd[1554]: time="2025-11-24T00:11:39.183004015Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:11:39.185587 containerd[1554]: time="2025-11-24T00:11:39.184287087Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:11:39.185691 containerd[1554]: time="2025-11-24T00:11:39.184348636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:11:39.185807 kubelet[2727]: E1124 00:11:39.185721 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:11:39.185910 kubelet[2727]: E1124 00:11:39.185830 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:11:39.186431 kubelet[2727]: E1124 00:11:39.186014 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wvch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-555b4c874c-4kfgm_calico-system(79029465-26e2-4032-b64d-59a7fac9f008): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:11:39.187784 kubelet[2727]: E1124 00:11:39.187740 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-555b4c874c-4kfgm" podUID="79029465-26e2-4032-b64d-59a7fac9f008" Nov 24 00:11:39.325282 systemd[1]: Started sshd@12-172.237.134.153:22-147.75.109.163:60954.service - OpenSSH per-connection server daemon (147.75.109.163:60954). Nov 24 00:11:39.670698 sshd[4990]: Accepted publickey for core from 147.75.109.163 port 60954 ssh2: RSA SHA256:Pchp35vWTs9Zdpru8qSkjaQDXNtKYOQijK11UPKUQY8 Nov 24 00:11:39.674445 sshd-session[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:11:39.681214 systemd-logind[1530]: New session 13 of user core. Nov 24 00:11:39.685635 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 24 00:11:39.983022 sshd[4993]: Connection closed by 147.75.109.163 port 60954 Nov 24 00:11:39.983665 sshd-session[4990]: pam_unix(sshd:session): session closed for user core Nov 24 00:11:39.989275 systemd-logind[1530]: Session 13 logged out. Waiting for processes to exit. Nov 24 00:11:39.990362 systemd[1]: sshd@12-172.237.134.153:22-147.75.109.163:60954.service: Deactivated successfully. Nov 24 00:11:39.993231 systemd[1]: session-13.scope: Deactivated successfully. Nov 24 00:11:39.995669 systemd-logind[1530]: Removed session 13. 
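Between the real PullImage attempts, the repeated "Back-off pulling image" entries are kubelet declining to retry yet: each failed pull roughly doubles the wait before the next attempt, starting around 10 seconds and capped at 5 minutes, which is why the kube-controllers pull above is only re-attempted every few minutes. A small sketch of that retry schedule (the initial delay and cap are kubelet's documented defaults, assumed here rather than read from this node's configuration):

def pull_backoff_schedule(initial: float = 10.0, cap: float = 300.0, attempts: int = 8):
    """Yield the delay kubelet waits before each successive image pull retry."""
    delay = initial
    for _ in range(attempts):
        yield delay
        delay = min(delay * 2, cap)

# [10, 20, 40, 80, 160, 300, 300, 300] seconds between retries
print([int(d) for d in pull_backoff_schedule()])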
Nov 24 00:11:40.039486 containerd[1554]: time="2025-11-24T00:11:40.039169600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:11:40.175256 containerd[1554]: time="2025-11-24T00:11:40.175049738Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:11:40.176354 containerd[1554]: time="2025-11-24T00:11:40.176325519Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:11:40.176587 containerd[1554]: time="2025-11-24T00:11:40.176355849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:11:40.176634 kubelet[2727]: E1124 00:11:40.176598 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:11:40.176908 kubelet[2727]: E1124 00:11:40.176646 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:11:40.176935 kubelet[2727]: E1124 00:11:40.176899 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zw82p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556645b45d-s8fp4_calico-apiserver(918b0245-1c27-4194-ac35-a7e394dba332): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:11:40.178285 kubelet[2727]: E1124 00:11:40.178156 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556645b45d-s8fp4" podUID="918b0245-1c27-4194-ac35-a7e394dba332" Nov 24 00:11:45.044880 systemd[1]: Started sshd@13-172.237.134.153:22-147.75.109.163:55230.service - OpenSSH per-connection server daemon (147.75.109.163:55230). Nov 24 00:11:45.386123 sshd[5030]: Accepted publickey for core from 147.75.109.163 port 55230 ssh2: RSA SHA256:Pchp35vWTs9Zdpru8qSkjaQDXNtKYOQijK11UPKUQY8 Nov 24 00:11:45.389693 sshd-session[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:11:45.399221 systemd-logind[1530]: New session 14 of user core. Nov 24 00:11:45.403679 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 24 00:11:45.694710 sshd[5033]: Connection closed by 147.75.109.163 port 55230 Nov 24 00:11:45.697652 sshd-session[5030]: pam_unix(sshd:session): session closed for user core Nov 24 00:11:45.702586 systemd-logind[1530]: Session 14 logged out. Waiting for processes to exit. Nov 24 00:11:45.704728 systemd[1]: sshd@13-172.237.134.153:22-147.75.109.163:55230.service: Deactivated successfully. Nov 24 00:11:45.708138 systemd[1]: session-14.scope: Deactivated successfully. Nov 24 00:11:45.710398 systemd-logind[1530]: Removed session 14. Nov 24 00:11:45.756321 systemd[1]: Started sshd@14-172.237.134.153:22-147.75.109.163:55244.service - OpenSSH per-connection server daemon (147.75.109.163:55244). Nov 24 00:11:46.096276 sshd[5045]: Accepted publickey for core from 147.75.109.163 port 55244 ssh2: RSA SHA256:Pchp35vWTs9Zdpru8qSkjaQDXNtKYOQijK11UPKUQY8 Nov 24 00:11:46.097975 sshd-session[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:11:46.103934 systemd-logind[1530]: New session 15 of user core. Nov 24 00:11:46.108592 systemd[1]: Started session-15.scope - Session 15 of User core. 
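The calico-apiserver container spec dumped above declares its readiness probe as an HTTPS GET of /readyz on port 5443 with a 5-second timeout; because the image never resolves, the container never starts and the probe never fires. A sketch of the equivalent check, with a hypothetical pod IP standing in for the address the pod never received, and certificate verification skipped the way kubelet skips it for HTTPS probes:

import ssl
import urllib.error
import urllib.request

POD_IP = "10.244.0.50"  # hypothetical; the pod in this log never got a running container

# Mirror the logged probe: HTTPS GET /readyz on port 5443, 5 s timeout,
# no certificate verification (kubelet's HTTPS probes do not verify certs).
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

try:
    with urllib.request.urlopen(f"https://{POD_IP}:5443/readyz", timeout=5, context=ctx) as resp:
        print("ready" if resp.status < 400 else f"not ready: HTTP {resp.status}")
except (urllib.error.URLError, OSError) as err:
    print("not ready:", err)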
Nov 24 00:11:46.777439 sshd[5048]: Connection closed by 147.75.109.163 port 55244 Nov 24 00:11:46.778212 sshd-session[5045]: pam_unix(sshd:session): session closed for user core Nov 24 00:11:46.782560 systemd[1]: sshd@14-172.237.134.153:22-147.75.109.163:55244.service: Deactivated successfully. Nov 24 00:11:46.785959 systemd[1]: session-15.scope: Deactivated successfully. Nov 24 00:11:46.789984 systemd-logind[1530]: Session 15 logged out. Waiting for processes to exit. Nov 24 00:11:46.791941 systemd-logind[1530]: Removed session 15. Nov 24 00:11:46.835651 systemd[1]: Started sshd@15-172.237.134.153:22-147.75.109.163:55258.service - OpenSSH per-connection server daemon (147.75.109.163:55258). Nov 24 00:11:47.039131 kubelet[2727]: E1124 00:11:47.038598 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556645b45d-t4ct5" podUID="b92dcaad-cbde-40da-94a7-6e0bac08ac02" Nov 24 00:11:47.170012 sshd[5058]: Accepted publickey for core from 147.75.109.163 port 55258 ssh2: RSA SHA256:Pchp35vWTs9Zdpru8qSkjaQDXNtKYOQijK11UPKUQY8 Nov 24 00:11:47.172064 sshd-session[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:11:47.182197 systemd-logind[1530]: New session 16 of user core. Nov 24 00:11:47.190059 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 24 00:11:47.969474 sshd[5061]: Connection closed by 147.75.109.163 port 55258 Nov 24 00:11:47.970229 sshd-session[5058]: pam_unix(sshd:session): session closed for user core Nov 24 00:11:47.975888 systemd-logind[1530]: Session 16 logged out. Waiting for processes to exit. Nov 24 00:11:47.976876 systemd[1]: sshd@15-172.237.134.153:22-147.75.109.163:55258.service: Deactivated successfully. Nov 24 00:11:47.980539 systemd[1]: session-16.scope: Deactivated successfully. Nov 24 00:11:47.982608 systemd-logind[1530]: Removed session 16. Nov 24 00:11:48.031803 systemd[1]: Started sshd@16-172.237.134.153:22-147.75.109.163:55274.service - OpenSSH per-connection server daemon (147.75.109.163:55274). Nov 24 00:11:48.034637 kubelet[2727]: E1124 00:11:48.034443 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxlsg" podUID="84972b9a-587c-4cc3-993d-8f4d81fe7493" Nov 24 00:11:48.374872 sshd[5080]: Accepted publickey for core from 147.75.109.163 port 55274 ssh2: RSA SHA256:Pchp35vWTs9Zdpru8qSkjaQDXNtKYOQijK11UPKUQY8 Nov 24 00:11:48.376107 sshd-session[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:11:48.385530 systemd-logind[1530]: New session 17 of user core. Nov 24 00:11:48.389606 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 24 00:11:48.848393 sshd[5083]: Connection closed by 147.75.109.163 port 55274 Nov 24 00:11:48.849661 sshd-session[5080]: pam_unix(sshd:session): session closed for user core Nov 24 00:11:48.857444 systemd-logind[1530]: Session 17 logged out. Waiting for processes to exit. Nov 24 00:11:48.858025 systemd[1]: sshd@16-172.237.134.153:22-147.75.109.163:55274.service: Deactivated successfully. Nov 24 00:11:48.862384 systemd[1]: session-17.scope: Deactivated successfully. Nov 24 00:11:48.868246 systemd-logind[1530]: Removed session 17. Nov 24 00:11:48.910668 systemd[1]: Started sshd@17-172.237.134.153:22-147.75.109.163:55280.service - OpenSSH per-connection server daemon (147.75.109.163:55280). Nov 24 00:11:49.260384 sshd[5093]: Accepted publickey for core from 147.75.109.163 port 55280 ssh2: RSA SHA256:Pchp35vWTs9Zdpru8qSkjaQDXNtKYOQijK11UPKUQY8 Nov 24 00:11:49.261072 sshd-session[5093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:11:49.267860 systemd-logind[1530]: New session 18 of user core. Nov 24 00:11:49.276674 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 24 00:11:49.563212 sshd[5096]: Connection closed by 147.75.109.163 port 55280 Nov 24 00:11:49.563829 sshd-session[5093]: pam_unix(sshd:session): session closed for user core Nov 24 00:11:49.568949 systemd-logind[1530]: Session 18 logged out. Waiting for processes to exit. Nov 24 00:11:49.569531 systemd[1]: sshd@17-172.237.134.153:22-147.75.109.163:55280.service: Deactivated successfully. Nov 24 00:11:49.576822 systemd[1]: session-18.scope: Deactivated successfully. Nov 24 00:11:49.582352 systemd-logind[1530]: Removed session 18. Nov 24 00:11:50.037060 kubelet[2727]: E1124 00:11:50.036993 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-58b465668f-57pbc" podUID="2a030f5f-0015-44b8-b116-a472da00a019" Nov 24 00:11:51.035410 kubelet[2727]: E1124 00:11:51.035286 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-555b4c874c-4kfgm" podUID="79029465-26e2-4032-b64d-59a7fac9f008" Nov 24 00:11:53.037500 kubelet[2727]: E1124 00:11:53.037156 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" 
for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r4dwf" podUID="63922d09-5f16-43ef-bdc3-f819f707f5b0" Nov 24 00:11:54.626912 systemd[1]: Started sshd@18-172.237.134.153:22-147.75.109.163:35376.service - OpenSSH per-connection server daemon (147.75.109.163:35376). Nov 24 00:11:54.952385 sshd[5110]: Accepted publickey for core from 147.75.109.163 port 35376 ssh2: RSA SHA256:Pchp35vWTs9Zdpru8qSkjaQDXNtKYOQijK11UPKUQY8 Nov 24 00:11:54.955160 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:11:54.966833 systemd-logind[1530]: New session 19 of user core. Nov 24 00:11:54.972593 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 24 00:11:55.037474 kubelet[2727]: E1124 00:11:55.037122 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556645b45d-s8fp4" podUID="918b0245-1c27-4194-ac35-a7e394dba332" Nov 24 00:11:55.257543 sshd[5113]: Connection closed by 147.75.109.163 port 35376 Nov 24 00:11:55.259323 sshd-session[5110]: pam_unix(sshd:session): session closed for user core Nov 24 00:11:55.266182 systemd[1]: sshd@18-172.237.134.153:22-147.75.109.163:35376.service: Deactivated successfully. Nov 24 00:11:55.270799 systemd[1]: session-19.scope: Deactivated successfully. Nov 24 00:11:55.272696 systemd-logind[1530]: Session 19 logged out. Waiting for processes to exit. Nov 24 00:11:55.275046 systemd-logind[1530]: Removed session 19. Nov 24 00:11:57.035409 kubelet[2727]: E1124 00:11:57.035365 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 24 00:12:00.322803 systemd[1]: Started sshd@19-172.237.134.153:22-147.75.109.163:35392.service - OpenSSH per-connection server daemon (147.75.109.163:35392). Nov 24 00:12:00.670487 sshd[5125]: Accepted publickey for core from 147.75.109.163 port 35392 ssh2: RSA SHA256:Pchp35vWTs9Zdpru8qSkjaQDXNtKYOQijK11UPKUQY8 Nov 24 00:12:00.673079 sshd-session[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:12:00.678913 systemd-logind[1530]: New session 20 of user core. Nov 24 00:12:00.689626 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 24 00:12:00.967221 sshd[5128]: Connection closed by 147.75.109.163 port 35392 Nov 24 00:12:00.967674 sshd-session[5125]: pam_unix(sshd:session): session closed for user core Nov 24 00:12:00.975071 systemd[1]: sshd@19-172.237.134.153:22-147.75.109.163:35392.service: Deactivated successfully. Nov 24 00:12:00.977606 systemd[1]: session-20.scope: Deactivated successfully. Nov 24 00:12:00.978445 systemd-logind[1530]: Session 20 logged out. Waiting for processes to exit. Nov 24 00:12:00.979800 systemd-logind[1530]: Removed session 20. Nov 24 00:12:02.035881 kubelet[2727]: E1124 00:12:02.035837 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556645b45d-t4ct5" podUID="b92dcaad-cbde-40da-94a7-6e0bac08ac02" Nov 24 00:12:02.040185 kubelet[2727]: E1124 00:12:02.036147 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bxlsg" podUID="84972b9a-587c-4cc3-993d-8f4d81fe7493"